  1. Jul 13, 2018
  2. Jul 05, 2018
  3. Jul 03, 2018
  4. Jul 02, 2018
  5. Jun 19, 2018
  6. Jun 18, 2018
  7. May 25, 2018
  8. May 24, 2018
    • Recycle space when reserving from Vec-backed Bytes (#197) · dfce95b8
      Noah Zentzis authored
      * Recycle space when reserving from Vec-backed Bytes
      
      BytesMut::reserve, when called on a BytesMut instance which is backed by
      a non-shared Vec<u8>, would previously just delegate to Vec::reserve
      regardless of the current location in the buffer. If the Bytes is
      actually the trailing component of a larger Vec, then the unused space
      won't be recycled. In applications which continually move the pointer
      forward to consume data as it comes in, this can cause the underlying
      buffer to get extremely large.
      
      This commit checks whether there's extra space at the start of the
      backing Vec in this case, and reuses the unused space if possible
      instead of allocating.
      
      * Avoid excessive copying when reusing Vec space
      
      Only reuse space in a Vec-backed Bytes when doing so would gain back
      more than half of the current capacity. This avoids excessive copy
      operations when a large buffer is almost (but not completely) full.
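      The scheme described above can be sketched on a plain Vec<u8> plus a read
      offset. This is a minimal illustration under assumed names (the function
      and its signature are not the crate's internals):

      ```rust
      // Illustrative sketch, not the crate's internals: `off` is how far the
      // reader has advanced into the backing Vec; buf[off..] is the unread data.
      fn reserve_recycling(buf: &mut Vec<u8>, off: &mut usize, additional: usize) {
          let unread = buf.len() - *off;
          // Reclaim the consumed front only when it satisfies the request and
          // wins back more than half of the capacity (the follow-up tweak), so
          // a nearly-full buffer is not repeatedly memmoved.
          if *off >= additional && *off > buf.capacity() / 2 {
              buf.copy_within(*off.., 0); // shift unread bytes to the front
              buf.truncate(unread);
              *off = 0;
          } else {
              buf.reserve(additional); // otherwise fall back to a real allocation
          }
      }

      fn main() {
          let mut buf: Vec<u8> = (0u8..10).collect();
          let mut off = 8; // 8 bytes already consumed
          reserve_recycling(&mut buf, &mut off, 4);
          assert_eq!(&buf[off..], &[8, 9]); // unread data is preserved
      }
      ```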
    • Recycle space when reserving from Vec-backed Bytes (#197) · 2d95683b
      Noah Zentzis authored
      (Same commit message as dfce95b8 above.)
  9. May 11, 2018
  10. Apr 27, 2018
    • Bump version to v0.4.7 · ef09e98f
      Carl Lerche authored
    • Merge branch 'v0.4.x' · d656d371
      Carl Lerche authored
    • Improve performance of Buf::get_*() (#195) · 51e435b7
      kohensu authored
      The new implementation tries to get the data directly from bytes()
      (which is possible most of the time). If bytes() does not hold enough
      data, it falls back to the previous code: the needed bytes are copied
      into a temporary buffer before the data is returned.
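      As a sketch of the idea, using a minimal stand-in trait rather than the
      crate's real Buf, the fast path reads straight from the current bytes()
      chunk and only falls back to a temporary buffer when the chunk is short:

      ```rust
      // Minimal stand-in for Buf (method names follow the commit: bytes(),
      // advance()); the real trait and get_* methods live in the bytes crate.
      trait MiniBuf {
          fn bytes(&self) -> &[u8];
          fn advance(&mut self, cnt: usize);

          fn get_u32_be(&mut self) -> u32 {
              if self.bytes().len() >= 4 {
                  // Fast path: enough contiguous data in the current chunk.
                  let src = self.bytes();
                  let v = u32::from_be_bytes([src[0], src[1], src[2], src[3]]);
                  self.advance(4);
                  v
              } else {
                  // Slow path (the previous code): gather bytes across chunks
                  // into a temporary buffer first.
                  let mut tmp = [0u8; 4];
                  let mut n = 0;
                  while n < 4 {
                      let chunk = self.bytes();
                      let take = chunk.len().min(4 - n);
                      tmp[n..n + take].copy_from_slice(&chunk[..take]);
                      self.advance(take);
                      n += take;
                  }
                  u32::from_be_bytes(tmp)
              }
          }
      }

      struct SliceBuf<'a> { data: &'a [u8], pos: usize }

      impl MiniBuf for SliceBuf<'_> {
          fn bytes(&self) -> &[u8] { &self.data[self.pos..] }
          fn advance(&mut self, cnt: usize) { self.pos += cnt; }
      }

      fn main() {
          let mut buf = SliceBuf { data: &[0x12, 0x34, 0x56, 0x78], pos: 0 };
          assert_eq!(buf.get_u32_be(), 0x12345678); // served by the fast path
      }
      ```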
      
      Here the bench results:
                                     Before                After           x-faster
      get_f32::cursor             64 ns/iter (+/- 0)    20 ns/iter (+/- 0)    3.2
      get_f32::tbuf_1             77 ns/iter (+/- 1)    34 ns/iter (+/- 0)    2.3
      get_f32::tbuf_1_costly      87 ns/iter (+/- 0)    62 ns/iter (+/- 0)    1.4
      get_f32::tbuf_2            151 ns/iter (+/- 18)  160 ns/iter (+/- 1)    0.9
      get_f32::tbuf_2_costly     180 ns/iter (+/- 2)   187 ns/iter (+/- 2)    1.0
      
      get_f64::cursor             67 ns/iter (+/- 0)    21 ns/iter (+/- 0)    3.2
      get_f64::tbuf_1             80 ns/iter (+/- 0)    35 ns/iter (+/- 0)    2.3
      get_f64::tbuf_1_costly      82 ns/iter (+/- 3)    60 ns/iter (+/- 0)    1.4
      get_f64::tbuf_2            154 ns/iter (+/- 1)   164 ns/iter (+/- 0)    0.9
      get_f64::tbuf_2_costly     170 ns/iter (+/- 2)   187 ns/iter (+/- 1)    0.9
      
      get_u16::cursor             66 ns/iter (+/- 0)    20 ns/iter (+/- 0)    3.3
      get_u16::tbuf_1             77 ns/iter (+/- 0)    35 ns/iter (+/- 0)    2.2
      get_u16::tbuf_1_costly      85 ns/iter (+/- 2)    62 ns/iter (+/- 0)    1.4
      get_u16::tbuf_2            147 ns/iter (+/- 0)   154 ns/iter (+/- 0)    1.0
      get_u16::tbuf_2_costly     160 ns/iter (+/- 1)   177 ns/iter (+/- 0)    0.9
      
      get_u32::cursor             64 ns/iter (+/- 0)    20 ns/iter (+/- 0)    3.2
      get_u32::tbuf_1             77 ns/iter (+/- 0)    35 ns/iter (+/- 0)    2.2
      get_u32::tbuf_1_costly      91 ns/iter (+/- 2)    63 ns/iter (+/- 0)    1.4
      get_u32::tbuf_2            151 ns/iter (+/- 40)  157 ns/iter (+/- 0)    1.0
      get_u32::tbuf_2_costly     162 ns/iter (+/- 0)   180 ns/iter (+/- 0)    0.9
      
      get_u64::cursor             67 ns/iter (+/- 0)    20 ns/iter (+/- 0)    3.4
      get_u64::tbuf_1             78 ns/iter (+/- 0)    35 ns/iter (+/- 1)    2.2
      get_u64::tbuf_1_costly      87 ns/iter (+/- 1)    59 ns/iter (+/- 1)    1.5
      get_u64::tbuf_2            154 ns/iter (+/- 0)   160 ns/iter (+/- 0)    1.0
      get_u64::tbuf_2_costly     168 ns/iter (+/- 0)   184 ns/iter (+/- 0)    0.9
      
      get_u8::cursor              64 ns/iter (+/- 0)    19 ns/iter (+/- 0)    3.4
      get_u8::tbuf_1              77 ns/iter (+/- 0)    35 ns/iter (+/- 0)    2.2
      get_u8::tbuf_1_costly       68 ns/iter (+/- 0)    51 ns/iter (+/- 0)    1.3
      get_u8::tbuf_2              85 ns/iter (+/- 0)    43 ns/iter (+/- 0)    2.0
      get_u8::tbuf_2_costly       75 ns/iter (+/- 0)    61 ns/iter (+/- 0)    1.2
      get_u8::option              77 ns/iter (+/- 0)    59 ns/iter (+/- 0)    1.3
      
      The improvement on the basic std::io::Cursor implementation is clearly
      visible.
      
      The other implementations are specific to the bench tests and just wrap
      a static slice. The variants are:
       - tbuf_1: a single call to 'bytes()' is enough.
       - tbuf_2: two calls to 'bytes()' are needed to read more than one byte.
       - the _costly versions mark 'bytes()', 'remaining()' and 'advance()'
         as #[inline(never)].
      
      The slightly slower cases correspond to implementations that are not
      very realistic: they never yield more than one byte at a time.
    • impl BorrowMut for BytesMut (#185) (#192) · 15050b1d
      Alan Somers authored
    • Improve performance of Buf::get_*() (#195) · e4447220
      kohensu authored
      (Same commit message as 51e435b7 above.)
    • Add a bench for Buf::get_*() (#194) · d5610062
      kohensu authored
  11. Mar 12, 2018
    • Introduce Bytes::to_mut (#188) · 2c27ddaf
      Anthony Ramine authored
    • impl BorrowMut for BytesMut (#185) · ae1b4549
      Alan Somers authored
    • Fix `copy_to_slice` to use correct increment var · ebe52273
      Carl Lerche authored
      This patch fixes the `copy_to_slice` function to use the correct
      increment variable. The incorrect code did not actually produce wrong
      behavior: the only case where `cnt != src.len()` is the final
      iteration, and since `src.len()` is greater than `cnt` there, `off`
      was incremented by too much, but the `off < dst.len()` check still
      terminated the loop correctly.
      
      The only real danger was that adding `src.len()` could overflow.
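      A sketch of the fixed loop, simplified to iterate over ready-made
      chunks (the standalone function and its signature are illustrative):

      ```rust
      // Illustrative version of the fixed loop: the write offset must advance
      // by the number of bytes actually copied (`cnt`), not the chunk length.
      fn copy_to_slice(chunks: &[&[u8]], dst: &mut [u8]) {
          let mut off = 0;
          for src in chunks {
              if off >= dst.len() {
                  break;
              }
              let cnt = src.len().min(dst.len() - off);
              dst[off..off + cnt].copy_from_slice(&src[..cnt]);
              off += cnt; // the bug was advancing by src.len() here
          }
          assert_eq!(off, dst.len(), "not enough source data");
      }

      fn main() {
          let mut dst = [0u8; 4];
          copy_to_slice(&[&[1, 2, 3], &[4, 5]], &mut dst);
          assert_eq!(dst, [1, 2, 3, 4]); // last chunk only partially consumed
      }
      ```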
    • Carl Lerche authored · bd4630a3
    • Remove ByteOrder generic methods from Buf and BufMut (#187) · 025d5334
      Sean McArthur authored
      * make Buf and BufMut usable as trait objects
      
      - All the `get_*` and `put_*` methods that take `T: ByteOrder` have
        a `where Self: Sized` bound added, so that they are only usable from
        sized types. It was impossible to make `Buf` or `BufMut` into trait
        objects before, so this change doesn't break anyone.
      - Add `get_n_be`/`get_n_le`/`put_n_be`/`put_n_le` methods that can be
        used on trait objects.
      - Deprecate the export of `ByteOrder` and methods generic on it.
      
      * remove deprecated ByteOrder methods
      
      Removes the `_be` suffix from all methods, implying that the default
      people should use is network endian.
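      The object-safety pattern described here can be sketched with a toy
      trait (names are illustrative, not the crate's real definitions):

      ```rust
      // Toy illustration of the pattern: a generic method bounded by
      // `where Self: Sized` no longer blocks trait-object use of the trait.
      trait ObjBuf {
          // Generic method: callable only on concrete (sized) types.
          fn get_generic<T: From<u8>>(&mut self) -> T
          where
              Self: Sized,
          {
              T::from(self.get_u8())
          }

          // Non-generic method: still callable through `dyn ObjBuf`.
          fn get_u8(&mut self) -> u8;
      }

      struct One;
      impl ObjBuf for One {
          fn get_u8(&mut self) -> u8 { 1 }
      }

      fn main() {
          let mut one = One;
          // This coercion compiles only because the generic method is Sized-bound.
          let obj: &mut dyn ObjBuf = &mut one;
          assert_eq!(obj.get_u8(), 1);
          // Concrete types can still use the generic method:
          let n: u16 = One.get_generic();
          assert_eq!(n, 1);
      }
      ```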
    • Make Buf and BufMut usable as trait objects (#186) · ce79f0a2
      Sean McArthur authored
      - All the `get_*` and `put_*` methods that take `T: ByteOrder` have
        a `where Self: Sized` bound added, so that they are only usable from
        sized types. It was impossible to make `Buf` or `BufMut` into trait
        objects before, so this change doesn't break anyone.
      - Add `get_n_be`/`get_n_le`/`put_n_be`/`put_n_le` methods that can be
        used on trait objects.
      - Deprecate the export of `ByteOrder` and methods generic on it.
      
      Fixes #163 
  12. Feb 26, 2018
  13. Jan 29, 2018