  1. Nov 17, 2018
  2. Sep 04, 2018
  3. Sep 03, 2018
  4. Sep 02, 2018
  5. Jul 23, 2018
  6. Jul 13, 2018
  7. Jul 03, 2018
  8. Jul 02, 2018
  9. Jun 19, 2018
  10. Jun 18, 2018
    •
      Clarify license as MIT (#216) · a6b98442
      Carl Lerche authored
      The intent was to dual-license the crate as MIT & Apache 2.0;
      however, the messaging was copy/pasted from rust-lang.

      Clarify the license as exclusively MIT.
      
      Fixes #215
  11. May 25, 2018
  12. May 24, 2018
    •
      Recycle space when reserving from Vec-backed Bytes (#197) · dfce95b8
      Noah Zentzis authored
      * Recycle space when reserving from Vec-backed Bytes
      
      BytesMut::reserve, when called on a BytesMut instance backed by a
      non-shared Vec<u8>, previously delegated to Vec::reserve regardless
      of the view's current position within the buffer. If the BytesMut is
      actually the trailing portion of a larger Vec, the unused space at
      the front is never recycled. In applications that continually
      advance the pointer to consume incoming data, this can cause the
      underlying buffer to grow extremely large.
      
      This commit checks whether there's extra space at the start of the
      backing Vec in this case, and reuses the unused space if possible
      instead of allocating.
      
      * Avoid excessive copying when reusing Vec space
      
      Only reuse space in a Vec-backed Bytes when doing so would gain back
      more than half of the current capacity. This avoids excessive copy
      operations when a large buffer is almost (but not completely) full.
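      As a rough illustration of the recycling rule above, here is a minimal
      sketch (a hypothetical `VecBuf` model, not the actual BytesMut
      internals), where `start` counts bytes already consumed from the front
      of the backing Vec:

      ```rust
      struct VecBuf {
          buf: Vec<u8>,
          start: usize, // number of bytes already consumed from the front
      }

      impl VecBuf {
          fn len(&self) -> usize {
              self.buf.len() - self.start
          }

          fn reserve(&mut self, additional: usize) {
              if self.buf.capacity() - self.buf.len() >= additional {
                  return; // the tail already has enough room
              }
              // Reclaim the consumed prefix only when doing so wins back more
              // than half of the current capacity; otherwise the memmove is
              // not worth it and we fall through to a plain reallocation.
              if self.start > self.buf.capacity() / 2 {
                  self.buf.drain(..self.start); // shift live data to the front
                  self.start = 0;
                  if self.buf.capacity() - self.buf.len() >= additional {
                      return; // recycled space was enough: no allocation
                  }
              }
              self.buf.reserve(additional);
          }
      }

      fn main() {
          let mut vb = VecBuf { buf: Vec::with_capacity(8), start: 0 };
          vb.buf.extend_from_slice(&[1, 2, 3, 4, 5, 6, 7, 8]);
          vb.start = 6; // 6 bytes consumed; only [7, 8] are live
          vb.reserve(4); // satisfied by recycling, not by reallocating
          assert_eq!(vb.start, 0);
          assert_eq!(vb.len(), 2);
          assert_eq!(&vb.buf[..], &[7, 8]);
      }
      ```

      The half-capacity threshold trades a single memmove of the live bytes
      against a full reallocation, matching the second commit's fix.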
  13. May 11, 2018
  14. Apr 27, 2018
    •
      Bump version to v0.4.7 · ef09e98f
      Carl Lerche authored
    •
      Improve performance of Buf::get_*() (#195) · 51e435b7
      kohensu authored
      The new implementation tries to read the data directly from bytes(),
      which is possible most of the time. Only when bytes() does not hold
      enough data does it fall back to the previous approach: copying the
      needed bytes into a temporary buffer before returning the value.

      Here are the bench results:
                                     Before                After           x-faster
      get_f32::cursor             64 ns/iter (+/- 0)    20 ns/iter (+/- 0)    3.2
      get_f32::tbuf_1             77 ns/iter (+/- 1)    34 ns/iter (+/- 0)    2.3
      get_f32::tbuf_1_costly      87 ns/iter (+/- 0)    62 ns/iter (+/- 0)    1.4
      get_f32::tbuf_2            151 ns/iter (+/- 18)  160 ns/iter (+/- 1)    0.9
      get_f32::tbuf_2_costly     180 ns/iter (+/- 2)   187 ns/iter (+/- 2)    1.0
      
      get_f64::cursor             67 ns/iter (+/- 0)    21 ns/iter (+/- 0)    3.2
      get_f64::tbuf_1             80 ns/iter (+/- 0)    35 ns/iter (+/- 0)    2.3
      get_f64::tbuf_1_costly      82 ns/iter (+/- 3)    60 ns/iter (+/- 0)    1.4
      get_f64::tbuf_2            154 ns/iter (+/- 1)   164 ns/iter (+/- 0)    0.9
      get_f64::tbuf_2_costly     170 ns/iter (+/- 2)   187 ns/iter (+/- 1)    0.9
      
      get_u16::cursor             66 ns/iter (+/- 0)    20 ns/iter (+/- 0)    3.3
      get_u16::tbuf_1             77 ns/iter (+/- 0)    35 ns/iter (+/- 0)    2.2
      get_u16::tbuf_1_costly      85 ns/iter (+/- 2)    62 ns/iter (+/- 0)    1.4
      get_u16::tbuf_2            147 ns/iter (+/- 0)   154 ns/iter (+/- 0)    1.0
      get_u16::tbuf_2_costly     160 ns/iter (+/- 1)   177 ns/iter (+/- 0)    0.9
      
      get_u32::cursor             64 ns/iter (+/- 0)    20 ns/iter (+/- 0)    3.2
      get_u32::tbuf_1             77 ns/iter (+/- 0)    35 ns/iter (+/- 0)    2.2
      get_u32::tbuf_1_costly      91 ns/iter (+/- 2)    63 ns/iter (+/- 0)    1.4
      get_u32::tbuf_2            151 ns/iter (+/- 40)  157 ns/iter (+/- 0)    1.0
      get_u32::tbuf_2_costly     162 ns/iter (+/- 0)   180 ns/iter (+/- 0)    0.9
      
      get_u64::cursor             67 ns/iter (+/- 0)    20 ns/iter (+/- 0)    3.4
      get_u64::tbuf_1             78 ns/iter (+/- 0)    35 ns/iter (+/- 1)    2.2
      get_u64::tbuf_1_costly      87 ns/iter (+/- 1)    59 ns/iter (+/- 1)    1.5
      get_u64::tbuf_2            154 ns/iter (+/- 0)   160 ns/iter (+/- 0)    1.0
      get_u64::tbuf_2_costly     168 ns/iter (+/- 0)   184 ns/iter (+/- 0)    0.9
      
      get_u8::cursor              64 ns/iter (+/- 0)    19 ns/iter (+/- 0)    3.4
      get_u8::tbuf_1              77 ns/iter (+/- 0)    35 ns/iter (+/- 0)    2.2
      get_u8::tbuf_1_costly       68 ns/iter (+/- 0)    51 ns/iter (+/- 0)    1.3
      get_u8::tbuf_2              85 ns/iter (+/- 0)    43 ns/iter (+/- 0)    2.0
      get_u8::tbuf_2_costly       75 ns/iter (+/- 0)    61 ns/iter (+/- 0)    1.2
      get_u8::option              77 ns/iter (+/- 0)    59 ns/iter (+/- 0)    1.3
      
      Improvements on the basic std::Cursor implementation are clearly visible.

      The other implementations are specific to the bench tests and just wrap
      a static slice. The variants are:
       - tbuf_1: only one call to 'bytes()' is needed.
       - tbuf_2: two calls to 'bytes()' are needed to read more than one byte.
       - _costly: versions implemented with #[inline(never)] on 'bytes()',
         'remaining()' and 'advance()'.

      The slightly slower cases correspond to implementations that are not
      very realistic: reading more than one byte per call is never possible
      there.
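      The fast/slow path split described above can be sketched on a
      stripped-down, hypothetical Buf-like trait (the names bytes(),
      remaining() and advance() mirror the commit message; this is not the
      real trait definition):

      ```rust
      trait MiniBuf {
          fn remaining(&self) -> usize;
          fn bytes(&self) -> &[u8]; // current contiguous chunk
          fn advance(&mut self, cnt: usize);

          fn get_u32_be(&mut self) -> u32 {
              assert!(self.remaining() >= 4);
              let chunk = self.bytes();
              if chunk.len() >= 4 {
                  // Fast path: the whole value sits in the current chunk.
                  let v = u32::from_be_bytes([chunk[0], chunk[1], chunk[2], chunk[3]]);
                  self.advance(4);
                  return v;
              }
              // Slow path: gather the bytes through a temporary buffer.
              let mut tmp = [0u8; 4];
              for slot in tmp.iter_mut() {
                  *slot = self.bytes()[0];
                  self.advance(1);
              }
              u32::from_be_bytes(tmp)
          }
      }

      // A cursor over a slice, enough to exercise the fast path.
      struct SliceBuf<'a> {
          data: &'a [u8],
          pos: usize,
      }

      impl<'a> MiniBuf for SliceBuf<'a> {
          fn remaining(&self) -> usize { self.data.len() - self.pos }
          fn bytes(&self) -> &[u8] { &self.data[self.pos..] }
          fn advance(&mut self, cnt: usize) { self.pos += cnt; }
      }

      fn main() {
          let mut b = SliceBuf { data: &[0, 0, 1, 2, 9], pos: 0 };
          assert_eq!(b.get_u32_be(), 258); // 0x00000102
          assert_eq!(b.remaining(), 1);
      }
      ```

      For a contiguous source like a cursor the fast path always fires, which
      is where the benches show the ~3x improvement.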
    •
      impl BorrowMut for BytesMut (#185) (#192) · 15050b1d
      Alan Somers authored
  15. Mar 12, 2018
    •
      Make Buf and BufMut usable as trait objects (#186) · ce79f0a2
      Sean McArthur authored
      - All the `get_*` and `put_*` methods that take `T: ByteOrder` have
        a `where Self: Sized` bound added, so that they are only usable from
        sized types. It was impossible to make `Buf` or `BufMut` into trait
        objects before, so this change doesn't break anyone.
      - Add `get_n_be`/`get_n_le`/`put_n_be`/`put_n_le` methods that can be
        used on trait objects.
      - Deprecate the export of `ByteOrder` and methods generic on it.
      
      Fixes #163 
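      A minimal sketch of the `where Self: Sized` technique (hypothetical
      trait and method names, not the actual Buf API): the bound keeps the
      generic method out of the vtable, so the trait can still be used as a
      trait object:

      ```rust
      trait ReadBytes {
          fn get_u8(&mut self) -> u8;

          // Generic over a type parameter, so it cannot live in a vtable;
          // the Sized bound restricts it to concrete (sized) receivers.
          fn get_n<T: From<u8>>(&mut self) -> T
          where
              Self: Sized,
          {
              T::from(self.get_u8())
          }
      }

      struct Cursor {
          data: Vec<u8>,
          pos: usize,
      }

      impl ReadBytes for Cursor {
          fn get_u8(&mut self) -> u8 {
              let v = self.data[self.pos];
              self.pos += 1;
              v
          }
      }

      fn main() {
          let mut c = Cursor { data: vec![7, 8], pos: 0 };
          // Usable as a trait object because get_n is excluded from dispatch.
          let obj: &mut dyn ReadBytes = &mut c;
          assert_eq!(obj.get_u8(), 7);
          // The generic method is still available on the concrete type.
          let n: u16 = c.get_n();
          assert_eq!(n, 8);
      }
      ```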
  16. Jan 29, 2018
  17. Jan 08, 2018
  18. Jan 06, 2018
  19. Jan 03, 2018
    •
      Optimize shallow_clone for Bytes::split_{off,to} (#92) · 6a3d20bb
      Stepan Koltsov authored
      If `shallow_clone` is called with `&mut self`, and the `Bytes` contains
      a `Vec`, then the expensive CAS can be avoided, because no other thread
      holds a reference to this `Bytes` object.
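      The uniqueness argument can be modeled with a hypothetical two-variant
      storage enum (a single-handle sketch, not the real Bytes layout): given
      `&mut self`, promotion to shared storage is a plain in-place move
      rather than a compare-and-swap:

      ```rust
      use std::sync::Arc;

      enum Storage {
          Vec(Vec<u8>),         // uniquely owned
          Shared(Arc<Vec<u8>>), // reference counted
      }

      impl Storage {
          fn shallow_clone(&mut self) -> Arc<Vec<u8>> {
              if let Storage::Vec(v) = self {
                  // Exclusive access: no other thread can observe this
                  // handle, so the promotion is a plain write, not a CAS.
                  let shared = Arc::new(std::mem::take(v));
                  *self = Storage::Shared(shared);
              }
              match self {
                  Storage::Shared(arc) => Arc::clone(arc),
                  Storage::Vec(_) => unreachable!("promoted above"),
              }
          }
      }

      fn main() {
          let mut s = Storage::Vec(vec![1, 2, 3]);
          let c1 = s.shallow_clone(); // promotes in place, no CAS needed
          let c2 = s.shallow_clone(); // already shared: just a refcount bump
          assert_eq!(c1.as_slice(), &[1, 2, 3]);
          assert_eq!(Arc::strong_count(&c2), 3); // storage + c1 + c2
      }
      ```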
      
      Bench `split_off_and_drop` difference:
      
      Before the diff:
      
      ```
      test split_off_and_drop             ... bench:      91,858 ns/iter (+/- 17,401)
      ```
      
      With the diff:
      
      ```
      test split_off_and_drop             ... bench:      81,162 ns/iter (+/- 17,603)
      ```
    •
      Add support for unsplit() to BytesMut (#162) · 2ca61d88
      jq-rs authored
      Add support for unsplit() to BytesMut, which efficiently recombines
      contiguous memory blocks that were previously split.
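      A simplified model of the cheap path (a hypothetical `View` type using
      Rc instead of the real atomic refcount): two views are rejoined without
      copying when they are contiguous ranges of the same allocation:

      ```rust
      use std::rc::Rc;

      struct View {
          buf: Rc<Vec<u8>>,
          start: usize,
          end: usize,
      }

      impl View {
          fn split_to(&mut self, at: usize) -> View {
              let head = View {
                  buf: Rc::clone(&self.buf),
                  start: self.start,
                  end: self.start + at,
              };
              self.start += at;
              head
          }

          fn unsplit(&mut self, other: View) {
              // Cheap path: same allocation and contiguous ranges.
              if Rc::ptr_eq(&self.buf, &other.buf) && self.end == other.start {
                  self.end = other.end;
                  return;
              }
              // Fallback: materialize both halves and concatenate (copies).
              let mut joined = self.as_slice().to_vec();
              joined.extend_from_slice(other.as_slice());
              self.start = 0;
              self.end = joined.len();
              self.buf = Rc::new(joined);
          }

          fn as_slice(&self) -> &[u8] {
              &self.buf[self.start..self.end]
          }
      }

      fn main() {
          let mut tail = View { buf: Rc::new(vec![1, 2, 3, 4, 5]), start: 0, end: 5 };
          let mut head = tail.split_to(2); // head = [1, 2], tail = [3, 4, 5]
          head.unsplit(tail);              // contiguous: rejoined without copying
          assert_eq!(head.as_slice(), &[1, 2, 3, 4, 5]);
      }
      ```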
  20. Dec 16, 2017
  21. Dec 13, 2017
    •
      Add `advance` on `Bytes` and `BytesMut` (#166) · 02891144
      Carl Lerche authored
      * Compact Bytes original capacity representation
      
      In order to avoid unnecessary allocations, a `Bytes` structure remembers
      the capacity with which it was first created. When a reserve operation
      is issued, this original capacity value is used as a baseline for
      reallocating new storage.

      Previously, this original capacity value was stored in its raw form; in
      other words, the original capacity `usize` was stored as is. In order to
      reclaim some of the `Bytes` internal storage space for additional
      features, this original capacity value is compressed from requiring 16
      bits to 3.

      To do this, instead of storing the exact original capacity, the value is
      rounded down to the nearest power of two; if the original capacity is
      less than 1024, it is rounded down to zero. This roughly means that the
      original capacity is now stored as a table:
      
      0 => 0
      1 => 1k
      2 => 2k
      3 => 4k
      4 => 8k
      5 => 16k
      6 => 32k
      7 => 64k
      
      For the purposes that the original capacity feature was introduced, this
      is sufficient granularity.
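      The table above suggests the following encoding sketch (assumed from
      the commit message; the exact internal bit layout may differ):

      ```rust
      // Map a capacity to a 3-bit code: round down to a power of two,
      // store anything under 1024 as zero, and cap the code at 7 (64k).
      fn encode_original_capacity(cap: usize) -> u8 {
          if cap < 1024 {
              return 0;
          }
          // floor(log2(cap)) - 9, clamped to the 3-bit range [1, 7].
          let pow = (usize::BITS - 1 - cap.leading_zeros()) as u8;
          (pow - 9).min(7)
      }

      fn decode_original_capacity(code: u8) -> usize {
          if code == 0 { 0 } else { 1024 << (code - 1) }
      }

      fn main() {
          assert_eq!(encode_original_capacity(512), 0);   // below 1k -> 0
          assert_eq!(encode_original_capacity(1024), 1);  // 1k
          assert_eq!(encode_original_capacity(4096), 3);  // 4k
          assert_eq!(encode_original_capacity(1 << 20), 7); // clamped to 64k
          assert_eq!(decode_original_capacity(5), 16 * 1024);
      }
      ```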
      
      * Provide `advance` on Bytes and BytesMut
      
      This is the `advance` function that would be part of a `Buf`
      implementation. However, `Bytes` and `BytesMut` cannot impl `Buf` until
      the next breaking release.
      
      The implementation uses the additional storage made available by the
      previous commit to store the number of bytes that the view was advanced.
      The `ptr` pointer will point to the start of the window, avoiding any
      pointer arithmetic when dereferencing the `Bytes` handle.
  22. Oct 21, 2017
  23. Aug 18, 2017
    •
      small fixups in bytes.rs (#145) · 03d501b1
      Dan Burkert authored
      * Inner: make uninitialized construction explicit
      * Remove Inner2
      * Remove unnecessary transmutes
      * Use AtomicPtr::get_mut where possible
      * Some minor tweaks
  24. Aug 17, 2017
  25. Aug 12, 2017
  26. Aug 06, 2017
  27. Jul 02, 2017
  28. Jul 01, 2017
    •
      Optimize Bytes::slice for short slices (#136) · b9ccd2a8
      Stepan Koltsov authored
      The slice operation should return an inline Bytes when possible, since
      that is cheaper than an atomic increment/decrement.
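      The idea can be sketched with a hypothetical handle type that stores
      short slices inline (sizes and names are illustrative; the real Bytes
      uses an atomic refcount rather than Rc):

      ```rust
      use std::rc::Rc;

      const INLINE_CAP: usize = 15;

      enum SmallBytes {
          Inline { data: [u8; INLINE_CAP], len: usize },
          Shared { buf: Rc<Vec<u8>>, start: usize, end: usize },
      }

      impl SmallBytes {
          fn slice(&self, from: usize, to: usize) -> SmallBytes {
              let wanted = &self.as_slice()[from..to];
              if wanted.len() <= INLINE_CAP {
                  // Cheap path: a plain memcpy, no refcount traffic.
                  let mut data = [0u8; INLINE_CAP];
                  data[..wanted.len()].copy_from_slice(wanted);
                  return SmallBytes::Inline { data, len: wanted.len() };
              }
              match self {
                  SmallBytes::Shared { buf, start, .. } => SmallBytes::Shared {
                      buf: Rc::clone(buf), // refcount bump only for long slices
                      start: start + from,
                      end: start + to,
                  },
                  SmallBytes::Inline { .. } => unreachable!("inline data always fits inline"),
              }
          }

          fn as_slice(&self) -> &[u8] {
              match self {
                  SmallBytes::Inline { data, len } => &data[..*len],
                  SmallBytes::Shared { buf, start, end } => &buf[*start..*end],
              }
          }
      }

      fn main() {
          let data: Vec<u8> = (0u8..64).collect();
          let b = SmallBytes::Shared { buf: Rc::new(data), start: 0, end: 64 };
          let short = b.slice(1, 5); // 4 bytes: stored inline
          assert_eq!(short.as_slice(), &[1, 2, 3, 4]);
          assert!(matches!(short, SmallBytes::Inline { .. }));
          let long = b.slice(0, 32); // too long for inline: shares the buffer
          assert!(matches!(long, SmallBytes::Shared { .. }));
          assert_eq!(long.as_slice().len(), 32);
      }
      ```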
      
      Before this patch:
      
      ```
      test slice_avg_le_inline_from_arc   ... bench:      28,582 ns/iter (+/- 3,880)
      test slice_empty                    ... bench:       8,797 ns/iter (+/- 1,325)
      test slice_large_le_inline_from_arc ... bench:      27,684 ns/iter (+/- 5,920)
      test slice_short_from_arc           ... bench:      27,439 ns/iter (+/- 5,783)
      ```
      
      After this patch:
      
      ```
      test slice_avg_le_inline_from_arc   ... bench:      18,872 ns/iter (+/- 2,937)
      test slice_empty                    ... bench:       9,136 ns/iter (+/- 1,908)
      test slice_large_le_inline_from_arc ... bench:      18,052 ns/iter (+/- 2,981)
      test slice_short_from_arc           ... bench:      18,200 ns/iter (+/- 2,534)
      ```
    • Georg Brandl · 54499795
    •
      Bytes::with_capacity (#137) · d315d00a
      Stepan Koltsov authored