- Dec 21, 2018
Ralf Jung authored
- Nov 26, 2018
Ralf Jung authored
- Nov 17, 2018
Ralf Jung authored
Shared references assert immutability, so any concurrent modification would be UB, even disregarding data race concerns.
- Sep 02, 2018
Federico Mena Quintero authored
This lets us take a Bytes and a &[u8] slice that is contained in it, and create a new Bytes that corresponds to that subset slice. Closes #198
- Jul 13, 2018
Sean McArthur authored
Roman authored
- Jul 03, 2018
Sean McArthur authored
- Clones when the kind is INLINE or STATIC are sped up by more than double.
- Clones when the kind is ARC are sped up by about 1/3.
- May 25, 2018
Luke Horsley authored
- May 24, 2018
Noah Zentzis authored
* Recycle space when reserving from Vec-backed Bytes

  BytesMut::reserve, when called on a BytesMut instance backed by a non-shared Vec<u8>, would previously just delegate to Vec::reserve regardless of the current location in the buffer. If the Bytes is actually the trailing component of a larger Vec, the unused space at the front is never recycled. In applications which continually advance the pointer to consume data as it comes in, this can cause the underlying buffer to grow extremely large. This commit checks whether there is unused space at the start of the backing Vec in this case, and reuses it if possible instead of allocating.

* Avoid excessive copying when reusing Vec space

  Only reuse space in a Vec-backed Bytes when doing so would gain back more than half of the current capacity. This avoids excessive copy operations when a large buffer is almost (but not completely) full.
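The heuristic described above can be sketched with std-only Rust (this is an illustration, not the crate's actual code; `off` models how far the view has advanced into the backing Vec):

```rust
// Reclaim the dead space in front of the view only when that recovers more
// than half of the current capacity and makes the request fit; otherwise
// fall back to a plain reserve.
fn reserve_recycling(buf: &mut Vec<u8>, off: &mut usize, additional: usize) {
    let live = buf.len() - *off;
    if *off > buf.capacity() / 2 && live + additional <= buf.capacity() {
        // Enough dead space at the front: shift the live bytes back to
        // index 0 instead of asking the allocator for a bigger buffer.
        buf.copy_within(*off.., 0);
        buf.truncate(live);
        *off = 0;
    } else {
        buf.reserve(additional);
    }
}

fn main() {
    let mut buf: Vec<u8> = Vec::with_capacity(8);
    buf.extend_from_slice(&[1, 2, 3, 4, 5, 6]);
    let cap = buf.capacity();
    let mut off = 5; // 5 bytes already consumed; only [6] is still live
    reserve_recycling(&mut buf, &mut off, 4);
    assert_eq!(off, 0);
    assert_eq!(buf, vec![6u8]);      // live byte moved to the front
    assert_eq!(buf.capacity(), cap); // no reallocation happened
}
```

The "more than half" threshold is what the second bullet adds: without it, a nearly full buffer would be copied repeatedly for little gain.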
- May 11, 2018
Carl Lerche authored
- Apr 27, 2018
Alan Somers authored
- Jan 06, 2018
jq-rs authored
* Handle empty self and other for unsplit.
* Change extend() to extend_from_slice().
- Jan 03, 2018
Stepan Koltsov authored
If `shallow_clone` is called with `&mut self`, and `Bytes` contains a `Vec`, then the expensive CAS can be avoided, because no other thread has references to this `Bytes` object.

Bench `split_off_and_drop` difference, before the diff:

```
test split_off_and_drop ... bench: 91,858 ns/iter (+/- 17,401)
```

With the diff:

```
test split_off_and_drop ... bench: 81,162 ns/iter (+/- 17,603)
```
jq-rs authored
Add support for unsplit() to BytesMut, which efficiently recombines contiguous memory blocks that were previously split.
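The key observation behind such a recombine operation can be sketched with a std-only contiguity check (illustrative only; `are_contiguous` is not the crate's API):

```rust
// Two byte ranges can be rejoined without copying only if the first one
// ends exactly where the second begins in memory.
fn are_contiguous(a: &[u8], b: &[u8]) -> bool {
    // Compare the one-past-the-end pointer of `a` with the start of `b`.
    a.as_ptr().wrapping_add(a.len()) == b.as_ptr()
}

fn main() {
    let buf = [1u8, 2, 3, 4, 5];
    let (left, right) = buf.split_at(2);
    assert!(are_contiguous(left, right));  // adjacent views of one buffer
    assert!(!are_contiguous(right, left)); // wrong order: not contiguous
}
```

When the check fails, the only option left is to copy one block's bytes after the other's.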
- Dec 16, 2017
Carl Lerche authored
Fixes #164
- Dec 13, 2017
Carl Lerche authored
* Compact Bytes original capacity representation

  In order to avoid unnecessary allocations, a `Bytes` structure remembers the capacity with which it was first created. When a reserve operation is issued, this original capacity value is used as a baseline for reallocating new storage. Previously, this original capacity value was stored in its raw form; in other words, the original capacity `usize` was stored as is.

  In order to reclaim some `Bytes` internal storage space for additional features, this original capacity value is compressed from requiring 16 bits to 3. To do this, instead of storing the exact original capacity, the value is rounded down to the nearest power of two. If the original capacity is less than 1024, it is rounded down to zero. This roughly means that the original capacity is now stored as a table:

  0 => 0
  1 => 1k
  2 => 2k
  3 => 4k
  4 => 8k
  5 => 16k
  6 => 32k
  7 => 64k

  For the purposes that the original capacity feature was introduced, this is sufficient granularity.

* Provide `advance` on Bytes and BytesMut

  This is the `advance` function that would be part of a `Buf` implementation. However, `Bytes` and `BytesMut` cannot impl `Buf` until the next breaking release. The implementation uses the additional storage made available by the previous commit to store the number of bytes by which the view was advanced. The `ptr` pointer will point to the start of the window, avoiding any pointer arithmetic when dereferencing the `Bytes` handle.
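The 3-bit bucket encoding described above can be sketched as follows (a minimal illustration consistent with the table, not the crate's internal code; function names are hypothetical):

```rust
// Map a capacity to one of 8 buckets: below 1k -> 0, otherwise round down
// to a power of two between 1k (bucket 1) and 64k (bucket 7).
fn compress_original_capacity(cap: usize) -> usize {
    let mut bucket = 0;
    while bucket < 7 && 1024 << bucket <= cap {
        bucket += 1;
    }
    bucket
}

// Recover the (rounded-down) capacity a bucket represents.
fn original_capacity(bucket: usize) -> usize {
    if bucket == 0 { 0 } else { 512 << bucket }
}

fn main() {
    assert_eq!(compress_original_capacity(1000), 0);    // below 1k: bucket 0
    assert_eq!(compress_original_capacity(1024), 1);    // exactly 1k
    assert_eq!(compress_original_capacity(5000), 3);    // rounds down to 4k
    assert_eq!(compress_original_capacity(1 << 20), 7); // clamped at 64k
    assert_eq!(original_capacity(3), 4096);
}
```

The round-trip is lossy by design: the bucket only needs to give `reserve` a reasonable baseline, not the exact original value.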
- Aug 18, 2017
Dan Burkert authored
* Inner: make uninitialized construction explicit
* Remove Inner2
* Remove unnecessary transmutes
* Use AtomicPtr::get_mut where possible
* Some minor tweaks
- Aug 17, 2017
Jef authored
- Aug 06, 2017
Alex Crichton authored
- Jul 02, 2017
Paul Collier authored
- Jul 01, 2017
Stepan Koltsov authored
The slice operation should return inline when possible; it is cheaper than an atomic increment/decrement.

Before this patch:

```
test slice_avg_le_inline_from_arc ... bench: 28,582 ns/iter (+/- 3,880)
test slice_empty ... bench: 8,797 ns/iter (+/- 1,325)
test slice_large_le_inline_from_arc ... bench: 27,684 ns/iter (+/- 5,920)
test slice_short_from_arc ... bench: 27,439 ns/iter (+/- 5,783)
```

After this patch:

```
test slice_avg_le_inline_from_arc ... bench: 18,872 ns/iter (+/- 2,937)
test slice_empty ... bench: 9,136 ns/iter (+/- 1,908)
test slice_large_le_inline_from_arc ... bench: 18,052 ns/iter (+/- 2,981)
test slice_short_from_arc ... bench: 18,200 ns/iter (+/- 2,534)
```
Georg Brandl authored
Stepan Koltsov authored
Clint Byrum authored
Saves the cognitive load of having to wrap them in slices to compare them, when that seems like what one would expect.

Signed-off-by: Clint Byrum <clint@fewbar.com>
- Jun 15, 2017
Sean McArthur authored
* use slice.to_vec instead of buf.put in From<[u8]>
* don't panic in fmt::Write for BytesMut
- May 26, 2017
Stepan Koltsov authored
- May 22, 2017
Stepan Koltsov authored
Return an empty `Bytes` object when the sliced range is empty.

Bench for `slice_empty`:

```
55 ns/iter (+/- 1) # before this patch
17 ns/iter (+/- 5) # with this patch
```

Bench for `slice_not_empty`:

```
25,058 ns/iter (+/- 1,099) # before this patch
25,072 ns/iter (+/- 1,593) # with this patch
```
Stepan Koltsov authored
- May 15, 2017
Stepan Koltsov authored
Rounding up to a power of 2 is not necessary, because `reserve` already doubles the previous capacity in

```
new_cap = cmp::max(
    cmp::max(v.capacity() << 1, new_cap),
    original_capacity);
```

which keeps `reserve` calls amortized constant. Avoiding the rounding prevents `reserve` from wasting space when the caller knows exactly how much space they need.

The patch adds three tests which would fail before it. The most important is this:

```
#[test]
fn reserve_in_arc_unique_does_not_overallocate() {
    let mut bytes = BytesMut::with_capacity(1000);
    bytes.take();

    // now bytes is Arc and refcount == 1
    assert_eq!(1000, bytes.capacity());
    bytes.reserve(2001);
    assert_eq!(2001, bytes.capacity());
}
```

It asserts that when the user requests more than double the current capacity, exactly the requested amount of memory is allocated, rather than being rounded up to the next power of two.
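The growth rule quoted above can be exercised standalone (a sketch of the formula only; `grown_capacity` is an illustrative name, not the crate's API):

```rust
// reserve doubles the old capacity, but never allocates less than the
// explicitly requested capacity or the remembered original capacity.
fn grown_capacity(current: usize, requested: usize, original: usize) -> usize {
    std::cmp::max(std::cmp::max(current << 1, requested), original)
}

fn main() {
    // A request within double the capacity grows to double: amortized O(1).
    assert_eq!(grown_capacity(1000, 1500, 1000), 2000);
    // A request above double is honored exactly, with no power-of-two padding.
    assert_eq!(grown_capacity(1000, 2001, 1000), 2001);
}
```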
Stepan Koltsov authored
`extend_with_slice` is a super-convenient operation on `Bytes`. While `put_u8` would be expensive on `Bytes`, `extend_from_slice` is OK, because it is batched and checks the kind only once. The patch also adds `impl Extend for Bytes`. cc #116
- May 02, 2017
Stepan Koltsov authored
Similar to `Vec::extend_from_slice`: it is a reserve followed by a memcpy.
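The "reserve then memcpy" pattern can be sketched in std-only Rust (an illustration on `Vec<u8>`, not the crate's implementation on `Bytes`):

```rust
fn extend_from_slice(buf: &mut Vec<u8>, extend: &[u8]) {
    // Step 1: one reserve for the whole slice, so capacity is checked once
    // for the entire batch rather than once per byte.
    buf.reserve(extend.len());
    // Step 2: bulk copy; with capacity ensured, no reallocation happens here.
    let old_len = buf.len();
    buf.resize(old_len + extend.len(), 0);
    buf[old_len..].copy_from_slice(extend);
}

fn main() {
    let mut buf = vec![1u8, 2];
    extend_from_slice(&mut buf, &[3, 4, 5]);
    assert_eq!(buf, vec![1, 2, 3, 4, 5]);
}
```

This is why a batched `extend_from_slice` stays cheap where a per-byte `put_u8` would not: the kind/capacity check is amortized over the whole slice.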
Stepan Koltsov authored
- May 01, 2017
Sean McArthur authored
- Apr 24, 2017
Phil Ruffwind authored
- Apr 14, 2017
jaystrictor authored
- Apr 06, 2017
Sean McArthur authored
- Mar 30, 2017
Carl Lerche authored
The shared debug_assert is there to ensure that the internal Bytes representation is one that supports offset views. The only representation that does not support offset views is vec. Fixes #97
- Mar 28, 2017
Stepan Koltsov authored
Stepan Koltsov authored
Before this commit `Bytes::split_{off,to}` always created a shallow copy if `self` is arc or vec.

However, in certain cases `split_off` or `split_to` is called with a `len` or `0` parameter. E.g. if you are reading a frame from a buffered stream, it is likely that the buffer contains exactly the frame-size bytes, so `split_to` will be called with the `len` param.

Although `split_off` and `split_to` are `O(1)` functions, a shallow copy has downsides:

* a shallow copy on a vector does a malloc and an atomic cmpxchg
* after a shallow copy, following operations (e.g. `drop`) on both `bytes` objects require atomics
* memory will probably be released to the system later
* `try_mut` will fail
* [into_vec](https://github.com/carllerche/bytes/issues/86) will copy
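The fast paths this motivates can be sketched std-only (the enum and function names here are illustrative, not the crate's API):

```rust
// Splitting at 0 or at `len` need not touch shared storage at all: the
// result is an empty handle or the whole handle. Only an interior split
// still requires a shallow copy.
#[derive(Debug, PartialEq)]
enum Split {
    Empty,          // split_to(0): the split-off prefix is empty
    Whole,          // split_to(len): the split-off prefix is everything
    Shallow(usize), // anything in between needs a shallow copy at `at`
}

fn classify_split_to(len: usize, at: usize) -> Split {
    match at {
        0 => Split::Empty,
        n if n == len => Split::Whole,
        n => Split::Shallow(n),
    }
}

fn main() {
    assert_eq!(classify_split_to(5, 0), Split::Empty);
    assert_eq!(classify_split_to(5, 5), Split::Whole);
    assert_eq!(classify_split_to(5, 2), Split::Shallow(2));
}
```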
- Mar 24, 2017
Alex Crichton authored