- Jul 13, 2018
-
-
Rafael Ávila de Espíndola authored
With this change, if `foo` is a mutable slice it is possible to write `foo.into_buf().put_u32_le(42)`. Before this patch, `into_buf` would create a `Cursor<&'a [u8]>`, which could not be written into.
-
- May 24, 2018
-
-
Noah Zentzis authored
* Recycle space when reserving from Vec-backed BytesMut

  `BytesMut::reserve`, when called on a `BytesMut` backed by a non-shared `Vec<u8>`, previously delegated to `Vec::reserve` regardless of the current position in the buffer. If the `BytesMut` is actually the trailing component of a larger `Vec`, the unused space at the front is never recycled. In applications that continually move the pointer forward to consume data as it comes in, this can cause the underlying buffer to grow extremely large. This commit checks whether there is unused space at the start of the backing `Vec` and reuses it where possible instead of allocating.

* Avoid excessive copying when reusing Vec space

  Only reuse space in a Vec-backed `BytesMut` when doing so gains back more than half of the current capacity. This avoids excessive copy operations when a large buffer is almost (but not completely) full.
-
- May 11, 2018
-
-
Carl Lerche authored
-
- Mar 12, 2018
-
-
Sean McArthur authored
* Make `Buf` and `BufMut` usable as trait objects

  - All the `get_*` and `put_*` methods that take `T: ByteOrder` have a `where Self: Sized` bound added, so that they are only usable from sized types. It was impossible to make `Buf` or `BufMut` into trait objects before, so this change doesn't break anyone.
  - Add `get_n_be`/`get_n_le`/`put_n_be`/`put_n_le` methods that can be used on trait objects.
  - Deprecate the export of `ByteOrder` and the methods generic on it.

* Remove deprecated `ByteOrder` methods

  Removes the `_be` suffix from all methods, implying that the default people should use is network (big-endian) order.
-
Sean McArthur authored
- All the `get_*` and `put_*` methods that take `T: ByteOrder` have a `where Self: Sized` bound added, so that they are only usable from sized types. It was impossible to make `Buf` or `BufMut` into trait objects before, so this change doesn't break anyone.
- Add `get_n_be`/`get_n_le`/`put_n_be`/`put_n_le` methods that can be used on trait objects.
- Deprecate the export of `ByteOrder` and the methods generic on it.

Fixes #163
-
- Feb 26, 2018
-
-
Alan Somers authored
Add `Bytes::unsplit`, analogous to `BytesMut::unsplit`.
-
- Jan 26, 2018
-
-
Carl Lerche authored
Update to match master version of IoVec (0.2.0?), using IoVec/IoVecMut instead of &IoVec and &mut IoVec.
-
- Jan 06, 2018
-
-
jq-rs authored
* Handle empty `self` and `other` for `unsplit`.
* Change `extend()` to `extend_from_slice()`.
-
- Jan 03, 2018
-
-
jq-rs authored
Add `unsplit()` to `BytesMut`, which efficiently recombines contiguous memory blocks that were previously split.
-
- Dec 13, 2017
-
-
Carl Lerche authored
* Compact Bytes original capacity representation

  In order to avoid unnecessary allocations, a `Bytes` structure remembers the capacity with which it was first created. When a reserve operation is issued, this original capacity value is used as a baseline for reallocating new storage.

  Previously, this original capacity value was stored in its raw form: the original capacity `usize` was stored as-is. In order to reclaim some `Bytes` internal storage space for additional features, this value is now compressed from requiring 16 bits to 3. Instead of storing the exact original capacity, it is rounded down to the nearest power of two, and anything less than 1024 is rounded down to zero. This roughly means the original capacity is now stored as a table:

      0 => 0
      1 => 1k
      2 => 2k
      3 => 4k
      4 => 8k
      5 => 16k
      6 => 32k
      7 => 64k

  For the purposes the original capacity feature was introduced for, this granularity is sufficient.

* Provide `advance` on Bytes and BytesMut

  This is the `advance` function that would be part of a `Buf` implementation. However, `Bytes` and `BytesMut` cannot implement `Buf` until the next breaking release. The implementation uses the additional storage made available by the previous commit to store the number of bytes by which the view was advanced. The `ptr` pointer will point to the start of the window, avoiding any pointer arithmetic when dereferencing the `Bytes` handle.
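The rounding described above can be sketched in plain Rust (the function name mirrors, but is only assumed to match, the internal helper):

```rust
// Compress an original capacity into a 3-bit bucket: round down to a
// power of two, with anything under 1k mapping to bucket 0 and
// everything 64k and up saturating at bucket 7.
fn original_capacity_to_repr(cap: usize) -> usize {
    // Number of bits needed to represent `cap` (0 for cap == 0).
    let width = (usize::BITS - cap.leading_zeros()) as usize;
    // Bucket 1 starts at 1k == 2^10; bucket 7 covers 64k and beyond.
    width.saturating_sub(10).min(7)
}

fn main() {
    assert_eq!(original_capacity_to_repr(0), 0);
    assert_eq!(original_capacity_to_repr(1023), 0);
    assert_eq!(original_capacity_to_repr(1024), 1);    // 1k
    assert_eq!(original_capacity_to_repr(4096), 3);    // 4k
    assert_eq!(original_capacity_to_repr(65536), 7);   // 64k
    assert_eq!(original_capacity_to_repr(1 << 20), 7); // saturates
}
```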
-
- Oct 21, 2017
-
-
Carl Lerche authored
-
- Aug 17, 2017
-
-
Sean McArthur authored
-
- Jul 01, 2017
-
-
Clint Byrum authored
Saves the cognitive load of having to wrap values in slices to compare them, when direct comparison is what one would expect. Signed-off-by:
Clint Byrum <clint@fewbar.com>
-
- Jun 27, 2017
-
-
Dan Burkert authored
The panic happens when `inner.bytes()` returns a slice smaller than the limit.
-
- Jun 15, 2017
-
-
Sean McArthur authored
* Use `slice.to_vec` instead of `buf.put` in `From<[u8]>`
* Don't panic in `fmt::Write` for `BytesMut`
-
- May 24, 2017
-
-
brianwp authored
-
- May 22, 2017
-
-
Stepan Koltsov authored
Return an empty `Bytes` object for empty slices.

Bench for `slice_empty`:

```
55 ns/iter (+/- 1)  # before this patch
17 ns/iter (+/- 5)  # with this patch
```

Bench for `slice_not_empty`:

```
25,058 ns/iter (+/- 1,099)  # before this patch
25,072 ns/iter (+/- 1,593)  # with this patch
```
-
- May 15, 2017
-
-
Stepan Koltsov authored
Rounding up to a power of 2 is not necessary, because `reserve` already doubles the previous capacity in

```
new_cap = cmp::max(
    cmp::max(v.capacity() << 1, new_cap),
    original_capacity);
```

which makes `reserve` calls constant on average. Avoiding the rounding prevents `reserve` from wasting space when the caller knows exactly how much space they need.

The patch adds three tests which would fail before it. The most important is this:

```
#[test]
fn reserve_in_arc_unique_does_not_overallocate() {
    let mut bytes = BytesMut::with_capacity(1000);
    bytes.take();

    // now bytes is Arc and refcount == 1

    assert_eq!(1000, bytes.capacity());
    bytes.reserve(2001);
    assert_eq!(2001, bytes.capacity());
}
```

It asserts that when the user requests more than double the current capacity, exactly the requested amount of memory is allocated rather than being wasted rounding up to the next power of two.
-
Stepan Koltsov authored
`extend_from_slice` is a super-convenient operation on `Bytes`. While `put_u8` would be expensive on `Bytes`, `extend_from_slice` is OK, because it operates in batch and checks the representation kind only once. The patch also adds `impl Extend for Bytes`. cc #116
-
- May 02, 2017
-
-
Arthur Silva authored
-
- Apr 30, 2017
-
-
Dan Burkert authored
-
- Mar 30, 2017
-
-
Carl Lerche authored
The shared `debug_assert` ensures that the internal `Bytes` representation is one that supports offset views. The only representation that does not support offset views is vec. Fixes #97
-
- Mar 28, 2017
-
-
Stepan Koltsov authored
-
Stepan Koltsov authored
Before this commit, `Bytes::split_{off,to}` always created a shallow copy if `self` was arc- or vec-backed.

However, in certain cases `split_off` or `split_to` is called with a `len` or `0` argument. E.g. if you are reading a frame from a buffered stream, it is likely that the buffer contains exactly the frame's bytes, so `split_to` will be called with the `len` param.

Although `split_off` and `split_to` are `O(1)`, a shallow copy has downsides:

* a shallow copy of a vector does a malloc and an atomic cmpxchg
* after a shallow copy, subsequent operations (e.g. `drop`) on both `Bytes` objects require atomics
* memory will probably be released to the system later
* `try_mut` will fail
* [into_vec](https://github.com/carllerche/bytes/issues/86) will copy
-
- Mar 21, 2017
-
-
Stepan Koltsov authored
The standard `Debug` implementation for `[u8]` is a comma-separated list of numbers. Since a large share of byte strings are in fact ASCII strings, or contain a lot of ASCII (e.g. HTTP), it is convenient to print them as ASCII when possible.
-
- Mar 20, 2017
-
-
Carl Lerche authored
Limit the number of threads to 1 when running under qemu. Also, don't bother running the stress test there, as it triggers qemu bugs. Finally, make the stress test actually stress test.
-
- Mar 19, 2017
-
-
Carl Lerche authored
Closes #83
-
- Mar 07, 2017
-
-
Carl Lerche authored
-
- Mar 03, 2017
-
-
Carl Lerche authored
This change tracks the original capacity requested when `BytesMut` is first created. This capacity is used when a `reserve` needs to allocate due to the current view being too small. The newly allocated buffer will be sized the same as the original allocation.
-
- Mar 02, 2017
-
-
Carl Lerche authored
-
- Mar 01, 2017
-
-
Carl Lerche authored
-
Carl Lerche authored
-
Carl Lerche authored
Enables collecting the contents of a `Buf` value into a relevant concrete buffer implementation.
-
Carl Lerche authored
-
Carl Lerche authored
-
- Feb 21, 2017
-
-
Carl Lerche authored
-
Carl Lerche authored
Instead of providing a separate `try_reclaim` function, `reserve` will attempt to reclaim the existing buffer before allocating.
-
- Feb 20, 2017
-
-
Carl Lerche authored
The previous implementation didn't factor in a single `Bytes` handle being stored in an `Arc`. The new implementation correctly implements both `Bytes` and `BytesMut` such that both are `Sync`. The rewrite also increases the number of bytes that can be stored inline.
-