- Aug 06, 2017
Alex Crichton authored
- Jul 02, 2017
Paul Collier authored
- Jul 01, 2017
Stepan Koltsov authored
Slice operation should return inline when possible. It is cheaper than an atomic increment/decrement.

Before this patch:

```
test slice_avg_le_inline_from_arc   ... bench: 28,582 ns/iter (+/- 3,880)
test slice_empty                    ... bench:  8,797 ns/iter (+/- 1,325)
test slice_large_le_inline_from_arc ... bench: 27,684 ns/iter (+/- 5,920)
test slice_short_from_arc           ... bench: 27,439 ns/iter (+/- 5,783)
```

After this patch:

```
test slice_avg_le_inline_from_arc   ... bench: 18,872 ns/iter (+/- 2,937)
test slice_empty                    ... bench:  9,136 ns/iter (+/- 1,908)
test slice_large_le_inline_from_arc ... bench: 18,052 ns/iter (+/- 2,981)
test slice_short_from_arc           ... bench: 18,200 ns/iter (+/- 2,534)
```
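A sketch of the idea, not the crate's actual internals: when the requested range fits in the inline representation, copy the bytes instead of taking a refcounted view. `INLINE_CAP` here is a hypothetical constant standing in for the internal inline capacity.

```rust
use bytes::Bytes;

// Hypothetical inline capacity; the real constant is internal to the crate.
const INLINE_CAP: usize = 31;

// Sketch: small slices are copied rather than refcounted, because a
// memcpy of a few bytes beats an atomic increment/decrement pair.
fn slice_inline_sketch(this: &Bytes, begin: usize, end: usize) -> Bytes {
    if end - begin <= INLINE_CAP {
        Bytes::copy_from_slice(&this[begin..end]) // cheap copy path
    } else {
        this.slice(begin..end) // shallow, refcounted view
    }
}

fn main() {
    let b = Bytes::from(&b"hello world"[..]);
    assert_eq!(slice_inline_sketch(&b, 0, 5), b.slice(0..5));
}
```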
Georg Brandl authored
Stepan Koltsov authored
Clint Byrum authored
Saves the cognitive load of having to wrap values in slices in order to compare them, since direct comparison is what one would naturally expect to work.

Signed-off-by: Clint Byrum <clint@fewbar.com>
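Presumably this refers to `PartialEq` impls that let byte containers be compared against slices and vectors directly; a usage sketch:

```rust
use bytes::Bytes;

fn main() {
    let b = Bytes::from(&b"hello"[..]);

    // Previously one had to slice both sides to compare:
    assert_eq!(&b[..], &b"hello"[..]);

    // With the added impls, the direct comparisons just work:
    assert_eq!(b, &b"hello"[..]);
    assert_eq!(b, Vec::from(&b"hello"[..]));
}
```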
- Jun 27, 2017
Dan Burkert authored
The panic happens when `inner.bytes()` returns a slice smaller than the limit.
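A plausible shape of the fix, assuming this concerns a `Take`-style limiting adapter (a sketch, not the actual patch): clamp the returned slice to the smaller of the limit and what the inner buffer currently exposes, instead of indexing with the limit directly.

```rust
use std::cmp;

// Sketch: indexing with `..limit` panics when the inner slice is shorter
// than the limit; clamping first avoids that.
fn bytes_clamped(inner: &[u8], limit: usize) -> &[u8] {
    let n = cmp::min(inner.len(), limit);
    &inner[..n]
}

fn main() {
    // Inner buffer exposes fewer bytes than the limit: no panic.
    assert_eq!(bytes_clamped(b"abc", 10), b"abc");
    assert_eq!(bytes_clamped(b"abcdef", 4), b"abcd");
}
```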
- Jun 15, 2017
Sean McArthur authored
* use `slice.to_vec` instead of `buf.put` in `From<[u8]>`
* don't panic in `fmt::Write` for `BytesMut`
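For the second point, a sketch of a non-panicking `fmt::Write` (illustrative; the crate's actual impl may differ), using a stand-in growable buffer so the example is self-contained: grow the buffer instead of asserting on remaining capacity.

```rust
use std::fmt::{self, Write};

// Stand-in growable byte buffer for the sketch.
struct Buf(Vec<u8>);

impl Write for Buf {
    fn write_str(&mut self, s: &str) -> fmt::Result {
        // Reserve-and-copy instead of panicking when capacity runs out.
        self.0.extend_from_slice(s.as_bytes());
        Ok(())
    }
}

fn main() {
    let mut buf = Buf(Vec::with_capacity(4));
    // Longer than the initial capacity; must not panic.
    write!(buf, "hello {}", "world").unwrap();
    assert_eq!(buf.0, b"hello world");
}
```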
- May 26, 2017
Stepan Koltsov authored
- May 24, 2017
brianwp authored
- May 22, 2017
Stepan Koltsov authored
Return an empty `Bytes` object when the requested slice is empty.

The `slice_empty` bench difference is:

```
55 ns/iter (+/- 1) # before this patch
17 ns/iter (+/- 5) # with this patch
```

The `slice_not_empty` bench is unaffected:

```
25,058 ns/iter (+/- 1,099) # before this patch
25,072 ns/iter (+/- 1,593) # with this patch
```
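A sketch of the fast path (illustrative only): an empty result needs no view into the source at all, so the refcounting work can be skipped entirely.

```rust
use bytes::Bytes;

// Sketch: short-circuit the empty range before taking a refcounted view.
fn slice_empty_sketch(this: &Bytes, begin: usize, end: usize) -> Bytes {
    if begin == end {
        return Bytes::new(); // constant-time, no atomics
    }
    this.slice(begin..end)
}

fn main() {
    let b = Bytes::from(&b"hello"[..]);
    assert!(slice_empty_sketch(&b, 2, 2).is_empty());
    assert_eq!(slice_empty_sketch(&b, 0, 5), b);
}
```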
Stepan Koltsov authored
- May 15, 2017
Stepan Koltsov authored
Rounding up to a power of 2 is not necessary, because `reserve` already doubles the previous capacity in

```
new_cap = cmp::max(
    cmp::max(v.capacity() << 1, new_cap),
    original_capacity);
```

which makes `reserve` calls constant on average. Avoiding the round-up prevents `reserve` from wasting space when the caller knows exactly how much space they need.

The patch adds three tests which would fail before this patch. The most important is this:

```
#[test]
fn reserve_in_arc_unique_does_not_overallocate() {
    let mut bytes = BytesMut::with_capacity(1000);
    bytes.take();
    // now bytes is Arc and refcount == 1
    assert_eq!(1000, bytes.capacity());
    bytes.reserve(2001);
    assert_eq!(2001, bytes.capacity());
}
```

It asserts that when the user requests more than double the current capacity, exactly the requested amount of memory is allocated, rather than being wasted by rounding up to the next power of two.
Stepan Koltsov authored
`extend_from_slice` is a super-convenient operation on `Bytes`. While `put_u8` would be expensive on `Bytes`, `extend_from_slice` is fine because it works in batch and checks the representation kind only once. The patch also adds `impl Extend for Bytes`. cc #116
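A usage sketch of the batch-append pattern (shown on `BytesMut`, which has the same method; this commit adds the convenience to `Bytes` itself):

```rust
use bytes::BytesMut;

fn main() {
    let mut buf = BytesMut::with_capacity(16);

    // One kind/capacity check and one batched copy per call,
    // rather than a per-byte `put_u8` loop.
    buf.extend_from_slice(b"hello");
    buf.extend_from_slice(b" world");

    // `Extend` works too, at per-item granularity.
    buf.extend(b"!".iter().copied());

    assert_eq!(&buf[..], b"hello world!");
}
```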
- May 02, 2017
Jack O'Connor authored
Stepan Koltsov authored
Similar to `Vec::extend_from_slice`: it is a `reserve` followed by a memcpy.
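A minimal sketch of that shape (illustrative helper, not the crate's actual code):

```rust
use bytes::{BufMut, BytesMut};

// Sketch: what "a reserve followed by a memcpy" looks like in practice.
fn extend_from_slice_sketch(buf: &mut BytesMut, extend: &[u8]) {
    buf.reserve(extend.len()); // ensure capacity in one step
    buf.put_slice(extend);     // batch copy, no per-byte checks
}

fn main() {
    let mut buf = BytesMut::with_capacity(8);
    extend_from_slice_sketch(&mut buf, b"hello");
    assert_eq!(&buf[..], b"hello");
}
```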
Stepan Koltsov authored
Arthur Silva authored
- May 01, 2017
Sean McArthur authored
- Apr 30, 2017
Dan Burkert authored
- Apr 24, 2017
Phil Ruffwind authored
- Apr 14, 2017
jaystrictor authored
- Apr 06, 2017
Sean McArthur authored
- Mar 30, 2017
Carl Lerche authored
The shared debug_assert ensures that the internal `Bytes` representation is one that supports offset views. The only representation that does not support offset views is vec. Fixes #97
- Mar 28, 2017
Stepan Koltsov authored
Stepan Koltsov authored
Before this commit, `Bytes::split_{off,to}` always created a shallow copy if `self` was arc or vec.

However, in certain cases `split_off` or `split_to` is called with `len` or `0` as the parameter. E.g. when reading a frame from a buffered stream, it is likely that the buffer contains exactly the frame-size bytes, so `split_to` will be called with the `len` param.

Although `split_off` and `split_to` are `O(1)`, a shallow copy has downsides:

* a shallow copy of a vector does a malloc and an atomic cmpxchg
* after a shallow copy, subsequent operations (e.g. `drop`) on both `bytes` objects require atomics
* memory will probably be released to the system later
* `try_mut` will fail
* [into_vec](https://github.com/carllerche/bytes/issues/86) will copy
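The optimization this describes can be sketched as early returns for the two boundary cases (illustrative only, not the crate's exact internals):

```rust
use bytes::Bytes;
use std::mem;

// Sketch of the boundary-case fast paths.
fn split_to_fast_path(this: &mut Bytes, at: usize) -> Bytes {
    if at == this.len() {
        // split_to(len): hand over self, leave an empty value behind;
        // no shallow copy, no atomics.
        return mem::replace(this, Bytes::new());
    }
    if at == 0 {
        // split_to(0): nothing to hand out.
        return Bytes::new();
    }
    this.split_to(at) // general case: shallow, refcounted copy
}

fn main() {
    let mut b = Bytes::from(&b"frame"[..]);
    let frame = split_to_fast_path(&mut b, 5); // whole-buffer case
    assert_eq!(frame, Bytes::from(&b"frame"[..]));
    assert!(b.is_empty());
}
```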
- Mar 24, 2017
Alex Crichton authored
- Mar 21, 2017
Stepan Koltsov authored
The standard `Debug` implementation for `[u8]` is a comma-separated list of numbers. Since a large share of byte strings are in fact ASCII strings, or contain a lot of ASCII (e.g. HTTP), it is convenient to print them as ASCII when possible.
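A sketch of an ASCII-aware `Debug` for byte strings (illustrative; the crate's actual escaping rules may differ):

```rust
use std::fmt;

// Wrapper that renders printable ASCII as-is and escapes the rest.
struct BytesDebug<'a>(&'a [u8]);

impl<'a> fmt::Debug for BytesDebug<'a> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "b\"")?;
        for &b in self.0 {
            match b {
                b'"' => write!(f, "\\\"")?,
                b'\\' => write!(f, "\\\\")?,
                // Printable ASCII is shown as-is...
                0x20..=0x7e => write!(f, "{}", b as char)?,
                // ...everything else as a hex escape.
                _ => write!(f, "\\x{:02x}", b)?,
            }
        }
        write!(f, "\"")
    }
}

fn main() {
    // Prints: b"GET / HTTP/1.1\x0d\x0a"
    println!("{:?}", BytesDebug(b"GET / HTTP/1.1\r\n"));
}
```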
- Mar 19, 2017
Carl Lerche authored
Closes #79
Dan Burkert authored
I found this significantly improved a [benchmark](https://gist.github.com/danburkert/34a7d6680d97bc86dca7f396eb8d0abf) which calls `bytes_mut`, writes 1 byte, and advances the pointer with `advance_mut` in a pretty tight loop. In particular, it seems to be the inline annotation on `bytes_mut` which had the most effect. I also took the opportunity to simplify the bounds checking in `advance_mut`.

before:

```
test encode_varint_small ... bench: 540 ns/iter (+/- 85) = 1481 MB/s
```

after:

```
test encode_varint_small ... bench: 422 ns/iter (+/- 24) = 1895 MB/s
```

As you can see, the variance is also significantly improved.

Interestingly, I tried to change the last statement in `bytes_mut` from

```
&mut slice::from_raw_parts_mut(ptr, cap)[len..]
```

to

```
slice::from_raw_parts_mut(ptr.offset(len as isize), cap - len)
```

but this caused a very measurable perf regression (almost completely negating the gains from marking `bytes_mut` inline).
Dan Burkert authored
Also fixes an issue with a line wrap in the middle of an inline code block.
- Mar 16, 2017
Carl Lerche authored
- Mar 07, 2017
Carl Lerche authored
Carl Lerche authored
The `Source` trait was essentially covering the same case as `IntoBuf`, so remove it. While technically a breaking change, this should not have any impact due to:

1) There are no reverse dependencies that currently depend on `bytes`
2) `Source` was not supposed to be implemented externally
3) `IntoBuf` provides the same implementations as `Source`

Given these points, the change should be safe to apply.
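For context, a small usage sketch of the generic `put` that covers what `Source` used to (assuming the bytes 0.4-era API, where `BufMut::put` is generic over `IntoBuf`):

```rust
use bytes::{BufMut, BytesMut};

fn main() {
    let mut buf = BytesMut::with_capacity(64);

    // `put` accepts any buffer-convertible source: slices,
    // strings, other buffers, ...
    buf.put(&b"hello "[..]);
    buf.put(&b"world"[..]);

    assert_eq!(&buf[..], b"hello world");
}
```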
Carl Lerche authored
- Mar 03, 2017
Carl Lerche authored
This change tracks the original capacity requested when a `BytesMut` is first created. That capacity is used when `reserve` needs to allocate because the current view is too small: the newly allocated buffer will be sized the same as the original allocation.
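An illustrative sketch of the behavior being described (hypothetical sizes; the exact allocation outcome is internal to the crate):

```rust
use bytes::BytesMut;

fn main() {
    // Original allocation request: 1024 bytes.
    let mut buf = BytesMut::with_capacity(1024);
    buf.extend_from_slice(&[0u8; 1024]);

    // Carve off a frame; the remaining view is now tiny.
    let _frame = buf.split_to(1024);

    // When `reserve` must allocate, the new buffer is sized from the
    // original 1024-byte request rather than from the tiny current view.
    buf.reserve(16);
    assert!(buf.capacity() >= 16);
}
```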
- Mar 02, 2017
Carl Lerche authored
Carl Lerche authored
- Mar 01, 2017
Carl Lerche authored
Carl Lerche authored