- Mar 28, 2017
-
-
Stepan Koltsov authored
Before this commit `Bytes::split_{off,to}` always created a shallow copy if `self` is backed by an arc or a vec. However, in certain cases `split_off` or `split_to` is called with an argument of `0` or the full length. E.g. if you are reading a frame from a buffered stream, it is likely that the buffer contains exactly the frame's size in bytes, so `split_to` will be called with the full length. Although `split_off` and `split_to` are `O(1)`, a shallow copy has downsides:

* a shallow copy of a vector does a malloc and an atomic cmpxchg
* after a shallow copy, subsequent operations (e.g. `drop`) on both `Bytes` objects require atomics
* memory will probably be released to the system later
* `try_mut` will fail
* [into_vec](https://github.com/carllerche/bytes/issues/86) will copy
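A minimal sketch of the special cases described above, written against the public `Bytes` API only (`Bytes::new`, `len`, `split_to`); this is not the crate's actual internals, just an illustration of why handling `0` and the full length separately avoids the shallow-copy costs listed:

```rust
use bytes::Bytes;

// Illustration only: serve the `0` and full-length cases without creating a
// shallow (ref-counted) copy of the shared storage.
fn split_to_front(buf: &mut Bytes, at: usize) -> Bytes {
    if at == 0 {
        // Nothing to split off; hand back an empty handle, `buf` is untouched.
        Bytes::new()
    } else if at == buf.len() {
        // Take the whole buffer; `buf` becomes empty and no refcount is touched.
        std::mem::replace(buf, Bytes::new())
    } else {
        // General case: O(1) shallow copy that shares the same storage.
        buf.split_to(at)
    }
}
```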
-
- Mar 24, 2017
-
-
Alex Crichton authored
-
- Mar 21, 2017
-
-
Stepan Koltsov authored
The standard `Debug` implementation for `[u8]` is a comma-separated list of numbers. Since a large share of byte strings are in fact ASCII strings, or contain many ASCII substrings (e.g. HTTP), it is convenient to print them as ASCII when possible.
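A minimal sketch of this kind of formatter, assuming nothing about the crate's actual implementation: printable ASCII bytes are written as characters, everything else as escape sequences.

```rust
use std::fmt;

// Illustration only: wrap a byte slice so `{:?}` prints it like a Rust byte
// string literal instead of a list of numbers.
struct AsciiDebug<'a>(&'a [u8]);

impl<'a> fmt::Debug for AsciiDebug<'a> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "b\"")?;
        for &b in self.0 {
            match b {
                b'\\' => write!(f, "\\\\")?,
                b'"' => write!(f, "\\\"")?,
                b'\n' => write!(f, "\\n")?,
                b'\r' => write!(f, "\\r")?,
                b'\t' => write!(f, "\\t")?,
                0x20..=0x7e => write!(f, "{}", b as char)?,
                _ => write!(f, "\\x{:02x}", b)?,
            }
        }
        write!(f, "\"")
    }
}

// `format!("{:?}", AsciiDebug(b"GET / HTTP/1.1\r\n"))` yields: b"GET / HTTP/1.1\r\n"
```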
-
- Mar 20, 2017
-
-
Carl Lerche authored
Limit the number of threads to 1 when running under qemu. Also, don't bother running the stress test under qemu, as it will trigger qemu bugs. Finally, make the stress test actually stress-test.
-
- Mar 19, 2017
-
-
Carl Lerche authored
Closes #79
-
Dan Burkert authored
I found this significantly improved a [benchmark](https://gist.github.com/danburkert/34a7d6680d97bc86dca7f396eb8d0abf) which calls `bytes_mut`, writes 1 byte, and advances the pointer with `advance_mut` in a pretty tight loop. In particular, it seems to be the inline annotation on `bytes_mut` which had the most effect. I also took the opportunity to simplify the bounds checking in `advance_mut`.

before:

```
test encode_varint_small ... bench: 540 ns/iter (+/- 85) = 1481 MB/s
```

after:

```
test encode_varint_small ... bench: 422 ns/iter (+/- 24) = 1895 MB/s
```

As you can see, the variance is also significantly improved.

Interestingly, I tried to change the last statement in `bytes_mut` from

```
&mut slice::from_raw_parts_mut(ptr, cap)[len..]
```

to

```
slice::from_raw_parts_mut(ptr.offset(len as isize), cap - len)
```

but this caused a very measurable perf regression (almost completely negating the gains from marking `bytes_mut` inline).
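A hedged sketch of the two changes described, on a made-up buffer type (the type, field names, and the exact check are illustrative, not the crate's code): the hot methods get `#[inline]`, and `advance_mut` does a single comparison against the remaining writable capacity.

```rust
use std::slice;

// Illustration only: a toy buffer with the same shape of hot paths.
struct RawBuf {
    ptr: *mut u8,
    len: usize,
    cap: usize,
}

impl RawBuf {
    #[inline]
    unsafe fn bytes_mut(&mut self) -> &mut [u8] {
        // Index into the full slice rather than doing pointer arithmetic; per the
        // benchmark above, this form optimized better once inlined.
        &mut slice::from_raw_parts_mut(self.ptr, self.cap)[self.len..]
    }

    #[inline]
    unsafe fn advance_mut(&mut self, cnt: usize) {
        // Single bounds check against the remaining writable capacity.
        assert!(cnt <= self.cap - self.len, "cannot advance past capacity");
        self.len += cnt;
    }
}
```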
-
Dan Burkert authored
Also fixes an issue with a line wrap in the middle of an inline code block.
-
Carl Lerche authored
Closes #83
-
- Mar 16, 2017
-
-
Carl Lerche authored
-
- Mar 15, 2017
-
-
Carl Lerche authored
-
Carl Lerche authored
-
Carl Lerche authored
-
- Mar 07, 2017
-
-
Carl Lerche authored
-
Carl Lerche authored
The `Source` trait essentially covered the same cases as `IntoBuf`, so remove it. While technically a breaking change, this should not have any impact because:

1) There are no reverse dependencies that currently depend on `bytes`.
2) `Source` was not supposed to be implemented externally.
3) `IntoBuf` provides the same implementations as `Source`.

Given these points, the change should be safe to apply.
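A small usage sketch, assuming the 0.4-era API where `IntoBuf` is implemented for `&[u8]` and yields a cursor implementing `Buf`; it only illustrates that `IntoBuf` covers the same ground `Source` did:

```rust
use bytes::{Buf, IntoBuf};

fn example() {
    // Convert a plain slice into a readable `Buf`, the role `Source` used to fill.
    let mut buf = (&b"hello"[..]).into_buf();
    assert_eq!(buf.get_u8(), b'h');
    assert_eq!(buf.remaining(), 4);
}
```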
-
Carl Lerche authored
-
- Mar 03, 2017
-
-
Carl Lerche authored
This change tracks the original capacity requested when `BytesMut` is first created. This capacity is used when a `reserve` needs to allocate due to the current view being too small. The newly allocated buffer will be sized the same as the original allocation.
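A minimal sketch of the idea, assuming nothing about `BytesMut`'s internals (the type and field names here are made up): remember the capacity requested at creation and use it as the minimum size when a later `reserve` has to allocate.

```rust
// Illustration only.
struct GrowBuf {
    data: Vec<u8>,
    original_capacity: usize,
}

impl GrowBuf {
    fn with_capacity(cap: usize) -> Self {
        GrowBuf {
            data: Vec::with_capacity(cap),
            original_capacity: cap,
        }
    }

    fn reserve(&mut self, additional: usize) {
        let needed = self.data.len() + additional;
        if needed > self.data.capacity() {
            // Size the fresh allocation at least as large as the original request.
            let target = needed.max(self.original_capacity);
            let mut fresh = Vec::with_capacity(target);
            fresh.extend_from_slice(&self.data);
            self.data = fresh;
        }
    }
}
```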
-
- Mar 02, 2017
-
-
Carl Lerche authored
-
Carl Lerche authored
-
- Mar 01, 2017
-
-
Carl Lerche authored
-
Carl Lerche authored
-
Carl Lerche authored
-
Carl Lerche authored
-
Alex Crichton authored
Add `?Sized` bounds so the traits work for DST objects, and also add impls for `Box` as well as `&mut`.
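A hedged sketch of the pattern on a stand-in trait (not the crate's actual trait definitions): the `?Sized` bound lets the forwarding impls for `&mut T` and `Box<T>` cover trait objects and other DSTs.

```rust
// Illustration only: forwarding impls that also accept unsized `T`.
trait BufLike {
    fn remaining(&self) -> usize;
}

impl<'a, T: BufLike + ?Sized> BufLike for &'a mut T {
    fn remaining(&self) -> usize {
        (**self).remaining()
    }
}

impl<T: BufLike + ?Sized> BufLike for Box<T> {
    fn remaining(&self) -> usize {
        (**self).remaining()
    }
}

fn takes_dyn(src: &mut dyn BufLike) -> usize {
    // `&mut dyn BufLike` works thanks to the `?Sized` forwarding impl.
    src.remaining()
}
```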
-
Carl Lerche authored
-
Carl Lerche authored
Enables collecting the contents of a `Buf` value into a relevant concrete buffer implementation.
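A small usage sketch, assuming the 0.4-era `Buf::collect` / `FromBuf` API; the point is just that a `Buf`'s remaining contents can be drained into a concrete buffer such as `Vec<u8>`:

```rust
use bytes::{Buf, IntoBuf};

fn example() {
    let buf = (&b"hello world"[..]).into_buf();
    // Collect everything remaining in the `Buf` into a `Vec<u8>`.
    let vec: Vec<u8> = buf.collect();
    assert_eq!(vec, b"hello world".to_vec());
}
```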
-
Carl Lerche authored
-
Carl Lerche authored
-
Carl Lerche authored
-
Carl Lerche authored
-
Alex Crichton authored
Allows links in other crates to link to crates.io docs of bytes itself.
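The usual mechanism for this is an `html_root_url` attribute at the crate root; the attribute, URL, and version below are illustrative assumptions, not necessarily what this commit added.

```rust
// Illustrative crate-root attribute; rustdoc uses it to resolve links from
// other crates' documentation to the published docs for `bytes`.
#![doc(html_root_url = "https://docs.rs/bytes/0.4")]
```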
-
- Feb 28, 2017
-
-
Carl Lerche authored
-
- Feb 24, 2017
-
-
Carl Lerche authored
-
- Feb 21, 2017
-
-
Carl Lerche authored
-
Carl Lerche authored
Instead of providing a separate `try_reclaim` function, `reserve` will attempt to reclaim the existing buffer before allocating.
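A minimal sketch of "reclaim before allocate" on a made-up view type (not the crate's internals; the real buffer must also be uniquely owned before it can be reclaimed): if the live bytes plus the requested headroom still fit in the existing allocation, slide them back to the front instead of allocating.

```rust
// Illustration only.
struct View {
    buf: Vec<u8>, // full backing storage
    start: usize, // bytes before `start` have already been split off / consumed
}

impl View {
    fn reserve(&mut self, additional: usize) {
        let live = self.buf.len() - self.start;
        if live + additional <= self.buf.capacity() {
            if self.start > 0 {
                // Reclaim: move the live bytes to offset 0, reusing the prefix.
                self.buf.copy_within(self.start.., 0);
                self.buf.truncate(live);
                self.start = 0;
            }
        } else {
            // Not enough room even after reclaiming; grow the allocation.
            self.buf.reserve(additional);
        }
    }
}
```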
-
- Feb 20, 2017
-
-
Carl Lerche authored
-
Carl Lerche authored
-
Carl Lerche authored
-
Carl Lerche authored
-
Carl Lerche authored
The previous implementation didn't account for a single `Bytes` handle being stored in an `Arc`. This new implementation correctly implements both `Bytes` and `BytesMut` such that both are `Sync`. The rewrite also increases the number of bytes that can be stored inline.
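A small usage sketch of the property being fixed, using only the public API: a single `Bytes` handle placed in an `Arc` and read from several threads, which is exactly what requires `Bytes: Sync`.

```rust
use bytes::Bytes;
use std::sync::Arc;
use std::thread;

fn example() {
    // One `Bytes` handle shared (not cloned) across threads via `Arc`.
    let shared = Arc::new(Bytes::from_static(b"hello"));
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let shared = Arc::clone(&shared);
            thread::spawn(move || assert_eq!(&shared[..], &b"hello"[..]))
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
}
```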
-
- Feb 17, 2017
-
-
Carl Lerche authored
-