- Nov 17, 2018
Carl Lerche authored
Ralf Jung authored
Shared references assert immutability, so any concurrent write access would be UB, even disregarding data race concerns.
Michal 'vorner' Vaner authored
There's no reason the user should be forced to wrap it in a BufReader when the trait is needed, because the Reader already has all the bits to support it naturally.
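As an illustration, a minimal sketch of what this enables, using the current bytes API (where `&[u8]` implements `Buf` and `Buf::reader()` returns a `Reader` that implements `BufRead`):

```
use bytes::Buf;
use std::io::BufRead;

fn main() {
    // Any Buf can be read line by line directly,
    // with no BufReader wrapper in between.
    let data: &[u8] = b"hello\nworld\n";
    let mut reader = data.reader();
    let mut line = String::new();
    reader.read_line(&mut line).unwrap();
    assert_eq!(line, "hello\n");
}
```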
Michal 'vorner' Vaner authored
The fact that Buf and BufMut can return a shorter slice than what remains is quite an important detail. Nevertheless, while it is mentioned in the documentation, the wording makes it relatively easy to overlook. This tries to bring more attention to it.
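For context, a minimal sketch of a consumption loop that stays correct in the face of short slices, written against the 0.4-era `Buf` names used here (`bytes()` became `chunk()` in later releases):

```
use bytes::Buf;

// Copy all remaining bytes out of any Buf, tolerating short slices.
fn drain_to_vec<B: Buf>(buf: &mut B) -> Vec<u8> {
    let mut out = Vec::with_capacity(buf.remaining());
    while buf.remaining() > 0 {
        let n = {
            // bytes() may return fewer bytes than remaining(),
            // e.g. for chained or rope-like buffers.
            let chunk = buf.bytes();
            out.extend_from_slice(chunk);
            chunk.len()
        };
        buf.advance(n);
    }
    out
}

fn main() {
    let mut buf = std::io::Cursor::new(&b"abc"[..]);
    assert_eq!(drain_to_vec(&mut buf), b"abc");
}
```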
Carl Lerche authored
- Sep 04, 2018
Carl Lerche authored
- Sep 03, 2018
Carl Lerche authored
Carl Lerche authored
- Sep 02, 2018
Federico Mena Quintero authored
This lets us take a Bytes and a &[u8] slice contained within it, and create a new Bytes corresponding to that subset slice. Closes #198
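A minimal usage sketch; in today's bytes API this operation is `Bytes::slice_ref`:

```
use bytes::Bytes;

fn main() {
    let bytes = Bytes::from(&b"hello world"[..]);
    // Borrow a sub-slice of the underlying storage...
    let sub: &[u8] = &bytes[6..11];
    // ...and promote it back into an owned Bytes that
    // shares the same backing buffer, without copying.
    let world = bytes.slice_ref(sub);
    assert_eq!(&world[..], b"world");
}
```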
- Jul 23, 2018
Carl Lerche authored
- Jul 13, 2018
Sean McArthur authored
Rafael Ávila de Espíndola authored
With this, if `foo` is a mutable slice, it is possible to do `foo.into_buf().put_u32_le(42);`. Before this patch, `into_buf` would create a `Cursor<&'a [u8]>`, and it was not possible to write into it.
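A sketch of the resulting usage, against the bytes 0.4 API described here (`IntoBuf` was removed in later releases):

```
use bytes::{BufMut, IntoBuf};

fn main() {
    let mut storage = [0u8; 8];
    {
        // A &mut [u8] now converts into a writable buffer
        // (a Cursor<&mut [u8]>) rather than a read-only one.
        let mut buf = (&mut storage[..]).into_buf();
        buf.put_u32_le(42);
    }
    assert_eq!(storage, [42, 0, 0, 0, 0, 0, 0, 0]);
}
```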
Roman authored
- Jul 03, 2018
Sean McArthur authored
- Clones when the kind is INLINE or STATIC are sped up by over 2x.
- Clones when the kind is ARC are sped up by about 1/3.
- Jul 02, 2018
luben karavelov authored
- Jun 19, 2018
Ashley Mannix authored
- Jun 18, 2018
Carl Lerche authored
The intent was to dual-license the crate under MIT and Apache 2.0; however, the messaging was copy/pasted from rust-lang. Clarify the license as exclusively MIT. Fixes #215
- May 25, 2018
Carl Lerche authored
Carl Lerche authored
Luke Horsley authored
Carl Lerche authored
- May 24, 2018
Noah Zentzis authored
* Recycle space when reserving from Vec-backed Bytes

  BytesMut::reserve, when called on a BytesMut instance which is backed by a non-shared Vec<u8>, would previously just delegate to Vec::reserve regardless of the current location in the buffer. If the BytesMut is actually the trailing component of a larger Vec, then the unused space at the front won't be recycled. In applications which continually move the pointer forward to consume data as it comes in, this can cause the underlying buffer to get extremely large. This commit checks whether there's extra space at the start of the backing Vec in this case, and reuses the unused space if possible instead of allocating.

* Avoid excessive copying when reusing Vec space

  Only reuse space in a Vec-backed Bytes when doing so would gain back more than half of the current capacity. This avoids excessive copy operations when a large buffer is almost (but not completely) full.
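For illustration, a sketch of the streaming pattern this targets (assuming current bytes semantics, where BytesMut implements Buf):

```
use bytes::{Buf, BufMut, BytesMut};

fn main() {
    let mut buf = BytesMut::with_capacity(4096);
    for _ in 0..1000 {
        buf.put_slice(&[0u8; 512]); // data arrives at the back
        buf.advance(512);           // data is consumed from the front
        // Without recycling, reserve() sees no spare capacity and keeps
        // growing the allocation; with this change it can slide the
        // contents back and reuse the consumed region at the front.
        buf.reserve(512);
    }
    // The buffer stays bounded instead of growing toward ~512 KB.
    assert!(buf.capacity() < 512 * 1000);
}
```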
- May 11, 2018
Carl Lerche authored
- Apr 27, 2018
Carl Lerche authored
kohensu authored
The new implementation tries to get the data directly from bytes() (this is possible most of the time), and if there is not enough data in bytes(), falls back to the previous code: copy the needed bytes into a temporary buffer before returning the data.

Here are the bench results:

                            Before                After                 x-faster
    get_f32::cursor         64 ns/iter (+/- 0)    20 ns/iter (+/- 0)    3.2
    get_f32::tbuf_1         77 ns/iter (+/- 1)    34 ns/iter (+/- 0)    2.3
    get_f32::tbuf_1_costly  87 ns/iter (+/- 0)    62 ns/iter (+/- 0)    1.4
    get_f32::tbuf_2         151 ns/iter (+/- 18)  160 ns/iter (+/- 1)   0.9
    get_f32::tbuf_2_costly  180 ns/iter (+/- 2)   187 ns/iter (+/- 2)   1.0
    get_f64::cursor         67 ns/iter (+/- 0)    21 ns/iter (+/- 0)    3.2
    get_f64::tbuf_1         80 ns/iter (+/- 0)    35 ns/iter (+/- 0)    2.3
    get_f64::tbuf_1_costly  82 ns/iter (+/- 3)    60 ns/iter (+/- 0)    1.4
    get_f64::tbuf_2         154 ns/iter (+/- 1)   164 ns/iter (+/- 0)   0.9
    get_f64::tbuf_2_costly  170 ns/iter (+/- 2)   187 ns/iter (+/- 1)   0.9
    get_u16::cursor         66 ns/iter (+/- 0)    20 ns/iter (+/- 0)    3.3
    get_u16::tbuf_1         77 ns/iter (+/- 0)    35 ns/iter (+/- 0)    2.2
    get_u16::tbuf_1_costly  85 ns/iter (+/- 2)    62 ns/iter (+/- 0)    1.4
    get_u16::tbuf_2         147 ns/iter (+/- 0)   154 ns/iter (+/- 0)   1.0
    get_u16::tbuf_2_costly  160 ns/iter (+/- 1)   177 ns/iter (+/- 0)   0.9
    get_u32::cursor         64 ns/iter (+/- 0)    20 ns/iter (+/- 0)    3.2
    get_u32::tbuf_1         77 ns/iter (+/- 0)    35 ns/iter (+/- 0)    2.2
    get_u32::tbuf_1_costly  91 ns/iter (+/- 2)    63 ns/iter (+/- 0)    1.4
    get_u32::tbuf_2         151 ns/iter (+/- 40)  157 ns/iter (+/- 0)   1.0
    get_u32::tbuf_2_costly  162 ns/iter (+/- 0)   180 ns/iter (+/- 0)   0.9
    get_u64::cursor         67 ns/iter (+/- 0)    20 ns/iter (+/- 0)    3.4
    get_u64::tbuf_1         78 ns/iter (+/- 0)    35 ns/iter (+/- 1)    2.2
    get_u64::tbuf_1_costly  87 ns/iter (+/- 1)    59 ns/iter (+/- 1)    1.5
    get_u64::tbuf_2         154 ns/iter (+/- 0)   160 ns/iter (+/- 0)   1.0
    get_u64::tbuf_2_costly  168 ns/iter (+/- 0)   184 ns/iter (+/- 0)   0.9
    get_u8::cursor          64 ns/iter (+/- 0)    19 ns/iter (+/- 0)    3.4
    get_u8::tbuf_1          77 ns/iter (+/- 0)    35 ns/iter (+/- 0)    2.2
    get_u8::tbuf_1_costly   68 ns/iter (+/- 0)    51 ns/iter (+/- 0)    1.3
    get_u8::tbuf_2          85 ns/iter (+/- 0)    43 ns/iter (+/- 0)    2.0
    get_u8::tbuf_2_costly   75 ns/iter (+/- 0)    61 ns/iter (+/- 0)    1.2
    get_u8::option          77 ns/iter (+/- 0)    59 ns/iter (+/- 0)    1.3

The improvement on the basic std::Cursor implementation is clearly visible. The other implementations are specific to the bench tests and just wrap a static slice. The variants are:

- tbuf_1: only one call to bytes() is needed.
- tbuf_2: two calls to bytes() are needed to read more than one byte.
- the _costly versions are implemented with #[inline(never)] on bytes(), remaining() and advance().

The cases that are slightly slower correspond to implementations that are not really realistic: buffers that can never return more than one byte at a time.
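A minimal sketch of the fast-path idea (not the crate's exact code), using the 0.4-era names bytes() and copy_to_slice():

```
use bytes::Buf;
use std::io::Cursor;

// Read a little-endian u32 from any Buf: fast path when the current
// slice already holds enough contiguous bytes, slow path otherwise.
fn get_u32_le_sketch<B: Buf>(buf: &mut B) -> u32 {
    let fast = {
        let chunk = buf.bytes();
        if chunk.len() >= 4 {
            // Fast path: decode directly from the front of the slice.
            Some(u32::from_le_bytes([chunk[0], chunk[1], chunk[2], chunk[3]]))
        } else {
            None
        }
    };
    match fast {
        Some(v) => {
            buf.advance(4);
            v
        }
        None => {
            // Slow path: gather the bytes into a temporary buffer first.
            let mut tmp = [0u8; 4];
            buf.copy_to_slice(&mut tmp);
            u32::from_le_bytes(tmp)
        }
    }
}

fn main() {
    let mut buf = Cursor::new(&[42u8, 0, 0, 0][..]);
    assert_eq!(get_u32_le_sketch(&mut buf), 42);
}
```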
Alan Somers authored
- Mar 12, 2018
Sean McArthur authored
- All the `get_*` and `put_*` methods that take `T: ByteOrder` have a `where Self: Sized` bound added, so that they are only usable from sized types. It was impossible to make `Buf` or `BufMut` into trait objects before, so this change doesn't break anyone.
- Add `get_n_be`/`get_n_le`/`put_n_be`/`put_n_le` methods that can be used on trait objects.
- Deprecate the export of `ByteOrder` and methods generic on it.

Fixes #163
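A self-contained sketch of the object-safety pattern this relies on (hypothetical trait, not the crate's definition): a generic method bounded by `where Self: Sized` is excluded from the vtable, so the trait can still be made into a trait object.

```
// Hypothetical trait illustrating the pattern.
trait ReadBytes {
    fn next_byte(&mut self) -> u8;

    // Generic over a type parameter: excluded from the vtable
    // by the `Self: Sized` bound.
    fn get_num<T: From<u8>>(&mut self) -> T
    where
        Self: Sized,
    {
        T::from(self.next_byte())
    }

    // Non-generic counterpart: callable through `dyn ReadBytes`.
    fn get_u8(&mut self) -> u8 {
        self.next_byte()
    }
}

struct Ones;
impl ReadBytes for Ones {
    fn next_byte(&mut self) -> u8 { 1 }
}

fn main() {
    // The trait object is legal despite the generic method.
    let mut obj: Box<dyn ReadBytes> = Box::new(Ones);
    assert_eq!(obj.get_u8(), 1);
}
```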
- Jan 29, 2018
Carl Lerche authored
- Jan 08, 2018
Carl Lerche authored
- Jan 06, 2018
jq-rs authored
* Handle empty self and other for unsplit.
* Change extend() to extend_from_slice().
- Jan 03, 2018
Stepan Koltsov authored
If `shallow_clone` is called with `&mut self`, and `Bytes` contains a `Vec`, then the expensive CAS can be avoided, because no other thread has references to this `Bytes` object.

Bench `split_off_and_drop` difference, before the diff:

```
test split_off_and_drop ... bench: 91,858 ns/iter (+/- 17,401)
```

With the diff:

```
test split_off_and_drop ... bench: 81,162 ns/iter (+/- 17,603)
```
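A toy sketch of the underlying idea (hypothetical type, not the crate's internals): taking `&mut self` proves the handle is not shared, so promoting owned storage to shared storage needs no atomic compare-and-swap, just an in-place rewrite.

```
use std::sync::Arc;

// Toy buffer: starts as an owned Vec, promoted to shared storage
// on the first shallow clone.
enum Storage {
    Owned(Vec<u8>),
    Shared(Arc<Vec<u8>>),
}

impl Storage {
    // `&mut self` guarantees no other thread is racing on this
    // handle, so no CAS is needed for the promotion.
    fn shallow_clone(&mut self) -> Arc<Vec<u8>> {
        if let Storage::Owned(vec) = self {
            let arc = Arc::new(std::mem::take(vec));
            *self = Storage::Shared(arc);
        }
        match self {
            Storage::Shared(arc) => Arc::clone(arc),
            Storage::Owned(_) => unreachable!(),
        }
    }
}

fn main() {
    let mut s = Storage::Owned(vec![1, 2, 3]);
    let a = s.shallow_clone();
    let b = s.shallow_clone();
    assert!(Arc::ptr_eq(&a, &b));
}
```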
jq-rs authored
Add support for unsplit() to BytesMut, which recombines split contiguous memory blocks efficiently.
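A usage sketch with the current bytes API:

```
use bytes::BytesMut;

fn main() {
    let mut buf = BytesMut::from(&b"hello world"[..]);

    // Split the buffer into two handles over the same allocation...
    let tail = buf.split_off(5);
    assert_eq!(&buf[..], b"hello");
    assert_eq!(&tail[..], b" world");

    // ...and glue them back together. Because the halves are
    // contiguous in memory, this reuses the original allocation
    // instead of copying.
    buf.unsplit(tail);
    assert_eq!(&buf[..], b"hello world");
}
```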
- Dec 16, 2017
Carl Lerche authored
Fixes #164
- Dec 13, 2017
Carl Lerche authored
* Compact Bytes original capacity representation

  In order to avoid unnecessary allocations, a `Bytes` structure remembers the capacity with which it was first created. When a reserve operation is issued, this original capacity value is used as a baseline for reallocating new storage.

  Previously, this original capacity value was stored in its raw form; in other words, the original capacity `usize` was stored as is. In order to reclaim some `Bytes` internal storage space for additional features, this original capacity value is compressed from requiring 16 bits to 3. Instead of storing the exact original capacity, the value is rounded down to the nearest power of two, and if the original capacity is less than 1024, it is rounded down to zero. This roughly means that the original capacity is now stored as a table:

      0 => 0
      1 => 1k
      2 => 2k
      3 => 4k
      4 => 8k
      5 => 16k
      6 => 32k
      7 => 64k

  For the purposes that the original capacity feature was introduced, this is sufficient granularity.

* Provide `advance` on Bytes and BytesMut

  This is the `advance` function that would be part of a `Buf` implementation. However, `Bytes` and `BytesMut` cannot impl `Buf` until the next breaking release. The implementation uses the additional storage made available by the previous commit to store the number of bytes that the view was advanced. The `ptr` pointer will point to the start of the window, avoiding any pointer arithmetic when dereferencing the `Bytes` handle.
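A sketch of the 3-bit encoding described above (hypothetical helper names, not the crate's internals):

```
// Compress an original capacity into 3 bits: values below 1k map to 0,
// everything else rounds down to a power of two between 1k and 64k.
fn compress_capacity(cap: usize) -> u8 {
    if cap < 1024 {
        0
    } else {
        // floor(log2(cap)); 1024 -> 1, 2048 -> 2, ..., >= 64k -> 7
        let pow = (usize::BITS - 1 - cap.leading_zeros()) as u8;
        (pow - 9).min(7)
    }
}

fn decompress_capacity(code: u8) -> usize {
    if code == 0 { 0 } else { 1usize << (code as usize + 9) }
}

fn main() {
    assert_eq!(compress_capacity(512), 0);     // below 1k
    assert_eq!(compress_capacity(3000), 2);    // rounds down to 2k
    assert_eq!(compress_capacity(1 << 20), 7); // clamps at 64k
    assert_eq!(decompress_capacity(3), 4096);
}
```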
- Oct 21, 2017
Carl Lerche authored
- Aug 18, 2017
Dan Burkert authored
* Inner: make uninitialized construction explicit
* Remove Inner2
* Remove unnecessary transmutes
* Use AtomicPtr::get_mut where possible
* Some minor tweaks
- Aug 17, 2017
Jef authored
Sean McArthur authored
- Aug 12, 2017
Carl Lerche authored
- Aug 06, 2017
Alex Crichton authored