- Jul 13, 2018
  - Carl Lerche authored
  - Sean McArthur authored
  - Rafael Ávila de Espíndola authored
    With this change, if `foo` is a mutable slice, it is possible to write `foo.into_buf().put_u32_le(42)`. Before this patch, `into_buf` would create a `Cursor<&'a [u8]>`, which could not be written into.
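
    A minimal sketch of what the change enables, assuming the bytes 0.4 API (`IntoBuf` together with `BufMut::put_u32_le`):

    ```rust
    use bytes::{BufMut, IntoBuf};

    fn main() {
        let mut data = [0u8; 4];
        {
            // A mutable slice now converts into a writable buffer
            // rather than a read-only Cursor<&'a [u8]>.
            let foo: &mut [u8] = &mut data[..];
            foo.into_buf().put_u32_le(42);
        }
        assert_eq!(data, [42, 0, 0, 0]);
    }
    ```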
  - Roman authored
  - Carl Lerche authored
  - Sean McArthur authored
  - Geoffry Song authored
    I noticed that the bare `[u8]` made rustdoc nightly unhappy.
- Jul 05, 2018
  - Roman authored
- Jul 03, 2018
  - Sean McArthur authored
    - Clones when the kind is INLINE or STATIC are sped up by more than 2x.
    - Clones when the kind is ARC are sped up by about 1/3.
    (The three kinds are illustrated in the sketch below.)
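
    The kind names above come from the commit message; which representation a given constructor produces is an internal detail of bytes 0.4, so this sketch is illustrative only:

    ```rust
    use bytes::Bytes;

    fn main() {
        // STATIC: borrows a 'static slice; cloning copies a pointer.
        let st = Bytes::from_static(b"static data");

        // INLINE: sufficiently short payloads are stored inline in the
        // handle itself, so cloning copies a few words.
        let inline = Bytes::from(&b"tiny"[..]);

        // ARC: larger payloads become reference-counted once shared, so
        // a clone bumps a refcount rather than copying the data.
        let arc = Bytes::from(vec![0u8; 4096]);
        let shared = arc.clone();

        drop((st.clone(), inline.clone(), shared));
    }
    ```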
- Jul 02, 2018
  - luben karavelov authored
- Jun 19, 2018
  - Ashley Mannix authored
- Jun 18, 2018
  - Carl Lerche authored
  - Carl Lerche authored
    The intent was to dual-license under MIT and Apache 2.0, but the licensing text was copy/pasted from rust-lang. Clarify the license as exclusively MIT. Fixes #215
- May 25, 2018
  - Carl Lerche authored
  - Carl Lerche authored
  - Carl Lerche authored
  - Carl Lerche authored
  - Carl Lerche authored
  - Luke Horsley authored
  - Carl Lerche authored
- May 24, 2018
  - Noah Zentzis authored
    * Recycle space when reserving from Vec-backed Bytes. `BytesMut::reserve`, when called on an instance backed by a non-shared `Vec<u8>`, previously delegated to `Vec::reserve` regardless of the current position in the buffer. If the buffer is actually the trailing component of a larger Vec, the unused space at the front is never recycled. In applications that continually move the pointer forward to consume data as it comes in, this can cause the underlying allocation to grow extremely large. This commit checks whether there is unused space at the start of the backing Vec and reuses it where possible instead of allocating.
    * Avoid excessive copying when reusing Vec space. Only reuse space in a Vec-backed buffer when doing so would gain back more than half of the current capacity; this avoids excessive copy operations when a large buffer is almost (but not completely) full. A usage sketch follows below.
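
    A sketch of the access pattern this change targets, assuming the bytes 0.4 API (in particular `BytesMut::advance`, which moves the read position forward within the backing storage):

    ```rust
    use bytes::{BufMut, BytesMut};

    fn main() {
        let mut buf = BytesMut::with_capacity(4096);
        buf.put_slice(&[0u8; 4096]);

        // Consume most of the buffer from the front; the backing Vec
        // now has 4000 dead bytes ahead of the 96 live ones.
        buf.advance(4000);

        // Previously this delegated straight to Vec::reserve and grew
        // the allocation; now it can shift the 96 live bytes to the
        // front and recycle the consumed space instead, since doing so
        // gains back more than half of the capacity.
        buf.reserve(1024);
        assert!(buf.capacity() - buf.len() >= 1024);
    }
    ```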
- May 11, 2018
  - Carl Lerche authored
- Apr 27, 2018
  - Carl Lerche authored
  - Carl Lerche authored
  - kohensu authored
    The new implementation tries to get the data directly from bytes() (this is possible most of the time); when bytes() does not hold enough data, it falls back to the previous code: copy the needed bytes into a temporary buffer before returning the data. Bench results:

        bench                    before                 after                x-faster
        get_f32::cursor           64 ns/iter (+/- 0)     20 ns/iter (+/- 0)    3.2
        get_f32::tbuf_1           77 ns/iter (+/- 1)     34 ns/iter (+/- 0)    2.3
        get_f32::tbuf_1_costly    87 ns/iter (+/- 0)     62 ns/iter (+/- 0)    1.4
        get_f32::tbuf_2          151 ns/iter (+/- 18)   160 ns/iter (+/- 1)    0.9
        get_f32::tbuf_2_costly   180 ns/iter (+/- 2)    187 ns/iter (+/- 2)    1.0
        get_f64::cursor           67 ns/iter (+/- 0)     21 ns/iter (+/- 0)    3.2
        get_f64::tbuf_1           80 ns/iter (+/- 0)     35 ns/iter (+/- 0)    2.3
        get_f64::tbuf_1_costly    82 ns/iter (+/- 3)     60 ns/iter (+/- 0)    1.4
        get_f64::tbuf_2          154 ns/iter (+/- 1)    164 ns/iter (+/- 0)    0.9
        get_f64::tbuf_2_costly   170 ns/iter (+/- 2)    187 ns/iter (+/- 1)    0.9
        get_u16::cursor           66 ns/iter (+/- 0)     20 ns/iter (+/- 0)    3.3
        get_u16::tbuf_1           77 ns/iter (+/- 0)     35 ns/iter (+/- 0)    2.2
        get_u16::tbuf_1_costly    85 ns/iter (+/- 2)     62 ns/iter (+/- 0)    1.4
        get_u16::tbuf_2          147 ns/iter (+/- 0)    154 ns/iter (+/- 0)    1.0
        get_u16::tbuf_2_costly   160 ns/iter (+/- 1)    177 ns/iter (+/- 0)    0.9
        get_u32::cursor           64 ns/iter (+/- 0)     20 ns/iter (+/- 0)    3.2
        get_u32::tbuf_1           77 ns/iter (+/- 0)     35 ns/iter (+/- 0)    2.2
        get_u32::tbuf_1_costly    91 ns/iter (+/- 2)     63 ns/iter (+/- 0)    1.4
        get_u32::tbuf_2          151 ns/iter (+/- 40)   157 ns/iter (+/- 0)    1.0
        get_u32::tbuf_2_costly   162 ns/iter (+/- 0)    180 ns/iter (+/- 0)    0.9
        get_u64::cursor           67 ns/iter (+/- 0)     20 ns/iter (+/- 0)    3.4
        get_u64::tbuf_1           78 ns/iter (+/- 0)     35 ns/iter (+/- 1)    2.2
        get_u64::tbuf_1_costly    87 ns/iter (+/- 1)     59 ns/iter (+/- 1)    1.5
        get_u64::tbuf_2          154 ns/iter (+/- 0)    160 ns/iter (+/- 0)    1.0
        get_u64::tbuf_2_costly   168 ns/iter (+/- 0)    184 ns/iter (+/- 0)    0.9
        get_u8::cursor            64 ns/iter (+/- 0)     19 ns/iter (+/- 0)    3.4
        get_u8::tbuf_1            77 ns/iter (+/- 0)     35 ns/iter (+/- 0)    2.2
        get_u8::tbuf_1_costly     68 ns/iter (+/- 0)     51 ns/iter (+/- 0)    1.3
        get_u8::tbuf_2            85 ns/iter (+/- 0)     43 ns/iter (+/- 0)    2.0
        get_u8::tbuf_2_costly     75 ns/iter (+/- 0)     61 ns/iter (+/- 0)    1.2
        get_u8::option            77 ns/iter (+/- 0)     59 ns/iter (+/- 0)    1.3

    Improvements on the basic std::Cursor implementation are clearly visible. The other implementations are specific to the bench tests and just wrap a static slice. The variants are:
    - tbuf_1: only one call to bytes() is needed.
    - tbuf_2: two calls to bytes() are needed to read more than one byte.
    - The _costly versions mark bytes(), remaining(), and advance() as #[inline(never)].
    The cases that are slightly slower correspond to implementations that are not really realistic: reading more than one byte per bytes() call is never possible there.
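
    A sketch of the fast-path/slow-path shape described above (simplified; the function name is illustrative, not the crate's actual internals):

    ```rust
    use bytes::Buf;

    fn get_u32_be<B: Buf>(buf: &mut B) -> u32 {
        if buf.bytes().len() >= 4 {
            // Fast path: the current chunk already holds enough bytes,
            // so read directly from it and advance past them.
            let v = {
                let b = buf.bytes();
                u32::from_be_bytes([b[0], b[1], b[2], b[3]])
            };
            buf.advance(4);
            v
        } else {
            // Slow path: the value straddles chunk boundaries; copy
            // into a temporary buffer first, as the old code always did.
            let mut tmp = [0u8; 4];
            buf.copy_to_slice(&mut tmp);
            u32::from_be_bytes(tmp)
        }
    }
    ```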
  - Alan Somers authored
  - kohensu authored
- Mar 12, 2018
  - Anthony Ramine authored
  - Alan Somers authored
  - Carl Lerche authored
    This patch fixes the `copy_to_slice` function, rectifying its loop logic. The incorrect code did not actually produce incorrect behavior: the only case where `cnt != src.len()` is the final iteration, and since `src.len()` is greater than `cnt` there, `off` was incremented by too much, which still terminates the `off < dst.len()` loop. The only real danger was that `src.len()` could cause an overflow.
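
    A safe sketch of the corrected loop shape (simplified from the default `Buf::copy_to_slice`; the fix is advancing `off` by `cnt` rather than by `src.len()`):

    ```rust
    use bytes::Buf;
    use std::cmp;

    fn copy_to_slice<B: Buf>(buf: &mut B, dst: &mut [u8]) {
        assert!(buf.remaining() >= dst.len());
        let mut off = 0;
        while off < dst.len() {
            let cnt;
            {
                // Copy from the current chunk, limited both by what the
                // chunk holds and by what `dst` still needs.
                let src = buf.bytes();
                cnt = cmp::min(src.len(), dst.len() - off);
                dst[off..off + cnt].copy_from_slice(&src[..cnt]);
                // The fix: count only the bytes actually copied.
                off += cnt;
            }
            buf.advance(cnt);
        }
    }
    ```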
  - Carl Lerche authored
  - Sean McArthur authored
    * Make Buf and BufMut usable as trait objects:
      - All the `get_*` and `put_*` methods that take `T: ByteOrder` gain a `where Self: Sized` bound, so they are only usable from sized types. It was impossible to make `Buf` or `BufMut` into trait objects before, so this change doesn't break anyone.
      - Add `get_n_be`/`get_n_le`/`put_n_be`/`put_n_le` methods that can be used on trait objects.
      - Deprecate the export of `ByteOrder` and the methods generic over it.
    * Remove the deprecated ByteOrder methods: the `_be` suffix is dropped from all methods, making network (big-endian) byte order the default. An example follows below.
  - Sean McArthur authored
    - All the `get_*` and `put_*` methods that take `T: ByteOrder` gain a `where Self: Sized` bound, so they are only usable from sized types. It was impossible to make `Buf` or `BufMut` into trait objects before, so this change doesn't break anyone.
    - Add `get_n_be`/`get_n_le`/`put_n_be`/`put_n_le` methods that can be used on trait objects.
    - Deprecate the export of `ByteOrder` and the methods generic over it.
    Fixes #163
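
    A small sketch of what the `Self: Sized` bounds enable, using the post-rename method names described above (assuming the bytes 0.4 API):

    ```rust
    use bytes::Buf;
    use std::io::Cursor;

    // `Buf` is now object-safe, so it can be passed as `&mut dyn Buf`,
    // and the concrete (non-generic) getters work through it.
    fn read_header(buf: &mut dyn Buf) -> (u16, u32) {
        let kind = buf.get_u16();   // big-endian (network order) default
        let len = buf.get_u32_le(); // explicit little-endian variant
        (kind, len)
    }

    fn main() {
        let mut cur = Cursor::new(vec![0x00, 0x01, 0x10, 0x00, 0x00, 0x00]);
        assert_eq!(read_header(&mut cur), (1, 16));
    }
    ```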
- Feb 26, 2018
  - Alan Somers authored
    Add `Bytes::unsplit`, analogous to `BytesMut::unsplit`.
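
    A minimal usage sketch of the new method:

    ```rust
    use bytes::Bytes;

    fn main() {
        let mut hello = Bytes::from(&b"hello world"[..]);
        // Split the buffer in two, then glue the halves back together.
        let world = hello.split_off(5);
        assert_eq!(&hello[..], b"hello");

        hello.unsplit(world);
        assert_eq!(&hello[..], b"hello world");
    }
    ```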
- Jan 29, 2018
  - Carl Lerche authored
  - Carl Lerche authored
  - Carl Lerche authored