Unverified Commit dfce95b8 authored by Noah Zentzis, committed by Carl Lerche

Recycle space when reserving from Vec-backed Bytes (#197)

* Recycle space when reserving from Vec-backed Bytes

BytesMut::reserve, when called on a BytesMut instance which is backed by
a non-shared Vec<u8>, would previously just delegate to Vec::reserve
regardless of the current position in the buffer. If the BytesMut is
actually the trailing component of a larger Vec, the unused space before
it is never recycled. In applications that continually advance the
pointer to consume data as it arrives, this can cause the underlying
buffer to grow extremely large.

This commit checks whether there's extra space at the start of the
backing Vec in this case, and reuses the unused space if possible
instead of allocating.

* Avoid excessive copying when reusing Vec space

Only reuse space in a Vec-backed Bytes when doing so would gain back
more than half of the current capacity. This avoids excessive copy
operations when a large buffer is almost (but not completely) full.
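The reuse-vs-reallocate decision can be sketched on a plain (Vec<u8>, offset) pair; `recycle_reserve` is a hypothetical stand-in for the KIND_VEC branch of Inner::reserve, not the crate's actual code:

```rust
// Sketch (assumed names) of the recycling strategy: slide the live bytes
// back to the start of the allocation when the dead prefix both satisfies
// the request and wins back at least half of the visible capacity.
fn recycle_reserve(v: &mut Vec<u8>, off: &mut usize, additional: usize) {
    let len = v.len() - *off;       // live bytes after the offset
    let cap = v.capacity() - *off;  // capacity as seen from the offset
    if *off >= additional && *off >= cap / 2 {
        // Copy the live region back to index 0 and reset the offset; the
        // prefix space becomes writable tail capacity again.
        v.copy_within(*off.., 0);
        v.truncate(len);
        *off = 0;
    } else {
        // Not worth moving the data: fall back to growing the allocation.
        v.reserve(additional);
    }
}

fn main() {
    let mut v: Vec<u8> = Vec::with_capacity(16);
    v.extend_from_slice(b"0123456789012345");
    let mut off = 10; // 10 bytes consumed, 6 live
    recycle_reserve(&mut v, &mut off, 8);
    println!("off = {}, live = {:?}, capacity = {}", off, &v[..], v.capacity());
}
```

With off = 10 and 6 live bytes in a 16-byte allocation, the prefix (10 bytes) covers the request (8) and exceeds half the visible capacity (6 / 2 = 3), so the data is moved back and no reallocation occurs, mirroring the new test below.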
parent b68fa46e
@@ -2165,20 +2165,42 @@ impl Inner {
         }

         if kind == KIND_VEC {
-            // Currently backed by a vector, so just use `Vector::reserve`.
+            // If there's enough free space before the start of the buffer, then
+            // just copy the data backwards and reuse the already-allocated
+            // space.
+            //
+            // Otherwise, since backed by a vector, use `Vec::reserve`
             unsafe {
-                let (off, _) = self.uncoordinated_get_vec_pos();
-                let mut v = rebuild_vec(self.ptr, self.len, self.cap, off);
-                v.reserve(additional);
+                let (off, prev) = self.uncoordinated_get_vec_pos();

-                // Update the info
-                self.ptr = v.as_mut_ptr().offset(off as isize);
-                self.len = v.len() - off;
-                self.cap = v.capacity() - off;
+                // Only reuse space if we stand to gain at least capacity/2
+                // bytes of space back
+                if off >= additional && off >= (self.cap / 2) {
+                    // There's space - reuse it
+                    //
+                    // Just move the pointer back to the start after copying
+                    // data back.
+                    let base_ptr = self.ptr.offset(-(off as isize));
+                    ptr::copy(self.ptr, base_ptr, self.len);
+                    self.ptr = base_ptr;
+                    self.uncoordinated_set_vec_pos(0, prev);
+
+                    // Length stays constant, but since we moved backwards we
+                    // can gain capacity back.
+                    self.cap += off;
+                } else {
+                    // No space - allocate more
+                    let mut v = rebuild_vec(self.ptr, self.len, self.cap, off);
+                    v.reserve(additional);

-                // Drop the vec reference
-                mem::forget(v);
+                    // Update the info
+                    self.ptr = v.as_mut_ptr().offset(off as isize);
+                    self.len = v.len() - off;
+                    self.cap = v.capacity() - off;
+
+                    // Drop the vec reference
+                    mem::forget(v);
+                }
             }
             return;
         }
@@ -378,6 +378,21 @@ fn reserve_max_original_capacity_value() {
     assert_eq!(bytes.capacity(), 64 * 1024);
 }

+// Without either looking at the internals of the BytesMut or doing weird stuff
+// with the memory allocator, there's no good way to automatically verify from
+// within the program that this actually recycles memory. Instead, just exercise
+// the code path to ensure that the results are correct.
+#[test]
+fn reserve_vec_recycling() {
+    let mut bytes = BytesMut::from(Vec::with_capacity(16));
+    assert_eq!(bytes.capacity(), 16);
+    bytes.put("0123456789012345");
+    bytes.advance(10);
+    assert_eq!(bytes.capacity(), 6);
+    bytes.reserve(8);
+    assert_eq!(bytes.capacity(), 16);
+}
+
 #[test]
 fn reserve_in_arc_unique_does_not_overallocate() {
     let mut bytes = BytesMut::with_capacity(1000);