- Dec 18, 2013
-
Asias He authored
Signed-off-by: Asias He <asias@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-
- Dec 17, 2013
-
Asias He authored
Signed-off-by: Asias He <asias@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-
Asias He authored
Signed-off-by: Asias He <asias@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-
Asias He authored
We can skip constructing a vring::sg_node.
Signed-off-by: Asias He <asias@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-
- Dec 16, 2013
-
Avi Kivity authored
bsd defines some m_ macros, for example m_flags, to save some typing. However, if you have a variable of the same name in another header, for example m_flags, have fun trying to compile your code. Expand the code in place and eliminate the macros.
Signed-off-by: Avi Kivity <avi@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
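The collision this commit describes can be pictured with a short, purely illustrative sketch (simplified, hypothetical member names, not the actual OSv/bsd headers):

    struct m_hdr { int mh_flags; };
    struct mbuf  { m_hdr hdr; };
    #define m_flags hdr.mh_flags          // shorthand so drivers can write m->m_flags

    inline int flags_of(mbuf* m) { return m->m_flags; }   // expands to m->hdr.mh_flags

    // Any other header is now poisoned by the macro:
    //   class stats { int m_flags; };    // becomes "int hdr.mh_flags;" -- fails to compile
    // The commit expands the shorthand in place (m->hdr.mh_flags) and drops the macro.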
-
Vlad authored
Switched the virtio-net driver to use if_transmit() instead of the legacy if_start(). This saves us at least 2 additional lock/unlock sequences per mbuf, since IF_ENQUEUE() and IF_DEQUEUE() take a lock when pushing/removing the mbuf from the queue if the ifnet is in legacy mode.
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
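A toy model of the locking difference (simplified types, not the real ifnet API; only if_start/if_transmit and IF_ENQUEUE/IF_DEQUEUE are from the commit message):

    #include <mutex>
    #include <queue>

    struct mbuf {};
    struct tx_driver { void xmit(mbuf*) { /* place the buffer on the tx vring */ } };

    // Legacy if_start(): every packet bounces through the locked if_snd queue,
    // costing an IF_ENQUEUE lock/unlock and an IF_DEQUEUE lock/unlock.
    struct legacy_path {
        std::mutex snd_lock;
        std::queue<mbuf*> if_snd;
        tx_driver drv;
        void output(mbuf* m) {
            { std::lock_guard<std::mutex> g(snd_lock); if_snd.push(m); }                 // IF_ENQUEUE
            mbuf* head;
            { std::lock_guard<std::mutex> g(snd_lock); head = if_snd.front(); if_snd.pop(); } // IF_DEQUEUE
            drv.xmit(head);
        }
    };

    // if_transmit(): the stack hands the mbuf straight to the driver.
    struct if_transmit_path {
        tx_driver drv;
        void output(mbuf* m) { drv.xmit(m); }   // no if_snd queue locking at all
    };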
-
- Dec 06, 2013
-
Asias He authored
Now that the tx gc thread is gone, the gc code can only be called in one place, so we do not need the lock anymore.
Signed-off-by: Asias He <asias@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-
Asias He authored
This unifies the code a bit: we do all the tx queue gc in one common code path.
Signed-off-by: Asias He <asias@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-
Asias He authored
We do tx queue gc on the tx path if there is not enough space. The tx queue gc thread is not a must. Dropping it saves us a running thread and saves a thread wakeup on every interrupt.
Signed-off-by: Asias He <asias@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
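A rough sketch of that transmit-path shape (toy model, hypothetical names, not the actual OSv vring class):

    #include <cstddef>

    struct toy_ring {
        std::size_t free_desc = 256;
        std::size_t reclaimable = 0;        // descriptors the host has already consumed
        bool has_room(std::size_t n) const { return free_desc >= n; }
        void gc() { free_desc += reclaimable; reclaimable = 0; }
        void add_buf(std::size_t n) { free_desc -= n; }
    };

    bool xmit(toy_ring& ring, std::size_t descs_needed)
    {
        if (!ring.has_room(descs_needed)) {
            ring.gc();                      // reclaim only when we actually need the space
            if (!ring.has_room(descs_needed))
                return false;               // ring really is full; caller retries later
        }
        ring.add_buf(descs_needed);
        return true;                        // no dedicated gc thread, no wakeup per interrupt
    }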
-
- Nov 28, 2013
-
Pekka Enberg authored
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-
- Oct 10, 2013
-
Avi Kivity authored
We have _KERNEL defines scattered throughout the code, which makes understanding it difficult. Define it just once, and adjust the source to build. We define it in an overridable variable, so that non-kernel imported code can undo it.
-
- Oct 02, 2013
-
Pekka Enberg authored
The page_size constant is not used in the code, so let's just remove it.
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-
- Sep 15, 2013
-
Nadav Har'El authored
Add Cloudius copyright and license statement to drivers/*. A couple of header files were based on Linux's BSD-licensed header files (e.g., include/uapi/linux/virtio_net.h) so they included the BSD license, but not any copyright statement, so we can just replace that by our own statement of the BSD license.
-
- Aug 12, 2013
-
Avi Kivity authored
Without this, the networking stack will ignore TSO. Improves Java Echo test from ~2Gbps to ~5Gbps. With Dor Laor.
-
- Jul 28, 2013
-
Dor Laor authored
Based on the FreeBSD virtio code. Provides a 7x boost for rx netperf.
-
Avi Kivity authored
-
- Jul 24, 2013
- Jul 11, 2013
-
Dor Laor authored
virtio_blk pre-allocates requests into a cache to avoid re-allocation (possibly an unneeded optimization with the current allocator). However, it doesn't take into account that requests can be completed out of order, and simply reuses requests in cyclic order. Noted by Avi; I had a fix that peeked into the index ring, but that was too complex a solution. There is no performance degradation with SMP thanks to the good allocator we have today.
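A toy illustration of why cyclic reuse breaks under out-of-order completion (hypothetical types, not the real virtio_blk code):

    #include <array>

    struct blk_request { bool in_flight = false; };

    struct cyclic_cache {
        std::array<blk_request, 64> reqs;
        unsigned next = 0;
        blk_request* get() {
            blk_request* r = &reqs[next++ % reqs.size()];
            // BUG: if completions arrive out of order, r may still be in flight here.
            r->in_flight = true;
            return r;
        }
    };
    // The straightforward fix is to allocate a request per I/O (cheap with the
    // current allocator) and free it on completion, whatever order that happens in.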
-
- Jul 10, 2013
-
Dor Laor authored
virtio-vring and its users (net/blk) were changed so that no request header is allocated at run time, except during init. In order to do that, I had to change get_buf and break it into multiple parts:

    // Get the top item from the used ring
    void* get_buf_elem(u32 *len);

    // Let the host know we consumed the used entry.
    // We separate that from get_buf_elem so no one will recycle the request
    // header location until we're finished with it in the upper layer.
    void get_buf_finalize();

    // GC the used items that were already read, so they can be emptied within
    // the ring. Should be called by add_buf; it was separated from the get_buf
    // flow to allow parallelism of the two.
    void get_buf_gc();

As a result, it was simple to get rid of the shared lock that protected the _avail_head variable before. Today only the thread that calls add_buf updates this variable (add_buf calls get_buf_gc internally). There are two new locks instead:
- virtio-net tx_gc lock - very rarely it is taken by the tx_gc thread, normally by the tx xmit thread
- virtio-blk make_requests lock - there are parallel requests
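A hedged sketch of how a consumer might drive this split interface (the vring stub and process() below are placeholders, and I am assuming get_buf_elem returns nullptr when the used ring is empty):

    #include <cstdint>
    using u32 = std::uint32_t;

    struct vring {                       // stub carrying only the interface quoted above
        void* get_buf_elem(u32* len);
        void  get_buf_finalize();
        void  get_buf_gc();
    };
    void process(void* cookie, u32 len); // placeholder for the upper-layer work

    void drain_used_ring(vring& ring)
    {
        u32 len;
        while (void* cookie = ring.get_buf_elem(&len)) {
            process(cookie, len);        // the request header is still ours here
            ring.get_buf_finalize();     // only now may the slot be recycled
        }
    }
    // add_buf() calls get_buf_gc() internally, so only the add_buf caller ever
    // touches _avail_head and the old shared lock can go away.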
-
Dor Laor authored
Trivial: move code above; preparation for preventing fast-path allocations for the virtio request data.
-
- Jul 04, 2013
-
Dor Laor authored
-
Dor Laor authored
Use a single per-queue instance of the sglist data. Before this patch, sglist was implemented as a std::list, which caused it to allocate heap memory and travel through pointers. Now we use a single vector per queue to temporarily keep the buffer data between the upper virtio layer and the lower one.
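Roughly, the data-structure change amounts to something like this (hypothetical names): one preallocated vector per queue, cleared and refilled for each request, instead of building a fresh std::list of heap nodes.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct sg_entry { std::uint64_t paddr; std::uint32_t len; };

    struct queue_sglist {
        std::vector<sg_entry> entries;
        explicit queue_sglist(std::size_t max) { entries.reserve(max); }  // no per-packet allocation
        void reset() { entries.clear(); }                                  // capacity is kept
        void add(std::uint64_t paddr, std::uint32_t len) { entries.push_back({paddr, len}); }
    };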
-
- Jul 03, 2013
-
Dor Laor authored
Instead of having the host enable tx interrupts when there is a single used pkt in the ring, wait until we have 1/2 of the ring. This reduces the number of tx irqs from one per pkt to practically zero (note that we actively call tx_gc if there is no place on the ring when doing tx). There was a 40% performance boost on the netperf rx test.
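One way to get that behavior, assuming the virtio EVENT_IDX (used_event) mechanism is what the driver relies on here, is sketched below (toy struct, not the real ring layout): the guest publishes an index half a ring ahead, and the host only interrupts once its used index passes it.

    #include <cstdint>

    struct toy_vring {
        std::uint16_t num;              // ring size
        std::uint16_t last_used_idx;    // how far the guest has consumed
        std::uint16_t used_event;       // guest asks: interrupt me when you pass this
    };

    inline void set_tx_interrupt_threshold(toy_vring& vr)
    {
        // Fire roughly once per half ring instead of once per packet.
        vr.used_event = static_cast<std::uint16_t>(vr.last_used_idx + vr.num / 2);
    }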
-
- Jun 20, 2013
-
Dor Laor authored
The feature allows the hypervisor to batch several packets together as one large SG list. Once such a header is received, the guest rx routine iterates over the list and assembles a mega mbuf. The patch also simplifies the rx path by using a single buffer for the virtio data and its header, which shrinks the sg list from a size of two into a single entry. The issue is that at the moment I haven't seen packets w/ mbuf > 1 being received. A Linux guest does receive such packets here and there. It may be due to the use of offload features that enlarge the packet size.
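A rough sketch of that receive loop (the helpers below are hypothetical; only the num_buffers field comes from the mergeable-rx-buffers virtio header):

    #include <cstdint>

    struct virtio_net_hdr_mrg_rxbuf { std::uint16_t num_buffers; /* other fields omitted */ };
    struct mbuf;
    mbuf* rx_pop_buffer(std::uint32_t* len);          // hypothetical: next used rx buffer as an mbuf
    void* mbuf_data(mbuf* m);                         // hypothetical: pointer to the buffer payload
    void  mbuf_append(mbuf* head, mbuf* extra);       // hypothetical: chain onto the first mbuf
    void  deliver(mbuf* m);                           // hand the assembled packet to the stack

    void receive_one_packet()
    {
        std::uint32_t len;
        mbuf* head = rx_pop_buffer(&len);
        auto* hdr = static_cast<virtio_net_hdr_mrg_rxbuf*>(mbuf_data(head));  // header shares the buffer
        for (std::uint16_t i = 1; i < hdr->num_buffers; ++i) {                // usually 1: no extra buffers
            mbuf* extra = rx_pop_buffer(&len);
            mbuf_append(head, extra);                  // build the "mega mbuf"
        }
        deliver(head);
    }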
-
- Jun 17, 2013
-
Guy Zana authored
-
- Jun 09, 2013
-
Guy Zana authored
-
Guy Zana authored
The next patch changes the debug function to tprintf_d, which may be implemented as do {} while (0) when conf-logger_debug=0; in that case compilation breaks, complaining about unused variables. These debug prints are not very useful today, so I remove them. Instead, they may be reimplemented as tracepoints.
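A small illustration of that build break (the macro bodies below are my assumption, modeled on the description above):

    #include <cstdio>

    #if CONF_logger_debug
    #define tprintf_d(tag, ...) std::printf(__VA_ARGS__)
    #else
    #define tprintf_d(tag, ...) do {} while (0)   // the arguments vanish entirely
    #endif

    static void rx_debug_example()
    {
        int dropped = 0;                          // only ever mentioned by the debug print
        tprintf_d("virtio-net", "dropped=%d\n", dropped);
        // With conf-logger_debug=0 the macro expands to do {} while (0), so 'dropped'
        // is unused and -Werror builds fail -- hence dropping these prints (or turning
        // them into tracepoints) instead.
    }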
-
- Jun 06, 2013
-
Guy Zana authored
We have to disable virtio interrupts before the msix EOI, so disabling must be done in the ISR handler context. This patch adds an std::function isr to the bindings. References to the rx and tx queues are saved as well (_rx_queue and _tx_queue), so they can be used in the ISR context. This patch reduces virtio-net rx interrupts by a factor of 450.
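The shape of that change might look roughly like this (hypothetical types; only the std::function isr and the pre-EOI ordering come from the commit message):

    #include <functional>

    struct vring_stub { void disable_interrupts() { /* mask this queue's interrupts */ } };

    struct msix_binding {
        std::function<void()> isr;      // runs in interrupt context, before EOI
        // the handler thread is woken after isr() returns (omitted here)
    };

    inline msix_binding make_rx_binding(vring_stub* rx_queue)
    {
        msix_binding b;
        b.isr = [rx_queue] { rx_queue->disable_interrupts(); };  // must happen pre-EOI
        return b;
    }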
-
- May 22, 2013
- May 16, 2013
-
Dor Laor authored
Use a size of MCLBYTES, which is 2k, for the mbufs instead of the previous page size. As a TODO, I need to add an MTU change function that will take this number and the uma alloc as parameters.
-
Dor Laor authored
Use unique_ptr to make sure we don't leak. Note that the FreeBSD convention is to let the upper protocols free the mbuf in the receive path (without them increasing the ref count).
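The ownership pattern could be sketched like this (m_freem is the FreeBSD-style mbuf free routine, declared only; the rest of the names are hypothetical): hold the mbuf in a unique_ptr so error paths free it, and release() it when handing it up the stack, which frees it by convention.

    #include <memory>

    struct mbuf;
    void m_freem(mbuf* m);                    // FreeBSD-style mbuf free (declaration only)
    struct mbuf_deleter { void operator()(mbuf* m) const { m_freem(m); } };
    using mbuf_ptr = std::unique_ptr<mbuf, mbuf_deleter>;

    bool rx_one(mbuf_ptr m, bool header_ok, void (*deliver)(mbuf*))
    {
        if (!header_ok)
            return false;                     // the unique_ptr frees the mbuf on this error path
        deliver(m.release());                 // the upper protocols now own it and will free it
        return true;
    }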
-
- May 14, 2013
- Apr 29, 2013
-
Guy Zana authored
1. Use osv/ioctl.h in the netport, main.c and various tests.
2. Change the ioctl prototype to agree with glibc; we now use the variadic prototype specified in sys/ioctl.h.
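For reference, the variadic prototype in question is int ioctl(int fd, unsigned long request, ...). A hedged example of a typical caller that relies on that form (standard Linux/glibc headers; the helper itself is hypothetical):

    #include <sys/ioctl.h>
    #include <net/if.h>
    #include <cstring>

    int get_if_flags(int sock_fd, const char* name, short* flags_out)
    {
        struct ifreq ifr {};
        std::strncpy(ifr.ifr_name, name, IFNAMSIZ - 1);
        if (ioctl(sock_fd, SIOCGIFFLAGS, &ifr) < 0)   // third argument passed via the ... prototype
            return -1;
        *flags_out = ifr.ifr_flags;
        return 0;
    }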
-
- Apr 02, 2013
- Mar 22, 2013