- May 15, 2014
-
-
Vlad Zolotarov authored
Reviewed-by: Nadav Har'El <nyh@cloudius-systems.com>
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-
Vlad Zolotarov authored
This class is the heart of the per-CPU Tx framework. Apart from its constructor it has two public methods:

- xmit(buff): pushes the packet descriptor downstream, either straight to the HW or into the per-CPU queue if there is contention.
- poll_until(cond): the main function of a worker thread that consumes packet descriptors from the per-CPU queue(s) and sends them to the output iterator (which is responsible for ensuring their successful delivery to the HW channel).

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
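A minimal sketch of the shape of such a class, under assumed names (xmitter_sketch, a std::mutex-guarded std::queue as the per-CPU queue); the real OSv class uses its own lock-free queues and scheduler primitives:

    #include <condition_variable>
    #include <functional>
    #include <mutex>
    #include <queue>

    template <typename Descriptor, typename OutputIt>
    class xmitter_sketch {
    public:
        explicit xmitter_sketch(OutputIt out) : _out(out) {}

        // Push the descriptor downstream: straight to the HW if we win the
        // try-lock, otherwise into the queue for the worker thread.
        void xmit(Descriptor buff) {
            std::unique_lock<std::mutex> hw(_hw_lock, std::try_to_lock);
            if (hw.owns_lock()) {
                *_out++ = std::move(buff);        // uncontended fast path
            } else {
                std::lock_guard<std::mutex> q(_q_lock);
                _queue.push(std::move(buff));     // contended: defer to worker
                _cv.notify_one();
            }
        }

        // Worker-thread main loop: drain queued descriptors into the output
        // iterator until cond() becomes true (cond() changes are assumed to
        // be accompanied by a notify).
        void poll_until(std::function<bool()> cond) {
            std::unique_lock<std::mutex> q(_q_lock);
            while (!cond()) {
                _cv.wait(q, [&] { return !_queue.empty() || cond(); });
                while (!_queue.empty()) {
                    Descriptor d = std::move(_queue.front());
                    _queue.pop();
                    q.unlock();
                    {
                        std::lock_guard<std::mutex> hw(_hw_lock);
                        *_out++ = std::move(d);   // hand off to the HW channel
                    }
                    q.lock();
                }
            }
        }

    private:
        OutputIt _out;
        std::mutex _hw_lock, _q_lock;
        std::condition_variable _cv;
        std::queue<Descriptor> _queue;
    };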
-
Vlad Zolotarov authored
This class will represent a single per-CPU Tx queue. These queues will be subject to merging by the nway_merger class in order to address the reordering issue. Therefore this class will implement the following methods/classes:

- push(val)
- empty()
- front(), which will return an iterator that implements:
  - operator*() to access the underlying value
- erase(it), which will pop the front element.

If the producer fails to push a new element into the queue (the queue is full), it may start "waiting for the queue": requesting to be woken when the queue is no longer full (when the consumer frees some entries from the queue), via the push_new_waiter() method.
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
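An interface sketch of such a queue (assumed shapes only, not the OSv class): a bounded queue whose front() hands back an iterator, plus a waiter list modelled here as plain callbacks registered via push_new_waiter():

    #include <cstddef>
    #include <deque>
    #include <functional>

    template <typename T>
    class cpu_tx_queue_sketch {
    public:
        using iterator = typename std::deque<T>::iterator;

        explicit cpu_tx_queue_sketch(size_t capacity) : _capacity(capacity) {}

        // Returns false when the queue is full; the producer may then
        // register a waiter with push_new_waiter().
        bool push(T val) {
            if (_items.size() >= _capacity) return false;
            _items.push_back(std::move(val));
            return true;
        }

        bool empty() const { return _items.empty(); }

        // Iterator to the head element; *it gives access to the value.
        iterator front() { return _items.begin(); }

        // Pop the element the iterator points at (the head).
        void erase(iterator it) { _items.erase(it); }

        // Called by a producer that found the queue full: wake_fn is invoked
        // by the consumer once an entry has been freed.
        void push_new_waiter(std::function<void()> wake_fn) {
            _waiters.push_back(std::move(wake_fn));
        }

        // Consumer side: after erase(), wake one waiting producer if any.
        void wake_one_waiter() {
            if (!_waiters.empty()) {
                auto w = std::move(_waiters.front());
                _waiters.pop_front();
                w();
            }
        }

    private:
        size_t _capacity;
        std::deque<T> _items;
        std::deque<std::function<void()>> _waiters;
    };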
-
Vlad Zolotarov authored
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-
Vlad Zolotarov authored
This class efficiently merges n sorted containers. It allows both single-call merging via a merge() method and iterator-like semantics via a pop() method. In both cases the merged stream/next element is streamed to the output iterator.
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
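For illustration, a free-function equivalent of the single-call merge() (a generic heap-based n-way merge; this is not the nway_merger class API itself, and it assumes the element type is comparable with >):

    #include <queue>
    #include <vector>

    template <typename Container, typename OutputIt>
    void nway_merge_sketch(std::vector<Container*> inputs, OutputIt out) {
        using It = typename Container::const_iterator;
        struct cursor { It cur, end; };
        auto greater = [](const cursor& a, const cursor& b) { return *a.cur > *b.cur; };
        std::priority_queue<cursor, std::vector<cursor>, decltype(greater)> heap(greater);

        for (auto* c : inputs)
            if (!c->empty()) heap.push({c->begin(), c->end()});

        while (!heap.empty()) {            // each round emits the next smallest value
            cursor c = heap.top();
            heap.pop();
            *out++ = *c.cur;
            if (++c.cur != c.end) heap.push(c);
        }
    }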
-
Vlad Zolotarov authored
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-
Vlad Zolotarov authored
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-
Pawel Dziepak authored
Reviewed-by: Nadav Har'El <nyh@cloudius-systems.com>
Signed-off-by: Pawel Dziepak <pdziepak@quarnos.org>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-
Pawel Dziepak authored
This patch implements lockfree_queue (which is used as incoming_wakeup_queue) so that it doesn't need exchange or compare_exchange operations. The idea is to use a linked list but interleave the actual objects stored in the queue with helper objects (lockless_queue_helper), which are just pointers to the next element. Each object in the queue owns the helper that precedes it (and they are dequeued together), while the last helper, which does not precede any object, is owned by the queue itself. When a new object is enqueued it gains ownership of the last helper in the queue in exchange for the helper it owned before, which now becomes the new tail of the list. Unlike the original implementation, this version of lockfree_queue really requires that there is no more than one concurrent producer and no more than one concurrent consumer.

The results of tests/misc-ctxs on my test machine are as follows (the values are medians of five runs):

before:
  colocated: 332 ns
  apart: 590 ns

after:
  colocated: 313 ns
  apart: 558 ns

Reviewed-by: Nadav Har'El <nyh@cloudius-systems.com>
Signed-off-by: Pawel Dziepak <pdziepak@quarnos.org>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
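One way to realize the described scheme (field and type names are assumptions, the payload is an int only to keep the sketch short, and a node may be re-pushed only after it has been popped): single producer, single consumer, plain atomic loads/stores only, no exchange or CAS:

    #include <atomic>

    struct node;

    struct helper {                              // "lockless_queue_helper"
        std::atomic<node*> next{nullptr};        // the element that follows it
    };

    struct node {                                // an element of the queue
        helper* owned;                           // helper this node currently owns
        helper* follower = nullptr;              // helper that follows it in the list
        int value = 0;
        explicit node(helper* h) : owned(h) {}   // a fresh node brings its own helper
    };

    class spsc_queue_sketch {
    public:
        spsc_queue_sketch() : _head(new helper), _tail(_head) {}

        // Producer: the node hands its helper to the queue (it becomes the new
        // tail) and takes ownership of the old tail helper, which now precedes
        // it in the list.
        void push(node* n) {
            helper* h = n->owned;
            h->next.store(nullptr, std::memory_order_relaxed);
            n->follower = h;                     // consumer advances to h after n
            helper* old_tail = _tail;
            _tail = h;
            n->owned = old_tail;                 // dequeued together with n later
            old_tail->next.store(n, std::memory_order_release);  // publish n
        }

        // Consumer: dequeue the front node together with the helper preceding
        // it (which the node owns and can reuse for a later push).
        node* pop() {
            node* n = _head->next.load(std::memory_order_acquire);
            if (!n) return nullptr;              // empty
            _head = n->follower;                 // skip past n and its old helper
            return n;
        }

    private:
        helper* _head;                           // touched only by the consumer
        helper* _tail;                           // touched only by the producer
    };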
-
- May 14, 2014
-
-
Tomasz Grabiec authored
This introduces a simple timer-based sampling profiler which reuses our tracing infrastructure to collect samples.

To enable the sampler from run.py, run it like this:

  $ scripts/run.py ... --sampler [frequency]

where 'frequency' is an optional parameter for overriding the sampling frequency. The default is 1000 (ticks per second). The higher the frequency, the bigger the sampling overhead; values that are too low will hurt profile accuracy.

Ad-hoc sampler enabling is planned. The code already takes that into account.

To see the profile you need to extract the trace:

  $ trace extract

And then show it like this:

  $ trace prof

All 'prof' options can be applied, for example you can group by CPU:

  $ trace prof -g cpu

Signed-off-by: Tomasz Grabiec <tgrabiec@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-
Tomasz Grabiec authored
The sampler will need to set and later restore the value of this option.
Reviewed-by: Nadav Har'El <nyh@cloudius-systems.com>
Signed-off-by: Tomasz Grabiec <tgrabiec@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-
Takuya ASADA authored
lookup_name_demangled() looks up a symbol name, demangles it, then snprintf()s it onto a preallocated buffer.
Signed-off-by: Takuya ASADA <syuu@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
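A rough equivalent in terms of standard APIs (dladdr + abi::__cxa_demangle); the OSv version uses its own ELF symbol lookup, so this is only a sketch of the signature and behaviour:

    #include <cxxabi.h>
    #include <dlfcn.h>
    #include <cstddef>
    #include <cstdio>
    #include <cstdlib>

    void lookup_name_demangled_sketch(void* addr, char* buf, size_t len) {
        Dl_info info{};
        if (!dladdr(addr, &info) || !info.dli_sname) {
            snprintf(buf, len, "?");             // no symbol found for this address
            return;
        }
        int status = 0;
        char* demangled = abi::__cxa_demangle(info.dli_sname, nullptr, nullptr, &status);
        snprintf(buf, len, "%s", status == 0 ? demangled : info.dli_sname);
        free(demangled);                         // free(nullptr) is a no-op
    }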
-
- May 13, 2014
-
-
Glauber Costa authored
While running one of the redis benchmarks, I saw around 23k calls to malloc_large. Among those, ~10-11k were 2-page sized. I managed to track it down to the creation of net channels. The problem here is that the net channel structure is slightly larger than half a page - the maximum size for small object pools. That throws all such allocations into malloc_large. Besides being slow, it also wastes a page for every net channel created, since malloc_large includes an extra page at the beginning of each allocation. This patch fixes this by overloading the operators new and delete for the netchannel structure so that we use the more efficient and less wasteful alloc_page.
Reviewed-by: Nadav Har'El <nyh@cloudius-systems.com>
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
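The general technique looks like the sketch below. The alloc_page()/free_page() stand-ins here are just placeholders for OSv's real page allocator, and net_channel_like is a made-up structure that, like the real one, is bigger than half a page but fits in one:

    #include <cstdlib>
    #include <new>

    // Stand-ins for the real page allocator (one 4096-byte page per call).
    static void* alloc_page()       { return std::aligned_alloc(4096, 4096); }
    static void  free_page(void* p) { std::free(p); }

    struct net_channel_like {
        char payload[3000];   // slightly more than half a page of state

        static void* operator new(size_t) {
            // The object is known to fit in a single page, so take a whole
            // page directly instead of going through malloc_large.
            static_assert(sizeof(net_channel_like) <= 4096, "must fit in one page");
            return alloc_page();
        }
        static void operator delete(void* ptr) {
            free_page(ptr);
        }
    };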
-
- May 12, 2014
-
-
Glauber Costa authored
Export the shrinker interface to C users.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
Reviewed-by: Raphael S. Carvalho <raphaelsc@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-
- May 08, 2014
-
-
Nadav Har'El authored
OSv is currently limited to 64 vCPUs, because we use a 64-bit bitmask for wakeups (see max_cpus in sched.cc). Having exactly 64 CPUs *should* work, but unfortunately didn't because of a bug: cpu_set::operator++ first incremented the index, and then called advance() to find the following one-bit. We had a bug when the index was 63: we then expect operator++ to return 64 (end(), signaling the end of the iteration), but what happened was that after it incremented the index to 64, advance() wrongly handled the case idx=64 (1<<64 returns 1, unexpectedly) and moved it back to idx=63. The patch fixes operator++ to not call advance() when idx=64 is reached, so now it works correctly also for idx=63, and booting with 64 CPUs now works. Fixes #234.
Signed-off-by: Nadav Har'El <nyh@cloudius-systems.com>
Signed-off-by: Avi Kivity <avi@cloudius-systems.com>
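A simplified model of the iterator logic described above (not the actual sched.cc code); idx values 0..63 are CPU ids and 64 plays the role of end():

    #include <cstdint>

    struct cpu_set_iter_sketch {
        uint64_t mask;       // bit i set => CPU i is in the set
        unsigned idx;        // current position, 64 == end()

        void advance() {
            // Find the next set bit at or after idx; assumes idx < 64 here,
            // because shifting a 64-bit value by 64 is undefined behaviour.
            while (idx < 64 && !((mask >> idx) & 1)) {
                ++idx;
            }
        }

        cpu_set_iter_sketch& operator++() {
            ++idx;
            if (idx < 64) {   // the fix: never call advance() once idx == 64
                advance();
            }
            return *this;
        }
    };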
-
Jaspal Singh Dhillon authored
This patch changes the definition of __assert_fail() in api/assert.h, which allows it and other header files which include it (such as debug.hh) to be used in mgmt submodules. Fixes a conflict with the declaration of __assert_fail() in external/x64/glibc.bin/usr/include/assert.h.
Signed-off-by: Jaspal Singh Dhillon <jaspal.iiith@gmail.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-
- May 07, 2014
-
-
Jani Kokkonen authored
The class construction of page_table_root must happen before the "mempool" priority, or all the work done in arch-setup will be destroyed by the class constructor. Problem noticed while working on the page fault handler for AArch64.
Signed-off-by: Jani Kokkonen <jani.kokkonen@huawei.com>
Signed-off-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Avi Kivity <avi@cloudius-systems.com>
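For context, constructor ordering of globals in GCC can be expressed with the init_priority attribute, as in the illustration below; the numeric values and names here are made up and do not correspond to OSv's actual priority constants:

    // Lower init_priority values run earlier (valid range starts at 101).
    struct page_table_root_like {
        page_table_root_like() { /* set up the root paging structure */ }
    };

    __attribute__((init_priority(200)))
    page_table_root_like page_table_root_sketch;   // runs before the next object

    __attribute__((init_priority(300)))             // stands in for the "mempool" stage
    int mempool_stage_marker = 0;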
-
- May 05, 2014
-
-
Tomasz Grabiec authored
The synchronizer allows any thread to block on it until it is unlocked. It is unlocked once count_down() has been called a given number of times.
Signed-off-by: Tomasz Grabiec <tgrabiec@cloudius-systems.com>
Signed-off-by: Avi Kivity <avi@cloudius-systems.com>
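A minimal sketch of such a count-down synchronizer (the interface is assumed; the OSv one is built on its own sched primitives rather than std::mutex/condition_variable):

    #include <condition_variable>
    #include <mutex>

    class synchronizer_sketch {
    public:
        explicit synchronizer_sketch(unsigned count) : _count(count) {}

        // Block until the counter reaches zero.
        void wait() {
            std::unique_lock<std::mutex> lock(_mtx);
            _cv.wait(lock, [&] { return _count == 0; });
        }

        // Called the configured number of times; the last call unblocks all waiters.
        void count_down() {
            std::lock_guard<std::mutex> lock(_mtx);
            if (_count > 0 && --_count == 0) {
                _cv.notify_all();
            }
        }

    private:
        std::mutex _mtx;
        std::condition_variable _cv;
        unsigned _count;
    };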
-
Takuya ASADA authored
Signed-off-by: Takuya ASADA <syuu@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-
Takuya ASADA authored
Signed-off-by: Takuya ASADA <syuu@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-
Tomasz Grabiec authored
The current tracepoint coverage does not handle all situations well. In particular:

* It does not cover link layer devices other than virtio-net. This change fixes that by tracing in more abstract layers.
* It records incoming packets at enqueue time, whereas sometimes it's better to trace at handling time. This can be very useful when correlating TCP state changes with incoming packets. A new tracepoint was introduced for that: net_packet_handling.
* It does not record the protocol of the buffer. For non-ethernet protocols we should set the appropriate protocol type when reconstructing the ethernet frame for dumping to PCAP.

We now have the following tracepoints:

* net_packet_in - for incoming packets, enqueued or handled directly.
* net_packet_out - for outgoing packets hitting the link layer (not loopback).
* net_packet_handling - for packets which have been queued and are now being handled.

Signed-off-by: Tomasz Grabiec <tgrabiec@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-
- May 04, 2014
-
-
Tomasz Grabiec authored
Currently a tracepoint's signature string is encoded into a u64, which imposes an 8-character limit on the signature. When the signature does not fit into that limit, only the first 8 characters are preserved. This patch fixes the problem by storing the signature as a C string of arbitrary length. Fixes #288.
Signed-off-by: Tomasz Grabiec <tgrabiec@cloudius-systems.com>
Signed-off-by: Avi Kivity <avi@cloudius-systems.com>
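To illustrate why a u64 caps the signature at 8 characters (one byte per character) and what the fix amounts to - hypothetical code, not the actual tracing implementation:

    #include <algorithm>
    #include <cstdint>
    #include <cstring>

    uint64_t pack_sig_u64(const char* sig) {
        uint64_t packed = 0;
        // Anything past 8 characters simply does not fit and is lost.
        std::memcpy(&packed, sig, std::min<size_t>(std::strlen(sig), 8));
        return packed;
    }

    struct tracepoint_meta_sketch {
        const char* sig;   // arbitrary-length signature, as in the fix
    };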
-
- May 03, 2014
-
-
Gleb Natapov authored
An attempt to get a read ARC buffer for a hole in a file results in a temporary ARC buffer which is destroyed immediately after use. This means that mapping such a buffer is impossible: it is unmapped before the page fault handler returns to the application. The patch solves this by detecting that a hole in a file is accessed and mapping a special zero page instead. It is mapped as COW, so on a write attempt a new page is allocated.
Signed-off-by: Gleb Natapov <gleb@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-
- Apr 29, 2014
-
-
Claudio Fontana authored
Signed-off-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-
Nadav Har'El authored
This patch implements the sigsetjmp()/siglongjmp() functions. Fixes #241. sigsetjmp() and siglongjmp() are similar to setjmp() and longjmp(), except that they also save and restore the signal mask. Signals are hardly useful in OSv, so we don't strictly need this signal-mask feature, but we still want to implement these functions, if only so that applications which use them by default can run (see issue #241). Most of the code in this patch is from Musl 1.0.0, with a few small modifications - namely, calling our sigprocmask() function instead of a Linux syscall. Note I copied the x64 version of sigsetjmp.s only; Musl also has this file for ARM and other architectures. Interestingly, we already had block.c in our source tree but didn't use it, and this patch starts to use it.
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
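A small usage example of the sigsetjmp()/siglongjmp() pair with standard POSIX semantics (a non-zero savesigs argument asks for the signal mask to be saved and restored):

    #include <setjmp.h>
    #include <cstdio>

    static sigjmp_buf env;

    static void fail_path() {
        siglongjmp(env, 42);          // unwinds back to the sigsetjmp() call
    }

    int main() {
        int rc = sigsetjmp(env, 1);   // 1 => also save the current signal mask
        if (rc == 0) {
            std::puts("first pass");
            fail_path();              // never returns
        }
        std::printf("returned via siglongjmp, value %d\n", rc);
        return 0;
    }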
-
Glauber Costa authored
Functions to be run when a thread finishes.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-
Glauber Costa authored
While working with blocked signals and notifications, it would be good to be able to test the current state of another thread's pending signal mask. That machinery exists in sched.cc but isn't exposed. This patch exposes it, together with a more convenient helper for when we are interested in the pointer itself, without dereferencing it.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-
Calle Wilund authored
Also, move the platform-dependent fast dispatch to the platform arch code tree(s). The patching code is a bit more complex than would seem immediately (or even factually) necessary. However, depending on the CPU, there might be issues with trying to code-patch across cache lines (unaligned). To be safe, we do it with the old 16-bit jmp + write + finish dance. [avi: fix up build.mk]
Signed-off-by: Calle Wilund <calle@cloudius-systems.com>
Signed-off-by: Avi Kivity <avi@cloudius-systems.com>
-
- Apr 28, 2014
-
-
Avi Kivity authored
phys_ptr<T>: unique_ptr<> for physical memory
make_phys_ptr(): allocate and initialize a phys_ptr<>
make_phys_array(): allocate a phys_ptr<> referencing an array
Signed-off-by: Avi Kivity <avi@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
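A sketch of the shape of these helpers - the allocator names and the deleter here are assumptions standing in for OSv's physical-memory allocator; the point is unique_ptr-style ownership over physically contiguous memory:

    #include <cstdlib>
    #include <memory>
    #include <new>
    #include <utility>

    // Stand-ins for the real physical-memory allocator.
    void* alloc_phys_contiguous(size_t size) { return std::malloc(size); }
    void  free_phys_contiguous(void* p)      { std::free(p); }

    template <typename T>
    struct phys_deleter {
        void operator()(T* p) const {
            p->~T();                       // destroy, then return the physical memory
            free_phys_contiguous(p);
        }
    };

    template <typename T>
    using phys_ptr_sketch = std::unique_ptr<T, phys_deleter<T>>;

    // make_phys_ptr-style helper: allocate physical memory and construct a T in it.
    template <typename T, typename... Args>
    phys_ptr_sketch<T> make_phys_ptr_sketch(Args&&... args) {
        void* mem = alloc_phys_contiguous(sizeof(T));
        return phys_ptr_sketch<T>(new (mem) T(std::forward<Args>(args)...));
    }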
-
Takuya ASADA authored
Signed-off-by: Takuya ASADA <syuu@cloudius-systems.com>
Signed-off-by: Avi Kivity <avi@cloudius-systems.com>
-
- Apr 25, 2014
-
-
Tomasz Grabiec authored
There was no way to sniff packets going through OSv's loopback interface, and I faced a need to debug in-guest TCP traffic. Packets are logged using the tracing infrastructure. Packet data is serialized as sample data up to a limit, which is currently hardcoded to 128 bytes.

To enable capturing of packets just enable the tracepoints named:
- net_packet_loopback
- net_packet_eth

Raw data can be seen in `trace list` output. Better presentation methods will be added in the following patches. This may also become useful when debugging network problems in the cloud, as we have no ability to run tcpdump on the host there.
Signed-off-by: Tomasz Grabiec <tgrabiec@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-
Tomasz Grabiec authored
A tracepoint argument which extends 'blob_tag' will be interpreted as a range of byte-sized values. The storage required to serialize such an object is proportional to its size. I need it to implement storage-friendly packet capturing using the tracing layer. It could also be used to capture variable-length strings: the current limit (50 chars) is too short for some paths passed to vfs calls, and with variable-length encoding we could set a more generous limit.
Signed-off-by: Tomasz Grabiec <tgrabiec@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-
Avi Kivity authored
We already have a facility to indicate that a thread is a reclaimer and should be allowed to allocate reserve memory (since that memory will be used to free memory). Extend it to allow indicating that a particular code section is used to free memory, rather than the entire thread.
Reviewed-by: Glauber Costa <glommer@cloudius-systems.com>
Signed-off-by: Avi Kivity <avi@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
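One hypothetical shape for a section-scoped "this code is freeing memory" marker - an RAII guard over a thread-local flag; OSv's actual API and naming may differ:

    thread_local bool in_reclaim_section = false;

    class reclaim_section_guard {
    public:
        reclaim_section_guard()  { _prev = in_reclaim_section; in_reclaim_section = true; }
        ~reclaim_section_guard() { in_reclaim_section = _prev; }
    private:
        bool _prev;
    };

    // The allocator would consult the flag when deciding whether the caller
    // may dip into reserve memory.
    bool may_use_reserve() { return in_reclaim_section; }

    void free_some_cache() {
        reclaim_section_guard guard;   // allocations inside here may use reserves
        // ... walk caches, drop entries, possibly allocating bookkeeping ...
    }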
-
Gleb Natapov authored
Currently vma_list_mutex is used to protect against races between ARC buffer mapping by the MMU and eviction by ZFS. The problem is that the MMU code calls into ZFS with vma_list_mutex held, so on that path all ZFS-related locks are taken after vma_list_mutex. An attempt to acquire vma_list_mutex during ARC buffer eviction, while many of the same ZFS locks are already held, causes a deadlock. This was solved by using trylock() and skipping an eviction if vma_list_mutex cannot be acquired, but it appears that some mmapped buffers are destroyed not during eviction but after writeback, and this destruction cannot be delayed. That calls for a locking scheme redesign.

This patch introduces arc_lock, which has to be held during access to read_cache. It prevents simultaneous eviction and mapping. arc_lock should be the innermost lock held on any code path, and the code is changed to adhere to this rule. For that, the patch replaces the ARC_SHARED_BUF flag with a new b_mmaped field. The reason is that access to the b_flags field is guarded by hash_lock, and it is impossible to guarantee the same order between hash_lock and arc_lock on all code paths. Dropping the need for hash_lock is a nice solution.
Signed-off-by: Gleb Natapov <gleb@cloudius-systems.com>
-
Gleb Natapov authored
Currently page_allocator returns a page to a page mapper and the latter populates a pte with it. Sometimes, though, page allocation and pte population need to appear atomic. For instance, in the case of the pagecache we want to prevent page eviction before the pte is populated, since page eviction clears the pte; if allocation and mapping are not atomic, the pte can be populated with stale data after eviction. With the current approach a very widely scoped lock is needed to guarantee atomicity. Moving pte population into page_allocator allows for much simpler locking.
Signed-off-by: Gleb Natapov <gleb@cloudius-systems.com>
-
Gleb Natapov authored
Current code assumes that for the same file and same offset ZFS will always return the same ARC buffer, but this appears not to be the case: ZFS may create a new ARC buffer while an old one is undergoing writeback. It means that we need to track the mapping between file/offset and the mmapped ARC buffer ourselves, which is exactly what this patch is about. It adds a new kind of cached page that holds pointers to an ARC buffer and stores them in a new read_cache map.
Signed-off-by: Gleb Natapov <gleb@cloudius-systems.com>
-
Gleb Natapov authored
Signed-off-by: Gleb Natapov <gleb@cloudius-systems.com>
-
Gleb Natapov authored
Unmap pages as soon as possible instead of waiting for max_pages to accumulate. This will allow freeing pages outside of vma_list_mutex in the future.
Signed-off-by: Gleb Natapov <gleb@cloudius-systems.com>
-
- Apr 24, 2014
-
-
Gleb Natapov authored
Write permission should not be granted to ptes that lack write permission because they are COW, but currently there is no way to distinguish between write protection due to vma permissions and write protection due to COW. Use a bit reserved for software use in the pte as a marker for COW ptes and check it during permission changes.
Signed-off-by: Gleb Natapov <gleb@cloudius-systems.com>
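For illustration: x86-64 leaves pte bits 9-11 available to software, so one of them can mark "write-protected because COW" as opposed to "write-protected by the vma". The specific bit choice and helper names below are assumptions, not OSv's actual definitions:

    #include <cstdint>

    constexpr uint64_t PTE_WRITE  = 1ull << 1;   // hardware write-enable bit
    constexpr uint64_t PTE_SW_COW = 1ull << 9;   // software-available bit used as COW marker

    inline uint64_t make_cow(uint64_t pte) { return (pte & ~PTE_WRITE) | PTE_SW_COW; }
    inline bool     is_cow(uint64_t pte)   { return pte & PTE_SW_COW; }

    // When changing protections (e.g. mprotect adding PROT_WRITE), a COW pte
    // must stay read-only until the fault handler breaks the sharing.
    inline uint64_t grant_write(uint64_t pte) {
        return is_cow(pte) ? pte : (pte | PTE_WRITE);
    }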
-
Glauber Costa authored
The jemalloc memory allocator makes intense use of MADV_DONTNEED to flush pages it is no longer using. Respect that advice. Let's keep returning -1 for the remaining cases so we don't fool anybody.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-