- May 19, 2014
-
-
Tomasz Grabiec authored
memory_order_acquire does not prevent previous stores from moving past the barrier, so if the _migration_lock_counter increment is split into two accesses, this reordering is possible:

    tmp = _migration_lock_counter;
    atomic_signal_fence(std::memory_order_acquire); // load-load, load-store
    <critical instructions here>
    _migration_lock_counter = tmp + 1; // was moved past the barrier

To prevent this, we need to order previous stores with future loads and stores, which is given only by std::memory_order_seq_cst.

Signed-off-by: Tomasz Grabiec <tgrabiec@cloudius-systems.com>
Signed-off-by: Avi Kivity <avi@cloudius-systems.com>
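
A minimal sketch of the resulting pattern (the counter below is a plain thread-local assumed for illustration, not OSv's actual field):

    #include <atomic>

    __thread unsigned _migration_lock_counter;   // assumed: a plain per-thread counter

    inline void migrate_disable()
    {
        ++_migration_lock_counter;
        // seq_cst: the store above may not sink past the fence into the
        // critical section; acquire would only order prior loads.
        std::atomic_signal_fence(std::memory_order_seq_cst);
    }

    inline void migrate_enable()
    {
        // seq_cst: nothing from the critical section may sink past the
        // fence and land after migration is re-enabled.
        std::atomic_signal_fence(std::memory_order_seq_cst);
        --_migration_lock_counter;
    }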
-
Vlad Zolotarov authored
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-
- May 18, 2014
-
-
Avi Kivity authored
Take the migration lock for pinned threads instead of a separate check whether they are pinned or not.
Reviewed-by: Nadav Har'El <nyh@cloudius-systems.com>
Signed-off-by: Avi Kivity <avi@cloudius-systems.com>
-
Avi Kivity authored
Instead of forcing a reload (and a flush) of all variables in memory, use the minimum required barrier via std::atomic_signal_fence().
Reviewed-by: Tomasz Grabiec <tgrabiec@gmail.com>
Signed-off-by: Avi Kivity <avi@cloudius-systems.com>
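
For comparison, a hedged illustration of the two kinds of barrier (not the exact OSv change): an asm memory clobber makes the compiler forget what it knows about memory, while a signal fence only imposes an ordering constraint and emits no instructions:

    #include <atomic>

    inline void full_compiler_barrier()
    {
        // Forces the compiler to assume all memory may have changed,
        // spilling and reloading cached values around this point.
        asm volatile("" ::: "memory");
    }

    inline void minimal_barrier()
    {
        // A compile-time ordering constraint only; no instruction is
        // emitted and the compiler need not discard everything it knows.
        std::atomic_signal_fence(std::memory_order_acquire);
    }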
-
Vlad Zolotarov authored
Proper memory ordering should be applied to loads and stores of the _begin field. Otherwise they may be reordered with the corresponding stores and loads to/from the _ring array, and in a corner case when the ring is full this may lead to ring data corruption.
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
Reported-by: Nadav Har'El <nyh@cloudius-systems.com>
Signed-off-by: Avi Kivity <avi@cloudius-systems.com>
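
A self-contained single-producer/single-consumer sketch of the ordering being described (the layout and names are assumed, not OSv's ring class):

    #include <atomic>
    #include <cstddef>

    template <typename T, size_t N>
    struct spsc_ring {
        T _ring[N];
        std::atomic<size_t> _begin{0};   // consumer index
        std::atomic<size_t> _end{0};     // producer index

        bool push(const T& v) {
            size_t end = _end.load(std::memory_order_relaxed);
            // Acquire: the store into _ring below must not be moved before
            // this load; when the ring is full, writing before checking
            // _begin could overwrite a slot the consumer has not read yet.
            size_t begin = _begin.load(std::memory_order_acquire);
            if (end - begin == N) {
                return false;            // full
            }
            _ring[end % N] = v;
            _end.store(end + 1, std::memory_order_release);
            return true;
        }

        bool pop(T& v) {
            size_t begin = _begin.load(std::memory_order_relaxed);
            size_t end = _end.load(std::memory_order_acquire);
            if (begin == end) {
                return false;            // empty
            }
            v = _ring[begin % N];
            // Release: only after the slot has been read may the producer
            // observe the advanced _begin and reuse the slot.
            _begin.store(begin + 1, std::memory_order_release);
            return true;
        }
    };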
-
Jani Kokkonen authored
Signed-off-by: Jani Kokkonen <jani.kokkonen@huawei.com>
Signed-off-by: Avi Kivity <avi@cloudius-systems.com>
-
Tomasz Grabiec authored
These functions are used to demarcate a critical section and should follow a contract which says that no operation inside the critical section may be moved before migrate_disable() or after migrate_enable(). These functions are declared inline and the compiler could theoretically move instructions across them. Spotted during code contemplation.
Signed-off-by: Tomasz Grabiec <tgrabiec@cloudius-systems.com>
Signed-off-by: Avi Kivity <avi@cloudius-systems.com>
-
Avi Kivity authored
The lock has been pushed down to a helper class and cannot be accessed here, so remove the assert.
Signed-off-by: Avi Kivity <avi@cloudius-systems.com>
-
- May 16, 2014
-
-
Claudio Fontana authored
Revert to a simple implementation which is just atomic, adding a barrier for the compiler. The store release is used without load acquire.
Signed-off-by: Claudio Fontana <claudio.fontana@huawei.com>
-
Claudio Fontana authored
We start with only a tst-hello.so.
Signed-off-by: Claudio Fontana <claudio.fontana@huawei.com>
-
Claudio Fontana authored
Move driver setup and console creation to arch-setup, and ioapic init for x64 to smp_launch, so that we can remove ifdefs and increase the amount of common code.
Signed-off-by: Claudio Fontana <claudio.fontana@huawei.com>
Reviewed-by: Nadav Har'El <nyh@cloudius-systems.com>
-
Claudio Fontana authored
Allow execution to flow until main_cont so we can reach the backtrace.
Signed-off-by: Claudio Fontana <claudio.fontana@huawei.com>
-
Claudio Fontana authored
We need to move one instruction back from the return address to get the address we want to show. Also format the pointers as long.
Signed-off-by: Claudio Fontana <claudio.fontana@huawei.com>
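
Roughly, printing a frame then looks like this (a sketch with assumed names; on aarch64 every instruction is four bytes, so stepping back by four lands inside the call site):

    #include <cstdio>

    void print_frame(void* return_address)
    {
        // The saved return address points at the instruction after the
        // call; step back one instruction to show the call site itself.
        unsigned long addr = reinterpret_cast<unsigned long>(return_address) - 4;
        printf("  [0x%016lx]\n", addr);
    }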
-
Jani Kokkonen authored
Signed-off-by: Jani Kokkonen <jani.kokkonen@huawei.com>
[claudio: renamed fixup.hh to fault-fixup.hh]
Signed-off-by: Claudio Fontana <claudio.fontana@huawei.com>
-
Jani Kokkonen authored
Implement fault fixup and the backtrace functionality, which is its first simple user.
Signed-off-by: Jani Kokkonen <jani.kokkonen@huawei.com>
[claudio: added elf changes to allow lookup and demangling to work]
Signed-off-by: Claudio Fontana <claudio.fontana@huawei.com>
-
Jani Kokkonen authored
Signed-off-by: Jani Kokkonen <jani.kokkonen@huawei.com>
Signed-off-by: Claudio Fontana <claudio.fontana@huawei.com>
-
Claudio Fontana authored
Signed-off-by: Claudio Fontana <claudio.fontana@huawei.com>
-
Jani Kokkonen authored
Generate the exception frame.
Signed-off-by: Jani Kokkonen <jani.kokkonen@huawei.com>
Signed-off-by: Claudio Fontana <claudio.fontana@huawei.com>
-
Claudio Fontana authored
Signed-off-by: Claudio Fontana <claudio.fontana@huawei.com>
-
Claudio Fontana authored
Signed-off-by: Claudio Fontana <claudio.fontana@huawei.com>
-
Claudio Fontana authored
arch::init_on_cpu should initialize the gic cpu interface, but not for the boot CPU, for which this has already been done.
Signed-off-by: Claudio Fontana <claudio.fontana@huawei.com>
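
A hypothetical sketch of the guard (the helper below is invented for illustration; the actual GIC driver entry point is not named in this log):

    void gic_enable_cpu_interface();   // hypothetical wrapper around the GICC setup

    void init_on_cpu(unsigned cpu_id)
    {
        if (cpu_id != 0) {             // CPU 0 is the boot CPU: already done in early boot
            gic_enable_cpu_interface();
        }
        // ... rest of the per-CPU initialization ...
    }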
-
Claudio Fontana authored
We need to do that because load/store instructions behave differently with Device-nGnRnE memory.
Signed-off-by: Claudio Fontana <claudio.fontana@huawei.com>
-
Claudio Fontana authored
Signed-off-by: Claudio Fontana <claudio.fontana@huawei.com>
-
Claudio Fontana authored
Signed-off-by: Claudio Fontana <claudio.fontana@huawei.com>
-
Takuya ASADA authored
Add a qemu command path argument to test different versions of qemu.
Signed-off-by: Takuya ASADA <syuu@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-
Nadav Har'El authored
thread::current()->thread_clock() returns the CPU time consumed by this thread. A thread that wishes to measure the amount of CPU time consumed by some short section of code will want this clock to have high resolution, but in the existing code it was only updated on context switches, so shorter durations could not be measured with it. This patch fixes thread_clock() to also add the time that has passed since the current time slice started.

When running thread_clock() on *another* thread (not thread::current()), we still return a CPU time snapshot from the last context switch - even if the thread happens to be running now (on another CPU). Fixing that case is quite difficult (and will probably require additional memory-ordering guarantees), and anyway not very important: usually we don't need a high-resolution estimate of a different thread's CPU time.

Fixes #302.

Reviewed-by: Gleb Natapov <gleb@cloudius-systems.com>
Signed-off-by: Nadav Har'El <nyh@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
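
The idea as a self-contained sketch (std::chrono stands in for the scheduler's clock, and the field names are assumed, not OSv's):

    #include <chrono>

    struct thread_sketch {
        using clock = std::chrono::steady_clock;
        clock::duration _total_cpu_time{};   // accumulated at every context switch
        clock::time_point _running_since{};  // start of the current time slice
        bool _is_current = true;             // stands in for "this == thread::current()"

        clock::duration thread_clock() const {
            auto t = _total_cpu_time;
            if (_is_current) {
                // Add the time spent in the slice that is still running,
                // so short sections can be measured with high resolution.
                t += clock::now() - _running_since;
            }
            return t;
        }
    };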
-
Glauber Costa authored
Again, we are currently calling a function every time we disable/enable preemption (actually a pair of functions), where simple mov instructions would do.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
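
What "simple mov instructions" amounts to, sketched with assumed names (a real enable path would typically also re-check whether a reschedule became pending, which is omitted here):

    __thread unsigned preempt_counter;   // assumed per-thread counter, visible to callers

    inline void preempt_disable_sketch() { ++preempt_counter; }   // one add on a TLS slot
    inline void preempt_enable_sketch()  { --preempt_counter; }   // no out-of-line call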
-
Glauber Costa authored
We are heavily using this function to grab the address of the current thread. That means a function call will be issued every time that is done, where a simple mov instruction would do. For objects outside the main ELF, we don't want this to be inlined, since that would mean the resolution would have to go through an expensive __tls_get_addr. So we don't present the symbol as inline for them, and make sure the out-of-line symbol is always generated.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
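
Sketched with hypothetical names (the guard macro is illustrative; the following commit introduces such a per-object define, but its actual identifier is not given here):

    class thread;
    extern __thread thread* s_current;   // assumed TLS slot holding the current thread

    #ifdef BUILDING_MAIN_ELF             // hypothetical guard, not the actual identifier
    // Core code gets the cheap inline form: a direct TLS read, i.e. a mov.
    inline thread* current() { return s_current; }
    #else
    // Shared objects only see the declaration and call the out-of-line copy,
    // avoiding a __tls_get_addr round trip at every call site.
    thread* current();
    #endif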
-
Glauber Costa authored
It is sometimes useful to programmatically learn whether something will end up in the main ELF or in a shared library. This is the role usually performed by _KERNEL. However, using that proved quite difficult when Asias was trying to conditionally compile some zfs tools, because a lot of our headers expect _KERNEL to be defined and they are also included in the tools build. Too messy, unfortunately. Because of that, I am defining a new constant that will be available for every object that will end up in the main ELF, but won't be defined otherwise.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
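
Usage would look roughly like this (the macro name is hypothetical; the commit does not spell out the chosen identifier):

    #ifdef MAIN_ELF_OBJECT                // hypothetical name for the new per-object define
    constexpr bool in_main_elf = true;
    #else
    constexpr bool in_main_elf = false;   // e.g. the zfs tools built as standalone objects
    #endif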
-
Jaspal Singh Dhillon authored
Remove the unnecessary default vnc :1 argument during make. It also prevented make from proceeding if another VM was already using :1.
Reviewed-by: Nadav Har'El <nyh@cloudius-systems.com>
Signed-off-by: Jaspal Singh Dhillon <jaspal.iiith@gmail.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-
Asias He authored
    $ ./scripts/run.py -e 'zpool.so list'
    OSv v0.08-81-gc3422a7
    eth0: 192.168.122.15
    NAME   SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
    osv   9.94G   148M  9.79G     1%  1.00x  ONLINE  -

    $ ./scripts/run.py -e 'zfs.so list'
    OSv v0.08-81-gc3422a7
    eth0: 192.168.122.15
    NAME     USED  AVAIL  REFER  MOUNTPOINT
    osv      148M  9.64G    32K  /
    osv/zfs  148M  9.64G   148M  /zfs

Enable it by compiling the missing functions.

Reviewed-by: Glauber Costa <glommer@cloudius-systems.com>
Signed-off-by: Asias He <asias@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-
KANATSU Minoru authored
Signed-off-by: KANATSU Minoru <icc.pot.tyew272@gmail.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-
KANATSU Minoru authored
Add a symbol needed for porting CRuby; this function depends on execv().
Signed-off-by: KANATSU Minoru <icc.pot.tyew272@gmail.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-
KANATSU Minoru authored
Add a symbol needed for porting CRuby.
Signed-off-by: KANATSU Minoru <icc.pot.tyew272@gmail.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-
- May 15, 2014
-
-
Tomasz Grabiec authored
The version built on muninn does not have the leading 'tracepoint_base::log_backtrace(trace_record*, unsigned char*&)' frame.
Signed-off-by: Tomasz Grabiec <tgrabiec@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-
Tomasz Grabiec authored
They are important in some cases, so we should not hide them. Lambda functions are reported like that.
Signed-off-by: Tomasz Grabiec <tgrabiec@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-
Nadav Har'El authored
Our "Makefile" runs build.mk with "make -r", to disable all the default rules. This flag is saved by make in the MAKEFLAGS variable, which is passed on to child processes. Unfortunately, it is also passed to the makefiles of individual modules (apps/*) when we build specific modules. These makefiles are completely stand-alone, and are normally written to assume that the default rules (e.g., .c->.o) do exist. They will fail when run with MAKEFLAGS=r. We recently noticed both redis-memonly and memcached to exhibit this problem and committed workarounds (see 914bef6f75d9f3e7f268aecdb00a023b86439b85 and e2f7ba1b1d80bb5e89d7fca71b4994c7de15ed4e in apps.git). However, the right fix is for our build.mk to acknowledge that the modules' makefiles are stand-alone, and must not inherit our prefered flags like "-r". So this patch clears the MAKEFLAGS variable when calling module.py (module.py will later invoke "make module" in the module's directory). Reviewed-by:
Tomasz Grabiec <tgrabiec@gmail.com> Signed-off-by:
Nadav Har'El <nyh@cloudius-systems.com> Signed-off-by:
Pekka Enberg <penberg@cloudius-systems.com>
-
Vlad Zolotarov authored
Reviewed-by: Nadav Har'El <nyh@cloudius-systems.com>
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-
Takuya ASADA authored
Signed-off-by: Takuya ASADA <syuu@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-
Dmitry Fleytman authored
The RX descriptor length is a 14-bit field; by allocating 16K buffers we overflow it, and the device treats the RX descriptor buffer length as zero.
Signed-off-by: Dmitry Fleytman <dmitry@daynix.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
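
The arithmetic behind the bug, as a tiny standalone check:

    #include <cstdio>

    int main()
    {
        unsigned buf_len = 16 * 1024;                  // 16K RX buffer
        unsigned field   = buf_len & ((1u << 14) - 1); // what a 14-bit length field retains
        printf("%u -> %u\n", buf_len, field);          // prints "16384 -> 0"
        return 0;
    }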
-