- May 21, 2014
-
-
Gleb Natapov authored
Java sometimes uses accesses to a PROT_NONE region to stop threads, so it is worthwhile to be able to catch this as fast as possible without taking vma_list_mutex. The patch does this by setting a reserved bit on all ptes in a PROT_NONE VMA, which causes the RSVD bit to be set in the page fault error code. Checking that bit is enough to know that the access is to a valid VMA but permission is lacking. Signed-off-by:
Gleb Natapov <gleb@cloudius-systems.com> Signed-off-by:
Avi Kivity <avi@cloudius-systems.com>
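A minimal sketch of the fast-path check described above, assuming the x86-64 page-fault error-code layout (bit 3 is set when a reserved bit was found in a page-table entry); this is not OSv's actual fault handler:

```cpp
#include <cstdint>

constexpr uint64_t pf_rsvd = 1ull << 3;   // reserved-bit violation in the error code

// If the fault was caused by a reserved bit, the address lies in a PROT_NONE
// VMA that we marked ourselves, so we can report "valid mapping, permission
// lacking" without taking vma_list_mutex.
inline bool is_prot_none_fault(uint64_t error_code)
{
    return (error_code & pf_rsvd) != 0;
}
```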
-
Gleb Natapov authored
Currently the address we get from a pte includes the reserved bits, and setting it zeroes the available and reserved bits. Signed-off-by:
Gleb Natapov <gleb@cloudius-systems.com> Signed-off-by:
Avi Kivity <avi@cloudius-systems.com>
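An illustrative sketch of the masking this implies, assuming 4KiB pages on x86-64 where the physical address occupies bits 12..51 of a pte; the helper names are hypothetical:

```cpp
#include <cstdint>

constexpr uint64_t pte_addr_mask = 0x000ffffffffff000ull;  // bits 12..51

// Extract the physical address, dropping the low flag bits and the high
// available/reserved bits.
inline uint64_t pte_addr(uint64_t pte)
{
    return pte & pte_addr_mask;
}

// Update the address while preserving the flags and available/reserved bits.
inline uint64_t pte_set_addr(uint64_t pte, uint64_t phys)
{
    return (pte & ~pte_addr_mask) | (phys & pte_addr_mask);
}
```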
-
Gleb Natapov authored
Later patches will use a reserved bit, so we want to be sure that it is available. Signed-off-by:
Gleb Natapov <gleb@cloudius-systems.com> Signed-off-by:
Avi Kivity <avi@cloudius-systems.com>
-
Prasad Joshi authored
At the moment, defining DEBUG_VFS breaks the OSv build. The patch ensures OSv compiles and that the correct debug logs are emitted. Reviewed-by:
Glauber Costa <glommer@cloudius-systems.com> Signed-off-by:
Prasad Joshi <prasadjoshi.linux@gmail.com> Signed-off-by:
Pekka Enberg <penberg@cloudius-systems.com>
-
Glauber Costa authored
Currently covering three cases: single-thread allocation, n-thread allocation, and n-thread allocation with frees on a different CPU. We should extend it with more cases. This test was designed to run on Linux as well; let's keep it that way. Signed-off-by:
Glauber Costa <glommer@cloudius-systems.com> Signed-off-by:
Pekka Enberg <penberg@cloudius-systems.com>
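A minimal sketch of the kind of multi-threaded allocation test described above (the thread counts and allocation sizes are illustrative, not those of the actual test):

```cpp
#include <cstdlib>
#include <thread>
#include <vector>

int main()
{
    constexpr int nthreads = 4;
    constexpr int iterations = 100000;
    std::vector<std::thread> workers;
    for (int t = 0; t < nthreads; ++t) {
        workers.emplace_back([] {
            for (int i = 0; i < iterations; ++i) {
                void* p = malloc(64);
                // The cross-CPU variant would hand the pointer to a thread
                // pinned elsewhere; here we simply free it locally.
                free(p);
            }
        });
    }
    for (auto& w : workers) {
        w.join();
    }
    return 0;
}
```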
-
- May 20, 2014
-
-
Boqun Feng authored
According to http://gcc.gnu.org/onlinedocs/libstdc++/manual/abi.html , the versions of the standard libraries are tightly coupled to the compiler version, so linkage errors occur when the libstdc++ in the *external* submodule is used with GCC 4.9.0. Since the build system mixes the two, the standard libraries used at link time should be switchable, to support a GCC whose version does not match external's. *_env variables are introduced: to build an image with the host's standard libraries, run `make build_env=host'. For fine-grained settings, use gcc_lib_env and cxx_lib_env. Reviewed-by:
Nadav Har'El <nyh@cloudius-systems.com> Signed-off-by:
Boqun Feng <boqun.feng@linux.vnet.ibm.com> Signed-off-by:
Pekka Enberg <penberg@cloudius-systems.com>
-
Takuya ASADA authored
Linked to wiki page. Signed-off-by:
Takuya ASADA <syuu@cloudius-systems.com> Signed-off-by:
Pekka Enberg <penberg@cloudius-systems.com>
-
- May 19, 2014
-
-
Glauber Costa authored
As Nadav pointed out during review, this macro could use a bit more work, to take a single parameter instead of two. That is what this patch does. Unfortunately, just pasting __COUNTER__ doesn't work because of preprocessor rules, and we need some indirection to get it working. Also, visibility "hidden" can go, because it is already implied by "static". The remaining problem is that gcc does not like unreferenced static variables, which is solved by the "used" attribute. From the gcc docs about "used": "This attribute, attached to a variable with the static storage, means that the variable must be emitted even if it appears that the variable is not referenced." Signed-off-by:
Glauber Costa <glommer@cloudius-systems.com> Signed-off-by:
Avi Kivity <avi@cloudius-systems.com>
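An illustrative sketch of the indirection mentioned above (the macro names are hypothetical, not the ones from the patch): pasting __COUNTER__ directly would concatenate the literal token, so an extra expansion level is required.

```cpp
#define CONCAT_IMPL(a, b) a##b
#define CONCAT(a, b) CONCAT_IMPL(a, b)   // expand the arguments before pasting

// "static" already implies hidden visibility; the "used" attribute forces gcc
// to emit the otherwise unreferenced variable.
#define UNIQUE_MARKER(prefix) \
    static int CONCAT(prefix, __COUNTER__) __attribute__((used))

UNIQUE_MARKER(probe_);   // expands to e.g.: static int probe_0 __attribute__((used));
```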
-
Claudio Fontana authored
Signed-off-by:
Claudio Fontana <claudio.fontana@huawei.com> Cc: Avi Kivity <avi@cloudius-systems.com> Signed-off-by:
Avi Kivity <avi@cloudius-systems.com>
-
Jaspal Singh Dhillon authored
Fixes https://github.com/cloudius-systems/mgmt/issues/33. If a user runs 'java xyz' for a class that does not exist, a simple message informing the user about the missing class should suffice, instead of printing a stack trace. Signed-off-by:
Jaspal Singh Dhillon <jaspal.iiith@gmail.com> Reviewed-by:
Tomasz Grabiec <tgrabiec@gmail.com> Signed-off-by:
Pekka Enberg <penberg@cloudius-systems.com>
-
Tomasz Grabiec authored
If a thread which invokes flush_tlb_all() is migrated between the call to flush_tlb_local() and send_allbutself(), the CPU onto which it was migrated will not get its TLB flushed. Spotted during code inspection. Signed-off-by:
Tomasz Grabiec <tgrabiec@cloudius-systems.com> Signed-off-by:
Avi Kivity <avi@cloudius-systems.com>
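A hedged sketch of the fix this implies (the function names follow the commit text; the migration primitives are stubbed and illustrative): keeping the thread from migrating for the whole sequence ensures the CPU it runs on is the one whose TLB was flushed locally.

```cpp
// Stubbed-out primitives; the point is the ordering, not the real OSv code.
static void migrate_disable() { /* e.g. pin the thread to this CPU */ }
static void migrate_enable()  { /* allow migration again */ }
static void flush_tlb_local() { /* flush the current CPU's TLB */ }
static void send_allbutself() { /* IPI every other CPU to flush its TLB */ }

void flush_tlb_all()
{
    // Without this, the thread could migrate between the two calls below and
    // land on a CPU whose TLB is never flushed.
    migrate_disable();
    flush_tlb_local();
    send_allbutself();
    migrate_enable();
}
```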
-
Tomasz Grabiec authored
memory_order_acquire does not prevent previous stores from moving past the barrier, so if the _migration_lock_counter incrementation is split into two accesses, the following is possible:

    tmp = _migration_lock_counter;
    atomic_signal_fence(std::memory_order_acquire); // load-load, load-store
    <critical instructions here>
    _migration_lock_counter = tmp + 1; // was moved past the barrier

To prevent this, we need to order previous stores with future loads and stores, which is given only by std::memory_order_seq_cst. Signed-off-by:
Tomasz Grabiec <tgrabiec@cloudius-systems.com> Signed-off-by:
Avi Kivity <avi@cloudius-systems.com>
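A hedged sketch of the resulting migrate_disable/enable pair (the counter name follows the commit text; everything else is illustrative): a seq_cst signal fence orders earlier stores against later loads and stores with respect to the compiler, so the counter update cannot drift into the critical section.

```cpp
#include <atomic>

static __thread int _migration_lock_counter = 0;

inline void migrate_disable()
{
    ++_migration_lock_counter;
    std::atomic_signal_fence(std::memory_order_seq_cst);  // compiler barrier, both directions
}

inline void migrate_enable()
{
    std::atomic_signal_fence(std::memory_order_seq_cst);  // compiler barrier, both directions
    --_migration_lock_counter;
}
```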
-
Vlad Zolotarov authored
Signed-off-by:
Vlad Zolotarov <vladz@cloudius-systems.com> Signed-off-by:
Pekka Enberg <penberg@cloudius-systems.com>
-
- May 18, 2014
-
-
Avi Kivity authored
Take the migration lock for pinned threads instead of separately checking whether they are pinned. Reviewed-by:
Nadav Har'El <nyh@cloudius-systems.com> Signed-off-by:
Avi Kivity <avi@cloudius-systems.com>
-
Avi Kivity authored
Instead of forcing a reload (and a flush) of all variables in memory, use the minimum required barrier via std::atomic_signal_fence(). Reviewed-by:
Tomasz Grabiec <tgrabiec@gmail.com> Signed-off-by:
Avi Kivity <avi@cloudius-systems.com>
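An illustrative contrast between the two approaches (not the patch itself): an asm "memory" clobber forces the compiler to flush and reload everything, while a signal fence only constrains reordering as strongly as requested.

```cpp
#include <atomic>

inline void full_compiler_barrier()
{
    asm volatile("" ::: "memory");   // every value cached in registers must be reloaded afterwards
}

inline void acquire_only_barrier()
{
    // Orders earlier loads against later loads and stores, without forcing
    // everything the compiler holds in registers to be flushed and reloaded.
    std::atomic_signal_fence(std::memory_order_acquire);
}
```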
-
Vlad Zolotarov authored
Proper memory ordering should be applied to loads and stores of the _begin field. Otherwise they may be reordered with the corresponding stores and loads to/from the _ring array, and in the corner case when the ring is full this may lead to ring data corruption. Signed-off-by:
Vlad Zolotarov <vladz@cloudius-systems.com> Reported-by:
Nadav Har'el <nyh@cloudius-systems.com> Signed-off-by:
Avi Kivity <avi@cloudius-systems.com>
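A hedged single-producer/single-consumer ring sketch showing the ordering in question (the _begin and _ring names follow the commit text; the rest of the layout is illustrative): the index updates use release stores and acquire loads so they cannot be reordered with the accesses to _ring.

```cpp
#include <atomic>
#include <cstddef>

template <typename T, size_t N>
class ring_spsc {
    T _ring[N];
    std::atomic<size_t> _begin{0};   // consumer index
    std::atomic<size_t> _end{0};     // producer index
public:
    bool push(const T& v) {
        size_t end = _end.load(std::memory_order_relaxed);
        if (end - _begin.load(std::memory_order_acquire) == N) {
            return false;            // full: the consumer has not advanced _begin yet
        }
        _ring[end % N] = v;          // must not be reordered past the store to _end
        _end.store(end + 1, std::memory_order_release);
        return true;
    }
    bool pop(T& v) {
        size_t begin = _begin.load(std::memory_order_relaxed);
        if (_end.load(std::memory_order_acquire) == begin) {
            return false;            // empty
        }
        v = _ring[begin % N];        // must not be reordered past the store to _begin
        _begin.store(begin + 1, std::memory_order_release);
        return true;
    }
};
```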
-
Jani Kokkonen authored
Signed-off-by:
Jani Kokkonen <jani.kokkonen@huawei.com> Signed-off-by:
Avi Kivity <avi@cloudius-systems.com>
-
Tomasz Grabiec authored
These functions are used to demarcate a critical section and should follow a contract which says that no operation inside the critical section may be moved before migrate_disable() or after migrate_enable(). The functions are declared inline, and the compiler could theoretically move instructions across them. Spotted during code contemplation. Signed-off-by:
Tomasz Grabiec <tgrabiec@cloudius-systems.com> Signed-off-by:
Avi Kivity <avi@cloudius-systems.com>
-
Avi Kivity authored
The lock has been pushed down to a helper class and cannot be accessed here, so remove the assert. Signed-off-by:
Avi Kivity <avi@cloudius-systems.com>
-
- May 16, 2014
-
-
Claudio Fontana authored
revert to a simple implementation which is just atomic, adding a barrier for the compiler. The store release is used without load acquire. Signed-off-by:
Claudio Fontana <claudio.fontana@huawei.com>
-
Claudio Fontana authored
we start with only a tst-hello.so. Signed-off-by:
Claudio Fontana <claudio.fontana@huawei.com>
-
Claudio Fontana authored
move driver setup and console creation to arch-setup, and ioapic init for x64 to smp_launch, so that we can remove ifdefs and increase the amount of common code. Signed-off-by:
Claudio Fontana <claudio.fontana@huawei.com> Reviewed-by:
Nadav Har'El <nyh@cloudius-systems.com>
-
Claudio Fontana authored
allow execution to flow until main_cont so we can reach the backtrace. Signed-off-by:
Claudio Fontana <claudio.fontana@huawei.com>
-
Claudio Fontana authored
we need to move one instruction back from the return address to get the address we want to show. Also format the pointers as long. Signed-off-by:
Claudio Fontana <claudio.fontana@huawei.com>
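An illustrative sketch of that adjustment, assuming fixed 4-byte instructions as on AArch64 (the helper and output format are hypothetical): the return address points at the instruction after the call, so stepping back one instruction attributes the frame to the call site.

```cpp
#include <cstdint>
#include <cstdio>

void print_frame(void* return_address)
{
    unsigned long call_site = reinterpret_cast<uintptr_t>(return_address) - 4;
    printf("  [backtrace] 0x%016lx\n", call_site);   // pointers formatted as long
}
```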
-
Jani Kokkonen authored
Signed-off-by:
Jani Kokkonen <jani.kokkonen@huawei.com> [claudio: renamed fixup.hh to fault-fixup.hh] Signed-off-by:
Claudio Fontana <claudio.fontana@huawei.com>
-
Jani Kokkonen authored
implement fixup fault and the backtrace functionality which is its first simple user. Signed-off-by:
Jani Kokkonen <jani.kokkonen@huawei.com> [claudio: added elf changes to allow lookup and demangling to work] Signed-off-by:
Claudio Fontana <claudio.fontana@huawei.com>
-
Jani Kokkonen authored
Signed-off-by:
Jani Kokkonen <jani.kokkonen@huawei.com> Signed-off-by:
Claudio Fontana <claudio.fontana@huawei.com>
-
Claudio Fontana authored
Signed-off-by:
Claudio Fontana <claudio.fontana@huawei.com>
-
Jani Kokkonen authored
generate exception frame Signed-off-by:
Jani Kokkonen <jani.kokkonen@huawei.com> Signed-off-by:
Claudio Fontana <claudio.fontana@huawei.com>
-
Claudio Fontana authored
Signed-off-by:
Claudio Fontana <claudio.fontana@huawei.com>
-
Claudio Fontana authored
Signed-off-by:
Claudio Fontana <claudio.fontana@huawei.com>
-
Claudio Fontana authored
arch::init_on_cpu should initialize the gic cpu interface, but not for the boot CPU, for which this has already been done. Signed-off-by:
Claudio Fontana <claudio.fontana@huawei.com>
-
Claudio Fontana authored
we need to do that because load/store instructions behave differently with Device-nGnRnE memory. Signed-off-by:
Claudio Fontana <claudio.fontana@huawei.com>
-
Claudio Fontana authored
Signed-off-by:
Claudio Fontana <claudio.fontana@huawei.com>
-
Claudio Fontana authored
Signed-off-by:
Claudio Fontana <claudio.fontana@huawei.com>
-
Takuya ASADA authored
Add a qemu command path argument so that different versions of qemu can be tested. Signed-off-by:
Takuya ASADA <syuu@cloudius-systems.com> Signed-off-by:
Pekka Enberg <penberg@cloudius-systems.com>
-
Nadav Har'El authored
thread::current()->thread_clock() returns the CPU time consumed by this thread. A thread that wishes to measure the amount of CPU time consumed by some short section of code will want this clock to have high resolution, but in the existing code it was only updated on context switches, so shorter durations could not be measured with it. This patch fixes thread_clock() to also add the time that has passed since the current time slice started. When running thread_clock() on *another* thread (not thread::current()), we still return a cpu time snapshot from the last context switch - even if the thread happens to be running now (on another CPU). Fixing that case is quite difficult (and will probably require additional memory-ordering guarantees), and anyway not very important: usually we don't need a high-resolution estimate of a different thread's cpu time. Fixes #302. Reviewed-by:
Gleb Natapov <gleb@cloudius-systems.com> Signed-off-by:
Nadav Har'El <nyh@cloudius-systems.com> Signed-off-by:
Pekka Enberg <penberg@cloudius-systems.com>
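A hedged sketch of the idea (the field names are illustrative, not OSv's actual ones): for the calling thread, add the time elapsed in the current time slice to the total accumulated at the last context switch.

```cpp
#include <chrono>

struct thread_times {
    std::chrono::nanoseconds total_cpu{0};                    // accumulated at the last context switch
    std::chrono::steady_clock::time_point running_since{};    // start of the current time slice

    // High-resolution CPU time for the *current* thread; for another thread
    // we would return total_cpu, the snapshot from its last context switch.
    std::chrono::nanoseconds thread_clock() const {
        return total_cpu +
               std::chrono::duration_cast<std::chrono::nanoseconds>(
                   std::chrono::steady_clock::now() - running_since);
    }
};
```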
-
Glauber Costa authored
Again, we are currently calling a function every time we disable/enable preemption (actually a pair of functions), where simple mov instructions would do. Signed-off-by:
Glauber Costa <glommer@cloudius-systems.com> Signed-off-by:
Pekka Enberg <penberg@cloudius-systems.com>
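An illustrative sketch of the difference (the counter name and its thread-local placement are hypothetical): an inlined increment of a counter compiles down to a single memory access, while an out-of-line version pays a call and return on every disable/enable pair.

```cpp
extern __thread unsigned preempt_counter;   // hypothetical per-thread counter

inline void preempt_disable() { ++preempt_counter; }   // a single add/mov on the fast path
inline void preempt_enable()  { --preempt_counter; }
```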
-
Glauber Costa authored
We are heavily using this function to grab the address of the current thread. That means a function call will be issued every time that is done, where a simple mov instruction would do. For objects outside the main ELF, we don't want it to be inlined, since the resolution would then have to go through an expensive __tls_get_addr. So we don't present the symbol as inline for them, and make sure the symbol is always generated. Signed-off-by:
Glauber Costa <glommer@cloudius-systems.com> Signed-off-by:
Pekka Enberg <penberg@cloudius-systems.com>
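A hedged sketch of the shape this takes (the guard macro and the thread-local variable are hypothetical; OSv's actual declarations differ): inside the main ELF the current-thread pointer is read through an inline accessor and compiles to a single mov, while other objects call an out-of-line definition instead of paying for __tls_get_addr at every use.

```cpp
namespace sched { struct thread; }

extern __thread sched::thread* s_current;   // hypothetical thread-local slot

#ifdef IN_MAIN_ELF                           // hypothetical "built into the kernel" guard
inline sched::thread* current_thread() { return s_current; }  // one mov via the TLS base
#else
sched::thread* current_thread();             // defined out of line in the main ELF
#endif
```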
-
Glauber Costa authored
It is sometimes useful to programmatically learn if something will end up in the main ELF or in a shared library. This is the function usually performed by _KERNEL. However, using that proved quite difficult when Asias was trying to conditionally compile some zfs tools, because a lot of our headers expect _KERNEL to be defined and they are also included in the tools build. Too messy, unfortunately. Because of that, I am defining a new constant that will be available for every object that will end up in the main ELF, but won't be defined otherwise. Signed-off-by:
Glauber Costa <glommer@cloudius-systems.com> Signed-off-by:
Pekka Enberg <penberg@cloudius-systems.com>
-