- Feb 06, 2014
-
Tomasz Grabiec authored
It does not belong in the isolate API, but in the command-line java starter.
-
Tomasz Grabiec authored
Auto-reformat performed with IntelliJ (Alt+Ctrl+L).
-
Tomasz Grabiec authored
Interrupting a thread that is blocked waiting for an isolate started via runSync() to complete should interrupt the isolate. The waiter should return only after the isolate has terminated. I think this is what users would expect to happen when interrupting a foreground process.
-
Tomasz Grabiec authored
ContextIsolator is a generic API for starting new applications inside a single JVM. RunJava should be just a command-line starter which uses that API. I tried to change as little as possible during the code move so that any changes to that logic are clearly visible. The changes which adapt the code to a more generic API are in the next patches.
-
Tomasz Grabiec authored
The j.u.l framework allows only one log manager to exist. To isolate logging configurations, we need to install our own log manager which delegates to context-local log managers. See #172. For a fully isolated logging configuration, system properties also need to be isolated. This will come in the following patches.
-
Tomasz Grabiec authored
It will allow the use of automatic dependency management. It slows down the build a bit: an incremental build takes 3 seconds longer than previously, and the first build takes longer due to downloading Maven artifacts, which happens once per machine.
-
Tomasz Grabiec authored
There is a convention to organize Java modules in the following way:
src/main/java - source root for production code
src/test/java - source root for test code
-
Tomasz Grabiec authored
I need static-initialization-like semantics in several places. This abstraction makes it easier.
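A C++ analogue of such a static-initialization-like helper, as a sketch (assuming the abstraction's job is one-time, thread-safe, on-first-use initialization; all names here are illustrative):

```cpp
#include <mutex>

// Initializes a value exactly once, on first use, thread-safely:
// the same guarantee C++ gives function-local statics.
template <typename T>
class lazy {
public:
    explicit lazy(T (*init)()) : _init(init) {}
    T& get() {
        std::call_once(_once, [this] { _value = _init(); });
        return _value;
    }
private:
    T (*_init)();
    std::once_flag _once;
    T _value{};
};

// Usage: lazy<int> answer(+[] { return 42; }); answer.get() returns 42.
```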
-
Tomasz Grabiec authored
We aim to support running multiple isolated Java applications in one JVM. Some work has already been done to isolate system class loaders. There is much more to it than that. Isolated applications (aka Contexts) do not map 1-1 to class loaders. One context may have many different class loaders. This change extracts context-specific logic to separate classes as a base for further additions.
-
Tomasz Grabiec authored
env->FindClass() triggers class initialization, which may throw exceptions. Instead of printing the misleading information that the class was not found, we should print the exception stack trace.
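A minimal sketch of the pattern in JNI terms (the function and class names are illustrative, not the actual launcher code):

```cpp
#include <jni.h>
#include <cstdio>

// After FindClass() fails, check for a pending exception: a throwing
// static initializer also makes FindClass() return NULL, and the
// pending exception tells the real story.
void load_main_class(JNIEnv* env, const char* name)
{
    jclass cls = env->FindClass(name);
    if (cls == nullptr) {
        if (env->ExceptionOccurred()) {
            env->ExceptionDescribe();  // prints the exception stack trace
            env->ExceptionClear();
        } else {
            fprintf(stderr, "class %s not found\n", name);
        }
    }
}
```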
-
Tomasz Grabiec authored
Currently, debug messages are not printed on the console by default.
-
Zhi Yong Wu authored
It will allow the user to specify the VNC port number, because port "1" is often occupied by other applications.
Signed-off-by: Zhi Yong Wu <zwu.kernel@gmail.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-
Glauber Costa authored
The JVM may unmap certain areas of the heap completely, which was confirmed by code inspection by Gleb. In that case, the current balloon code will break. This is because we were deleting the vma from finish_move(), and recreating the old mapping implicitly in the process. With this new patch, the tear-down of the JVM balloon mapping is done by a separate function. Unmapping or evacuating the region won't trigger it. It still needs to communicate to the balloon code that this address is out of the balloons list. We do that by calling the page fault handler with an empty frame; jvm_balloon_fault is patched to interpret an empty frame correctly.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-
Pekka Enberg authored
Add format specifier support to abort() to make it easier to produce useful error messages.
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
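A minimal sketch of what a printf-style abort() can look like (illustrative only, not OSv's actual implementation):

```cpp
#include <cstdarg>
#include <cstdio>
#include <cstdlib>

// Variadic abort(): format a message like printf, then terminate.
[[noreturn]] void abort(const char* fmt, ...)
{
    va_list ap;
    va_start(ap, fmt);
    vfprintf(stderr, fmt, ap);
    va_end(ap);
    std::abort();
}

// Usage: abort("bad page at %p (error code %d)\n", addr, err);
```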
-
Nadav Har'El authored
Also fix concurrent use of _module_index_list (for the per-module TLS). Use a new _module_index_list mutex to protect it. We could probably have done something with the RCU instead, but just adding a new mutex is a lot easier.
Signed-off-by: Nadav Har'El <nyh@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-
Nadav Har'El authored
After the above patches, one race remains in the dynamic linker: if an object is *unloaded* while some symbol resolution or object iteration (dl_iterate_phdr) is in progress, the function in progress may reach this object after it is already unmapped from memory, and crash. Therefore, we need to delay unmapping of objects while any object iteration is going on. We need to allow the object to be deleted from the _modules and _files lists (so that new calls will not find it) but temporarily delay the actual freeing of the object's memory. The cleanest way to achieve this would have been to increment each module's reference count in the RCU section of modules_get(), so they won't get deleted while still in use. However, this would significantly slow down users like backtrace() with dozens of atomic operations. So we chose a different solution: keep a counter _modules_delete_disable, which when non-zero causes all module deletion to be delayed until the counter drops back to zero. with_modules() now only needs to increment this single counter, not every separate module. Fixes #176.
Signed-off-by: Nadav Har'El <nyh@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
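A simplified sketch of the deferred-deletion idea (names and structure are illustrative, not elf.cc verbatim):

```cpp
#include <mutex>
#include <vector>

struct object { /* stand-in for the dynamic linker's object */ };

static int modules_delete_disable_count;      // > 0: deletions are parked
static std::vector<object*> deferred_deletes;
static std::mutex defer_mutex;

// Called by with_modules() around any iteration over the module list.
void module_delete_disable()
{
    std::lock_guard<std::mutex> g(defer_mutex);
    ++modules_delete_disable_count;
}

void module_delete_enable()
{
    std::vector<object*> to_free;
    {
        std::lock_guard<std::mutex> g(defer_mutex);
        if (--modules_delete_disable_count == 0)
            to_free.swap(deferred_deletes);   // nobody iterating: free now
    }
    for (auto* o : to_free)
        delete o;   // the object's memory is finally released here
}

// Unload path: if an iteration is in progress, park the object instead.
void destroy_or_defer(object* o)
{
    std::lock_guard<std::mutex> g(defer_mutex);
    if (modules_delete_disable_count > 0)
        deferred_deletes.push_back(o);
    else
        delete o;
}
```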
-
Nadav Har'El authored
In the current code, when a shared-object's reference count drops to zero, shared_ptr<object> calls delete on this object, which calls the object's destructor, which (see file::~file) calls program::remove_object() to remove this object from the program's _files list. This order was always awkward and unexpected, and I even had a comment in ~file noting that "we don't delete(ef) here - the contrary, delete(ef) calls us". But for the next patch, we can no longer live with this order, which is not just awkward but also wrong: In the next patch we'll want a shared object to be immediately removed from the program (so it is no longer found when resolving symbols, etc.), but the actual deletion of this object to be delayed till a more convenient time. For this to work, we cannot have the deletion of the object start before its removal from the program: We need the opposite - the removal of the object from the program will delete the object. Luckily, it's easy to do this properly using a shared_ptr deleter lambda. This tells shared_ptr to call remove_object(), not delete, when the reference count drops to zero.
Signed-off-by: Nadav Har'El <nyh@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
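The deleter trick in miniature (a sketch; the class shapes and names are hypothetical):

```cpp
#include <memory>
#include <vector>

class object { /* an ELF object */ };

class program {
public:
    // Hand out shared_ptrs whose deleter calls back into the program,
    // so "refcount hit zero" means "remove from _files, then delete",
    // not the other way around.
    std::shared_ptr<object> instantiate(object* obj) {
        return std::shared_ptr<object>(obj,
            [this](object* o) { remove_object(o); });
    }
private:
    void remove_object(object* o) {
        // erase o from _files so lookups no longer find it...
        delete o;  // ...and only then actually destroy it
    }
    std::vector<object*> _files;
};
```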
-
Nadav Har'El authored
This patch addresses the bugs of *use* of the dynamic linker - looking up symbols or iterating the list of loaded objects - in parallel with new libraries being loaded with get_library(). The underlying problem is that we have an unprotected "_modules" vector of loaded objects, which we need to iterate to look up symbols, but this list of modules can change when a new shared object is loaded. We decided *not* to solve this problem by using the same mutex protecting object load/unload: _mutex. That would make boot slower, as threads using new symbols are blocked just because another thread is concurrently loading some unrelated shared object (not a big problem with demand-paged file mmaps). Using a mutex can also cause deadlocks in the leak detector, because of lock order reversal between malloc's and elf's mutexes: malloc() takes a lock first and then backtrace() will take elf's lock, while on the other hand elf can take its lock and then call malloc, taking malloc's lock. Instead, this patch uses RCU to allow lock-free reading of the modules list. As in RCU, writing (adding or removing an object from the list) manufactures a new list, deferring the freeing of the old one, allowing reads to continue using the old object list. Note that after this patch, concurrent lookups and get_library() will work correctly, but concurrent lookups and object *unload* will still not be correct, because we need to defer an object's unloading from memory while lookups are in progress. This will be solved in a following patch.
Signed-off-by: Nadav Har'El <nyh@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
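A simplified analogue of the read path, using atomic shared_ptr operations in place of OSv's actual RCU machinery (all names illustrative): readers take a snapshot without locking, and writers publish a fresh copy of the list.

```cpp
#include <memory>
#include <vector>

struct object;
using modules_list = std::vector<object*>;

// Readers grab a snapshot with no lock; writers swap in a new copy.
// The old list is freed once the last reader drops its snapshot.
static std::shared_ptr<const modules_list> g_modules =
    std::make_shared<modules_list>();

std::shared_ptr<const modules_list> modules_get()
{
    return std::atomic_load(&g_modules);   // lock-free for readers
}

void modules_add(object* obj)   // serialized by the load/unload mutex
{
    auto next = std::make_shared<modules_list>(*std::atomic_load(&g_modules));
    next->push_back(obj);
    std::atomic_store(&g_modules,
                      std::shared_ptr<const modules_list>(std::move(next)));
}
```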
-
Nadav Har'El authored
The previous patch fixed most of the concurrent shared-object load/unload bugs except one: we still have a rare race between loading and unloading the *same* library. In other words, it is possible that just when a library's reference count went down to 0, and its destruction is about to start (the mutex is not yet taken), get_library() gets called to load the same library. We cannot return the already-loaded one (because it is in the process of being destructed), so in that case we need to ignore the library we found and load it again. Luckily, std::weak_ptr's notion of "expired" shared pointers (which already went down to 0, and can't be lock()ed), and its atomic implementation thereof, makes fixing this problem easy and natural. The existing code already had an assert() in place to protect against this race (and it was so rare we never actually saw it), but this patch should fix this race once and for all.
Signed-off-by: Nadav Har'El <nyh@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
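A sketch of the weak_ptr fix (the cache layout is hypothetical): if lock() fails, the cached entry is expired, i.e. mid-destruction, and we load the library again.

```cpp
#include <memory>
#include <string>
#include <unordered_map>

struct object {
    explicit object(const std::string& path) { (void)path; /* mmap it */ }
};

// Hypothetical cache of loaded libraries, keyed by path.
static std::unordered_map<std::string, std::weak_ptr<object>> s_files;

std::shared_ptr<object> get_library(const std::string& path)
{
    auto it = s_files.find(path);
    if (it != s_files.end()) {
        if (auto existing = it->second.lock())
            return existing;          // still alive: share it
        // expired: refcount reached 0 and destruction is underway,
        // so ignore the stale entry and load the library again
    }
    auto fresh = std::make_shared<object>(path);
    s_files[path] = fresh;
    return fresh;
}
```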
-
Nadav Har'El authored
Our current dynamic-linker code (elf.cc) is not thread safe, and all sorts of disasters can happen if shared objects are loaded, unloaded and/or used concurrently. This and the following patches solve this problem in stages. The first stage, in this patch, is to protect concurrent shared-library loads and unloads. (If the dynamic linker is also in use concurrently, this will still cause problems; that will be solved in the next patches.) Library load and unload use a bunch of shared data without protection, so concurrency can cause disaster. For example, two concurrent loads can pick the same address to map the objects in. We solve this by using a mutex to ensure only one shared object is loaded or unloaded at a time. Instead of this coarse-grained locking, we could have used finer-grained locks to allow several library loads to proceed in parallel, protecting just the actual shared data. However, the benefits would be very small, because with demand-paged file mmaps, "loading" a library just sets up the memory map, very quickly, and the object will only actually be read from disk later, when its pages get used. Fixes #175.
Signed-off-by: Nadav Har'El <nyh@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
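The coarse-grained locking in miniature (a sketch, not elf.cc's actual code):

```cpp
#include <mutex>
#include <string>

// One mutex serializes every shared-object load and unload; cheap in
// practice because "loading" mostly just sets up demand-paged mmaps.
static std::mutex load_unload_mutex;

void load_object(const std::string& path)
{
    std::lock_guard<std::mutex> guard(load_unload_mutex);
    // choose a base address, mmap the segments, apply relocations...
}

void unload_object(const std::string& path)
{
    std::lock_guard<std::mutex> guard(load_unload_mutex);
    // run destructors, munmap the segments, free the address range...
}
```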
-
Nadav Har'El authored
When running a command in the background, do_main_thread() passes the command line to a new pthread as a pointer to a std::vector. Unfortunately, soon afterwards the vector can go out of scope, and the result is a crash. Fix this oversight.
Signed-off-by: Nadav Har'El <nyh@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
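The bug class and its fix, sketched (function names are hypothetical, not do_main_thread() itself): copy the command line to the heap and let the new thread own it.

```cpp
#include <pthread.h>
#include <memory>
#include <string>
#include <vector>

static void* run_command(void* arg)
{
    // Take ownership of the heap-allocated copy.
    std::unique_ptr<std::vector<std::string>> cmd(
        static_cast<std::vector<std::string>*>(arg));
    // ... run the command described by *cmd ...
    return nullptr;
}

void spawn_background(const std::vector<std::string>& cmdline)
{
    // Copy to the heap so the data outlives this stack frame.
    auto* copy = new std::vector<std::string>(cmdline);
    pthread_t t;
    pthread_create(&t, nullptr, run_command, copy);
    pthread_detach(t);
}
```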
-
Nadav Har'El authored
Add a macro SCOPE_LOCK(mutex) which locks the given mutex and unlocks it when the scope ends (this uses RAII, so the mutex will correctly get unlocked even when the scope is exited via return or exception). This does the same as C++11's std::lock_guard, but is far less verbose: to use std::lock_guard with a mutex m, one needs to write something like std::lock_guard<mutex> guard(m); where the mutex's type needs to be repeated, and a name needs to be invented for the guard which will likely never be used again. This macro makes these things unnecessary, and one just writes SCOPE_LOCK(m); Note that WITH_LOCK(m) { ... } should usually be preferred over SCOPE_LOCK. However, SCOPE_LOCK can come in handy in some cases, for example adding a lock to a function without reindenting it.
Signed-off-by: Nadav Har'El <nyh@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
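One way such a macro can be written (a sketch; OSv's actual definition may differ):

```cpp
#include <mutex>

// RAII guard: locks in the constructor, unlocks in the destructor, so
// the mutex is released on return or exception alike.
template <typename Lock>
struct scope_lock_guard {
    Lock& _lock;
    explicit scope_lock_guard(Lock& l) : _lock(l) { _lock.lock(); }
    ~scope_lock_guard() { _lock.unlock(); }
};

// Two-step concatenation so __LINE__ expands before pasting, giving
// each guard a unique name that never needs to be referenced.
#define SL_CAT2(a, b) a##b
#define SL_CAT(a, b) SL_CAT2(a, b)
#define SCOPE_LOCK(m) \
    scope_lock_guard<decltype(m)> SL_CAT(scope_lock_, __LINE__)(m)

// Usage:
// std::mutex m;
// void f() { SCOPE_LOCK(m); /* m is held until f returns */ }
```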
-
Raphael S. Carvalho authored
/proc must be unmounted to release refcnts which pertain to the root mountpoint, i.e. zfs. It was preventing zfs_umount from releasing the mp dentries properly, and thus VOP_INACTIVE from being called on the respective vnodes. Found the problem while dumping the mountpoint refcnts.
Reviewed-by: Glauber Costa <glommer@cloudius-systems.com>
Signed-off-by: Raphael S. Carvalho <raphaelsc@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-
Zhi Yong Wu authored
OSv's coding style is very similar to that of QEMU. The script probably needs some work for our C++-specific coding style, but we have to start somewhere...
Signed-off-by: Zhi Yong Wu <zwu.kernel@gmail.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-
Glauber Costa authored
The following patch implements two of the tracepoints that helped me debug some of the xen blkfront problems. There are two tracepoint pairs: one of them measures the time spent processing an interrupt, and the other the time between interrupts themselves. Huge latencies could be due to either of them.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
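The two measurements, illustrated generically (plain chrono bookkeeping, not OSv's tracepoint API):

```cpp
#include <chrono>

using clk = std::chrono::steady_clock;
static clk::time_point last_entry;   // when the previous interrupt began

void trace_irq_entry()
{
    auto now = clk::now();
    if (last_entry != clk::time_point{}) {
        auto between_irqs = now - last_entry;  // pair 2: irq-to-irq gap
        (void)between_irqs;                    // a tracepoint would record it
    }
    last_entry = now;
}

void trace_irq_exit()
{
    auto in_handler = clk::now() - last_entry; // pair 1: handler duration
    (void)in_handler;
}
```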
-
Raphael S. Carvalho authored
The main purpose of this tool is to understand/analyze the ARC behavior and performance on specific workloads.

$ scripts/run.py -e 'tests/misc-zfs-arc.so --help'
OSv v0.05-155-g1f04e49
Allowed options:
  --help             produce help message
  --set-max-target   set ARC max target to 80% of the system memory
  --check-arc-shrink check ARC shrink functionality
  --test arg         analyze ARC performance on a given testcase, e.g. --test tst-001.so

* --set-max-target: used to check performance when the ARC max target is higher than usual. Given that more data will be loaded into the ARC, ZFS operations that need I/O would perform better. 80% was chosen because the low watermark is 20%, which avoids a bunch of memory pressure and thus gives more stability.
* --check-arc-shrink: checks the functionality of the ARC's arc_shrink function.
* --test arg: checks ARC performance on a specified testcase, e.g.:
$ scripts/run.py -e 'tests/misc-zfs-arc.so --test tst-fs-link.so'
* The default run, i.e. -e 'tests/misc-zfs-arc.so', provides four distinct workloads:
1) A non-linear one where prefetch shouldn't be as effective.
2) Load all data into the cache, then read it afterwards to check performance in such cases: almost the speed of main memory.
3) A linear workload where the amount of data is 1.5x the size of the system memory, so page replacement will be strongly used; as the operation is sequential, prefetch (readahead) must be effective. It leads to a high cache hit ratio as blocks were read ahead of time.
4) Keep allocating memory through a populated anonymous mmapping to see if shrinking would take place to release memory back to the operating system.

Eventual reports and ARC stats are provided to ease the task of understanding ARC performance on specific workloads.
Signed-off-by: Raphael S. Carvalho <raphaelsc@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-
Raphael S. Carvalho authored
Mainly created to be used as a tool that reproduces specific workloads, thus allowing us to understand how underlying components are performing, e.g. the Adaptive Replacement Cache (ARC) from ZFS.
Reviewed-by: Glauber Costa <glommer@cloudius-systems.com>
Signed-off-by: Raphael S. Carvalho <raphaelsc@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-
Raphael S. Carvalho authored
This patch registers the ARC shrinker by using the event handler list from the BSD side. When the ARC is initialized, it inserts the lowmem event handler into an external event handler list. lowmem basically signals the reclaiming thread, which will then wake up to decide which approach should be used to shrink the ARC. Memory pressure on OSv is activated when the 20% watermark is reached, so the shrink policy decides which shrinker should be called on such events. bsd_shrinker_init is responsible for finding the lowmem event handler in the external list and integrating it into our shrinker infrastructure. arc_lowmem needed a few changes to return the amount of memory released from the ARC. Glauber and I tested the functionality by filling up the ARC up to its target, then allocating as much memory as possible to see if the ARC shrinker would kick in to release memory back to the operating system.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
Signed-off-by: Raphael S. Carvalho <raphaelsc@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
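A self-contained sketch of that wiring (every name here is hypothetical; the real code goes through the BSD eventhandler list and OSv's shrinker infrastructure):

```cpp
#include <cstddef>
#include <functional>
#include <string>
#include <vector>

// A lowmem-style handler returns how many bytes it released.
struct event_handler {
    std::string name;
    std::function<size_t()> fn;
};

static std::vector<event_handler> bsd_event_handlers;

// ARC init registers its handler on the list...
void arc_register_lowmem(std::function<size_t()> arc_lowmem)
{
    bsd_event_handlers.push_back({"vm_lowmem", std::move(arc_lowmem)});
}

// ...and the shrinker glue later finds it and calls it when the 20%
// watermark triggers memory-pressure handling.
size_t bsd_shrinker_run()
{
    for (auto& h : bsd_event_handlers)
        if (h.name == "vm_lowmem")
            return h.fn();   // bytes released back to the OS
    return 0;
}
```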
-
Raphael S. Carvalho authored
arc_reclaim_needed checks if reclaiming is needed by looking at two possible conditions: 1) the variable needfree is set; 2) memory used is higher than 3/4 of the system memory. The actual problem is that arc_reclaim_needed was completely disabled: it would always return 0, i.e. reclaim not needed. Add a stub for vm_paging_needed to avoid an unneeded ifdef. Reclaiming support is planned, so let's re-enable what's needed and implement kmem_used, which is used to determine the amount of memory used so far.
Reviewed-by: Glauber Costa <glommer@cloudius-systems.com>
Signed-off-by: Raphael S. Carvalho <raphaelsc@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
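Roughly the policy described above, as a sketch with stubbed memory accessors (not the BSD sources):

```cpp
#include <cstdint>

static int needfree;   // set when the system asks for memory back

// Stubbed accessors standing in for the real memory statistics.
static uint64_t total_memory() { return 4ull << 30; }   // e.g. 4 GiB
static uint64_t free_memory()  { return 1ull << 30; }

// kmem_used(): how much memory is in use so far.
static uint64_t kmem_used()
{
    return total_memory() - free_memory();
}

static int arc_reclaim_needed()
{
    if (needfree)                                   // condition 1
        return 1;
    return kmem_used() > 3 * (total_memory() / 4);  // condition 2: > 3/4
}
```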
-
Raphael S. Carvalho authored
It's important to have this initial implementation, as arc_stats completely relies on it to be exported to the outside world.
Reviewed-by: Glauber Costa <glommer@cloudius-systems.com>
Signed-off-by: Raphael S. Carvalho <raphaelsc@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-
Claudio Fontana authored
This implements simple target detection, borrowing from the idea previously in the Makefile. It adds basic recognition of the passed environment variables ARCH and CROSS_PREFIX, along with CC, LD, CXX and HOST_CXX. HOST_CXX is necessary because that compiler is used to produce and run binaries at build time.
Signed-off-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Avi Kivity <avi@cloudius-systems.com>
-
Pekka Enberg authored
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>

Conflicts:
    arch/x64/processor.hh
    drivers/pci-function.cc
-
Takuya ASADA authored
Signed-off-by: Takuya ASADA <syuu@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-
Takuya ASADA authored
Signed-off-by: Takuya ASADA <syuu@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-
Takuya ASADA authored
Signed-off-by: Takuya ASADA <syuu@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-
Takuya ASADA authored
Signed-off-by: Takuya ASADA <syuu@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-
Takuya ASADA authored
Signed-off-by: Takuya ASADA <syuu@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-
Takuya ASADA authored
Signed-off-by: Takuya ASADA <syuu@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-
Takuya ASADA authored
Signed-off-by: Takuya ASADA <syuu@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-
Takuya ASADA authored
Signed-off-by: Takuya ASADA <syuu@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
-