- Mar 25, 2013
-
-
Christoph Hellwig authored
This is not needed because each file still keeps a vnode reference and thus pins it; removing it also makes the close implementation a lot simpler, especially once file operations are added.
-
Dor Laor authored
* 'master' of github.com:cloudius-systems/osv:
  Coarse-grained lock for mmap()/munmap()/mprotect().
  TLB flush and other fixes to mmap/munmap/protect
  Add working mprotect() tests.
  tests: fix fpu test for testrunner.so
-
Dor Laor authored
interface we cannot get an answer and it stalls the test execution.
-
Nadav Har'El authored
Added a coarse-grained lock to prevent concurrent mmap()/munmap()/mprotect(). We can implement something more fine-grained if this becomes a performance bottleneck, but I doubt it ever would.
-
Nadav Har'El authored
Added the missing TLB flush to the mmap()/munmap()/mprotect() operations. Note that currently I only do this on the processor running mmap(), which is incorrect (I put in a TODO), but is good enough for Java's use case (which is to do these things on startup). Also added more tests, and fixed a bug on mmap(PROT_NONE) (mprotect(PROT_NONE) used to work, but mmap(PROT_NONE) didn't).
-
Nadav Har'El authored
Automate the tests for mprotect() using sigaction() - verifying that writing to a read-only page causes SIGSEGV, and so on. One *failing* test is left commented out - currently we're missing a TLB flush on mprotect(), so if we write to a page and then make it read-only, the new read-only status isn't noticed by the processor.
-
- Mar 24, 2013
-
-
Avi Kivity authored
The fpu test was written to be run standalone, but that doesn't work when running from testrunner.so, since the entry points are different (main vs. osv_main).
-
- Mar 22, 2013
-
-
Dor Laor authored
* 'master' of github.com:cloudius-systems/osv:
  Replace bizarre uses of free_large in mempool.cc
  Some minor cleanups in mmu.cc
  x64: qualify newer cr4 features on cpuid
  x64: fix cpuid xsave parsing
  bsd: added a manifest.txt file for the networking stack
-
Dor Laor authored
once the destination interface owns the hard-coded mac addr
-
Dor Laor authored
Additional cleanups and tries are needed.
-
Dor Laor authored
-
- Mar 21, 2013
-
-
Nadav Har'El authored
In several places in mempool.cc - free_page, free_huge_page, and free_initial_memory_range - we (ab)used free_large in a really strange way (giving it a pointer lower by page_size, knowing it will add page_size back). Instead, created a new function, free_page_range(), which returns a range of pages to the free list in a much clearer way, without the monkey business. Now all of the above functions, as well as free_large itself, use this new free_page_range().
-
Nadav Har'El authored
-
Avi Kivity authored
Breaks on older processors.
-
Avi Kivity authored
Used the wrong bit.
-
Guy Zana authored
-
Dor Laor authored
We cannot rely on the overlay protocol to provide this length and we must receive it from the hypervisor.
-
- Mar 20, 2013
-
-
Nadav Har'El authored
Rewritten the mmap/munmap/mprotect code to be much less repetitive. There is a new "page_range_operation" class, from which the "populate", "unpopulate", and "protect" classes derive to implement mmap, munmap and mprotect respectively. The code is now much shorter, less repetitive, clearer (I hope), and also better conforming to the new coding conventions. Note that linear_map is still separate, and page_range_operation keeps its old algorithm (of starting at the root again for each page). Now that we have this clean OO structure, it will be easier to change this algorithm to be similar to linear_map's.
-
- Mar 19, 2013
-
-
Nadav Har'El authored
Java uses mprotect(..., PROT_NONE) for guard pages (e.g., to catch stack overflow). This patch implements it by removing the present bit on these pages' mappings, which does not mean the pages have been unmapped (their memory is kept intact, and running mprotect() again can make them readable again).
-
Nadav Har'El authored
-
Nadav Har'El authored
some of the small pages were allocated twice and one was leaked). Also, prepare mmap/munmap for supporting PROT_NONE in mprotect: instead of relying on present-bit to say which pages are free, rely on the information in vma_list, and double-check this (with assertions added) against zeroing of the PTE when a page is freed. I'll add support for mprotect(PROT_NONE) in a following patch.
-
Avi Kivity authored
Now that the context switch code is red zone safe, allow the compiler to use it.
-
Avi Kivity authored
Due to the "red zone", push/pop stack operations may corrupt local variables that the compiler keeps below the stack pointer. Use ordinary moves and jumps instead.
-
Avi Kivity authored
Causes an early failure for some reason. Disable until root cause is found.
-
Avi Kivity authored
-
Avi Kivity authored
-
Avi Kivity authored
Since that's what we build by default, that's what we should run.
-
Avi Kivity authored
-
Avi Kivity authored
-
Guy Zana authored
-
Avi Kivity authored
fpu preemption
-
Avi Kivity authored
-
Avi Kivity authored
Test that preemption does not corrupt fpu registers.
-
Avi Kivity authored
-
Avi Kivity authored
Normal scheduling does not need to save or restore the fpu when switching threads, since all fpu registers are caller-saved (so calling schedule() may clobber the fpu). However this does not hold for preemption, so we need to save and restore the fpu state explicitly.
-
Avi Kivity authored
-
Avi Kivity authored
Parse cpuid flag bits relevant to osv.
-
Avi Kivity authored
-
Avi Kivity authored
-