- Dec 18, 2013
-
-
Glauber Costa authored
This patch adds the basics of memory tracking, and exposes an interface for that data to be collected. We basically start with all stats at zero, and as we add memory to the system, we bump it up and recalculate the watermarks (to avoid recomputing them all the time). When a page range comes up, it will be added as free memory. We operate based on what is currently sitting in the page ranges. This means that we are effectively ignoring memory that sits in pools for memory usage. I think this is a good assumption because it allows us to focus on the big picture, and leave the pools to be used as liquid currency. Signed-off-by:
Glauber Costa <glommer@cloudius-systems.com> Signed-off-by:
Pekka Enberg <penberg@cloudius-systems.com>
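The stat-tracking idea above can be sketched roughly as follows. This is a hypothetical model, not OSv's actual code; the 10%/20% watermark policy and all names (`mem_stats`, `on_memory_added`) are illustrative assumptions.

```cpp
#include <cassert>
#include <cstddef>

// Illustrative sketch: counters start at zero, and watermarks are
// recomputed only when total memory changes, not on every query.
struct mem_stats {
    std::size_t total = 0;
    std::size_t free_bytes = 0;
    std::size_t watermark_lo = 0;
    std::size_t watermark_hi = 0;

    void recompute_watermarks() {
        // assumed policy: low/high watermarks at 10% / 20% of total
        watermark_lo = total / 10;
        watermark_hi = total / 5;
    }

    void on_memory_added(std::size_t bytes) {
        total += bytes;
        free_bytes += bytes;      // a new page range starts out as free memory
        recompute_watermarks();   // done here to avoid recomputing all the time
    }
};
```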
-
- Sep 15, 2013
-
-
Nadav Har'El authored
Added Cloudius copyright statement to our own code in include/. Also added include/api/LICENSE saying that these are copied from Musl and public domain (according to the Musl COPYRIGHT file).
-
- Jul 21, 2013
-
-
Avi Kivity authored
If we allocate and free just one object in an empty pool, we will continuously allocate a page, format it for the pool, then free it. This is wasteful, so allow the pool to keep one empty page. The page is kept at the back of the free list, so it won't get fragmented needlessly.
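The "keep one empty page" policy can be sketched like this; the names (`pool`, `retire`) and structure are illustrative, not OSv's actual pool implementation.

```cpp
#include <cassert>
#include <deque>

struct page { int free_objects; };

struct pool {
    std::deque<page*> free_pages;
    int nr_empty = 0;                    // pages whose objects are all free
    static constexpr int max_empty = 1;  // keep at most one empty page

    // Called when a page in this pool becomes fully free. Returns true if
    // the caller should return the page to the system. A kept page goes to
    // the back of the free list, so allocations prefer other pages and the
    // empty page does not get fragmented needlessly.
    bool retire(page* p) {
        if (nr_empty < max_empty) {
            ++nr_empty;
            free_pages.push_back(p);
            return false;                // kept in the pool
        }
        return true;                     // hand back to the page allocator
    }
};
```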
-
Avi Kivity authored
Instead of an array of 64 free lists, let dynamic_percpu<> manage the allocations for us. This reduces waste since we no longer require cache line alignment.
-
Avi Kivity authored
With dynamic percpu allocations, the allocator won't be available until the first cpu is created. This creates a circular dependency, since the first cpu itself needs to be allocated. Use a simple and wasteful allocator in that time until we're ready. Objects allocated by the simple allocator are marked by having a page offset of 8.
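Telling the two allocators' objects apart by page offset might look like the following sketch (the helper name is hypothetical; the offset-8 marker is from the commit message).

```cpp
#include <cassert>
#include <cstdint>

constexpr std::uintptr_t page_size = 4096;

// Objects handed out by the simple boot-time allocator sit at offset 8
// within their page, so free() can route them appropriately.
inline bool from_early_allocator(void* obj) {
    return (reinterpret_cast<std::uintptr_t>(obj) & (page_size - 1)) == 8;
}
```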
-
- Jul 18, 2013
-
-
Avi Kivity authored
Avoid a #include loop with later patches.
-
- Jul 11, 2013
-
-
Avi Kivity authored
Virtio and other hardware needs physically contiguous memory, beyond one page. It also requires page-aligned memory. Add an explicit API for contiguous and aligned memory allocation. While our default allocator returns physically contiguous memory, the debug allocator does not, causing virtio devices to fail.
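An explicit contiguous-and-aligned allocation interface of the kind described might look like the sketch below. The signature is illustrative, not OSv's exact API, and `std::aligned_alloc` only models the alignment guarantee; a real kernel allocator must also guarantee physical contiguity.

```cpp
#include <cassert>
#include <cstdint>
#include <cstdlib>

// Hypothetical API sketch: allocate 'size' bytes aligned to 'align'
// (align must be a power of two). In a kernel this would also promise
// physically contiguous backing memory for devices like virtio.
void* alloc_contiguous_aligned(std::size_t size, std::size_t align) {
    // std::aligned_alloc requires size to be a multiple of align
    std::size_t rounded = (size + align - 1) & ~(align - 1);
    return std::aligned_alloc(align, rounded);
}
```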
-
- Jul 08, 2013
-
-
Guy Zana authored
The new code partitions the free list of pages of each pool per cpu; allocations and deallocations are done locklessly. It uses worker items to handle the case where free() for a buffer is called on a cpu different from the one which allocated that buffer: we use N^2 rings for communication between the threads and the worker items. The worker item then actually does the free() for the buffer on the same cpu it was allocated on.
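The N^2 ring idea can be sketched as below: one queue per (freeing cpu, owning cpu) pair carries buffers freed on the wrong cpu back to their home cpu, where a worker performs the real free. This is an illustrative single-threaded model with made-up names, not OSv's lock-free rings.

```cpp
#include <cassert>
#include <queue>
#include <vector>

constexpr int ncpus = 2;

struct remote_free_rings {
    // rings[from][to]: buffers freed on cpu 'from' but owned by cpu 'to'
    std::queue<void*> rings[ncpus][ncpus];

    // free() path: local frees complete immediately (lock-free in the real
    // code); remote frees are handed off to the owner's worker item.
    void free_on(int cpu, int owner, void* buf, std::vector<void*>& freed) {
        if (cpu == owner) {
            freed.push_back(buf);
        } else {
            rings[cpu][owner].push(buf);
        }
    }

    // Worker item on 'cpu': drain buffers the other cpus sent to it and
    // perform the real free on the owning cpu.
    void drain(int cpu, std::vector<void*>& freed) {
        for (int from = 0; from < ncpus; ++from) {
            auto& r = rings[from][cpu];
            while (!r.empty()) {
                freed.push_back(r.front());
                r.pop();
            }
        }
    }
};
```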
-
- Jun 30, 2013
-
-
Guy Zana authored
-
- May 01, 2013
-
-
Nadav Har'El authored
Previously we had two different mutex types - "mutex_t" defined by <osv/mutex.h> for use in C code, and "mutex" defined by <mutex.hh> for use in C++ code. This difference is unnecessary, and causes a mess for functions that need to accept either type, so they work for both C++ and C code (e.g., consider condvar_wait()). So after this commit, we have just one include file, <osv/mutex.h>, which works both in C and C++ code. This results in the same type and same functions being defined, plus some additional conveniences when in C++, such as method variants of the functions (e.g., m.lock() in addition to mutex_lock(m)), and the "with_lock" function. The mutex type is now called either "mutex_t" or "struct mutex" in C code, or can also be called just "mutex" in C++ code (all three names refer to an identical type - there's no longer a different mutex_t and mutex type). This commit also modifies all the includers of <mutex.hh> to use <osv/mutex.h>, and fixes a few miscellaneous compilation issues that were discovered in the process.
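The dual C/C++ header pattern described above can be sketched as follows. This is a simplified, hypothetical illustration of the technique, not the contents of OSv's <osv/mutex.h>.

```cpp
#include <cassert>

// One struct serves both languages. The C++-only method variants are
// hidden from C compilers behind __cplusplus; member functions do not
// change the struct's layout.
struct mutex {
    int locked;
#ifdef __cplusplus
    void lock()   { locked = 1; }   // C++ convenience: m.lock()
    void unlock() { locked = 0; }
#endif
};
typedef struct mutex mutex_t;       // "mutex", "mutex_t", and
                                    // "struct mutex" name the same type

// C-callable function variants operating on the same type.
inline void mutex_lock(mutex_t* m)   { m->locked = 1; }
inline void mutex_unlock(mutex_t* m) { m->locked = 0; }
```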
-
- Apr 25, 2013
-
-
Nadav Har'El authored
Added to the loader a command-line option "--leak" to enable the leak detector immediately before running the payload's main(). For example, to look for leaks in tst-fpu.so (there are none, by the way ;-)), do scripts/imgedit.py setargs build/release/loader.img --leak tests/tst-fpu.so and when it ends, look at the leak detection results: $ gdb build/release/loader.elf (gdb) connect (gdb) osv leak show Unfortunately, this doesn't work when the payload is Java - I'm still trying to figure out why.
-
- Apr 24, 2013
-
-
Avi Kivity authored
This allocator works by giving each allocation its own virtual address range which is not reused for later allocations. After a free(), the range is made inaccessible, forever, so use-after-free will result in a page fault. Sub-page overruns are also detected by filling unallocated space with a pattern, and checking whether the pattern has been altered during free().
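The sub-page overrun check can be sketched like this: the unused tail of the allocation's range is filled with a pattern, and any disturbed pattern byte found at free() time reveals an overrun. Names and the poison value are illustrative, not OSv's debug allocator.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

constexpr std::uint8_t poison = 0xfe;

struct debug_allocation {
    std::vector<std::uint8_t> range;  // stands in for the private VA range
    std::size_t size;                 // bytes actually handed to the user

    explicit debug_allocation(std::size_t n) : range(4096, poison), size(n) {
        // bytes [0, n) belong to the user; [n, 4096) keep the pattern
    }

    std::uint8_t* data() { return range.data(); }

    // Checked during free(): if any pattern byte past the user's area was
    // altered, the caller overran its allocation.
    bool overrun_detected() const {
        for (std::size_t i = size; i < range.size(); ++i) {
            if (range[i] != poison) return true;
        }
        return false;
    }
};
```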
-
- Apr 14, 2013
-
-
Nadav Har'El authored
Needlessly aliased std::size_t to size_t (which does nothing but confuse Eclipse), defined a non-existent function, and exposed a function which shouldn't have been exposed.
-
- Apr 08, 2013
-
-
Nadav Har'El authored
Thread object creation used to leak one page for the FPU state (thanks Avi for spotting this). Fix this (add a destructor which frees the page) and add to the test-suite a test for checking that thread creation doesn't leak memory - and while we're at it, also checked that alloc_page() and malloc() at various sizes do not leak memory.
-
- Mar 06, 2013
-
-
Nadav Har'El authored
Allocate an aligned and contiguous huge page of a given size.
-
- Feb 28, 2013
-
-
Avi Kivity authored
Now working.
-
- Feb 27, 2013
-
-
Avi Kivity authored
This reverts commit e917ab25. If we have an assert, it can break badly, as printf() inside the assert allocates. Will reinstate after adding emergency allocators.
-
Avi Kivity authored
Mutexes are now allocation free and thus safe for use within the allocator.
-
- Feb 03, 2013
-
-
Avi Kivity authored
Need to replace with something cleverer later.
-
- Jan 28, 2013
-
-
Avi Kivity authored
-
- Jan 17, 2013
-
-
Avi Kivity authored
Initial memory is physical; the mmu converts it to virtual addresses, and then it can be added to the memory pool. Right now there is not much difference, but the 1:1 mapping is moving soon.
-
- Jan 16, 2013
-
-
Christoph Hellwig authored
-
- Jan 03, 2013
-
-
Avi Kivity authored
-
Avi Kivity authored
Currently the free memory pool consists of a statically allocated buffer. Replace it with a dynamic query of the amount of memory we actually booted with. (currently limited to 1GB since we haven't mapped anything else yet).
-
- Jan 01, 2013
-
-
Avi Kivity authored
We currently leak a pool page, because we cannot unlink the free objects belonging to the page from the pool's free list. Fix by having a per-page free list (containing objects only from that page). The pages are themselves placed on a doubly linked list. When we identify an empty page, we can now easily drop it since the local free list only points within the page.
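The per-page free list idea can be sketched as below: each page tracks only its own free objects, so a fully free page can be dropped without walking a pool-wide list. Names are illustrative, not OSv's page header layout.

```cpp
#include <cassert>
#include <vector>

struct page_header {
    std::vector<int> local_free;  // indices of free objects, all in this page
    int nobjs;

    explicit page_header(int n) : nobjs(n) {
        for (int i = 0; i < n; ++i) local_free.push_back(i);
    }

    // Every object free -> page can be unlinked and returned, since the
    // local free list points only within the page itself.
    bool fully_free() const {
        return static_cast<int>(local_free.size()) == nobjs;
    }

    int alloc_obj() {
        int i = local_free.back();
        local_free.pop_back();
        return i;
    }

    void free_obj(int i) { local_free.push_back(i); }
};
```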
-
Avi Kivity authored
Must be large enough to hold a free_object.
-
- Dec 26, 2012
-
-
Avi Kivity authored
This implementation stores small objects in pools of similar-sized objects, while large objects are allocated using a first-fit algorithm. There is also a specialized interface for allocating aligned pages.
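The routing decision between the two paths can be sketched like this: small requests go to a pool of similar-sized objects (rounded up to a size class), large ones take the first-fit path. The power-of-two size classes and the half-page cutoff are illustrative assumptions, not OSv's actual thresholds.

```cpp
#include <cassert>
#include <cstddef>

constexpr std::size_t page_size = 4096;

// Round a request up to its pool's object size (illustrative classes:
// powers of two starting at 16 bytes).
std::size_t size_class(std::size_t n) {
    std::size_t c = 16;
    while (c < n) c *= 2;
    return c;
}

// Assumed cutoff: requests up to half a page are served from pools of
// similar-sized objects; larger ones go to the first-fit allocator.
bool use_pool(std::size_t n) {
    return n <= page_size / 2;
}
```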
-