  1. Dec 18, 2013
    • mempool: memory statistics · ca1ac80b
      Glauber Costa authored
      
      This patch adds the basics of memory tracking, and exposes an interface for
      that data to be collected.
      
      We basically start with all stats at zero, and as we add memory to the system,
      we bump them up and recalculate the watermarks (to avoid recomputing them all
      the time). When a page range comes up, it is added as free memory.
      
      We operate based on what is currently sitting in the page ranges. This means
      that we effectively ignore memory sitting in pools when accounting usage. I
      think this is a good assumption because it allows us to focus on the big
      picture, and leaves the pools to be used as liquid currency.
      
      Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
      Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
      ca1ac80b
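      A minimal sketch of the kind of bookkeeping this commit describes, in C++; the
      names memory_stats and on_page_range_added, and the watermark percentages, are
      illustrative, not the commit's actual code:

        #include <atomic>
        #include <cstddef>

        // All counters start at zero; memory added to the system bumps them up.
        struct memory_stats {
            std::atomic<size_t> total{0};   // everything handed to the system
            std::atomic<size_t> free{0};    // what currently sits in page ranges
            size_t watermark_lo = 0;        // recomputed only when total changes,
            size_t watermark_hi = 0;        //  not on every query
        };

        static memory_stats stats;

        // Called when a page range comes up: it is accounted as free memory and
        // the watermarks are recalculated once, here, to avoid recomputing them
        // all the time.
        void on_page_range_added(size_t bytes)
        {
            stats.total += bytes;
            stats.free  += bytes;
            stats.watermark_lo = stats.total * 10 / 100;  // illustrative thresholds
            stats.watermark_hi = stats.total * 90 / 100;
        }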
  2. Sep 15, 2013
  3. Jul 21, 2013
    • mempool: add hysteresis · c549e0e8
      Avi Kivity authored
      If we allocate and free just one object in an empty pool, we will
      continuously allocate a page, format it for the pool, then free it.
      
      This is wasteful, so allow the pool to keep one empty page.  The page is kept
      at the back of the free list, so it won't get fragmented needlessly.
      c549e0e8
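      A sketch of the hysteresis described above, with hypothetical names (pool,
      page_header, nr_empty); the single retained page sits at the back of the free
      list so it is used only as a last resort and does not get fragmented:

        #include <list>

        struct page_header;                    // per-page pool metadata (hypothetical)

        struct pool {
            std::list<page_header*> free_pages;
            unsigned nr_empty = 0;

            void return_page(page_header* p);  // hypothetical: hand the page back

            // Called when a page in the pool becomes completely free.  Keeping one
            // such page around means an alloc/free cycle of a single object does
            // not repeatedly allocate a page, format it for the pool, and free it.
            // (nr_empty drops back to zero when the kept page is allocated from.)
            void page_became_empty(page_header* p)
            {
                if (nr_empty == 0) {
                    free_pages.push_back(p);   // back of the list: used last
                    ++nr_empty;
                } else {
                    return_page(p);
                }
            }
        };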
    • mempool: switch to dynamic_percpu · a76e1813
      Avi Kivity authored
      Instead of an array of 64 free lists, let dynamic_percpu<> manage the
      allocations for us.  This reduces waste since we no longer require cache line
      alignment.
      a76e1813
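      The gist of the change, as a self-contained sketch; percpu_slot<T> below is a
      stand-in for OSv's dynamic_percpu<>, whose real interface may differ, and
      current_cpu_id() is hypothetical:

        #include <vector>

        unsigned current_cpu_id();              // hypothetical: index of running CPU

        // Before: free_list free_lists[64], each padded to a cache line.
        // After: one instance per CPU, allocated on demand, so nothing is wasted
        // on cache-line alignment and there is no hard-coded CPU count.
        template <typename T>
        class percpu_slot {
            std::vector<T> instances;           // simplified stand-in for per-cpu areas
        public:
            explicit percpu_slot(unsigned ncpus) : instances(ncpus) {}
            T& local() { return instances[current_cpu_id()]; }
        };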
    • mempool: make the early allocator not depend on mempools · c148b754
      Avi Kivity authored
      With dynamic percpu allocations, the allocator won't be available until
      the first cpu is created.  This creates a circular dependency, since the
      first cpu itself needs to be allocated.
      
      Use a simple and wasteful allocator during that time, until we're ready.  Objects
      allocated by the simple allocator are marked by having a page offset of 8.
      c148b754
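      A sketch of how that marking could be consumed by free(), assuming 4K pages;
      the helper names are hypothetical and the early-object handling is only
      indicated, not taken from the commit:

        #include <cstdint>

        // Early objects are handed out at offset 8 within their page, so a
        // pointer's low 12 bits identify which allocator produced it.
        inline bool from_early_allocator(void* obj)
        {
            return (reinterpret_cast<uintptr_t>(obj) & 0xfff) == 8;
        }

        void free_object(void* obj)
        {
            if (from_early_allocator(obj)) {
                // wasteful early allocation: handled separately
                return;
            }
            // ... normal mempool free path ...
        }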
  4. Jul 18, 2013
  5. Jul 11, 2013
    • memory: add alloc_phys_contiguous_aligned() API · b15db045
      Avi Kivity authored
      Virtio and other hardware need physically contiguous, page-aligned memory
      that can span more than one page.
      Add an explicit API for contiguous and aligned memory allocation.
      
      While our default allocator returns physically contiguous memory, the debug
      allocator does not, causing virtio devices to fail.
      b15db045
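      A sketch of the API's shape and a typical caller; the parameter list is an
      assumption based on the name, not the exact declaration in the tree:

        #include <cstddef>

        namespace memory {
        // Memory that is physically contiguous (even beyond one page) and aligned
        // to 'align' bytes, which the debug allocator must also honor.
        void* alloc_phys_contiguous_aligned(size_t size, size_t align);
        }

        // Example caller: a virtio queue wants a page-aligned, physically
        // contiguous region larger than one page.
        void* alloc_virtio_queue_area()
        {
            return memory::alloc_phys_contiguous_aligned(3 * 4096, 4096);
        }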
  6. Jul 08, 2013
    • mempool: convert the memory allocator to be per-cpu · 3d4653e7
      Guy Zana authored
      The new code partitions the free list of pages of each pool to be
      per-CPU; allocations and deallocations are done locklessly.

      Worker items are used to handle the case where free() for a buffer is
      called on a CPU different from the one that allocated it: we use N^2
      rings for communication between the threads and the worker items, and
      the worker item then performs the free() on the same CPU the buffer
      was allocated on.
      3d4653e7
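      A sketch of that cross-CPU free path, with hypothetical names; because each of
      the N^2 rings has exactly one producer CPU and one consumer CPU, no locking is
      required:

        #include <atomic>
        #include <cstddef>

        const unsigned max_cpus = 64;            // bound assumed for the sketch

        // Minimal single-producer/single-consumer ring.
        struct spsc_ring {
            static constexpr size_t capacity = 256;
            void* slot[capacity];
            std::atomic<size_t> head{0}, tail{0};

            bool push(void* p) {                 // called only by the producer CPU
                size_t t = tail.load(std::memory_order_relaxed);
                if (t - head.load(std::memory_order_acquire) == capacity)
                    return false;                // full
                slot[t % capacity] = p;
                tail.store(t + 1, std::memory_order_release);
                return true;
            }
            bool pop(void*& p) {                 // called only by the consumer CPU
                size_t h = head.load(std::memory_order_relaxed);
                if (h == tail.load(std::memory_order_acquire))
                    return false;                // empty
                p = slot[h % capacity];
                head.store(h + 1, std::memory_order_release);
                return true;
            }
        };

        // pending_free[from][to] carries buffers freed on CPU 'from' that belong
        // to CPU 'to'; a worker item on 'to' drains its rings and performs the
        // real free() on the CPU the buffer was allocated on.
        spsc_ring pending_free[max_cpus][max_cpus];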
  7. Jun 30, 2013
  8. May 01, 2013
    • Unify "mutex_t" and "mutex" types · 3c692eaa
      Nadav Har'El authored
      Previously we had two different mutex types - "mutex_t" defined by
      <osv/mutex.h> for use in C code, and "mutex" defined by <mutex.hh>
      for use in C++ code. This difference is unnecessary, and causes
      a mess for functions that need to accept either type, so they work
      for both C++ and C code (e.g., consider condvar_wait()).
      
      So after this commit, we have just one include file, <osv/mutex.h>
      which works both in C and C++ code. This results in the same type
      and same functions being defined, plus some additional conveniences
      when in C++, such as method variants of the functions (e.g.,
      m.lock() in addition to mutex_lock(m)), and the "with_lock" function.
      
      The mutex type is now called either "mutex_t" or "struct mutex" in
      C code, or can also be called just "mutex" in C++ code (all three
      names refer to an identical type - there's no longer a different
      mutex_t and mutex type).
      
      This commit also modifies all the includers of <mutex.hh> to use
      <osv/mutex.h>, and fixes a few miscellaneous compilation issues
      that were discovered in the process.
      3c692eaa
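      A small usage illustration based only on what the message states (one header,
      one type, C-style functions plus C++ method variants); the pointer-vs-value
      convention of the C functions is assumed:

        #include <osv/mutex.h>

        mutex m;             // in C this is spelled "mutex_t" or "struct mutex"

        void example()
        {
            mutex_lock(&m);  // C-callable function variant (signature assumed)
            mutex_unlock(&m);

            m.lock();        // C++-only method variants added by the unified header
            m.unlock();
        }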
  9. Apr 25, 2013
    • Added "--leak" command line option · 600f960b
      Nadav Har'El authored
      Added to the loader a command-line option "--leak" to enable the leak
      detector immediately before running the payload's main(). For example, to
      look for leaks in tst-fpu.so (there are none, by the way ;-)), do
      
      	scripts/imgedit.py setargs build/release/loader.img --leak tests/tst-fpu.so
      
      and when it ends, look at the leak detection results:
      	$ gdb build/release/loader.elf
      	(gdb) connect
      	(gdb) osv leak show
      
      Unfortunately, this doesn't work when the payload is Java - I'm still trying
      to figure out why.
      600f960b
  10. Apr 24, 2013
    • memory: debug allocator · 56b1f6b2
      Avi Kivity authored
      This allocator works by giving each allocation its own virtual address
      range which is not reused for later allocations.  After a free(), the
      range is made inaccessible, forever, so use-after-free will result in a
      page fault.
      
      Sub-page overruns are also detected by filling unallocated space with a
      pattern, and checking whether the pattern has been altered during free().
      56b1f6b2
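      A sketch of the sub-page overrun check mentioned above; the pattern byte and
      the function names are illustrative:

        #include <cassert>
        #include <cstddef>
        #include <cstdint>

        const uint8_t pattern = 0xfe;

        // On allocation, the slack between the requested size and the end of the
        // mapped area is filled with the pattern.
        void fill_slack(uint8_t* slack, size_t len)
        {
            for (size_t i = 0; i < len; ++i)
                slack[i] = pattern;
        }

        // On free(), any byte that no longer matches indicates a sub-page overrun.
        void check_slack(const uint8_t* slack, size_t len)
        {
            for (size_t i = 0; i < len; ++i)
                assert(slack[i] == pattern && "sub-page buffer overrun detected");
        }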
  11. Apr 14, 2013
    • Remove some crap from mempool.hh · bd5aefec
      Nadav Har'El authored
      Needlessly aliased std::size_t to size_t (which does nothing but confuse
      Eclipse), defined a non-existent function, and exposed a function which
      shouldn't have been exposed.
      bd5aefec
  12. Apr 08, 2013
    • Fix memory leak in thread creation · 68284de3
      Nadav Har'El authored
      Thread object creation used to leak one page for the FPU state (thanks
      Avi for spotting this). Fix this (add a destructor which frees the page)
      and add to the test-suite a test for checking that thread creation doesn't
      leak memory - and while we're at it, also check that alloc_page() and
      malloc() at various sizes do not leak memory.
      68284de3
  13. Mar 06, 2013
  14. Feb 28, 2013
  15. Feb 27, 2013
  16. Feb 03, 2013
  17. Jan 28, 2013
  18. Jan 17, 2013
    • mmu: move initial memory setup to mmu.cc · 77cd3b6b
      Avi Kivity authored
      Initial memory is physical; the mmu converts it to virtual addresses, and
      then it can be added to the memory pool.  Right now there is not much
      difference, but the 1:1 mapping is moving soon.
      77cd3b6b
  19. Jan 16, 2013
  20. Jan 03, 2013
  21. Jan 01, 2013
    • mempool: fix memory leak when a pool page becomes free · a6b6db7f
      Avi Kivity authored
      We currently leak a pool page, because we cannot unlink the free objects
      belonging to the page from the pool's free list.
      
      Fix by having a per-page free list (containing objects only from that page).
      The pages are themselves placed on a doubly linked list.  When we identify
      an empty page, we can now easily drop it since the local free list points
      only within the page.
      a6b6db7f
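      A sketch of the resulting layout, with hypothetical names: each page owns a
      free list that points only within the page, and the pages themselves form a
      doubly linked list, so an empty page can be unlinked and dropped without
      touching any pool-wide list:

        struct free_object {
            free_object* next;        // singly linked; entries stay within one page
        };

        struct page_header {
            page_header* prev;        // pages of a pool form a doubly linked list
            page_header* next;
            free_object* local_free;  // free objects belonging to this page only
            unsigned nalloc;          // objects from this page still allocated
        };

        // When nalloc drops to zero the page is empty; since local_free never
        // points outside the page, unlinking prev/next is all that is needed.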
    • mempool: respect minimum object size · 0bedccee
      Avi Kivity authored
      Must be large enough to hold a free_object.
      0bedccee
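      In code, the constraint amounts to something like this sketch (where the
      clamp lives in the real pool constructor is an assumption):

        #include <algorithm>
        #include <cstddef>

        struct free_object { free_object* next; };

        // A freed object is reused as a free_object on the free list, so a pool's
        // object size can never be smaller than sizeof(free_object).
        size_t clamp_object_size(size_t requested)
        {
            return std::max(requested, sizeof(free_object));
        }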
  22. Dec 26, 2012
    • memory: implement a real malloc/free set · 22a07c17
      Avi Kivity authored
      This implementation stores small objects in pools of similar-sized objects,
      while large objects are allocated using a first-fit algorithm.  There is also
      a specialized interface for allocating aligned pages.
      22a07c17
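      A structural sketch of the dispatch described above; the threshold and the two
      back-ends are placeholders standing in for the pool allocator and the
      first-fit large allocator:

        #include <cstddef>
        #include <cstdlib>

        // Placeholders: the real small-object path draws from pools of
        // similar-sized objects, and the large path does a first-fit search
        // over free ranges.
        static void* pool_alloc(size_t size)  { return std::malloc(size); }
        static void* large_alloc(size_t size) { return std::malloc(size); }

        static const size_t pool_threshold = 4096;  // placeholder cut-off

        void* sketch_malloc(size_t size)
        {
            if (size <= pool_threshold)
                return pool_alloc(size);    // rounded up to a pool object size
            return large_alloc(size);       // first-fit allocation
        }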