  1. May 22, 2013
  2. May 21, 2013
  3. May 20, 2013
    • pthread: drop 'pmutex' · 83745222
      Avi Kivity authored
      We had a kludgy pmutex class used to allow zero initialization of
      pthread_mutex_t.  Now that the mutex class supports it natively, we
      can drop it.
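Native zero initialization means an all-zero byte pattern must itself be a valid unlocked mutex. A minimal sketch of that idea (names invented, not OSv's actual class):

```cpp
#include <atomic>
#include <cassert>
#include <cstring>
#include <cstddef>

// Sketch only: a mutex whose all-zero byte pattern is the valid unlocked
// state, so a zero-filling pthread_mutex_t initializer needs no shim.
struct zmutex {
    std::atomic<unsigned> owned{0};   // 0 == unlocked, the zero state

    bool try_lock() {
        unsigned expected = 0;
        return owned.compare_exchange_strong(expected, 1);
    }
    void unlock() { owned.store(0, std::memory_order_release); }
};

// A zero-filled buffer already behaves as an unlocked zmutex.
inline zmutex* zero_init(void* raw, std::size_t n) {
    std::memset(raw, 0, n);
    return static_cast<zmutex*>(raw);
}
```

This is what makes a pmutex-style wrapper unnecessary: no constructor has to run before the first lock attempt.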
    • mutex: improve compatibility with pthread_mutex_t initializers · 97ee8ac1
      Avi Kivity authored
      pthread_mutex_t has a 32-bit field, __kind, at offset 16.  Non-standard
      static initializers set this field to a nonzero value, which can corrupt
      fields in our implementation.
      
      Rearrange the field layout so we have a hole in that position.  To keep
      the structure small enough that our condvar still fits in
      pthread_cond_t, we need to shrink the _depth field to 16 bits.
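The layout trick can be pinned down with static_asserts. A hypothetical sketch (field names invented; 40 is sizeof(pthread_mutex_t) on x86-64 glibc):

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical layout: leave a 4-byte hole at offset 16, where glibc's
// pthread_mutex_t keeps its __kind field, so a non-standard static
// initializer writing there touches nothing live. _depth is narrowed
// to 16 bits to keep the structure small.
struct mutex_layout {
    std::uint32_t _lock;      // offset 0
    std::uint16_t _depth;     // offset 4, now only 16 bits
    std::uint16_t _pad;       // offset 6
    void*         _owner;     // offset 8
    std::uint32_t _hole;      // offset 16 -- aliases pthread's __kind
    std::uint32_t _tail;      // offset 20, remaining state would follow
};

static_assert(offsetof(mutex_layout, _hole) == 16,
              "__kind must land in the unused hole");
static_assert(sizeof(mutex_layout) <= 40,   // sizeof(pthread_mutex_t), x86-64
              "must still fit inside pthread_mutex_t");
```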
    • pthread: drop pthread's zombie reaper · e25fc7e7
      Avi Kivity authored
      Use the generic one instead; the cleanup function allows destroying
      the pthread object.
    • sched: fold detached_thread into thread · 4f334fd8
      Avi Kivity authored
      Instead of a subclass, make it a thread attribute.  This simplifies usage
      and also allows detaching a thread after creation.
    • sched: add facility to execute a cleanup function when a thread is destroyed · cc79c49b
      Avi Kivity authored
      Detached threads are auto collected, so give users a chance to execute some
      cleanup function before dying.
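These two commits combine naturally: detached-ness becomes a plain attribute, and a cleanup hook gives detached threads a way to release resources even though nobody joins them. A minimal sketch, with invented names:

```cpp
#include <cassert>
#include <functional>

// Sketch only: detach as a settable attribute plus a cleanup function
// run when the thread object is destroyed (the auto-collection path).
class thread {
public:
    void set_cleanup(std::function<void()> fn) { _cleanup = std::move(fn); }
    void detach() { _detached = true; }          // works after creation too
    bool detached() const { return _detached; }
    ~thread() { if (_cleanup) _cleanup(); }      // last chance to clean up
private:
    bool _detached = false;
    std::function<void()> _cleanup;
};
```

Making detach an attribute rather than a subclass is what allows flipping it after the thread already exists.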
    • The default backtrace depth of 20 for the alloc tracker wasn't enough · 2c1eb668
      Nadav Har'El authored
      for the very deep calls in Java. Increase it.
      
      In the future, I should have an option to save only the deepest calls,
      not the calls nearest the root.
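The "future" idea in the message can be sketched in a few lines. Assuming frames are ordered from the root outward (the allocation site last), keeping the deepest calls means keeping the tail:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch: when a backtrace is deeper than the tracker's capacity, keep
// the deepest frames (nearest the allocation site) and drop the ones
// nearest the root. Assumes frames[0] is the root, frames.back() the leaf.
std::vector<int> keep_deepest(const std::vector<int>& frames, std::size_t cap) {
    if (frames.size() <= cap) return frames;
    return {frames.end() - cap, frames.end()};
}
```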
    • tests: cleanup properly after tst-eventlist · 9b5ccdea
      Guy Zana authored
    • bsd: enable bsd log() prints · 4b76224e
      Guy Zana authored
    • bsd: fix read_random(), making it appear more randomized · cf99b754
      Guy Zana authored
      read_random() is used indirectly by the TCP stack to randomize local
      port numbers; before this patch its output was identical on every
      execution, which caused NAT problems.
      
      Also add a FIXME note to implement real randomness one day.
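A minimal sketch of the idea, not OSv's actual code (the FIXME about real randomness still stands): seed a PRNG once from an entropy-like source so consecutive runs differ, instead of returning the same sequence every boot:

```cpp
#include <cstring>
#include <random>

// Sketch only: one PRNG seeded at startup; read_random() fills the
// caller's buffer and reports how many bytes were produced.
static std::mt19937 rng{std::random_device{}()};

int read_random(void* buf, int count) {
    auto* p = static_cast<unsigned char*>(buf);
    for (int i = 0; i < count; ++i)
        p[i] = static_cast<unsigned char>(rng());
    return count;
}
```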
    • includes: fix import of sa_family_t, use bsd_sa_family_t instead. · 30ab1c50
      Guy Zana authored
      The linux/musl definition is 2 bytes long instead of 1 byte as in
      freebsd; this patch fixes the mismatch, which caused ifconfig to fail.
    • bsd: rename struct sockaddr et al, to bsd_sockaddr · b9c55866
      Guy Zana authored
      Attempting to put an end to the linux<->freebsd confusion.
    • Show some progress while waiting for "osv leak show" · b6d66939
      Nadav Har'El authored
      "osv leak show" can take a very long time until you start seeing output.
      This patch shows the progress of the first phase (getting data from the
      debugged program), and then says the second phase (sorting) starts.
      By the way, it turns out the sorting phase is slower (complexity-wise,
      this is obvious, but I didn't really expect it in this case).
    • Replace backtrace() implementation with one using libunwind · 53c7ade5
      Nadav Har'El authored
      The previous implementation of backtrace() required frame pointers.
      This meant it could only be used in the "debug" build, but worse,
      it also got confused by libstdc++ (which was built without frame pointers),
      leading to incorrect stack traces, and more rarely, crashes.
      
      This changes backtrace() to use libunwind instead, which works even
      without frame pointers. To satisfy the link dependencies, libgcc_eh.a
      needs to be linked *after* libunwind.a. Because we also need it linked
      *before* for other reasons, we end up with libgcc_eh.a twice on the
      linker's command line. The horror...
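Only the engine underneath changed; backtrace() keeps its usual glibc-style interface (fill an array with return addresses, report the count). A minimal caller, runnable here against glibc's own implementation:

```cpp
#include <execinfo.h>

// backtrace() fills `buf` with up to `depth` return addresses of the
// current call stack and returns how many were captured.
int capture(void** buf, int depth) {
    return backtrace(buf, depth);
}
```

The link-order point in the message is real pain: libgcc_eh.a must come after libunwind.a for these symbols, yet is also needed earlier, hence its double appearance on the linker command line.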
    • Fix bug in leak detection interaction with mmap() code · 064e99d5
      Nadav Har'El authored
      mmu::allocate(), implementing mmap(), used to first evacuate the
      region (marking it free), then allocate a tiny vma object (a start,end
      pair), and finally populate the region.
      
      But it turns out that the vma allocation, if it calls backtrace() for
      the first time, ends up calling mmap() too :-) The two nested mmap()s
      aren't protected by the mutex (it's the same thread), and the second
      mmap could take the region just freed by the first - before returning
      to the first mmap, which would reuse this region.
      
      We solve this bug by allocating the vma object before evacuating the
      region, so the other mmap picks different memory.
      
      Before this fix, "--leak tests/tst-mmap.so" crashes with assertion
      failure. With this fix, it succeeds.
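The reordering can be demonstrated with a toy model (all names invented, nothing here is OSv's actual code): free_ranges holds start addresses of free regions, and a nested allocation - standing in for backtrace()'s own first-time mmap() - always grabs the lowest free range.

```cpp
#include <set>

using ranges = std::set<long>;

// Stand-in for the reentrant mmap(): takes the lowest free range.
long nested_alloc(ranges& fr) {
    long a = *fr.begin();
    fr.erase(fr.begin());
    return a;
}

// Buggy order: evacuate the target first, then allocate the vma object.
// The nested allocation can steal the just-freed target region.
bool buggy_order(ranges fr, long target) {
    fr.insert(target);          // evacuate: target marked free
    nested_alloc(fr);           // reentrant mmap() may take target
    return fr.count(target) != 0;
}

// Fixed order: allocate the vma object before evacuating, so the
// nested allocation must pick different memory.
bool fixed_order(ranges fr, long target) {
    nested_alloc(fr);           // target not free yet; picks elsewhere
    fr.insert(target);          // evacuate afterwards
    return fr.count(target) != 0;
}
```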
    • Fix two deadlocks in the allocation tracking code · d25789d5
      Nadav Har'El authored
      This patch fixes two deadlocks which previously existed in the allocation
      tracking code in core/mempool.cc. These deadlocks became more visible
      with the new backtrace() implementation, which uses libunwind and more
      often allocates memory during its operation (especially from
      dl_iterate_phdr()).
      
      1. alloc_page() wrongly held the free_page_ranges lock for a bit too
         long - including when calling tracker_remember(). tracker_remember()
         then takes the tracker mutex. Unfortunately, the opposite lock order
         also occurs: consider a tracker_remember() (e.g., from malloc) that
         needs to allocate memory and, through the memory pool, ends up
         calling alloc_page().
      
         This is a classic deadlock situation. The solution is for alloc_page()
         to drop free_page_ranges_lock before calling tracker_remember().
      
      2. Another deadlock occurred between the tracker lock and a pool::_lock -
      
         thread A: malloc calls remember() taking the TRACKER LOCK, and then
         calling malloc() (often in dl_iterate_phdr()), which calls
         memory::pool::alloc and takes the POOL LOCK.
      
         thread B: malloc calls memory::pool::alloc which takes the POOL LOCK
         and then if the pool is empty, calls alloc_page() which is also
         tracked so it takes the TRACKER LOCK.
      
         Here the solution is not to track page allocations and deallocations
         from within the memory pool implementation. We add
         untracked_alloc_page() and untracked_free_page() and use those in
         the pool class. This not only solves the deadlock, it also provides
         better leak detection because pages held by the allocator are now
         no longer considered "leaks" (just the individual small objects
         themselves).
      
         The fact alloc_page() now calls untracked_alloc_page()
         also made solving problem 1 above more natural (the free_pages
         lock is held during untracked_alloc_page()).
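The fix for deadlock 1 is the classic "never hold both locks" pattern. A simplified sketch with invented helpers (not OSv's real allocator): carve the page under the range lock, drop it, and only then call the tracker, so the two mutexes are never held together and no lock-order cycle can form.

```cpp
#include <mutex>

std::mutex free_page_ranges_lock;
std::mutex tracker_lock;

void tracker_remember(void* p) {
    std::lock_guard<std::mutex> g(tracker_lock);
    (void)p;                          // record the allocation...
}

void* alloc_page() {
    static char page[4096];           // toy stand-in for a real page
    void* p;
    {
        std::lock_guard<std::mutex> g(free_page_ranges_lock);
        p = page;                     // carve a page from the free ranges
    }                                 // range lock dropped here, first
    tracker_remember(p);              // tracker lock then taken alone
    return p;
}
```

Deadlock 2 is solved differently: untracked_alloc_page()/untracked_free_page() inside the pool sidestep the tracker lock entirely, which also stops pages cached by the allocator from being reported as leaks.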
    • Don't waste time tracking allocations inside allocations · b211cf4b
      Nadav Har'El authored
      When the allocation tracking code itself does allocations, we do not
      track them - we detect the nesting via the recursion "depth" of the
      alloc_tracker mutex. This avoids some messy infinite recursion, and also
      improves performance because alloc_tracker does do a bunch of allocations
      (most of them not apparent from the code), and tracking them too would
      significantly slow it down, with no benefit because we're not debugging
      the allocation tracker itself.
      
      But while the existing code ignored nested alloc_tracker::remember(),
      we forgot to also ignore nested alloc_tracker::forget()! This meant
      that for each allocation inside alloc_tracker, we never tracked it,
      but wasted our time trying to delete it from the list of living
      allocations. This oversight caused a huge slowdown of alloc_tracker(),
      which this patch fixes. alloc_tracker() is now just very slow, not
      very very very slow ;-)
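A sketch of the guard (fields invented, heavily simplified): a recursion depth that makes the tracker ignore its own internal allocations - now honored in forget() too, not just remember(), so nested frees stop paying for a futile search of the live list.

```cpp
struct alloc_tracker {
    int depth = 0;     // >0 while inside the tracker itself
    int live = 0;      // count of tracked live allocations

    void remember(void*) {
        if (depth) return;           // nested: the tracker's own malloc
        ++depth;
        ++live;                      // (real code also saves a backtrace)
        --depth;
    }
    void forget(void*) {
        if (depth) return;           // the fix: skip nested frees as well
        ++depth;
        if (live) --live;            // (real code searches the live list)
        --depth;
    }
};
```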
    • Add partial implementation of msync() for libunwind · de374193
      Nadav Har'El authored
      libunwind, which the next patches will use to implement a more reliable
      backtrace(), needs the msync() function. It doesn't need it to actually
      sync anything - just to recognize valid frame addresses (stacks are
      always mmap()ed).
      
      Note this implementation does the checking, but is missing the "sync" part
      of msync ;-) It doesn't matter because:
      
      1. libunwind doesn't need (or want) this syncing, and neither does anything
         else in the Java stack (until now, msync() was never used).
      
      2. We don't (yet?) have write-back of mmap'ed memory anyway, so there's
         no sense in doing any writing in msync either. We'll need to work on
         a full read-write implementation of file-backed mmap() later.
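What libunwind actually relies on is msync() as an address-validity probe: per POSIX, msync() with MS_ASYNC returns 0 for a mapped range and fails (ENOMEM on Linux) for an unmapped one - no syncing needs to happen at all. A sketch of that usage against the host's real msync():

```cpp
#include <sys/mman.h>

// True iff the (page-aligned) range is currently mapped; the MS_ASYNC
// call performs no writeback for anonymous memory, only the check.
bool address_is_mapped(void* addr, size_t len) {
    return msync(addr, len, MS_ASYNC) == 0;
}
```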
  4. May 19, 2013
  5. May 18, 2013