Fix two deadlocks in the allocation tracking code
This patch fixes two deadlocks which previously existed in the allocation tracking code in core/mempool.cc. These deadlocks became more visible with the new backtrace() implementation, which uses libunwind and more often allocates memory during its operation (especially from dl_iterate_phdr()).

1. alloc_page() wrongly held the free_page_ranges lock for a bit too long - including while calling tracker_remember(), which takes the tracker mutex. Unfortunately, the opposite lock order also occurs: a tracker_remember() (e.g., from malloc) may itself need to allocate memory and, through the memory pool, end up calling alloc_page(). This is a classic deadlock situation. The solution is for alloc_page() to drop free_page_ranges_lock before calling tracker_remember().

2. Another deadlock occurred between the tracker lock and a pool::_lock:

   thread A: malloc calls remember(), taking the TRACKER LOCK, and then calls malloc() (often in dl_iterate_phdr()), which calls memory::pool::alloc and takes the POOL LOCK.

   thread B: malloc calls memory::pool::alloc, which takes the POOL LOCK and then, if the pool is empty, calls alloc_page(), which is also tracked, so it takes the TRACKER LOCK.

   Here the solution is not to track page allocations and deallocations made from within the memory pool implementation. We add untracked_alloc_page() and untracked_free_page() and use those in the pool class. This not only solves the deadlock, it also provides better leak detection, because pages held by the allocator are no longer considered "leaks" (only the individual small objects themselves are).

The fact that alloc_page() now calls untracked_alloc_page() also made solving problem 1 above more natural (the free_page_ranges lock is held only during untracked_alloc_page()).
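The first fix can be sketched roughly as follows. This is a minimal, self-contained illustration of the lock-ordering idea, not the actual mempool.cc code: the page size, the vector standing in for the tracker's records, and the use of `new` in place of real page-range carving are all stand-ins.

```cpp
#include <cassert>
#include <mutex>
#include <vector>

// Sketch of the fix for deadlock 1: alloc_page() releases the
// free_page_ranges lock *before* calling tracker_remember(), so the
// tracker mutex is never acquired while the page-range lock is held,
// and no lock-order cycle with the opposite path can form.
std::mutex free_page_ranges_lock;   // protects the page-range structures
std::mutex tracker_lock;            // protects the allocation tracker

std::vector<void*> tracked;         // stand-in for the tracker's records

void tracker_remember(void* p) {
    std::lock_guard<std::mutex> guard(tracker_lock);
    tracked.push_back(p);           // may itself allocate memory
}

void* untracked_alloc_page() {
    // The page-range lock is held only for the page-range work itself.
    std::lock_guard<std::mutex> guard(free_page_ranges_lock);
    return new char[4096];          // stand-in for carving out a page
}

void* alloc_page() {
    void* p = untracked_alloc_page();
    // free_page_ranges_lock has been dropped by this point, so taking
    // the tracker lock here is safe regardless of what
    // tracker_remember() allocates.
    tracker_remember(p);
    return p;
}
```

The key property is that the two mutexes are never held simultaneously on this path, so the opposite acquisition order elsewhere (tracker lock, then an allocation) can no longer deadlock against it.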
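The second fix can be sketched in the same spirit. Again this is a hypothetical simplification (the free-list layout, object size, and page size are invented for illustration): the point is only that the pool refills itself through untracked_alloc_page(), so pool::alloc() holds the pool lock but never the tracker lock.

```cpp
#include <cassert>
#include <mutex>
#include <vector>

// Sketch of the fix for deadlock 2: the pool's backing pages come from
// untracked_alloc_page(), which does not take the tracker lock. Thread
// B's path (POOL LOCK -> page allocation) therefore no longer reaches
// for the TRACKER LOCK, breaking the cycle with thread A.
void* untracked_alloc_page() {
    return new char[4096];          // no tracker lock taken here
}

class pool {
public:
    void* alloc() {
        std::lock_guard<std::mutex> guard(_lock);
        if (_free.empty()) {
            // Refill from an *untracked* page: pages held by the
            // allocator are bookkeeping, not user allocations, so they
            // are neither tracked nor later reported as leaks.
            char* page = static_cast<char*>(untracked_alloc_page());
            for (int i = 0; i < 4096; i += 64) {  // carve 64-byte objects
                _free.push_back(page + i);
            }
        }
        void* obj = _free.back();
        _free.pop_back();
        return obj;
    }
private:
    std::mutex _lock;               // the POOL LOCK
    std::vector<void*> _free;       // free small objects
};
```

Individual small objects handed out by the pool are still tracked at the malloc level, so leak detection keeps working for user allocations while ignoring the allocator's own pages.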