Commit ca1ac80b authored by Glauber Costa, committed by Pekka Enberg

mempool: memory statistics


This patch adds the basics of memory tracking and exposes an interface
through which that data can be collected.

We basically start with all stats at zero and, as we add memory to the system,
bump them up and recalculate the watermarks (to avoid recomputing them all the
time). When a page range comes up, it is added as free memory.

We operate based on what is currently sitting in the page ranges. This means
we effectively ignore memory sitting in the pools when accounting for usage. I
think this is a good assumption because it allows us to focus on the big
picture and leaves the pools to be used as liquid currency.
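
As a rough usage sketch of the new interface (the header path, the nesting of
stats inside namespace memory, and the print helper are assumptions of this
example; only stats::free() and stats::total() come from this patch):

    #include "mempool.hh"   // assumed header; the patch declares stats here
    #include <cstddef>
    #include <cstdio>

    // Hypothetical monitoring helper. Both reads are relaxed atomic
    // loads, so the two values form an approximate snapshot; that is
    // fine for reporting purposes.
    void print_memory_stats()
    {
        size_t free  = memory::stats::free();   // assumes stats nests
        size_t total = memory::stats::total();  // inside namespace memory
        std::printf("memory: %zu/%zu bytes free\n", free, total);
    }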

Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
parent c7e560f1
@@ -386,6 +386,34 @@ bi::set<page_range,
&page_range::member_hook>
> free_page_ranges __attribute__((init_priority((int)init_prio::fpranges)));
// Our notion of free memory is "whatever is in the page ranges". Therefore it
// starts at 0, and increases as we add page ranges.
//
// Updates to total should be fairly rare. We only expect updates upon
// boot, and eventually hotplug in a hypothetical future.
static std::atomic<size_t> total_memory(0);
static std::atomic<size_t> free_memory(0);
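// Accounting hooks. on_free()/on_alloc() track ranges moving in and out
// of free_page_ranges; on_new_memory() grows the total when a fresh
// range is donated to the allocator (at boot, or later via hotplug).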
static void on_free(size_t mem)
{
free_memory.fetch_add(mem);
}
static void on_alloc(size_t mem)
{
free_memory.fetch_sub(mem);
}
static void on_new_memory(size_t mem)
{
total_memory.fetch_add(mem);
}
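// Readers get an approximate snapshot: relaxed loads are enough here,
// since these counters are only used for reporting and may be slightly
// stale by the time the caller looks at them.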
namespace stats {
size_t free() { return free_memory.load(std::memory_order_relaxed); }
size_t total() { return total_memory.load(std::memory_order_relaxed); }
}
static void* malloc_large(size_t size)
{
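// Round the request up to a whole number of pages before carving it
// out of a page range.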
size = (size + page_size - 1) & ~(page_size - 1);
@@ -404,6 +432,7 @@ static void* malloc_large(size_t size)
header->size -= size;
ret_header = new (v + header->size) page_range(size);
}
on_alloc(size);
void* obj = ret_header;
obj += page_size;
trace_memory_malloc_large(obj, size);
@@ -435,6 +464,9 @@ static page_range* merge(page_range* a, page_range* b)
static void free_page_range_locked(page_range *range)
{
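// Credit the whole range back to the free counter before merging with
// neighbors; merging itself does not change how much memory is free.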
auto i = free_page_ranges.insert(*range).first;
on_free(range->size);
if (i != free_page_ranges.begin()) {
i = free_page_ranges.iterator_to(*merge(&*boost::prior(i), &*i));
}
@@ -497,6 +529,7 @@ static void refill_page_buffer()
auto p = &*it;
auto size = std::min(p->size, (limit - pbuf.nr) * page_size);
p->size -= size;
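// Pages moved into the page buffer are billed as allocated: by design,
// memory sitting in pools and buffers does not count as free.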
on_alloc(size);
void* pages = static_cast<void*>(p) + p->size;
if (!p->size) {
free_page_ranges.erase(*p);
@@ -559,6 +592,7 @@ static void* early_alloc_page()
auto p = &*free_page_ranges.begin();
p->size -= page_size;
on_alloc(page_size);
void* page = static_cast<void*>(p) + p->size;
if (!p->size) {
free_page_ranges.erase(*p);
@@ -635,10 +669,20 @@ void* alloc_huge_page(size_t N)
int endsize = v+range->size-ret-N;
// Make the original page range smaller, pointing to the part before
// our ret (if there's nothing before, remove this page range)
if (ret==v) {
free_page_ranges.erase(*range);
on_alloc(N);
} else {
// Note that this is done conditionally because we are operating on
// page ranges: we bill whatever leaves our page ranges, not the "N"
// bytes we are currently allocating. The difference will later be
// wiped out by the on_free() call inside free_page_range() in the
// conditional right below us.
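// Worked example (illustrative numbers): a range of 8 pages starting at
// v, with ret = v + 2 pages and N = 4 pages. We bill on_alloc(6 pages)
// here, i.e. everything from ret to the end of the range; the 2-page
// endsize tail is then credited back by on_free() below, so the net
// charge is exactly the 4 pages (N) actually taken.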
on_alloc(range->size - (ret - v));
range->size = ret-v;
}
// Create a new page range for the endsize part (if there is one)
if (endsize > 0) {
void *e = (void *)(ret+N);
@@ -680,6 +724,9 @@ void free_initial_memory_range(void* addr, size_t size)
if (!size) {
return;
}
on_new_memory(size);
free_page_range(addr, size);
}

@@ -105,6 +105,10 @@ void enable_debug_allocator();
extern bool tracker_enabled;
namespace stats {
size_t free();
size_t total();
}
}
#endif