mmu: don't bail out on huge page failure
Addressing that FIXME, as part of my memory reclamation series, but this is ready to go already. The goal is to retry serving the allocation if a huge page allocation fails, and fill the range with 4k pages instead. The simplest and most robust way I've found to do that was to propagate the error up until we reach operate(). Once there, all we need to do is re-walk the range with 4k pages instead of 2MB ones. We could theoretically just bail out on huge pages and move hp_end, but, especially once we have reclaim, it is likely that one operation will fail while the upcoming ones may succeed.

Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
[ penberg: s/NULL/nullptr/ ]
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
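
Below is a minimal, self-contained sketch of the fallback pattern the commit describes: a huge-page mapping failure is propagated up to the caller, which then re-walks the range with 4k pages. The names (map_range, try_map_huge, map_small, allocate) are hypothetical and the allocation is simulated; this does not reproduce OSv's actual mmu code or its operate() walker.

```cpp
#include <cstdint>
#include <cstdio>

constexpr uintptr_t small_page = 4096;
constexpr uintptr_t huge_page  = 2 * 1024 * 1024;

// Pretend allocator: always fails for huge pages, to exercise the fallback.
static bool allocate(uintptr_t size)
{
    return size != huge_page;   // simulate huge-page exhaustion
}

// Returns false when the huge-page allocation fails, propagating the error
// up instead of silently bailing out mid-range.
static bool try_map_huge(uintptr_t addr)
{
    if (!allocate(huge_page)) {
        return false;
    }
    std::printf("mapped 2MB page at %#lx\n", (unsigned long)addr);
    return true;
}

static bool map_small(uintptr_t addr)
{
    if (!allocate(small_page)) {
        return false;
    }
    std::printf("mapped 4k page at %#lx\n", (unsigned long)addr);
    return true;
}

// The caller (standing in for operate()): walk the range with huge pages
// first; if any huge-page mapping fails, re-walk the same range with 4k
// pages rather than giving up on the remainder.
static bool map_range(uintptr_t start, uintptr_t end)
{
    bool huge_ok = true;
    for (uintptr_t addr = start; addr < end; addr += huge_page) {
        if (!try_map_huge(addr)) {
            huge_ok = false;
            break;
        }
    }
    if (huge_ok) {
        return true;
    }
    // Fallback: re-walk the whole range with small pages.
    for (uintptr_t addr = start; addr < end; addr += small_page) {
        if (!map_small(addr)) {
            return false;
        }
    }
    return true;
}

int main()
{
    return map_range(0x200000, 0x400000) ? 0 : 1;
}
```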