This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.


Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]
Other format: [Raw text]

Re: [PATCH 2/2] Free large chunks in ggc


> If the size to free is smaller than the quirk size, then it has the very
> undesirable effect that with using GC only you might run unnecessarily out
> of virtual address space, because it allocates pages in 2MB chunks, but
> if they are released in 1MB chunks, those released chunks will never be

1MB is just the minimum; it frees whatever it can find
(though usually only for a single GC cycle). So when enough contiguous
memory is free it will be reused by the GC.

I guess it would be possible to add a fallback that allocates a smaller
chunk if the large chunk allocation fails, but unless someone actually
comes up with a test case I doubt it is really needed.

> usable again for GC.  Consider on 32-bit address space allocating 3GB
> of GC memory, then freeing stuff in every odd 1MB chunk of pages, then
> wanting to allocate through GC the 1.5GB back.
> 
> IMHO we should munmap immediately in release_pages the > G.pagesize pages,

Then you get the fragmentation problem back in full force.

> those are not very likely to be reused anyway (and it had one in between
> ggc_collect cycle to be reused anyway), and for the == G.pagesize
> (the usual case, the only ones that are allocated in GGC_QUIRK_SIZE sets)
> we should note which page was the first one in the GGC_QUIRK_SIZE chunk
> and munmap exactly those 2MB starting at the first page only.

I tried this first with aligned 2MB chunks, but it never triggers in a
normal (non-LTO) bootstrap.

-Andi
-- 
ak@linux.intel.com -- Speaking for myself only.

