[PATCH 2/2] Free large chunks in ggc
Wed Oct 19 15:08:00 GMT 2011
On Wed, Oct 19, 2011 at 04:37:45PM +0200, Andi Kleen wrote:
> > If the size to free is smaller than the quirk size, then it has the very
> > undesirable effect that even when using GC alone you might run unnecessarily
> > out of virtual address space, because it allocates pages in 2MB chunks, but
> > if they are released in 1MB chunks, those released chunks will never be
> > usable again for GC. Consider on a 32-bit address space allocating 3GB
> > of GC memory, then freeing stuff in every odd 1MB chunk of pages, then
> > wanting to allocate through GC the 1.5GB back.
> 1MB is just the minimum; it frees whatever it can find
> (but usually only for a single GC cycle). So when enough contiguous
> memory is free it will be reused by GC.
> I guess it would be possible to add a fallback to allocate a smaller
> chunk if the large chunk fails, but unless someone actually comes
> up with a test case I have doubts it is really needed.
> > IMHO we should munmap immediately in release_pages the > G.pagesize pages,
> Then you get the fragmentation problem back in full force.
Why? For one, such allocations are very rare (you only get them when
a single GC allocation requests more than a page of memory, like perhaps a
string literal over 4KB, or a function call with over 1000 arguments, etc.).
And if they are unlikely to be reused, not munmapping them means wasting
more virtual address space than needed.