Heap fragmentation (Was: Debugging "Leaks" With Boehm-GC)

Boehm, Hans hans.boehm@hp.com
Wed Jan 18 13:36:00 GMT 2006


Thanks.

It looks like the first dump was generated right after a GC, but was
there some intervening allocation between the last GC and the second
dump?  I'm asking because the second dump contains a bunch of unmarked
objects, which is OK if they were newly allocated, but not otherwise.

Were you trying to allocate something larger than about 450K, the size
of the largest free block?

You might check the finalizable object statistics, and make sure that
number is not continuously growing over time.  It seems fairly high to
me, but I don't have good gcj statistics to compare it to.

This looks like a large object fragmentation problem, which is
theoretically unavoidable without moving objects, and we sometimes can't
do much better in practice :-).  As always, the huge static root size
makes this worse by causing the collector to collect too infrequently
with the default heuristic.

Unfortunately, I think your options are mostly the ones that have
already been mentioned:

- Increase GC_free_space_divisor.
- Avoid allocating very large objects, if you can.  (Turning a large
single-dimensional array into a two-dimensional one is likely to help;
see the sketch after this list.)
- Don't declare finalizers unless you really need them.  (Not that
you're necessarily doing that.)
- Wait for someone to fix/help fix the root size issue.
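
To illustrate the second point, here is a minimal Java sketch (the class
name, chunk size, and accessors are invented for illustration, not taken
from your code) of trading one huge contiguous array for a chunked,
two-dimensional layout.  The collector then only has to place many small
row arrays, each of which fits easily into existing free blocks, instead
of a single block larger than any free hole:

    /*
     * Hypothetical sketch: a large byte buffer stored as fixed-size
     * chunks ("rows") instead of one contiguous array.  Rather than
     * asking the GC for one multi-megabyte block, we ask for many
     * 64 KB blocks, which a fragmented heap can usually satisfy.
     */
    final class ChunkedBuffer {
        private static final int CHUNK = 64 * 1024;  // well below a ~450K largest free block

        private final byte[][] chunks;

        ChunkedBuffer(int size) {
            chunks = new byte[(size + CHUNK - 1) / CHUNK][];
            for (int i = 0; i < chunks.length; i++) {
                chunks[i] = new byte[CHUNK];         // many small allocations, no huge one
            }
        }

        byte get(int index) {
            return chunks[index / CHUNK][index % CHUNK];
        }

        void set(int index, byte value) {
            chunks[index / CHUNK][index % CHUNK] = value;
        }
    }

    // Usage, e.g.: ChunkedBuffer buf = new ChunkedBuffer(8 * 1024 * 1024);

The chunk size is a tuning knob: it just has to be comfortably smaller
than the free blocks your heap typically has available.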

Unless I'm missing something, or my interpretation above is incorrect, I
don't see anything really broken.

Hans

> -----Original Message-----
> From: java-owner@gcc.gnu.org [mailto:java-owner@gcc.gnu.org] 
> On Behalf Of Martin Egholm Nielsen
> Sent: Tuesday, January 17, 2006 12:59 PM
> To: java@gcc.gnu.org
> Subject: Re: Heap fragmentation (Was: Debugging "Leaks" With Boehm-GC)
> 
> 
> Hi Hans,
> 
> >>>Just stumbled across this old thread - seems that I'm beginning to 
> >>>run into fragmentation problems - not being able to allocate memory 
> >>>for some things. I have set GC_MAXIMUM_HEAP_SIZE to 14000000 
> >>>(14 MB), but I wonder where I can configure GC_free_space_divisor? 
> >>>In "alloc.c" it seems - but there it's set to 3, whereas "gc.h" says 
> >>>it's initially 4?!
> >>We set GC_free_space_divisor before calling JvRunMain.  I don't know 
> >>what happens if you set it after the runtime is already started.
> > 
> > That should be OK.  It should control future GC/heap expansion 
> > decisions.
> > 
> >>>Moreover, is 20 a good value? Or is this trial-and-error?
> >>Trial and error.
> >>The larger the divisor, the more time spent in GC, but the less 
> >>likely you are to end up in the pathological situation where there 
> >>is plenty of free memory, but it is all in pools for objects of a 
> >>size other than the one you are trying to allocate.
> >>
> >>I think the default value is probably appropriate for cases where 
> >>there is no upper bound on memory size.  For bounded memory size, we 
> >>have found that a larger divisor is needed.
> > If the issue here is really fragmentation, it would be nice to 
> > understand it better.  A call to GC_dump() or setting the 
> > GC_DUMP_REGULARLY environment variable should tell you what's in 
> > the heap.  Real fragmentation per se can only occur if either:
> 
> Here goes, Hans - attached is a dump from the GC when it crashes, 
> followed by a GC dump after the client gave up its attempt to do the 
> "RPC operation".
> 
> I can provide you plenty of dumps :-)
> 
> Regards,
>   Martin Egholm
> 


