Garbage collector stopping my world for half a second

Boehm, Hans <hans.boehm@hp.com>
Thu Dec 1 21:10:00 GMT 2005


The size of the root set will be affected by the total size of the libraries
(or pieces of static libraries) you load.  If root size is the problem, the
long pause may not show up with a small test program, especially if it's
statically linked.  I would try running with GC_DUMP_REGULARLY set in the
environment.
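
For reference, GC_DUMP_REGULARLY is an environment variable the collector
checks at startup; assuming your libgc exposes GC_dump() in gc.h, you can
also request the same report from code.  A minimal, untested sketch:

    #include <gc/gc.h>      /* may just be <gc.h>, depending on the install */

    int main(void)
    {
        void *p;

        GC_INIT();                      /* initialize the collector */
        p = GC_MALLOC(64 * 1024);       /* allocate so the heap isn't empty */
        (void)p;
        GC_gcollect();                  /* force a full collection */
        GC_dump();                      /* print static root segments, heap
                                           sections, sizes, etc. to stderr */
        return 0;
    }

The static-root listing in the dump should make it fairly obvious whether
library data sections are dominating the root set.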

The simple test example may also have had less live data, or less
pointer-dense data.

I believe the collector is still scanning all sorts of junk (e.g.
exception tables on some platforms) that it shouldn't be scanning.  (The
fact that it is scanning library data by default means that it sees Java
pointers stored by CNI code into globals, which is probably good.  The
fact that it is sometimes scanning exception tables is clearly bad.)

Given that you're on a slow machine, I think your best bets are:

1) Reduce root sizes if those are a large part of the problem.

2) If that's insufficient, you might try getting the incremental GC to
work (a rough sketch of what that involves follows below).
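
For what it's worth, here is a rough, untested sketch of what enabling
incremental mode looks like at the libgc level.  GC_enable_incremental()
and the GC_free_space_divisor variable are part of the collector's C API;
gcj itself would have to make the call early, before much is allocated,
and whether incremental marking actually works on your kernel/platform is
a separate question:

    #include <gc/gc.h>   /* or <gc.h>, depending on the install */

    int main(void)
    {
        GC_enable_incremental();     /* request incremental/generational
                                        collection; must be called before
                                        the heap grows */
        GC_free_space_divisor = 8;   /* larger divisor => collect more often
                                        and keep heap growth (and each pause)
                                        smaller; the default is around 3 */

        /* ... application code allocating via GC_MALLOC() as usual ... */
        return 0;
    }

This is also the "divisor" variable mentioned below: it trades total GC
overhead for less heap growth between collections.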

You might also be able to get some benefit from adding a PREFETCH
implementation to gcconfig.h for your platform.  This might be trivial,
since there is one for Darwin/PowerPC.  What's less clear is whether it
will do any good on your platform.
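
Concretely, the Darwin/PowerPC entry in gcconfig.h defines PREFETCH in
terms of the PowerPC cache-touch instructions.  Something along the
following lines might be a starting point (untested, and it would have to
go in the Linux/PowerPC section of gcconfig.h for your configuration):

    /* Sketch of a PREFETCH implementation, modeled on the Darwin/PowerPC
       one.  dcbt = data cache block touch; dcbtst = touch for store.
       Whether the 405 core gets any benefit from software prefetch is the
       open question. */
    #define PREFETCH(x) \
        __asm__ __volatile__ ("dcbt 0,%0" : : "r" ((const void *) (x)))
    #define PREFETCH_FOR_WRITE(x) \
        __asm__ __volatile__ ("dcbtst 0,%0" : : "r" ((const void *) (x)))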

Hans

> -----Original Message-----
> From: java-owner@gcc.gnu.org [mailto:java-owner@gcc.gnu.org] On Behalf Of Martin Egholm Nielsen
> Sent: Thursday, December 01, 2005 11:48 AM
> To: java@gcc.gnu.org
> Subject: Re: Garbage collector stopping my world for half a second
> 
> 
> Hi guys,
> 
> >  > Could you remind me what your platform is?
> It's my embedded 133 MHz PPC405EP - so it's nothing like the PC
> analogies you mention ;-(
> 
> >  > For a modern PC, with a heap size of 8MB, and assuming the root size
> >  > is reasonable, 60 msecs should be on the high side.  (The GC time
> >  > should be roughly proportional to <live data size> + <root size>.  On
> >  > a GC_Bench variant, the marker seems to scan roughly 200-250MB/sec on
> >  > a single 2 GHz P4 Xeon.  There is a bit of other overhead associated
> >  > with collections, but that's the lion's share.)
> >  >
> >  > Unless you're on a very slow processor, the 360 msecs seems anomalous
> >  > to me.  It would be useful to understand what's going on during that
> >  > time.  Presumably paging is not an issue?
> So, yes, it's a slow processor, I guess... And there is no paging, no.
> 
> >  > More generally, the GC provides some support for incremental
> >  > collection, but AFAIK that's not really supported by gcj, and hence
> >  > may take some effort to get to work for your application.
> Bummer! :-)
> But I stumbled across some "SMALL_CONFIG" parameter stuff googling old
> postings, and of course the "divisor" variable, as well.
> However, I never got a feeling of how this latter parameter would affect
> things...
> 
> > With gcj our root set is way, way too big.  Fixing this is at the top
> > of my list of things to fix.
> But, since I've seen GC times of 60 msecs for a simple test example, I
> guess the root-set scanning time is below (roughly equal to) this time,
> and similar regardless of the application?!  If this is the case, the
> remaining 300 msecs is <live data size>...
> 
> Best regards,
>   Martin Egholm
> 
> 


