This is the mail archive of the java@gcc.gnu.org mailing list for the Java project.



RE: Analysis of Mauve failures - The final chapter


This is a long story, and I think it has been discussed here before. 

The "short" answer is that in an ideal world, the gcc back end should:

1) Guarantee GC-safety, i.e. guarantee that every live object is referenced
by at least one pointer the collector can recognize.  Currently the back end
comes close enough to guaranteeing this that nobody complains, and thus
nobody is sufficiently motivated to fix it.  I believe all Java references
stored in statically allocated memory or on the heap are currently pointers
to the base of the corresponding object, and are thus guaranteed to be
collector recognizable.  This is not true for temporaries held in registers
or spilled to the stack.  But the collector always recognizes interior
pointers from those sources, so in nearly all cases it makes up for this
problem, especially since Java arrays have a header.  The compiler should
guarantee recognizability in ALL cases.  I'm not sure there's an official
bug report about this.  AFAIK, it has never been observed in practice with
gcj.  It's easy enough to contrive C test cases for which it breaks with
optimization, at least on some architectures (a sketch follows this list).

2) Ensure that an accessible array is always referenced by a pointer to (or
near) the beginning of the array.  (The conjecture is that if we did (1),
this would be easy.)
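To make (1) concrete, here is a contrived sketch, assuming only the standard
Boehm GC header (gc.h); the OFFSET constant is made up and serves only to
push the compiler-derived pointer well outside the object:

  #include <gc.h>
  #include <stdio.h>

  #define N      10000
  #define OFFSET 100000   /* hypothetical; large enough to land outside the object */

  static long sum_shifted(void)
  {
      int *a = (int *) GC_MALLOC(N * sizeof(int));
      long sum = 0;

      for (int i = 0; i < N; i++)
          a[i] = i;

      /* An optimizer may strength-reduce a[i - OFFSET] below into
       * *(derived + i) with derived == a - OFFSET, and keep only
       * 'derived' live in a register.  'derived' points far before
       * the object, so a conservative scan of registers and the
       * stack would not recognize it, and a collection triggered
       * inside the loop could reclaim the array while it is still
       * in use. */
      for (long i = OFFSET; i < OFFSET + N; i++)
          sum += a[i - OFFSET];

      return sum;
  }

  int main(void)
  {
      GC_INIT();
      printf("%ld\n", sum_shifted());
      return 0;
  }

Whether a given compiler and architecture actually produce that code is not
guaranteed; the point is only that nothing in the back end currently rules
it out.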

If the back end did both of these, we could allocate large arrays such that
interior pointers to them would never need to be recognized, which would
basically make these warnings disappear, at least in the absence of native
code that allocates collectable objects.  (The collector hooks to do this
have been there for a long time.  They're the ...ignore_off_page()
allocation calls; a sketch of their use follows.)
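For reference, a minimal sketch of one of those calls, again assuming the
standard gc.h interface; the array size here is made up:

  #include <gc.h>

  #define LARGE_ELEMS (1 << 20)   /* hypothetical large array size */

  static int *alloc_large_int_array(void)
  {
      /* The caller promises to keep a pointer to (or near) the start
       * of the block, so the collector need not honor interior
       * pointers into the middle of the array -- which is why the
       * blacklist warnings would no longer apply to such allocations.
       * For pointer-free data, GC_malloc_atomic_ignore_off_page()
       * is the corresponding call. */
      return (int *) GC_malloc_ignore_off_page(LARGE_ELEMS * sizeof(int));
  }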

Clearly none of this will happen in time for 3.1.

As it stands, I'm hesitant to turn off the warnings by default, though I can
see arguments either way.  If the warnings occur repeatedly, they are
indicative of a potential memory leak.  If someone wants to turn them off by
default, and instead provide an environment variable to turn them back on, I
could probably be talked into that, too.

Hans


> -----Original Message-----
> From: Andrew Haley [mailto:aph@cambridge.redhat.com]
> Sent: Friday, April 05, 2002 1:08 AM
> To: Boehm, Hans
> Cc: 'Mark Wielaard'; java@gcc.gnu.org
> Subject: RE: Analysis of Mauve failures - The final chapter
> 
> 
> Boehm, Hans writes:
>  > > From: Mark Wielaard [mailto:mark@klomp.org]
>  > > > !java.lang.reflect.Array.newInstance
>  > > Ugh, not fun. Running by hand also hangs, but turning on the -debug
>  > > or -verbose flag makes it run... When not giving any flags it only
>  > > prints
>  > >   Needed to allocate blacklisted block at 0x824b000
>  > > The test actually tries to force an OutOfMemoryError exception which
>  > > might explain this. But the Object.clone() test also seems to do this
>  > > and that one just works.
> 
>  > All tests that allocate large objects should ideally be run with the
>  > environment variable GC_NO_BLACKLIST_WARNING defined.  That will get
>  > rid of the message.  The occurrence of the warning is often less than
>  > 100% deterministic, and that's expected.  Was there an issue here
>  > beyond the warning?
> 
> Hans,
> 
> I don't understand.  If the gc isn't buggy, why does it produce this
> warning at all in a production quality system?
> 
> Andrew.
> 

