
Re: Release RTL bodies after compilation (sometimes)


On Wed, Sep 15, 2004 at 08:30:43AM -0400, Diego Novillo wrote:
> On Wed, 2004-09-15 at 07:32, Jan Hubicka wrote:
> 
> > What about renaming ggc_free to ggc_dead to make it obvious that one can
> > not "free" live data?
> > 
> It's not a matter of naming.  When you are expunging a basic block, you
> really consider it dead, but as we have proven, you don't really know if
> it's reachable from somewhere else.

Then this isn't a suitable candidate for ggc_free, obviously.  Unless,
of course, you want to _define_ that it is dead when expunging it.  In
that case, ggc_free acts as a form of assertion.
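
As a concrete sketch of what I mean - this is not actual GCC code, and
the unlink_from_cfg helper is invented - defining the block to be dead
at the point we expunge it would look roughly like this (imagine it
living in cfg.c, with basic-block.h and ggc.h already in scope):

  /* Sketch only.  If expunging a block *defines* it to be dead, then
     freeing it doubles as an assertion: with GC checking the storage
     is poisoned, so any forgotten reference elsewhere reads obvious
     junk, and with GCAC checking the next collection verifies that
     nothing still reaches the freed object.  */

  static void
  expunge_block_and_assert_dead (basic_block bb)
  {
    /* Invented helper: detach BB from every list and table that the
       CFG itself owns, so the only remaining references are bugs.  */
    unlink_from_cfg (bb);

    /* "This block is dead" - and the checking code enforces it.  */
    ggc_free (bb);
  }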

> Again, if we find ourselves having to help GC by expressly telling it
> what is dead and what isn't, then our GC system is broken and ggc_free
> is _NOT_ the way to fix it.
> 
> If we are holding onto too much unnecessary data in our algorithms, then
> the solution _ought_ to involve breaking the chains to the dead data so
> that we can collect all that garbage.  And the way of breaking those
> chains should simply be writing NULL to your pointers.
> 
> If you want to use ggc_free() in your local tree to find out which
> passes are holding on to garbage for too long, that is fine with me. 
> But _please_ do not use that crutch to compensate for GC's shortcomings.
> 
> Together with that, we need to distinguish data structures whose usage
> pattern is not really suitable for GC.  Again, the fact that you are
> adding ggc_free() here and there may be a symptom of a memory allocation
> mismatch.
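
Just so we are talking about the same thing, the chain-breaking you
describe amounts to this, as I read it (the x_pass_scratch field is
invented for illustration):

  static void
  drop_pass_scratch (struct function *fn)
  {
    /* Clear the only pointer to the per-pass data.  It becomes
       ordinary garbage, and the next ggc_collect () reclaims it with
       no explicit ggc_free at all.  */
    fn->x_pass_scratch = NULL;
  }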

But this part of the argument, which you and Jeff keep reiterating,
doesn't make sense.  Even with a garbage collector well beyond what we
have today, collection is expensive, and so is the locality penalty of
carrying garbage around.  I've done measurements suggesting that, if
"optimal" memory allocation were possible, the compiler would be 10-15%
faster.  Reducing the number of collections required for a file has a
similar benefit.

If you "solve" this by moving the object out of GC, you lose the
powerful checking ability of ggc_free.  With GC checking, it poisons
the object that you're claiming is dead.  With GCAC checking, it
verifies at the next collection that it really isn't reachable.

Some of this can be addressed by fixing "GC's shortcomings" - for
instance, putting optimizer data specific to a function into a GC zone
and discarding the zone (optionally checking that nothing in it is
still reachable!) at the end of the function.  And Jeff is right to
remind us that ggc_free still has overhead - less in the collector I'm
working on, but it's still there.  So looping over lots of small
allocations to destroy them individually is probably not a good idea.
But for large allocations made while processing a function, what's the
problem with using ggc_free?
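
The pattern I have in mind looks something like this, reusing the
invented x_pass_scratch field from above (again a sketch, not a real
pass):

  static void
  finish_pass (struct function *fn)
  {
    struct pass_scratch *s = fn->x_pass_scratch;  /* invented */

    /* The many small nodes hanging off S are left to the collector;
       walking them all with ggc_free would just add overhead.  */
    fn->x_pass_scratch = NULL;

    /* The one large allocation is worth releasing (and, with checking
       enabled, poisoning) right away.  S itself is small, so leave it
       for the collector.  */
    if (s && s->big_table)
      ggc_free (s->big_table);
  }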

-- 
Daniel Jacobowitz

