This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.


Re: Faster compilation speed


"Tim Josling wrote:

>This is consistent with my tests; I found that a simplistic allocation which
>put everything on the same page, but which never freed anything, actually
>bootstrapped GCC faster than the standard GC.
>
Not too surprising actually; GCC's own sources aren't the hard cases for GC.

>
>The GC was never supposed to make GCC faster, it was supposed to reduce
>workload by getting rid of memory problems. But I doubt it achieves that
>objective. Certainly, keeping track of all the attempts to 'fix' GC has burned
>a lot of my time.
>
The original rationale that I remember was to deal with hairy C++ code
where the compiler would literally exhaust available VM when doing
function-at-a-time compilation.  If that's still the case, then memory
reclamation is a correctness issue.  But it's worth tinkering with the
heuristics; we got a little improvement on Darwin by bumping
GGC_MIN_EXPAND_FOR_GC from 1.3 to 2.0 (it was a while back, don't
have the comparative numbers).

Stan"

Much of the overhead of GC is not the collection as such, but the allocation
process and its side-effects. In fact, if you allocate using the GC code, the
build runs faster if you actually do the GC, though tweaking the threshold can
help. However, for many programs you are better off allocating very simply and
not doing GC at all.
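
To make "allocate very simply" concrete: the simplistic scheme referred to
above is essentially a bump allocator that carves objects out of big blocks
and never gives anything back. A rough sketch of the idea (not the code used
in the actual test; the block size and names are arbitrary):

#include <stdlib.h>

#define BLOCK_SIZE (1 << 20)    /* grab memory a megabyte at a time */

static char *block;             /* current block being carved up */
static size_t block_used;       /* bytes already handed out from it */

void *
simple_alloc (size_t size)
{
  char *p;

  /* Round up so every object stays aligned for any basic type.  */
  size = (size + sizeof (long) - 1) & ~(sizeof (long) - 1);

  /* Out of room?  Grab a fresh block and abandon the old one; nothing
     is ever freed, which is the whole point.  */
  if (block == NULL || block_used + size > BLOCK_SIZE)
    {
      block = malloc (size > BLOCK_SIZE ? size : BLOCK_SIZE);
      if (block == NULL)
        abort ();
      block_used = 0;
    }

  p = block + block_used;
  block_used += size;
  return p;
}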

The GC changes have, in my opinion, made a small number of programs better at
the expense of making most compiles slower. We should not be using GC for most
compiles at all.

This - an optimisation that actually makes things worse overall - is
unfortunately a common situation with 'improvements' to GCC.

Tim Josling

