Re: Faster compilation speed
- From: Noel Yap <yap_noel at yahoo dot com>
- To: Linus Torvalds <torvalds at transmeta dot com>, gcc at gcc dot gnu dot org
- Date: Sat, 10 Aug 2002 19:20:08 -0700 (PDT)
- Subject: Re: Faster compilation speed
--- Linus Torvalds <torvalds@transmeta.com> wrote:
> In article
> <20020809200413.46719.qmail@web21403.mail.yahoo.com>
> you write:
> >Build speeds are most helped by minimizing the
> number
> >of files opened and closed during the build.
>
> I _seriously_ doubt that.
Yes, my statement is exaggerated, although it's not
completely without truth.
The study by John Lakos and some testing I have done
myself both indicate that minimizing file opens does
speed up builds significantly.
Of course, that's not to say that other courses of
action shouldn't be pursued.
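
For anyone who hasn't read Lakos, the specific technique is the
redundant (external) include guard: the client repeats the guard test
around the #include so that, once the header has been seen, the
preprocessor never has to open the file again. A toy example (the
file and macro names are made up, not from any real project):

/* foo.h -- ordinary internal include guard */
#ifndef INCLUDED_FOO_H
#define INCLUDED_FOO_H
typedef struct { int x; } Foo;
#endif

/* client.c -- Lakos-style redundant external guard; once
 * INCLUDED_FOO_H is defined, the preprocessor skips the directive
 * without opening foo.h a second time. */
#ifndef INCLUDED_FOO_H
#include "foo.h"
#endif

int main(void)
{
    Foo f = { 0 };
    return f.x;
}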
> Opening (and even reading) a cached file is not an
> expensive operation,
> not compared to the kinds of run-times gcc has.
> We're talking a few
> microseconds per file open at a low level. Even
> parsing it should not
> be that expensive, especially if the preprocessor is
> any good (and from
> all I've seen, these days it _is_ good).
Hmm, perhaps it's time I conducted some tests again.
I'm assuming you're talking about caching at the OS
level?
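
Something along these lines is the sort of test I have in mind: open,
read, and close an already-cached file in a tight loop and see what
the per-iteration cost really is. A rough sketch (the default path
and the iteration count are arbitrary):

/* Time repeated open/read/close of a file that should already be in
 * the OS page cache after the first pass. */
#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *path = (argc > 1) ? argv[1] : "/usr/include/stdio.h";
    const int iterations = 10000;
    char buf[4096];
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    for (int i = 0; i < iterations; i++) {
        int fd = open(path, O_RDONLY);
        if (fd < 0) {
            perror("open");
            return 1;
        }
        while (read(fd, buf, sizeof buf) > 0)
            ;   /* pull the whole file through the cache */
        close(fd);
    }
    clock_gettime(CLOCK_MONOTONIC, &end);

    double elapsed = (end.tv_sec - start.tv_sec)
                   + (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("%d open/read/close cycles in %.3f s (%.1f us each)\n",
           iterations, elapsed, elapsed * 1e6 / iterations);
    return 0;
}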
> I strongly suspect that what makes gcc slow is that
> it has absolutely
> horrible cache behaviour, a big VM footprint, and
> chases pointers in
> that badly cached area all of the time.
Maybe you're not talking about caching at the OS
level. Caching at the compiler level will certainly
help with header files that are included multiple
times. OTOH, caching at the OS level and/or
preprocessing header files will help with that /and/
header files that are included across compiles.
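
Concretely, the within-one-compile case is the usual diamond include
pattern: a preprocessor that recognizes the guard idiom can skip the
second #include of a header without reopening the file. A toy example
(file names made up):

/* common.h */
#ifndef COMMON_H
#define COMMON_H
typedef long my_size_t;
#endif

/* a.h */
#ifndef A_H
#define A_H
#include "common.h"
#endif

/* b.h */
#ifndef B_H
#define B_H
#include "common.h"
#endif

/* main.c -- common.h is reached twice; a preprocessor that remembers
 * the COMMON_H guard only has to open and scan it once. */
#include "a.h"
#include "b.h"

int main(void)
{
    my_size_t n = 0;
    return (int) n;
}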
> And that, in turn, is probably impossible to fix as
> long as gcc uses
> garbage collection for most of its internal memory
> management. There
> just aren't all that many worse ways to f*ck up your
> cache behaviour
> than by using lots of allocations and lazy GC to
> manage your memory.
>
> The problem with bad cache behaviour is that you
> don't get nice spikes
> in specific places that you can try to optimize -
> the cost ends up being
> spread all over the places that touch the data
> structures.
>
> The problem with trying to avoid GC is that if you
> do that you have to
> be careful about your reference counts, and I doubt
> the gcc people want
> to be that careful, especially considering that the
> code-base right now
> is not likely to be very easy to convert.
>
> (Plus the fact that GC proponents absolutely refuse
> to see the error of
> their ways, and will flame me royally for even
> _daring_ to say that GC
> sucks donkey brains through a straw from a
> performance standpoint. In
> order to work with refcounting, you need to have the
> mentality that
> every single data structure with a non-local
> lifetime needs to have the
> count as its major member.)
I'll leave it to the experts to hash this area out.
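For what it's worth, the refcounting discipline being described would
look roughly like this in C; the struct and helper names below are
just an illustration, not anything taken from gcc:

/* Sketch of the pattern: any object with a non-local lifetime carries
 * its count as the first member, and every owner bumps and drops it
 * explicitly. */
#include <stdlib.h>

struct tree_node {
    unsigned refcount;          /* the count is the "major" member */
    struct tree_node *left;
    struct tree_node *right;
    int value;
};

static struct tree_node *node_ref(struct tree_node *n)
{
    if (n)
        n->refcount++;
    return n;
}

static void node_unref(struct tree_node *n)
{
    if (n && --n->refcount == 0) {
        node_unref(n->left);
        node_unref(n->right);
        free(n);
    }
}

static struct tree_node *node_new(int value)
{
    struct tree_node *n = calloc(1, sizeof *n);
    if (n) {
        n->refcount = 1;        /* caller owns the initial reference */
        n->value = value;
    }
    return n;
}

int main(void)
{
    struct tree_node *root = node_new(1);
    struct tree_node *shared = node_new(2);

    root->left = shared;               /* transfer the initial reference */
    root->right = node_ref(shared);    /* second owner bumps the count   */

    node_unref(root);                  /* frees root and both refs to shared */
    return 0;
}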
Noel