This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.
Re: Faster compilation speed
- From: Nix <nix at esperi dot demon dot co dot uk>
- To: Noel Yap <yap_noel at yahoo dot com>
- Cc: Neil Booth <neil at daikokuya dot co dot uk>, gcc at gcc dot gnu dot org
- Date: 11 Aug 2002 12:22:21 +0100
- Subject: Re: Faster compilation speed
- References: <firstname.lastname@example.org>
[rewrapped my quoted text]
On Sat, 10 Aug 2002, Noel Yap stated:
> --- Nix <email@example.com> wrote:
>> Are you sure that this isn't because GCC is having to parse the
>> headers over and over again, while the precompiled system can avoid
>> that overhead?
> No, I'm not sure. In any case, whether it's due to
> elimination of reparsing or elimination of reopening,
> would you agree that precompiled headers should speed
> up builds?
Yes, but mainly (IMHO) because the `precompilation' process includes
some parsing work. The preprocessing job (compilation phases 1--4)
should be quite fast.
So speeding up *parsing* is the point here; getting rid of bison should
help fix that :)
(Maybe I'm being too pedantic here.)
>> Especially for C++ header files (which tend to be large, complex,
>> interdependent, and include a lot of code), the parsing and
>> compilation time *vastly* dominates the preprocessing time.
> What about for us lowly C programmers?
(oops, sorry, I thought you were using C++, because C++ users really
*notice* time spent in headers.)
The disparity there isn't anywhere near so extreme, but it's still there.
I know that even with large bodies of C code I've never been able to
spot preprocessing time; even the old cccp was damned-near instantaneous
(well, except on very memory-constrained boxes where even ls(1) was a
struggle).
>> Now obviously with a less toy example the time consumed optimizing
>> would rise; but that doesn't affect my point, that the lion's share
>> of time spent in C++ header files is parsing time, and that speeding
>> up the preprocessor will have limited effect now (thanks to Zack and
>> Neil speeding it up so much already :) ).
> What kind of effect does it have for C? Do you think
... from my quick check (so primitive that I'm not even going to post it
here) preprocessing and parsing seem to consume roughly equal amounts of
time, and both are far exceeded by the amount of time taken to compile
the code itself.
So there's not much need for preprocessor optimization in C as far as I
can see.
> saving preprocessor output (of header files) can speed
> up a build consisting of many, many compiles?
Preprocessor *output*? In its current state, the output phase is the
slowest part of the preprocessor, such that feeding token streams
straight into the compiler (as 3.3-to-be will) is faster than saving it
out to disk would be :)
And for C code in particular I imagine that the larger size of the
precompiled header lumps would cause extra disk I/O time that would
exceed the time taken to parse the headers in the first place... but
this is a guess: some of the people who've actually been working on
precompiled headers can probably answer this better :)
`There's something satisfying about killing JWZ over and over again.'
-- 1i, personal communication