


Re: Problem with PFE approach [Was: Faster compilation speed]


Timothy J. Wood wrote:


So, another problem with PFE that I've noticed after working with it for a while...

If you put all your commonly used headers in a PFE, then changing any of these headers causes the PFE header to be considered changed. And, since this header is imported into every single file in your project, you end up in a situation where changing any header causes the entire project to be rebuilt. This is clearly not good for day-to-day development.
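
Concretely, the header that gets precompiled ends up looking something
like this (hypothetical names, not from any real project):

  /* project-prefix.h -- the single header the PFE is built from */
  #include <stdlib.h>           /* system headers: rarely change            */
  #include "StringUtils.h"      /* project headers: touching any one of     */
  #include "NetworkLayer.h"     /* these invalidates the whole precomp, so  */
  #include "DatabaseLayer.h"    /* every file that imports it gets rebuilt  */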

A PCH approach that was automatic and didn't have a single monolithic file would avoid artificially tying together all the headers in the world, and would thus lead to faster incremental builds, since fewer files would need to be rebuilt.
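
With per-header precomps, each file would depend only on what it
actually includes; continuing the hypothetical example above, a change
to DatabaseLayer.h wouldn't touch a file like this at all:

  /* network.c -- pulls in only the headers it needs, so its
     precompiled dependencies survive edits to unrelated headers */
  #include "StringUtils.h"
  #include "NetworkLayer.h"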

Another approach that would work with a monolithic file would be some sort of fact database that would allow the build system to decide early on that the change in question didn't affect some subset of files.
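
One possible shape for such a database, purely as a sketch (not an
existing GCC facility): record, per source file, which declarations it
actually used from each header, so the build system can skip any file
whose recorded facts don't overlap the change:

  /* Hypothetical per-source-file record kept by the build system. */
  struct fact_record {
      const char *source_file;        /* e.g. "parser.c"                    */
      const char *used_decls[64];     /* declaration names it referenced    */
      unsigned long decl_hashes[64];  /* hash of each referenced definition */
  };
  /* After a header edit, rebuild only files whose used_decls contain a
     declaration whose hash actually changed. */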

It sounds to me like you're favoring a revival of the old NeXT system
of precomps that were systemwide and per-include-file.  That approach,
even after years of tuning and tweaking, tended to top out at about a 4X
speedup, while PFE precomps are at 6X after just a few months of work.
(Admittedly, the NeXT scheme is based on a separate preprocessor, which
limits its effectiveness.)

Interestingly though, transparent precomps don't work as well as one
might imagine.  If they are invalidated (a file was touched, a -D was
changed, etc.), then either you have to put out a warning or just
silently go slower.  But if you put out the warning, you'll hear about
problems with messed-up system precomps, which you then have to become
root to re-create.

I think the fundamental problem is that precompiled headers are basically
a cache, but that the serial nature of preprocessing in the C family means
that you can't use cache management algorithms and instead have to treat
the precomp as a sort of weird object file, whether it includes only
system headers or your own as well.  A truly transparent precomp would
have to have some sort of branching tree structure to be able to
trace all the possible #if paths, and it would all have to be at
the token level, instead of being able to cache GCC's trees and
thus avoid much expensive semantic analysis.
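
To make the #if problem concrete, here's the usual sort of example
(made-up code, just to illustrate):

  /* config.h -- what this header expands to depends on what the
     includer defined beforehand, so one cached compilation of it
     cannot serve every translation unit. */
  #ifdef USE_WIDE_CHARS
  typedef unsigned short char_type;
  #else
  typedef char char_type;
  #endif

  /* a.c */
  #define USE_WIDE_CHARS
  #include "config.h"        /* gets the wide variant */

  /* b.c */
  #include "config.h"        /* gets plain char */

A transparent precomp would have to cache both expansions and choose
between them based on the macro state at the point of inclusion, which
is exactly the branching structure described above.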

But as a practical thing, a 6X speedup in the compiler so radically
changes what you can do day-to-day that it's worth some effort and
some process change to accommodate it.  CW precomps have all the flaws
you're pointing out, and yet CW users are pretty happy with them; by
editing their prefix file, they can adjust their one precomp to include
more or fewer of their own headers, depending on whether a header is
stable or not, and can do this at any point during development.
Sometimes the compiler will do too much recompiling, but who cares if
it only takes a minute to completely rebuild a big project?
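
For what it's worth, that prefix-file editing looks something like
this (made-up names, but it's the typical arrangement): stable headers
stay in the precomp, and a header under heavy churn gets commented out
and included normally by the few files that need it.

  /* MyProject_Prefix.h -- the one CW prefix that gets precompiled */
  #include <Carbon/Carbon.h>      /* stable system headers: keep them in   */
  #include "CoreTypes.h"          /* stable project header: keep it in     */
  /* #include "Parser.h" */       /* churning right now, so it's left out
                                     of the precomp for the time being     */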

Stan





