This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.
Re: Problem with PFE approach [Was: Faster compilation speed]
- From: Daniel Berlin <dberlin at dberlin dot org>
- To: "Timothy J. Wood" <tjw at omnigroup dot com>
- Cc: Devang Patel <dpatel at apple dot com>, Mike Stump <mrs at apple dot com>,<gcc at gcc dot gnu dot org>
- Date: Sat, 17 Aug 2002 23:04:11 -0400 (EDT)
- Subject: Re: Problem with PFE approach [Was: Faster compilation speed]
- Reply-to: dberlin at dberlin dot org
On Sat, 17 Aug 2002, Timothy J. Wood wrote:
> So, another problem with PFE that I've noticed after working with it
> for a while...
> If you put all your commonly used headers in a PFE, then changing any
> of these headers causes the PFE header to be considered changed. And,
> since this header is imported into every single file in your project,
> you end up in a situation where changing any header causes the entire
> project to be rebuilt.
Um, this header should *not* be explicitly included in the files.
It's a *prefix* header.
The only thing that would need to be rebuilt in this case is the prefix
header. Everything else that would normally not be rebuilt will not be
rebuilt. I.e., the only extra thing that gets rebuilt is the prefix header.
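[To make the rebuild claim concrete, here is a small Python model of the dependency logic being described. All file names, and the model itself, are hypothetical; this is not GCC's actual dependency tracking.]

```python
# Hypothetical rebuild model. Each translation unit lists the headers
# it includes directly; none of them names the prefix header, because
# the prefix is injected implicitly rather than via #include.
tu_deps = {
    "a.c": {"util.h"},
    "b.c": {"net.h"},
    "c.c": {"util.h", "net.h"},
}

# The prefix header aggregates the commonly used headers.
prefix_deps = {"util.h", "net.h"}

def rebuild_set(changed_header):
    """Files to rebuild after a header change: the TUs that actually
    include it, plus the prefix itself when it aggregates that header."""
    out = {tu for tu, deps in tu_deps.items() if changed_header in deps}
    if changed_header in prefix_deps:
        out.add("prefix.pfe")
    return out

# Changing util.h rebuilds a.c and c.c (as it would without PFE) plus
# the prefix -- the prefix is the only *extra* thing rebuilt.
print(sorted(rebuild_set("util.h")))
```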
> This is clearly not good for day-to-day development.
> A PCH approach that was automatic and didn't have a single monolithic
> file would avoid the artificial tying together of all the headers in
> the world and would thus lead to faster incremental builds due to fewer
> files being rebuilt.
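[The fan-out difference Tim describes can be sketched the same way. This compares a monolithic precompiled header that every file includes explicitly against per-header PCH; the project layout is invented for illustration.]

```python
# Hypothetical comparison of rebuild fan-out; names are invented.
tus = {"a.c": {"util.h"}, "b.c": {"net.h"}, "c.c": {"util.h", "net.h"}}
all_headers = {"util.h", "net.h", "gui.h"}  # all aggregated in the big PCH

def monolithic_rebuilds(changed):
    # Every TU explicitly includes the one big precompiled header, so a
    # change to any aggregated header invalidates every TU.
    return set(tus) if changed in all_headers else set()

def per_header_rebuilds(changed):
    # Per-header PCH: only the TUs that actually use the header rebuild.
    return {tu for tu, deps in tus.items() if changed in deps}

# Touching gui.h: 3 rebuilds under the monolithic scheme, 0 per-header.
print(len(monolithic_rebuilds("gui.h")), len(per_header_rebuilds("gui.h")))
```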
> Another approach that would work with a monolithic file would be some
> sort of fact database that would allow the build system to decide early
> on that the change in question didn't affect some subset of files.
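[One way to read the "fact database" idea: record, per source file, which declarations it actually uses, then reduce a header edit to the set of declarations that changed. A minimal sketch, with entirely hypothetical names:]

```python
# Sketch of a "fact database": per-file records of which declarations
# each source file uses, keyed as "header:declaration". Hypothetical.
facts = {
    "a.c": {"util.h:parse", "util.h:MAX_LEN"},
    "b.c": {"net.h:connect"},
    "c.c": {"util.h:parse", "net.h:connect"},
}

def affected_files(changed_facts):
    """Only files that use one of the changed declarations need
    recompiling, even if the edited header is in the monolithic PFE."""
    return {f for f, used in facts.items() if used & changed_facts}

# Editing util.h but only touching MAX_LEN leaves b.c and c.c alone.
print(sorted(affected_files({"util.h:MAX_LEN"})))
```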