This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.
Re: compile time regressions (was: merging for 3.4)
- From: Mike Stump <mrs at apple dot com>
- To: Neil Booth <neil at daikokuya dot co dot uk>
- Cc: Benjamin Kosnik <bkoz at redhat dot com>, Dan Nicolaescu <dann at ics dot uci dot edu>, pfeifer at dbai dot tuwien dot ac dot at, echristo at redhat dot com, hubicka at ucw dot cz, jbuck at synopsys dot com, dnovillo at redhat dot com, mark at codesourcery dot com, gdr at integrable-solutions dot net, pcarlini at unitus dot it, libstdc++ at gcc dot gnu dot org, gcc at gcc dot gnu dot org
- Date: Tue, 10 Dec 2002 14:37:26 -0800
- Subject: Re: compile time regressions (was: merging for 3.4)
On Tuesday, December 10, 2002, at 02:23 PM, Neil Booth wrote:
> > No, no, you have it backwards... PCH is going to expose all that slow
> > code that should be fast, that has to be fast, so that we can fix it.
> > By being 12x faster, or just 2x faster, there is _much_ less room to
> > be slow, not more, really.
>
> But slow code in things that PCH elides won't get exposed by the very
> nature of PCH. e.g. drastic CPP slowdowns (not gonna happen!), or say
> the parser.
This is called PCH build time. It is important, just not that
important. In a 10-minute compile, it might be 20 seconds. Also, by
having cycle-counter times in -ftime-report, and measuring PCH build
time, one can speed that up, arbitrarily fast, if one wants.
You'll notice, however, that more time is wasted reconsidering in
cxx_finish_file.
:-)