Re: Faster compilation speed
- From: Mike Stump <mrs at apple dot com>
- To: Joe Wilson <developir at yahoo dot com>
- Cc: gcc at gcc dot gnu dot org
- Date: Fri, 16 Aug 2002 11:04:22 -0700
- Subject: Re: Faster compilation speed
On Friday, August 16, 2002, at 05:08 AM, Joe Wilson wrote:
Mat Hounsell wrote:
But why load and unload the compiler and the headers for every file in a module? It would be far more efficient to adapt the build process: start gcc for the module, and then tell it to compile each file that needs to be re-compiled. Add pre-compiled header support and it wouldn't even need to compile the headers more than once.
I was thinking the same thing, except without introducing new pragmas.
You could do the common (header) code precompiling only for the modules
listed on the command line, without having to save state to a file-based
code repository, i.e.:
g++ -c [flags] module1.cpp module2.cpp module3.cpp
But compiling groups of modules at one time is contrary to the way most
makefiles work, so it might not be practical.
Perhaps GCC already economizes the evaluation of common code in such
"group" builds. Can anyone comment on whether it does or not?
We already have this optimization in house: both precompiled headers
and a gcc server process that you can conceptualize as forking after it
is all set up, so that it can run many builds in parallel on an X-way
system when the files are on the command line, and so gain the benefit
of having the PFE stuff all hot and ready to go. So, I'm mostly
interested in benefits beyond those technologies.
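To make the fork-after-setup idea concrete, here is a minimal sketch in
C. It is purely hypothetical, not the in-house server: the one-time
setup is stubbed out and compile_one_file is a stand-in name. The
parent does the expensive initialization once, then forks one worker
per file named on the command line, so every child inherits the hot
state via copy-on-write:

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    /* Imagine the expensive one-time setup here: loading the
       precompiled header, warming the compiler's caches, etc. */

    /* Fork one worker per input file; each child inherits the
       warmed-up state via copy-on-write. */
    for (int i = 1; i < argc; i++) {
        pid_t pid = fork();
        if (pid < 0) {
            perror("fork");
            break;
        }
        if (pid == 0) {
            printf("worker %ld compiling %s\n", (long)getpid(), argv[i]);
            /* compile_one_file(argv[i]);  -- hypothetical stand-in */
            _exit(0);
        }
    }

    /* Reap every worker before exiting. */
    while (wait(NULL) > 0)
        ;
    return 0;
}

On an X-way machine this lets up to X such workers make progress at
once, without re-doing the setup cost per file.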