This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.
Re: Faster compilation speed
- From: Noel Yap <yap_noel at yahoo dot com>
- To: Neil Booth <neil at daikokuya dot co dot uk>, Stan Shebs <shebs at apple dot com>
- Cc: Noel Yap <yap_noel at yahoo dot com>, Mike Stump <mrs at apple dot com>, gcc at gcc dot gnu dot org
- Date: Sat, 10 Aug 2002 16:12:47 -0700 (PDT)
- Subject: Re: Faster compilation speed
--- Neil Booth <email@example.com> wrote:
> Stan Shebs wrote:-
> > Is this assertion based on empirical measurement, and if so, for what
> > source code and what system? For instance, the longest source file
> > in GCC is about 15K lines, and at -O2, only a small percentage of
> > time is spent messing with files. If I use -save-temps on cp/decl.c on
> > one of my (Linux) machines, I get a total time of about 38 sec from
> > source to asm. If I just compile decl.i, it's about 37 sec, so that's
> > 1 sec for *all* preprocessing, including all file
> Yes, it's very rare that preprocessing is more than 2% of -O2 time;
> it's often less than 1%. IMO that says more about the efficiency
> of the rest than of CPP.
I would agree if you're talking about complete builds
spanning only a few C/C++ files. OTOH, when builds
span many hundreds of these files, build time (not
just compile time) gets bogged down, mostly in
reopening and repreprocessing the same header files
over and over.
Within our system, builds on Windows are orders of
magnitude faster since we're able to take advantage of
precompiled headers. AFAIK, a legitimate study was
made of whether to use this feature or not.