This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.
Re: Apple's implementation of precompiled headers
- To: dewar at gnat dot com
- Subject: Re: Apple's implementation of precompiled headers
- From: Stan Shebs <shebs at apple dot com>
- Date: Sat, 29 Sep 2001 06:09:25 -0700
- CC: degger at fhm dot edu, gcc at gcc dot gnu dot org
- References: <20010929125647.B2BA8F2BB1@nile.gnat.com>
dewar@gnat.com wrote:
>
> <<You shouldn't be that amazed, I reported the experimental results
> last year. If you want fast compilation of programs with huge
> headers (for Macs the round number is 100K lines of header pulled
> in per source file), you have to have the set of declarations in
> memory and available for random access by name. Parsing, compiling,
> reading from disk, all of them take too long.
> >>
>
> But they really shouldn't take that long; disks are slow beasts. Yes, it
> is certainly understandable that code generation can take a long time, but
> it is surprising for front-end analysis of a simple language like C to
> be this slow. Certainly one should be able to parse C at a million lines
> a minute on a modern machine, and there is not much semantic analysis to
> do in C.
At a million lines/minute, GCC would still be slower than Metrowerks.
We do have to be just as fast with C++.
> I am puzzled by the comment on reading from disk, precompiled headers
> still have to be read from disk, and the problem is usually that the
> precompiled headers are much larger than the sources. So I must be
> missing something.
VM is your friend. mmap keeps you from having to process every byte,
and the most-popular declarations will tend to stay in memory from one
compilation to the next.
If that doesn't suffice, then the next thing would be to keep the
compiler process live and feed it a stream of filenames to work on,
but I don't know that we need to go that far yet.
Stan