This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.
Re: gcc compile-time performance
- From: dewar at gnat dot com (Robert Dewar)
- To: dewar at gnat dot com, gdr at codesourcery dot com
- Cc: gcc at gcc dot gnu dot org, scott at coyotegulch dot com
- Date: Sat, 18 May 2002 11:49:37 -0400 (EDT)
- Subject: Re: gcc compile-time performance
> Why should we keep going down that path?
Well look, no one is in favor of making GCC slow for the sake of making
GCC slow, and everyone is in favor of improving GCC's speed and memory usage.
But that's only one criterion. Much more important for the success of this
technology in the long run is that it keep up with the competition from
other compilers in terms of generated code efficiency. Look for example
at the Wind River case. The benchmarks between the DIAB (sp?) compiler and
the GCC compiler show some very significant differences, and the failure
of g++ to do something about memory bloat from implicit instantiations
significantly threatens the usability of g++ in many environments.
We have to balance the needs here. Yes, it would be nice if GCC were
fast enough to be a good replacement for Borland on your ancient 486
machines, but most of the continued resources for GCC development come
from different sources :-) and companies like Wind River are going to
be more worried about runtime performance than compile time performance.
By the way, WRS at least for now is committed to maintaining support for
the GCC compilers, since there is a lot of customer demand, but it's still
worrisome to see GCC falling behind.
So you have to balance things out. Someone for instance suggested that
everything be dropped in favor of increasing compile time speed. That
seems a quite absurd arrangement of priorities to me.
I think quite a big part of the problem here is that there is insufficient
testing. In the GNAT world we systematically build on all machines and run
our test suite on all machines every night. Any significant slow down in
compilation speed gets noticed, although I still think we don't do a careful
enough job of tracking small changes.
At least we should know what we are trading off. I have a rule that says
"all optimizations are disappointing", and by that I mean that over and over
again, compiler writers install neat optimizations that seem like they will
help, but in fact have no measurable value at all. You indeed have to be very
careful that you are not falling into the trap of trading off compile time
for optimizations that don't do much for you.