This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.
Re: gcc compile-time performance
- From: Stan Shebs <shebs at apple dot com>
- To: Daniel Berlin <dberlin at dberlin dot org>
- Cc: Dara Hazeghi <dhazeghi at pacbell dot net>, Neil Booth <neil at daikokuya dot demon dot co dot uk>, Andi Kleen <ak at suse dot de>, gcc at gcc dot gnu dot org
- Date: Fri, 17 May 2002 09:25:32 -0700
- Subject: Re: gcc compile-time performance
- References: <Pine.LNX.email@example.com>
Daniel Berlin wrote:
> On Fri, 17 May 2002, Dara Hazeghi wrote:
> > [...]
> > Now the other problem is how to distinguish noise from real compile-time
> > performance regressions, as many of the differences in performance up 'til
> > now seem most likely to have been a cumulative effect of hundreds of patches
> > (I have no data to back this up, just a personal opinion).
> I think we actually *do* have data to back this up somewhere.
> I also remember Stan Shebs mentioning it at some point (Stan, maybe I'm
> misremembering, so if I just attributed something to you that you never
> said, ...).
> Much like a software project gets a year late one day at a time, gcc has
> gotten 5 minutes slower at compiling one second at a time.
That's my personal suspicion too, but no, I don't have any real
evidence. The lack of hot spots in profiling is a strong hint.
One oddball idea I've thought about is to functionize all the
tree and rtl macros, then run a profile to see which macros are
the most used/abused.
Another idea is to use gcov to identify areas of code that are
always dead for a particular config, then look at the
conditionals that guard the dead area. There are probably
cases where the conditionals are not ordered for maximal
efficiency. (Of course it's possible that a good ordering
for one config is bad for another.) The gcov route could also
identify more opportunities to GC code for obsolete configs.
Then there's the extreme approach of having maintainers only
accept patches that either remove code or make the compiler run