This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.
Re: gcc compile-time performance
- From: Dara Hazeghi <dhazeghi at pacbell dot net>
- To: Neil Booth <neil at daikokuya dot demon dot co dot uk>, Andi Kleen <ak at suse dot de>
- Cc: gcc at gcc dot gnu dot org
- Date: Fri, 17 May 2002 06:47:07 -0700
- Subject: Re: gcc compile-time performance
- References: <email@example.com><firstname.lastname@example.org><20020516190955.GB2934@daikokuya.demon.co.uk>
On Thursday 16 May 2002 12:09 pm, Neil Booth wrote:
> Andi Kleen wrote:-
> > I guess the only way to avoid such things in the future would be to track
> > the compiler performance of mainline (similar to how the performance of
> > the resulting code is tracked at http://www.suse.de/~aj/SPEC/index.html).
> > Then it would be obvious which changes caused bad performance
> > regressions.
> I suspect that might be a good idea.
I think the next problem would be to determine what is and isn't a relevant
test. Obviously gcc itself works for testing the C front-end (heck, the SPEC
folks agree too...). Would it make sense to come up with a bunch of
"representative" applications for the various front-ends? Alternatively,
would it make more sense to track the compile time of the SPEC builds
(that shouldn't be too difficult)?
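A tracking harness along those lines could start very small. The sketch below is only an illustration of the idea, not anything from this thread: `time_build` and the command it runs are hypothetical stand-ins for whatever build (SPEC or otherwise) one chose to track.

```python
import subprocess
import sys
import time

def time_build(cmd):
    """Run a build command and return its wall-clock duration in seconds."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    return time.perf_counter() - start

# Stand-in for a real build command (e.g. invoking make on a benchmark).
elapsed = time_build([sys.executable, "-c", "pass"])
print(f"build took {elapsed:.3f}s")
```

Recording one such number per mainline revision would be enough to plot a trend the way the SPEC code-quality page does.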
Now the other problem is how to distinguish noise from real compile-time
performance regressions, since much of the difference in performance up 'til
now seems most likely to have been the cumulative effect of hundreds of patches
(I have no data to back this up, just a personal opinion).
Just thinking out loud...