
Re: We're out of tree codes; now what?


On 3/23/07, Marc Espie <espie@quatramaran.ens.fr> wrote:
> In article <24b520d20703191627v257c60ffw8bc96c5f73b1e789@mail.gmail.com> you write:
>> On 19 Mar 2007 19:12:35 -0500, Gabriel Dos Reis <gdr@cs.tamu.edu> wrote:
>>> similar justifications for yet another small% of slowdown have been
>>> given routinely for over 5 years now.  small% build up; and when they
>>> build up, they start to be convincing ;-)
>>
>> But what is the solution? We can complain about performance all we
>> want (and we all love to do this), but without a plan to fix it we're
>> just wasting effort. Shall we reject every patch that causes a
>> slowdown? Hold up releases if they are slower than their
>> predecessors? Stop work on extensions, optimizations, and bug fixes
>> until we get our compile-time performance back to some predetermined
>> level?

> Simple sociology.
>
> Working on new optimizations = sexy.
> Trimming down excess weight = unsexy.
>
> GCC being largely a volunteer project, it's much easier to find people
> who want to work on their pet project, and implement a recent
> optimization they found in a nice paper (one that will gain 0.5% in
> some awkward case), than to track down slowdowns and reverse them.

I'm not sure I buy this.
Most of the new algorithm implementations I see replace something
slower with something faster.
Examples:
GVN-PRE is about 10x faster than SSAPRE in all cases, while doing
about 30% better on every testcase that SSAPRE sucked at.
The new points-to implementation is about 100x faster than the old one
(that's on smaller cases; it actually gets relatively faster as the
size of the problem to be solved grows).
The new ivopts algorithm replaced the old one for a large speedup.
A newer propagation algorithm replaced the older CCP implementation
for a speedup.
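
To make the flavor of these replacements concrete, here's a toy,
self-contained sketch of worklist-driven constant propagation, in the
SCCP spirit (an illustration made up for this mail, not GCC's actual
code).  The win over a dense algorithm is that only the users of a
value that just changed get revisited, instead of rescanning the
whole function to a fixed point:

/* Toy worklist-driven constant propagation -- NOT GCC's code.
   A dense algorithm rescans every statement until nothing changes;
   a sparse one only revisits the users of a value that moved.  */
#include <stdio.h>

enum kind { K_CONST, K_ADD };
enum lat  { UNDEF, CST, VARYING };

struct insn {
    enum kind kind;
    int imm;          /* constant value, for K_CONST */
    int op1, op2;     /* operand insn indices, for K_ADD */
    enum lat lat;
    int val;
};

#define N 5
static struct insn ir[N] = {
    { K_CONST, 2, -1, -1, UNDEF, 0 },  /* t0 = 2       */
    { K_CONST, 3, -1, -1, UNDEF, 0 },  /* t1 = 3       */
    { K_ADD,   0,  0,  1, UNDEF, 0 },  /* t2 = t0 + t1 */
    { K_ADD,   0,  2,  2, UNDEF, 0 },  /* t3 = t2 + t2 */
    { K_ADD,   0,  3,  0, UNDEF, 0 },  /* t4 = t3 + t0 */
};

static int worklist[64], wl_top;

static void evaluate(int i)
{
    struct insn *x = &ir[i];
    enum lat old = x->lat;

    if (x->kind == K_CONST) {
        x->lat = CST;
        x->val = x->imm;
    } else {
        struct insn *a = &ir[x->op1], *b = &ir[x->op2];
        if (a->lat == CST && b->lat == CST) {
            x->lat = CST;
            x->val = a->val + b->val;
        } else if (a->lat == VARYING || b->lat == VARYING)
            x->lat = VARYING;
    }

    /* Only when the lattice value moved do the users need a revisit;
       values only go UNDEF -> CST -> VARYING, so this terminates.  */
    if (x->lat != old)
        for (int u = 0; u < N; u++)
            if (ir[u].kind == K_ADD && (ir[u].op1 == i || ir[u].op2 == i))
                worklist[wl_top++] = u;
}

int main(void)
{
    for (int i = 0; i < N; i++)
        worklist[wl_top++] = i;        /* seed with everything once */
    while (wl_top > 0)
        evaluate(worklist[--wl_top]);

    for (int i = 0; i < N; i++) {
        if (ir[i].lat == CST)
            printf("t%d = %d\n", i, ir[i].val);
        else
            printf("t%d = %s\n", i,
                   ir[i].lat == VARYING ? "varying" : "undef");
    }
    return 0;
}

Scale that idea up over real def-use chains and whole functions and
you get the kind of asymptotic wins described above.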

Most of these have not had *any* real effect on the time it takes to
run GCC in the common case.  Why?
Because, except in the same edge cases you complain we are spending
time speeding up, those passes don't account for a significant amount
of the compile time!
Most of the time goes into building and manipulating trees and RTL.
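If you want to see where the time goes for yourself, GCC will print a
per-pass time breakdown for any compilation (the testcase name here is
just a placeholder):

    gcc -O2 -ftime-report sometestcase.c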

> And then disappointment, as the SSA stuff just got added on top of
> the RTL stuff, and the RTL stuff that was supposed to vanish takes
> forever to go away...

Mainly because people want it to produce *exactly* the same code it
used to, instead of being willing to take a small generated-code
performance hit for a while.  Since backend code generation is a
moving target with very complex dependencies, that is a hard target
to hit.


> At some point, it's going to be really attractive to start again from
> scratch, without all the backend/frontend complexities and
> interactions that make cleaning up stuff harder and harder...

This I agree with.  I'd much rather stop trying to do everything we
can to support more than the top 5 architectures (though I have no
problem with all their OS variants).

> Also, I have the feeling that quite a few of GCC's sponsors are in it
> mostly for the publicity (oh look, we're nice people giving money to
> GCC), and new optimization passes that get 0.02% out of SPEC are a
> better bang for their buck.

And some people just like to sit on the sidelines and complain instead
of submitting patches to do anything.

> Kudos go to the people who actually manage to reverse some of the
> excesses of the new passes.

Most of these people are the same people who implemented the passes in
the first place!

