This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.


Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]
Other format: [Raw text]

Re: Some thoughts about steering committee work


Eric Botcazou <ebotcazou@libertysurf.fr> writes:

> > Please, just look at those charts
> >
> > https://vmakarov.108.redhat.com/nonav/spec/comparison.html
> >
> > The compilation speed decrease without a performance improvement (at least
> > for the default case) is really scary.
> 
> Right, I also found those charts a bit depressing, given the time and energy 
> that have been put into the compiler since GCC 3.2.3.  For example, it seems 
> that the Tree-SSA infrastructure has brought very little benefit in terms of 
> performance in the generic case, in exchange for a massive dump of new code.
> 
> Does anyone have the beginning of an idea as to why this is so?  Did GCC hit a 
> fundamental wall some time ago, for example because of its portability?
> 
> On the other hand, those efforts have not been lost, since the compiler is now 
> much more modern in terms of infrastructure and algorithms.  However, before 
> triggering the next internal earthquake (namely LTO), we should probably try 
> to understand what's going on (or not going on).

These charts are certainly discouraging.  On the other hand, for some
real code we're seeing each new version of gcc produce an incremental
runtime improvement.  So I'm not sure what to make of it.

This is hardly a new thought, but I believe that for the i386 gcc is
handicapped by reload.  No matter how smart we are before reload, it
takes just one poor decision by reload in an inner loop to lose all
the gains.  Reload has enormous complexities, most of which are
irrelevant on the i386.  And I think that the idea of doing register
allocation separately from spill code generation does not make sense
on the i386.

I don't think gcc has hit any sort of wall, except insofar as we have
no plan for eliminating reload.  I don't think portability comes into
play here.

That said, I certainly agree that we should try to figure out what is
going on as much as possible.

I also want to say that at present LTO is a collection of different
projects.  Most of them are aimed directly at speeding up the
compiler, reducing compile time.  So far nobody is working on any
changes to the actual optimization framework.  So I'm not too
concerned yet about LTO in this context.  I certainly agree that when
LTO moves into an optimization phase, we need to make sure that any
default passes pay off in terms of compilation time or runtime
performance.

Ian
