This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.


Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]
Other format: [Raw text]

re: Some thoughts and questions about the data flow infrastructure


>   On Sunday I had an unplanned chat about the df infrastructure on
> IRC.  I've got some thoughts which I'd like to share.
> 
>   I have liked the df infrastructure code from day one for its
> clarity.  Unfortunately, users don't see it and probably don't care
> about it.  From my point of view, the df infrastructure has a design
> flaw: it extracts a lot of information about the RTL and keeps it on
> the side, which does not make the compiler fast.  That would be fine
> if we got better code quality in return.  Danny told me that they get
> 1.5% better code using df.  That is a really big improvement (about
> half a year of work for the whole compiler team, according to
> Proebsting's law).  IMHO, it could justify the promised 5% compiler
> slowdown.
>
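[As an aside, Vlad's "about half a year" estimate can be sanity-checked
with quick arithmetic.  This is a hedged sketch, assuming the commonly
quoted form of Proebsting's law: compiler optimization advances double
program performance roughly every 18 years.]

```python
import math

# Proebsting's law (commonly quoted form): compiler advances double
# program performance about every 18 years, i.e. roughly
# 2**(1/18) - 1 ~= 4% improvement per year of compiler work.
annual_gain = 2 ** (1 / 18) - 1

# How many "Proebsting years" does a 1.5% code-quality win represent?
observed_gain = 0.015
years = math.log(1 + observed_gain) / math.log(1 + annual_gain)

print(f"annual gain per Proebsting's law: {annual_gain:.4f}")
print(f"1.5% better code is about {years:.2f} years of compiler work")
```

[The result is roughly 0.4 years, i.e. four to five months, which is
consistent with Vlad's "about half a year for the whole team" figure.]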
Vlad, 

I think that different people can have different perspectives.  

You have been working on improving register allocation for several
years, but very little has come of it because the reload
infrastructure does not lend itself to integration with modern
register allocators.  You have spent several years of work without
touching the underlying problem: reload is generally going to defeat
almost any effort to get good benefits out of a new register
allocator.  I do not want to denigrate your work in any way, but at
the end of the day, any new register allocator will be compromised by
the existing reload implementation.

I am interested in bringing the rest of the back end into the modern
world.  While some of the passes can and should be moved into the SSA
middle end of the compiler, there are several optimizations that can
only be done after the details of the target have been fully exposed.

My experience with trying to do this was that the number one problem
was that the existing dataflow was in many cases wrong or too
conservative, and that it was not flexible enough to accommodate many
of the most modern optimization techniques.  So rather than hack
around the problem, I decided to attack the bad infrastructure problem
first and open the way for myself and the others who work on the back
end to use that infrastructure to get the rest of the passes into
shape.

There are certainly performance issues here.  There are limits on how
much I, and the others who have worked on this, have been able to
change before we do our merge.  So far, only those passes that were
directly hacked into flow, such as dce and auto-inc-dec detection,
have been rewritten from the ground up to fully utilize the new
framework.

However, it had gotten to the point where the two frameworks really
could not coexist.  Both implementations expect to work in an
environment where the information is maintained from pass to pass, and
doing that with two systems was not workable.  So the plan accepted by
the steering committee accommodates the wholesale replacement of the
dataflow analysis, but even after the merge there will still be many
passes that need to be changed.

I would have liked the df information to be more tightly integrated
into the RTL rather than kept on the side, since it is cumbersome to
keep this information up to date.  However, the number of places in
the back ends that depend on the existing RTL data structures and APIs
makes such a replacement very difficult.
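[The maintenance burden of "on the side" information can be
illustrated with a small, purely hypothetical sketch.  This is not GCC
code; the names `Insn`, `df_scan`, and `df_cache` are invented for
illustration.  The point is that facts harvested into a side table go
stale the moment a pass edits the IR without mirroring the change.]

```python
# Hypothetical sketch (not GCC code): dataflow facts kept "on the
# side", keyed by instruction id, instead of living in the IR itself.
# Every mutation of the IR must be mirrored in the side table by hand.

class Insn:
    def __init__(self, uid, uses, defs):
        self.uid = uid
        self.uses = list(uses)   # registers read by this insn
        self.defs = list(defs)   # registers written by this insn

# Side table: uid -> (uses, defs), maintained separately from the IR.
df_cache = {}

def df_scan(insns):
    """Analogue of a rescan: harvest use/def info into the side table."""
    df_cache.clear()
    for insn in insns:
        df_cache[insn.uid] = (tuple(insn.uses), tuple(insn.defs))

def is_stale(insn):
    """True if the side table no longer matches the IR."""
    return df_cache.get(insn.uid) != (tuple(insn.uses), tuple(insn.defs))

insns = [Insn(1, uses=["r1"], defs=["r2"])]
df_scan(insns)
assert not is_stale(insns[0])

# A pass rewrites the insn but forgets to update the side table:
insns[0].uses = ["r3"]
print("stale after untracked edit:", is_stale(insns[0]))  # True
```

[Embedding the facts in the insn itself would remove this failure
mode, but, as noted above, too many places depend on the existing RTL
data structures for such a change to be practical.]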

I do believe that by the time we merge the branch, we will be down to
a 5% compile-time regression.  While I would like this number to be 0%
or negative, I personally believe that having precise and correct
information is worth it, and that over time we will be able to remove
that 5% penalty.

As far as the other regressions, these will be dealt with very soon.  

Kenny

