Re: some tree-ssa vs mainline stats


On Thu, 2003-06-19 at 10:15, law@redhat.com wrote:
> In message <1056026534.20227.1.camel@p4>, Andrew MacLeod writes:
>  >On Thu, 2003-06-19 at 00:29, Steven Bosscher wrote:
>  >> Dan Nicolaescu wrote:
>  >> 
>  >> >I've done some empirical comparisons of GCC from CVS HEAD and the
>  >> >tree-ssa branch. 
>  >> >One comparison was for compiling a C file: combine.i,
>  >
>  >> >
>  >> >
>  >> >Does tree-ssa create unique identifiers for each ssa name, or
>  >> >something similar? Is that OK?
>  >> >
>  >
>  >One of the things on the SSA->Normal todo list is to examine coalescing
>  >non-interfering temporaries into a single temporary. That'd probably
>  >reduce the number quite significantly.
> Seems to me you could get your candidate sets for this by looking at
> PHI nodes which have elements from distinct variables.  Those are the
> cases that are going to cause copies and we know the variables are related
> in some important way (otherwise they wouldn't have appeared as PHI
> arguments in the same PHI node).
> 

This will reduce the number of copies, and it's on the list to
consider. As long as we don't care that two or more user variables no
longer get their own address space, it is not hard to do. We skip it
right now, but it's not hard to add. Forget about debugging the
generated code if we do this, though. :-)

First, prioritizing the coalesces instead of the current brute-force
approach will probably buy us more right off the bat.
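
A minimal sketch of what prioritized coalescing could look like,
assuming nothing about GCC's actual data structures: the SSA names are
plain integers, the conflict matrix and the PHI-derived candidate pairs
with their weights are invented, and partitions are tracked with
union-find. Heavier candidates are tried first, so the copies that
survive tend to be the cheap ones.

  /* Hypothetical sketch, not GCC code: prioritized coalescing of SSA
     names that appear together in PHI nodes.  */
  #include <stdio.h>
  #include <stdlib.h>

  #define NUM_SSA_NAMES 8

  static int parent[NUM_SSA_NAMES];
  static int conflict[NUM_SSA_NAMES][NUM_SSA_NAMES];

  static int find (int x)
  {
    while (parent[x] != x)
      x = parent[x] = parent[parent[x]];  /* Path halving.  */
    return x;
  }

  /* Two partitions may be coalesced only if no member of one conflicts
     (is live at the same time) with a member of the other.  A real
     implementation would merge conflict rows; here we just scan.  */
  static int partitions_conflict (int a, int b)
  {
    for (int i = 0; i < NUM_SSA_NAMES; i++)
      for (int j = 0; j < NUM_SSA_NAMES; j++)
        if (find (i) == a && find (j) == b && conflict[i][j])
          return 1;
    return 0;
  }

  /* A candidate: two SSA names copy-related through a PHI node,
     weighted by how often the resulting copy would execute.  */
  struct candidate { int a, b, weight; };

  static int cmp_weight (const void *p, const void *q)
  {
    return ((const struct candidate *) q)->weight
           - ((const struct candidate *) p)->weight;
  }

  int main (void)
  {
    for (int i = 0; i < NUM_SSA_NAMES; i++)
      parent[i] = i;

    /* Pretend names 2 and 5 are live at the same point.  */
    conflict[2][5] = conflict[5][2] = 1;

    /* Candidates gathered from PHI nodes: (result, argument, weight).  */
    struct candidate cands[] = { { 0, 2, 10 }, { 0, 5, 90 }, { 3, 4, 40 } };
    int n = sizeof cands / sizeof cands[0];

    qsort (cands, n, sizeof cands[0], cmp_weight);

    for (int i = 0; i < n; i++)
      {
        int ra = find (cands[i].a), rb = find (cands[i].b);
        if (ra != rb && !partitions_conflict (ra, rb))
          {
            parent[rb] = ra;  /* Coalesce: both names share one variable.  */
            printf ("coalesced %d and %d (weight %d)\n",
                    cands[i].a, cands[i].b, cands[i].weight);
          }
        else
          printf ("kept copy between %d and %d (weight %d)\n",
                  cands[i].a, cands[i].b, cands[i].weight);
      }
    return 0;
  }

Building the candidate list is just a walk over the PHI nodes: each
result/argument pair drawn from distinct variables is one entry.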

> Now if the whole point is simply to reduce the number of variables, there
> are a couple of things we could investigate.
> 
>   1. Combining totally unrelated variables.  I'm not sure if this is wise
>   or not.
> 

I was planning this for any T.* variables that are of the same type.
We generate a *lot* of these, and most of them are live for very short
periods. They are clear wins.
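
A small sketch, with invented temporaries and live ranges, of how
same-type T.* temporaries with short, non-overlapping lifetimes could
share one variable; a simple linear scan over the live ranges is enough
to show the reduction.

  /* Illustrative only: each temporary has a type id and a [start, end)
     live range; a slot of the same type is reused once its previous
     occupant has died.  */
  #include <stdio.h>

  #define NTEMPS 5
  #define NSLOTS NTEMPS

  struct temp { int type; int start, end; };

  int main (void)
  {
    /* Typical gimplifier output: many short-lived temporaries,
       already sorted by start of live range.  */
    struct temp temps[NTEMPS] = {
      { 0, 0, 2 }, { 0, 1, 3 }, { 1, 2, 5 }, { 0, 4, 6 }, { 0, 7, 9 }
    };

    int slot_type[NSLOTS], slot_end[NSLOTS], nslots = 0;

    for (int i = 0; i < NTEMPS; i++)
      {
        int s;
        for (s = 0; s < nslots; s++)
          if (slot_type[s] == temps[i].type && slot_end[s] <= temps[i].start)
            break;              /* Same type, previous user dead: reuse.  */
        if (s == nslots)
          {
            slot_type[s] = temps[i].type;  /* No reusable slot: new one.  */
            nslots++;
          }
        slot_end[s] = temps[i].end;
        printf ("T.%d -> shared variable %d\n", i, s);
      }
    printf ("%d temporaries, %d variables\n", NTEMPS, nslots);
    return 0;
  }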

>   2. Elimination of unused variables.  Thus allowing their nodes to be
>   released.   This mostly speeds up expansion by having fewer variables
>   to expand.  It can also save stack space (imagine an addressable
>   VAR_DECL which is unused).
> 
I thought your cruft cleanup did that? Perhaps I am mistaken...
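
For reference, the clean-up described above amounts to something like
the following sketch (the tiny statement/variable arrays are invented,
not GCC's representation): mark every variable a statement actually
references, and release the rest before expansion ever sees them.

  #include <stdio.h>

  #define NVARS 4
  #define NSTMTS 3

  int main (void)
  {
    const char *name[NVARS] = { "a", "b", "T.1", "T.2" };
    int used[NVARS] = { 0 };

    /* Each statement lists the variable indices it references;
       -1 ends the list.  "b" and "T.2" are never mentioned.  */
    int stmt_refs[NSTMTS][3] = { { 0, 2, -1 }, { 2, -1, -1 }, { 0, -1, -1 } };

    for (int s = 0; s < NSTMTS; s++)
      for (int k = 0; k < 3 && stmt_refs[s][k] >= 0; k++)
        used[stmt_refs[s][k]] = 1;      /* Mark referenced variables.  */

    for (int v = 0; v < NVARS; v++)
      if (!used[v])
        printf ("releasing unused variable %s\n", name[v]);
    return 0;
  }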

> Neither of these attacks the "problem" at the source, namely gimplification
> really likes to create new variables.  I don't offhand know how wasteful
> the gimplifier is being these days.  The one area I know the gimplifier 
> sucks and creates unnecessary temps is NOP conversions.
> 

I see lots of local automatics on big functions. Lots. There is
definitely a footprint to be reduced.
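
Purely as an illustration of the NOP-conversion point: the temporary
T_1 and the hand-written "lowered" function below are made up for the
example, but they show the kind of temporary that could have been
assigned to the target variable directly.

  #include <stdio.h>

  static long original (int x)
  {
    long y = (long) x;          /* One statement in the source...  */
    return y;
  }

  static long lowered (int x)
  {
    long T_1;
    long y;

    T_1 = (long) x;             /* ...becomes a cast into a fresh temporary  */
    y = T_1;                    /* plus a copy that the temporary only       */
    return y;                   /* exists to feed.                           */
  }

  int main (void)
  {
    printf ("%ld %ld\n", original (42), lowered (42));
    return 0;
  }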

The main part of the experiment was really the location of the pass:
do I include this as part of the original conflict graph and coalescing,
or do I do some of it as a separate run later? I'm not sure which is
faster. The original conflict graph will be a lot denser, and it has a
lot more elements in it. The secondary one, after the initial
coalescing, would be a lot smaller.

I think a secondary graph plus memory location coalescing is probably
the most effective, and cleaner, but we'll see. We can also trigger it
only if we saw enough during the original build to make it worthwhile.
I'll be doing some of this next week.
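
As a back-of-the-envelope illustration (the counts below are invented):
a dense lower-triangular conflict matrix over N items needs N*(N-1)/2
bits, so coalescing first and then building a second graph over only
the surviving partitions shrinks the follow-up graph quadratically.

  #include <stdio.h>

  static unsigned long bits_needed (unsigned long n)
  {
    return n * (n - 1) / 2;     /* One bit per unordered pair.  */
  }

  int main (void)
  {
    unsigned long ssa_names = 20000;      /* Hypothetical big function.  */
    unsigned long after_coalesce = 3000;  /* Partitions left after pass 1.  */

    printf ("primary graph:   %lu bits (%lu KB)\n",
            bits_needed (ssa_names), bits_needed (ssa_names) / 8 / 1024);
    printf ("secondary graph: %lu bits (%lu KB)\n",
            bits_needed (after_coalesce),
            bits_needed (after_coalesce) / 8 / 1024);
    return 0;
  }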

Andrew
  


