This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.


Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]

Re: Value Range Propagation Patch -- Part 2 of 2



  In message <200005100240.WAA24356@jwlab.FEITH.COM> you write:
  > The basic implementation can produce correct code without ordering the
  > basic blocks.  The current implementation actually makes use of knowing the
  > ordering to flow the current value range for some registers through the
  > blocks without having to wait for the next iteration.  Making use of
  > knowledge about the ordering was a design decision which reduces the
  > number of iterations required and produces more desirable results.
OK.  You're attacking a couple of problems here.

First, ordering of blocks to decrease the number of iterations needed for
convergence; this is something we've already been debating in another thread.
If we are going to have code that orders blocks for dataflow analysis, I want
it to be a generic routine we can call from other dataflow solvers if it
proves useful in reducing compilation times.
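The usual candidate for such a generic ordering routine is reverse postorder,
which visits predecessors before successors wherever possible and so lets a
forward dataflow solver converge in fewer passes.  A minimal sketch, assuming
a toy CFG representation (a dict mapping each block label to its successor
list) rather than GCC's actual data structures:

```python
def reverse_postorder(cfg, entry):
    """Return the blocks of `cfg` in reverse postorder from `entry`."""
    order, visited = [], set()

    def dfs(block):
        visited.add(block)
        for succ in cfg.get(block, []):
            if succ not in visited:
                dfs(succ)
        order.append(block)      # postorder: a block after its successors

    dfs(entry)
    order.reverse()              # reverse postorder: predecessors come first
    return order

# A diamond-shaped CFG: entry -> {then, else} -> join
cfg = {"entry": ["then", "else"],
       "then": ["join"],
       "else": ["join"],
       "join": []}
print(reverse_postorder(cfg, "entry"))
```

In the diamond above, "entry" comes out first and "join" last, so a forward
solver sees both arms of the diamond before their merge point.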

The second problem you're addressing is to get the "maximal solution" when
solving a dataflow problem.  The way this is usually done is to start with
an optimistic initial state and iterate to the correct steady state.  I did
go back and look at a value range propagation paper to verify that starting
optimistic and iterating was the way others were approaching this problem
for value range propagation.

So, to summarize, if we're going to do block reordering, then it needs to
be a general routine we can call for any code doing dataflow analysis.  To
solve the maximal/minimal problem, I'd prefer to stick with what other people
are using (start optimistic and iterate to steady state) rather than invent
something of our own.
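To make the "start optimistic and iterate" idea concrete, here is a hedged
sketch with value ranges as (lo, hi) pairs.  The optimistic initial state
("no value seen yet") is the identity for the merge, so information flows
through on the first pass and ranges only widen as paths are actually proven
reachable.  The loop shape (x starts at 0, increments while x < 10) is a
hypothetical example, not code from the patch:

```python
OPTIMISTIC = None            # "no value seen yet" -- identity for merge

def merge(a, b):
    """Join two ranges; OPTIMISTIC acts as the identity element."""
    if a is OPTIMISTIC:
        return b
    if b is OPTIMISTIC:
        return a
    return (min(a[0], b[0]), max(a[1], b[1]))

def solve():
    """Range of x at the loop head of: x = 0; while x < 10: x += 1."""
    head = OPTIMISTIC
    while True:
        if head is OPTIMISTIC:
            back = OPTIMISTIC            # back edge not yet proven reachable
        else:
            guarded = (head[0], min(head[1], 9))   # x < 10 on the back edge
            back = (guarded[0] + 1, guarded[1] + 1)
        new = merge((0, 0), back)        # entry edge carries [0, 0]
        if new == head:                  # fixed point reached
            return head
        head = new

print(solve())   # converges to the range (0, 10)
```

A pessimistic start would instead seed every block with "anything", which is
safe after one pass but gives up precision that the optimistic iteration
recovers only by running to the fixed point.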

  > > Whether or not we get the desired code depends on a few factors like
  > > whether or not we do the optimistic initialization vs pessimistic
  > > initialization.
  > 
  > Currently pessimistic initialization is used and the desired result (for
  > the example in my earlier email) is attained through using knowledge of
  > the basic block ordering.  I'll have to think more on the issue of applying
  > optimistic initialization to value range propagation.
There are some papers on value range propagation in PLDI through the 90s which
should discuss this issue.


  > What I'm trying
  > to get clear in my mind is what I need to do in order for it to make sense
  > to add this optimization to the source tree so that other people can take
  > advantage of it, beat up on it, and perhaps help improve it.

  >  What it sounds
  > like is that you're still evaluating it and in the meantime I should look
  > at the pointers in your email to see if I can modify the implementation so
  > that it produces the desired results without using inside knowledge
  > regarding the ordering (which your gut tells you is unnecessary).  It
  > also sounds like the current approach isn't forbidden, it's just not
  > desirable if I can find time to come up with a more standard approach.
That's a fair summary.

  > Do any other issues come to mind at the moment?
None at the moment, but as you observed, I'm still evaluating the code.

I do hope others play around with this stuff.  It's interesting for a
variety of purposes.  For example, it can be used to eliminate useless
bounds checking, or in branch prediction (see the PLDI papers).  Some claim
that VRP optimizations completely subsume global constant propagation and,
in some ways, conditional constant propagation as well.
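The subsumption claim can be illustrated in range terms: when VRP narrows a
range to a single point [c, c] it has proven the value constant, which is
exactly what constant propagation discovers, and when an index's range lies
inside the array bounds the bounds check is provably dead.  The helper names
below are purely illustrative:

```python
def is_constant(rng):
    """A range that has collapsed to one point is a proven constant."""
    lo, hi = rng
    return lo == hi

def bounds_check_needed(index_range, length):
    """A check is live only if the range can escape [0, length)."""
    lo, hi = index_range
    return not (0 <= lo and hi < length)

print(is_constant((42, 42)))             # range proves a constant
print(bounds_check_needed((0, 9), 10))   # check can be eliminated
```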

jeff
