This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.



Re: local data flow


In http://gcc.gnu.org/ml/gcc/2006-07/msg00390.html, you write:
> depending on what you are doing, you can update the solution in place.
> The point of the dataflow talk was not to say that you cannot do
> anything incremental, it was to say that there are no good GENERAL
> techniques.  Many times it is possible to update the solution precisely
> if you have a very good understanding of the local conditions under
> which the transformation is done.

Right.  So the struct-equiv code does not have to be converted to
use def-use chains to be better integrated with the dataflow branch;
it can happily go on using regsets of live registers.
What it needs is a solid understanding of which invariants are
required and how we can maintain them while we are changing the rtl,
in order to keep the cost of updating the information under control.
Which, ironically, is what I proposed to use in the first place, outside
of the dataflow branch, to address the compile time problems, but Berndt
considered that approach too fragile.


> The other trick is to do what I do in the new fast dce or the if-cvt on
> the dataflow branch:
> order the basic blocks so that it does not matter if you mess up the
> solution. Generally a post order or a reverse postorder traversal of
> the basic blocks will have the property that you can just mess things up
> in a wave that moves from the beginning to the end, or from the end to the
> beginning, of the program without ever seeing the mess you make.

cross-jumping two entire basic blocks creates opportunities for further
cross-jumping and jump simplification involving the predecessor blocks.
This appears to fit a reverse postorder traversal.
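The postorder / reverse postorder ordering being discussed can be sketched
as below.  This is a minimal Python illustration, not GCC's actual
basic_block code; the dict-based CFG and the block names are made up:

```python
# Minimal sketch of computing a postorder and a reverse postorder of a CFG.
# The CFG representation (a dict from block name to successor list) is
# illustrative only.

def postorder(cfg, entry):
    """Depth-first postorder over the CFG starting at `entry`."""
    order, visited = [], set()

    def dfs(block):
        visited.add(block)
        for succ in cfg[block]:
            if succ not in visited:
                dfs(succ)
        order.append(block)  # a block is listed after all its DFS descendants

    dfs(entry)
    return order

# A diamond-shaped CFG: entry -> {then, else} -> exit
cfg = {
    "entry": ["then", "else"],
    "then": ["exit"],
    "else": ["exit"],
    "exit": [],
}

po = postorder(cfg, "entry")   # ['exit', 'then', 'else', 'entry']
rpo = list(reversed(po))       # ['entry', 'else', 'then', 'exit']
```

Walking the blocks in rpo order visits each block before its (forward-edge)
successors, so damage to the solution propagates only into blocks not yet
visited, which is the property described in the quoted text.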

However, if-conversion creates further if-conversion opportunities involving
the newly merged block.  Thus, while a postorder / reverse postorder
traversal makes sense, we also need valid information for the newly merged
block and its successors and predecessors.
I suppose reg-live-at-start / reg-live-at-end information is actually easier
to maintain during if-conversion than def-use chains.
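To illustrate why the live-register information is comparatively easy to
restore, here is a minimal sketch of recomputing reg-live-at-start for a
newly merged block from its reg-live-at-end, using the standard liveness
transfer function live_in = use | (live_out - def).  The register names and
the (defs, uses) insn representation are made up, not GCC's:

```python
# Recompute the live-at-start set of a block by scanning its insns
# backwards from the live-at-end set.  Each insn is a (defs, uses) pair;
# this toy representation stands in for the real rtl.

def live_at_start(insns, live_at_end):
    live = set(live_at_end)
    for defs, uses in reversed(insns):
        live -= set(defs)   # a def kills liveness above this insn
        live |= set(uses)   # a use makes the reg live above this insn
    return live

# Merged block:  r1 = r2 + r3;  r4 = r1
insns = [({"r1"}, {"r2", "r3"}), ({"r4"}, {"r1"})]
start = live_at_start(insns, live_at_end={"r4"})   # == {"r2", "r3"}
```

One backward scan of the merged block repairs its live-at-start regset;
def-use chains, by contrast, can link into arbitrary other blocks and need
non-local surgery.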

>>> I think that what you want is something like the reaching uses problem
>>> but you want a forwards version of this rather than a backwards version
>>> as is defined in df-problems.c.
...
> the gen set of the reaching uses problem is the set of uses.  the kill
> set are the defs and the clobbers.  This is why the problem is called
> "reaching uses".

In my problem, the gen set is the set of uses, and the kill set is the
defs, clobbers and uses.  Hence, it's still a backward problem.
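The backward formulation with gen = uses and kill = defs + clobbers + uses
can be sketched with the generic iterate-to-fixpoint solver below.  This is
an illustration only, not df.c's solver; the CFG and the per-block gen/kill
sets are invented for the example:

```python
# Generic backward dataflow solver for the problem described above:
#   in[b]  = gen[b] | (out[b] - kill[b])
#   out[b] = union of in[s] over the successors s of b
# Iterates until no set changes.

def solve_backward(cfg, gen, kill):
    sol_in = {b: set() for b in cfg}
    sol_out = {b: set() for b in cfg}
    changed = True
    while changed:
        changed = False
        for b in cfg:
            out = set().union(*(sol_in[s] for s in cfg[b])) if cfg[b] else set()
            new_in = gen[b] | (out - kill[b])
            if new_in != sol_in[b] or out != sol_out[b]:
                sol_in[b], sol_out[b] = new_in, out
                changed = True
    return sol_in, sol_out

# Two-block CFG A -> B; u* are uses, d* are defs.
cfg = {"A": ["B"], "B": []}
gen = {"A": {"u1"}, "B": {"u2"}}
kill = {"A": {"u1", "d1"}, "B": {"u2"}}    # kill includes the uses as well
sol_in, sol_out = solve_backward(cfg, gen, kill)
```

Because the uses appear in both gen and kill, a use in a block both starts a
new fact and cuts off the same fact flowing up from below, which is what
distinguishes this problem from plain liveness.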

But it is not really useful to compute this just for one or two code
transformations - benefits could only be obtained if this information
can be kept usable between different transformations to reduce the number
of global updates.

I suppose I should instead continue to operate on regsets, but keep them
usable without frequent global updates, if that's all right with Berndt.

