This is the mail archive of the mailing list for the GCC project.


Re: [tree-ssa] AST optimizer in C++?

On Mon, 26 Aug 2002, Pop Sébastian wrote:
> On Sun, Aug 25, 2002 at 04:36:59PM -0500, Chris Lattner wrote:

> >   In my view, it is always best
> > to reduce the number of redundancies and the number of cases that must be
> > handled in the representation (i.e., by lowering ++X to use primitives, you

>   In fact, by reducing the number of cases to be handled (as in reducing
> syntactic sugar and control sugar) we simplify the optimizer.  Reducing
> "control sugar" is something that is well done in LLVM, since all loop
> constructs are lowered to the same common denominator: branch instructions.

Yes, it makes things much simpler to deal with.

>   However, after the discussion about the SWITCH_STMT elimination I saw
> things differently: in a first step we reduce control structures to a common
> representative; then it is worth considering that some architectures support
> high-level control constructs, and then a promoting pass has to be applied.
>   Thus the underlying question is: is it worth lowering statements only to
> promote them again later?

This comes up a lot in LLVM, because it is decidedly, well, low-level.  :)
In our case, we do actually have a switch statement as part of the
representation, but I am strongly considering removing it and using a
general indirect branch instead.  The idea is that by making it an
indirect branch, you get the following features:

1. Optimizations don't have to worry about the special case anymore.
2. Anything that uses the fact that it was a "switch" statement (such as
   subsequent lowering, or other optimizations), can simply be implemented
   to see if the pointer argument is derived from an array of labels.
   Since LLVM provides strong support for representing global values, and
   it would be a constant array, traditional optimizations would not
   really be more complex (especially with a couple of wrapper functions).

Because of this, I'm strongly leaning towards eliminating the instruction,
but haven't done so yet.

>   My opinion is that we win in having less complex optimizers.

Definitely.  Less complex optimizers are also more likely to be correct.

> > Note that this representation is equivalent to how RTL stores the Mode
> > information in all of the "values" it uses.  Here, however, the type
> > system is much more powerful (and useful).  My key point is that this
> > information is kept IN the representation itself, not along-side in a
> > table
> The same thing happens in the tree representation, where the symbol table is
> stored in DECL_STMTs directly in the tree.

Ah yes, you're right.  I'm glad.  :)  The next thing that I need to
convince you of is the need for a language-independent type system.  :)

> > This is great, but it doesn't really answer how we get a CFG out of this.
> > The simplest CFG interface can tell you all of the incoming and outgoing
> > edges of a block.  To do this, we have iterators that wrap the following
> > [trivial] analyses:
> >   the succ_iterator & pred_iterator

> Yep that's a nice interface for handling CFG nodes.
> I saw something like this also in open64.
> It's so simple to just say: iterate over all BB in topological order
> and just write "for (topological_CFG::iterator i = ..."
> :-)

Definitely.  We have all kinds of nice things like:

// Traverse the flow graph in depth-first order:
for (df_iterator<BasicBlock> I = df_begin(BB), E = df_end(BB); I != E; ++I)
  ...

// Traverse the flow graph in depth-first order along the predecessor
// links instead of the successor links:
for (idf_iterator<BasicBlock> I = idf_begin(BB), E = idf_end(BB); I != E; ++I)
  ...

> > Do you have any idea what kinds of things are possible in simple that you
> > don't think would be possible (or pleasant) in LLVM?  I'd like to get an
> > idea if there is something that would be hard to do in LLVM, but is easy
> > in SIMPLE (to figure out how much overlap there is).

> For the moment I see that LLVM can express high-level loop structures by
> handling them as side information.  This trick can also be used to store
> information about loop induction variables, so Fortran's DO-loops can
> be handled by optimizers as effectively as on tree structures.
> And since you handle only the lowest-level loop structures, you win by
> having lower-complexity optimizers.

Yup, a trick that we use is to have the canonical induction variable for
a loop be the first PHI node in the loop header.  That way you
don't even have to store information on the side for it.  Loop
transformations just look like this:

for each loop
  if (isACanonicalIndVar(EntryBB->front()))
    do stuff

where isACanonicalIndVar just checks to see if the PHI node is formed
from an incoming constant and an add instruction (which is very fast).

> [flat trees]
> LLVM could be obtained by goto introduction from SIMPLE
>   break, continue -> br
>   for, while, do -> br
>   switch -> switch ("an extended br")
> But then you have to construct loop information before you flatten statements.

LLVM computes loop information from dominator information, which it
computes directly from the LLVM code itself.  Loop information is not
required to read and write LLVM; it's one of several analyses that can be
computed on the side.

> > I'm really sorry for the length of this mail, but I think it helps explain
> > a bit of the philosophy behind LLVM...
> >
> thanks a lot for contributing ideas :-)

No problem.  :)  Now I just have to convince everyone of the utility and
advantage of making the IR a full-fledged, self-contained language in its
own right.  :)

