
Re: call_insn's and argument locations


This is long...

On Fri, 29 Jun 2001, Chris Lattner wrote:

> Date: Fri, 29 Jun 2001 18:16:46 -0500 (CDT)
> From: Chris Lattner <sabre@nondot.org>
> To: gcc@gcc.gnu.org
> Subject: call_insn's and argument locations
>
>
> Is there any good way to find out where the arguments to a call_insn are?
> Part of my work involves working on an SSA pass (to be run after the DCE
> pass) that inspects call_insn instructions.  The only problem is that I'm
> not sure how to find out where the arguments are.  Should I effectively do
> what expand_call does to get the register and stack slot numbers from the
> back end?  Is there an easier way?  (for example, are the N SET insn's
> before the call_insn required to be arguments [at the SSA phase]?)
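
To answer the immediate question as best I can: I believe the register
arguments hang off the call_insn itself, as (use (reg ...)) entries on
its CALL_INSN_FUNCTION_USAGE list (stack slots generally don't show up
there). A rough, untested sketch, given a call_insn "insn":

int n_reg_args = 0;
rtx link;

for (link = CALL_INSN_FUNCTION_USAGE (insn); link; link = XEXP (link, 1))
  {
    rtx use = XEXP (link, 0);

    /* Argument registers appear as (use (reg ...)) entries; clobbers
       and memory uses are skipped here.  */
    if (GET_CODE (use) == USE
        && GET_CODE (XEXP (use, 0)) == REG)
      n_reg_args++;
  }

That still leaves the stack slots, for which I don't know of anything
better than retracing what expand_call sets up.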

There's been a bit of traffic of late on SSA and "middle-end"
representations, and I'd like to put in my 2 cents to
argue for a semantically advanced SSA-able middle-end that
can handle arrays (not just vectors of vectors), function calls,
etc., so that g95 and friends can exploit some of the
array optimizations that are available. Shucks, they may
even help C.  8^}

I used SSA in APEX, an APL compiler that pumped out SISAL code.
SSA made MANY optimizations trivial, at very high levels of
abstraction, and made the job of writing the compiler a delight,
so I'm an SSA convert. It also let me generate some
very high-performance code with zilch effort. For example,
parallelism came for free; some benchmarks beat Fortran 77.

I found that interprocedural optimization was greatly eased
by having a proper function call within my IR -- formal
arguments and formal results. The presence of this let me
pass around array predicates and other dataflow tags
in a simple and consistent manner; it also gave me the
tools I needed to generate functional (as opposed to imperative)
SISAL code out the back end.
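
To give a flavour of what I mean by a "proper function call" node --
this is purely illustrative, nothing like it exists in gcc, and all of
the names are made up:

/* Illustrative only.  The point is that a call site names its formal
   arguments and formal results explicitly, so an analysis can attach
   dataflow tags (array shape, purity, and so on) to each of them.  */
struct ir_value;                  /* an SSA name                    */
struct ir_function;

struct ir_call
{
  struct ir_function *callee;     /* resolved callee, if known      */

  int n_args;                     /* formal arguments, in order     */
  struct ir_value **args;

  int n_results;                  /* formal results, also explicit  */
  struct ir_value **results;

  unsigned tags;                  /* per-call semantic annotations  */
};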

I started out with the extant function calls within the
source code, then amended the call points with additional
parameters and results to convert references to semi-globals and
globals into pure functions. Once I had this, I could
then optimize, move code around, clone functions, and so on
with ease.
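
A toy C example of the kind of rewrite I mean (names invented): a
function that reads and updates a global gets the global threaded
through as an extra argument and an extra result, after which it is a
pure function of its inputs:

/* Before: f depends on, and updates, the global 'total'.  */
int total;

int
f (int x)
{
  total += x;
  return total;
}

/* After: 'total' is threaded through explicitly, so f_pure has no
   side effects and can be moved, cloned, or evaluated in parallel.  */
int
f_pure (int x, int total_in, int *total_out)
{
  *total_out = total_in + x;
  return *total_out;
}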

Although I'm just puzzling out insns now, I believe we need
a significantly more powerful IR to let semantically advanced
optimizers do their work; it may also facilitate some of the
work I'm doing on optimizing Secure Pointer references.

As an example of the sort of IR I'm talking about, I suggest
people look at SISAL (now moribund, thanks to LLNL deciding
to concentrate on their area of excellence -- creating
weapons of mass destruction -- rather than developing
high-performance computer languages).
Hmm. I can't find the docs on the LLNL web site, but
probably have a copy on backup tape somewhere.
Here's a start:

D.C. Cann: Compilation Techniques for High Performance Applicative
Computation; Colorado State University Tech Report CS-89-108

J. McGraw, et al.: SISAL: Streams and Iteration in a Single
Assignment Language; LLNL Tech Report M-146

D.C. Cann: The Optimizing SISAL Compiler; LLNL UCRL-MA-110080

What I like about SISAL is that the formal representation
of its IR facilitates excellent, strong optimizations.

A bit closer to gcc is SAC, an extended functional subset
of C that has some of the same ideas as SISAL, but has, I
believe, more promise, as it believes in real arrays
(as opposed to vectors-of-vectors, which I won't discuss
here, as it's off-topic). SAC also has some very nice
optimizer design:

http://www.informatik.uni-kiel.de/~sacbase/

So much for the sales pitch. Here's the proposed product:

a. Introduce a powerful, extensible IR into gcc that is
   friendly to high-level optimizations and SSA.
   It doesn't have to optimize anything, at first.

b. Provide a code generator that turns that IR into rtl.

At this point, we are, I think, back where we are today,
except that it probably runs slower. 8^{
It would, however, let things get off the ground.
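
To make (b) a little more concrete, here is a hand-wavy sketch of what
a lowering routine might look like. The ir_expr structure and the IR_*
codes are invented for illustration; force_reg, expand_binop and
friends are the existing rtl expanders:

/* A made-up IR node, just enough for the sketch below.  */
struct ir_expr
{
  enum { IR_CONST, IR_ADD } kind;
  HOST_WIDE_INT ival;              /* for IR_CONST  */
  struct ir_expr *left, *right;    /* for IR_ADD    */
};

/* Sketch only: lower one IR expression to rtl.  Only integer
   constants and addition are handled.  */
static rtx
lower_expr (struct ir_expr *e)
{
  switch (e->kind)
    {
    case IR_CONST:
      return GEN_INT (e->ival);

    case IR_ADD:
      {
        rtx op0 = force_reg (SImode, lower_expr (e->left));
        rtx op1 = lower_expr (e->right);

        return expand_binop (SImode, add_optab, op0, op1,
                             NULL_RTX, 0, OPTAB_LIB_WIDEN);
      }

    default:
      abort ();
    }
}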

c. Next, provide a facility, perhaps optional, to support
   multi-phase optimization passes, so that people can
   easily add their own optimizations. This is what SAC
   does. As far as I know, rtl has no defined export/import
   form that can be written to and read back from flat
   files; the new IR should have one.
   This would make optimizations easier to include/exclude,
   and would also encourage universities and others to
   provide gcc-able optimizers, much as the Stanford
   SUIF system has done.

d. Provide a way to handle multi-pass interprocedural
   optimization. That is, optimizations may have to
   pass information back and forth among many functions,
   propagating semantic information up and down call chains.
   I think this is completely at odds with how
   gcc works now, but it may be possible to handle it
   completely within the new middle-end, leaving
   everything that's rtl-related as it is now.
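
Again purely as illustration of (d), and again with invented names
(ir_function, update_summary): the middle-end could keep one summary
record per function and iterate over the call graph until the
summaries stop changing, a standard fixpoint:

struct ir_function;                     /* hypothetical IR function  */

/* Recompute F's summary from its body and from the current summaries
   of its callees; return nonzero if the summary changed.  (Also
   hypothetical.)  */
extern int update_summary (struct ir_function *f);

static void
propagate_summaries (struct ir_function **funcs, int n_funcs)
{
  int changed = 1;

  while (changed)
    {
      int i;

      changed = 0;
      for (i = 0; i < n_funcs; i++)
        changed |= update_summary (funcs[i]);
    }
}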

I'm willing to put a significant amount of design and
development effort into this, but:

a. people here working on SSA and g95 have to express interest in it,

b. I will have to work closely with the front-end people to
   design appropriate IR functions, as I am completely
   clueless about that whole area, and

c. I need some reassurance from developers
   that this is a reasonable design. Since I've only
   been working on gcc for a few weeks, it's obvious that
   there may be better ways to do this.

If y'all have better ideas on how to do this, I'm all ears,
as this is just one possible way to approach the problem.

Bob

ps: Do we have a better phrase than "middle end"?

Robert Bernecky                  Snake Island Research Inc.
bernecky@acm.org                 18 Fifth Street, Ward's Island
+1 416 203 0854                  Toronto, Ontario M5J 2B9 Canada
http://www.snakeisland.com


