This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.



Re: intermodule optimisation patch


On Tue, 20 May 2003, Richard Henderson wrote:
> On Mon, May 19, 2003 at 04:59:39PM -0500, Chris Lattner wrote:
> > I find it interesting that you consider this a disadvantage.  :)
>
> Reference the involved discussion last month regarding
> allowing parts of the Java front end to be in C++.  We've
> just now gotten acceptance for dropping support for K&R C;
> there's zero chance we'll raise the bar from C90 to C++
> within the next couple of years.

I understand and remember that discussion.  In this case, however, it
would be possible to add another stage to the bootstrap process to support
it.  Obviously this is suboptimal, but the whole point of the paper is to
advocate the _architecture_, not necessarily the implementation.  The
implementation is there for people who want to use it _today_, that's all.

> > program.  With my approach you need to redo only the interprocedural
> > optimizations and code generation phases.
>
> Eh?  "Only" the interprocedural optimizations? Given that information
> should be used during the high-level optimization phases (in particular
> for noticing what memory is read/written by a function), this does in
> fact mean starting over from near-scratch.  The only thing you get to
> skip is parsing and semantic analysis, which ought to be near the bottom
> of the profile during optimization anyway.

Actually, that's incorrect.  The whole point of the proposed architecture
is that you can do substantial optimization at _compile time_, which is
what makes it very different from the techniques used by most commercial
compilers.  In fact, our current implementation runs a large suite of
scalar optimizations at compile time.  Among other things, this reduces
the size of the input to the link-time optimizer.
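
To make that concrete, here's a contrived C sketch (the function and its
values are invented for illustration).  As written, the function carries a
loop and a branch into the IR; after compile-time constant propagation,
loop unrolling, and dead code elimination, the link-time optimizer only
ever sees a one-line function:

  /* Contrived example: in source form, this drags a loop, a branch,
     and a temporary into the IR.  */
  unsigned table_checksum(void) {
      unsigned sum = 0;
      int i;
      for (i = 0; i < 4; i++)
          sum += i * 2;     /* trip count and addends are all constants */
      if (sum > 100)        /* provably false: sum is always 12 */
          sum = 0;
      return sum;
  }

  /* What the link-time optimizer actually receives is equivalent to: */
  unsigned table_checksum(void) { return 12; }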

As another example, this means you can perform your dead code elimination
and constant propagation _before_ inlining, which solves many of the
current problems with inlining heuristics.
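
For instance (a contrived sketch, with invented names): a callee that
looks expensive in source form can be trivial after scalar optimization,
so an inliner that runs afterwards sees its real cost:

  /* Hypothetical helpers, declared only to keep the sketch complete. */
  void log_value(int x);
  void dump_state(void);

  enum { DEBUG = 0 };   /* a build-time configuration constant */

  /* In source form, this looks too large to inline profitably. */
  static int scale(int x) {
      if (DEBUG) {      /* folds to 'if (0)': the whole branch is dead */
          log_value(x);
          dump_state();
      }
      return x * 4;
  }

  int use(int v) {
      /* Once constant propagation and dead code elimination run at
         compile time, scale() is just 'return x * 4', and any
         size-based inlining heuristic will happily inline it here. */
      return scale(v);
  }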

So yes, the point is that you only have to redo the interprocedural
optimizations when a single source file changes.  That said, this assumes
you don't use techniques such as those described in "Interprocedural
Optimization: Eliminating Unnecessary Recompilation" by Burke and Torczon,
which (in theory) could eliminate most of that overhead.
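
To illustrate why the interprocedural results have to be redone at all (a
contrived two-file sketch, names invented): an edit to one file can
invalidate facts that another file's code was optimized against:

  /* helper.c, version 1 */
  int counter;
  int helper(int x) { return x + 1; }   /* touches no globals */

  /* main.c */
  extern int counter;
  extern int helper(int x);

  int f(void) {
      counter = 10;
      helper(0);
      /* Against version 1, interprocedural mod/ref analysis proves
         helper() never writes 'counter', so this load folds to the
         constant 10.  */
      return counter;
  }

  /* helper.c, version 2: helper() gains a side effect */
  int counter;
  int helper(int x) { counter++; return x + 1; }  /* writes 'counter' */

Rebuilding after that edit must redo the interprocedural phase (the fold
in f() is no longer valid), but the compile-time scalar optimization of
the unchanged main.c does not need to be repeated.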

-Chris

-- 
http://llvm.cs.uiuc.edu/
http://www.nondot.org/~sabre/Projects/

