Re: [RFC] callgraph and unit at time compilation
- From: Jan Hubicka <jh at suse dot cz>
- To: Daniel Berlin <dberlin at dberlin dot org>
- Cc: Diego Novillo <dnovillo at redhat dot com>, Jan Hubicka <jh at suse dot cz>, "gcc-patches at gcc dot gnu dot org" <gcc-patches at gcc dot gnu dot org>, "gcc at gcc dot gnu dot org" <gcc at gcc dot gnu dot org>
- Date: Sat, 16 Nov 2002 10:14:29 +0100
- Subject: Re: [RFC] callgraph and unit at time compilation
- References: <1037413258.5015.19.camel@frodo> <Pine.LNX.4.44.0211152357590.21537-100000@dberlin.org>
> > Solving this for only one front end will force us to solve the same
> > problem for all the front ends. Better design something that we can use
> > in the future to do real unit-at-a-time optimizations.
>
> If we want to do real unit-at-a-time optimizations, we need to write
> things out to disk (be it ASTs or other information). If we don't do
> this, it's just going to be either non-scalable or non-workable. If we
> aren't writing ASTs due to whatever the heck politics, we are mainly
> going to be stuck writing annotations and summaries (summaries of
> various flow info, side-effects, etc.).
> If Jan was to work on this part of the infrastructure, which would
> consist of some sort of database (be it a reuse of an existing one or
> something else) with an interface that lets us store annotations on
> things like trees, refs, and basic blocks, as well as store some
> generic graph type with annotations on its edges and vertices, it
> would make the rest of the basic unit-at-a-time infrastructure much
> easier to do right.
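For concreteness, I read the interface you describe as looking very
roughly like the sketch below (the names here are made up for the
example, this is not a proposal for the actual API):

  #include <stdlib.h>

  /* An annotation is an (id, payload) pair attached to some object:
     a tree node, a ref, a basic block, or a vertex/edge of a graph.  */
  struct annotation
  {
    int id;                       /* which kind of summary this is */
    void *payload;                /* the summary data itself */
    struct annotation *next;
  };

  /* A generic graph whose vertices and edges both carry annotations;
     what a vertex stands for is up to the caller (functions for a
     callgraph, variables for points-to, ...).  */
  struct graph_vertex
  {
    void *object;                 /* the object this vertex represents */
    struct annotation *notes;
    struct graph_edge *out_edges;
  };

  struct graph_edge
  {
    struct graph_vertex *from, *to;
    struct annotation *notes;
    struct graph_edge *next_out;
  };

  /* Attach a summary to a vertex; a real store would also know how to
     stream annotations to disk and read them back later.  */
  void
  annotate_vertex (struct graph_vertex *v, int id, void *payload)
  {
    struct annotation *a = malloc (sizeof *a);
    a->id = id;
    a->payload = payload;
    a->next = v->notes;
    v->notes = a;
  }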
Agreed on this. I think the PCH branch is slowly heading toward this
position (being able to write the tree structures to disk and read them
back), so I don't want to conflict with it. However, I think I can do a
bit here by keeping things in memory for now while keeping the local
and global analysis independent (i.e. I first build the graph and only
later optimize). The fact is that right now I need the tree
representation for the first inlining pass, but that is mostly
temporary until I can get more of the data into a separate place.
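Very roughly, the shape I have in mind for the in-memory variant is
something like this (again just a sketch with made-up names, not the
actual interface of my patch):

  /* One node per function in the unit; built while the front end
     hands us finished function bodies.  */
  struct callgraph_node
  {
    void *decl;                       /* the function's declaration */
    struct callgraph_edge *callees;   /* calls made by this function */
    int needed;                       /* reachable from outside?  */
    struct callgraph_node *next;      /* chain of all nodes */
  };

  struct callgraph_edge
  {
    struct callgraph_node *caller, *callee;
    struct callgraph_edge *next_callee;
  };

  /* Pass 1 (local analysis): called once per function as soon as its
     body is finished; records the edges and whatever purely local
     information we need, and for now keeps the trees around.  */
  void analyze_function (struct callgraph_node *node);

  /* Pass 2 (global decisions): called once the whole unit has been
     seen; decides what is really needed, makes inlining decisions on
     the graph, and only then expands the bodies in a convenient
     order.  */
  void optimize_unit (void);

The point is that nothing in pass 2 looks at a function body until the
whole graph exists, so the global decisions do not depend on the order
in which the front end delivers the functions.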
Honza
> IMHO.
> If we approach real unit-at-a-time optimizations from the perspective
> that we can keep everything in memory all the time, or redo everything
> all the time (rather than store some info somewhere on disk), we're
> just going to end up with something that is likely slow, huge (in
> memory footprint), unmaintainable, and non-scaling.
> We shouldn't build a house of cards when we have a chance to lay a real
> foundation.
>
> > > > Diego.