This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.



Re: [RFC] callgraph and unit at time compilation


> > > Solving this for only one front end will force us to solve the same
> > > problem again for all the other front ends.  Better to design something
> > > that we can use in the future to do real unit-at-a-time optimizations.
> > 
> > If we want to do real unit-at-a-time optimizations, we need to write
> > things out to disk (be they ASTs or other information).  If we don't do
> > this, it's just going to be either non-scalable or non-workable.  If we
> > aren't writing ASTs due to whatever the heck politics, we are going to
> > be mainly stuck writing annotations and summaries (summaries of various
> > flow info, side effects, etc.).
> > If Jan were to work on this part of the infrastructure, which would
> > consist of some sort of database (be it a reuse of an existing one or
> > something else) with an interface that let us store annotations on
> > things like trees, refs, and basic blocks, as well as store some
> > generic graph type with annotations on edges and vertices, it would
> > make the rest of the basic unit-at-a-time infrastructure much easier
> > to do right.
> 
> Agreed on this.  I think the PCH branch is slowly targeting this position
> (being able to write the tree structures to disk and read them back), so
> I don't want to conflict with it here.  However, I think I can do a bit
> here by keeping things in memory for now while keeping local and global
> analysis independent
> (i.e. I first build the graph and later optimize.  The fact is that right
> now I need the tree representation for the first inlining pass, but that
> is mostly temporary until I can move more of the data into a separate place.)
... and of course trying to keep in mind that the data structure
holding global information should behave like a database.
As I imagine it in the long term, GCC should parse the files, do local
analysis, flush the trees to disk, store the local data for easy
manipulation, and at the end load the local analysis data, do the global
analysis, and then fetch the functions one by one and compile them to
final output.

I think I can reach some short-term goals cheaply while keeping the code
consistent with this vision.

Honza
> 
> Honza
> > IMHO.
> > If we approach real unit-at-a-time optimizations from the perspective that 
> > we can keep everything in memory all the time, or redo everything all the 
> > time (rather than store some info somewhere on disk), we're just going to end up with 
> > something that is likely slow, huge (in memory footprint), unmaintainable, 
> > and non-scaling. 
> > We shouldn't build a house of cards when we have a chance to lay a real 
> > foundation.
> > 
> > > Diego.
> > > 
> > > 

