This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.
Re: Interprocedural Dataflow Analysis - Scalability issues
- From: Dan Kegel <dank at kegel dot com>
- To: GCC Mailing List <gcc at gcc dot gnu dot org>
- Date: Tue, 19 Apr 2005 21:44:11 -0700
- Subject: Re: Interprocedural Dataflow Analysis - Scalability issues
Daniel Berlin wrote:
>> I am working on interprocedural data flow analysis (IPDFA) and need
>> some feedback on scalability issues in IPDFA. Firstly, since one file
>> is compiled at a time, we can do IPDFA only within a file.
>
> For starters, we're working on this.
(I was curious, so I searched a bit. It looks like
gcc-4.0 supports building parts of itself in this mode?
Though only C and Java stuff right now, not C++.
Related keywords are
--enable-intermodule (see the thread http://gcc.gnu.org/ml/gcc-patches/2003-07/msg01146.html)
--enable-libgcj-multifile (see http://gcc.gnu.org/ml/java-patches/2003-q3/msg00658.html)
and IMA (intermodule analysis). It seems that just listing multiple
source files on the command line is enough to get it to happen?)
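(If that's right, a hypothetical invocation would look something like
this; my guess from the threads above, not a tested recipe:

    gcc -O2 a.c b.c c.c -o prog

i.e. all three translation units handed to one compilation, so calls
between them can be analyzed directly.)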
>> But that would prevent us from doing analysis for functions which are
>> called in file A, but are defined in some other file B.
>
> You just have to make conservative assumptions, of course.
> You almost *never* have the whole program at once, except in
True, but hey, if you really need that one server to run
fast, you might actually feed the whole program to the
compiler at once. Or at least a big part of it.
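To make "conservative assumptions" concrete, here's a toy example
(all names made up) of what the compiler is up against when it can
only see one file:

    /* a.c, compiled on its own */
    int counter;
    void helper(void);    /* defined in some other file b.c */

    int f(void)
    {
        counter = 1;
        helper();         /* body unknown here, so assume it may
                             read or write counter */
        return counter;   /* hence this load can't be folded to 1 */
    }

If helper() is visible in the same compilation and never touches
counter, the analysis can fold the return value to a constant.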
>> Moreover, even if we are able to store information on a large number
>> of functions, it would cost heavily in memory, and is therefore not
>> scalable.
Uh, not necessarily.
Speaking as a user, it's ok if whole-program optimization takes more memory
than normal compilation. (Though you may end up needing
a 64 bit processor to use it on anything really big.)
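One reason it needn't blow up: a summary-based analysis doesn't have
to keep whole function bodies around, just per-function summaries.
A made-up sketch of such a summary, one bit per global variable:

    /* hypothetical per-function mod/ref summary */
    struct fn_summary {
        unsigned char *reads;    /* globals the function may read  */
        unsigned char *writes;   /* globals the function may write */
    };

At, say, 10,000 functions and 1,000 globals, that's roughly
10000 * 2 * 125 bytes, a couple of megabytes, not gigabytes.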
Trying to get a job as a C++ developer? See http://kegel.com/academy/getting-hired.html