


Re: compiling very large functions.


Paolo Bonzini wrote:
>
>> While I agree with you, I think that there are so many things we are
>> already trying to address, that this one can wait.  I think we've
>> been doing a very good job on large functions too, and I believe that
>> authors of very large functions are just getting not only what they
>> deserve, but actually what they expect: large compile times
>> (superlinear).
>
> Not to mention that these huge functions are usually central to the
> program.  If GCC decided that it is not worth optimizing the
> machine-generated bytecode interpreter of GNU Smalltalk, for example,
> I might as well rewrite it in assembly (or as a JIT compiler).  Same
> for interpret.cc in libjava, though it is a tad smaller than GNU
> Smalltalk's interpreter.
>
> Unlike the authors of other VMs, I have no problem writing code so
> that the *latest* version of GCC will do its best, instead of
> complaining that GCC compiles my code worse on every release.  So, I
> am ok with GCC doing stupid things because of bugs that I/we can fix,
> but not with GCC just giving up optimization on code that has always
> been compiled perfectly (in one/two minutes for about 30,000 lines of
> machine-generated code, despite being chock-full of computed gotos),
> that *can* be optimized very well, and that is central to the
> performance of a program.
>
> Paolo
I actually think that your Smalltalk example is the exception and not
the rule.  I would guess that the vast majority of very large functions
are machine-generated simulations where the optimizer most likely
provides little benefit.

In the case of dataflow, reaching defs is much more expensive than
simple liveness, not because there are more bit vectors (there are
exactly the same number) but because those bit vectors hold an order of
magnitude more bits than the liveness vectors do, and it just takes
more time and space to move them around.
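
To make the size difference concrete, here is a rough back-of-the-envelope
sketch.  The block, register, and definition counts are made up, and the
layout is not GCC's actual data structures; the point is only that both
problems keep the same handful of vectors per basic block, but liveness
vectors have one bit per register while reaching-defs vectors have one bit
per definition site:

  #include <stdio.h>

  int
  main (void)
  {
    /* Hypothetical numbers for a large machine-generated function.  */
    unsigned long long n_blocks = 5000;     /* basic blocks                    */
    unsigned long long n_regs = 20000;      /* pseudo registers (liveness)     */
    unsigned long long n_defs = 200000;     /* definition sites (reaching defs) */
    unsigned long long vecs_per_block = 4;  /* e.g. in/out/gen/kill per block  */

    /* Liveness: each vector has one bit per register.  */
    unsigned long long live_bits = n_blocks * vecs_per_block * n_regs;

    /* Reaching defs: each vector has one bit per definition.  */
    unsigned long long rd_bits = n_blocks * vecs_per_block * n_defs;

    printf ("liveness:      %llu bits (~%llu MB)\n",
            live_bits, live_bits / 8 / (1024 * 1024));
    printf ("reaching defs: %llu bits (~%llu MB)\n",
            rd_bits, rd_bits / 8 / (1024 * 1024));
    return 0;
  }

With those made-up counts the reaching-defs vectors are ten times wider,
which is the extra time and space I am talking about.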

The thing is that even as memories get larger, something has to give.
There are and will always be programs that are too large for the most
aggressive techniques, and my proposal is simply a way to gracefully
shed the most expensive techniques as programs get very large.
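
For what it's worth, here is a minimal sketch of the kind of gating I
mean; the struct, the function, and the cutoff are hypothetical, not an
existing GCC interface:

  #include <stdio.h>

  /* Made-up cutoff beyond which the expensive problem is shed.  */
  #define MAX_BLOCKS_FOR_REACHING_DEFS 20000

  struct function_stats
  {
    unsigned n_basic_blocks;
  };

  /* Return nonzero if the full reaching-definitions problem is worth
     running on this function; past the cutoff the compiler would fall
     back to plain liveness and keep going instead of blowing up.  */
  static int
  use_reaching_defs_p (const struct function_stats *fn)
  {
    return fn->n_basic_blocks <= MAX_BLOCKS_FOR_REACHING_DEFS;
  }

  int
  main (void)
  {
    struct function_stats small = { 800 };
    struct function_stats huge = { 120000 };

    printf ("small function: %s\n",
            use_reaching_defs_p (&small) ? "full reaching defs" : "liveness only");
    printf ("huge function:  %s\n",
            use_reaching_defs_p (&huge) ? "full reaching defs" : "liveness only");
    return 0;
  }

The cheap problem always runs, so the function still gets compiled and
optimized, just with less precise information once it crosses the cutoff.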

The alternative is to just shelve these bugs and tell the submitter
not to use optimization on them.  I do not claim to know what the
right approach is.

kenny

