This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.


Re: compiling very large functions.


On 11/4/06, Kenneth Zadeck <zadeck@naturalbridge.com> wrote:
Richard Guenther wrote:
> On 11/4/06, Kenneth Zadeck <zadeck@naturalbridge.com> wrote:
>> Richard Guenther wrote:
>> > On 11/4/06, Kenneth Zadeck <zadeck@naturalbridge.com> wrote:
>> >> I think that it is time that we in the GCC community took some
>> >> time to address the problem of compiling very large functions in
>> >> a somewhat systematic manner.
>> >>
>> >> GCC has two competing interests here: it needs to be able to
>> >> provide state-of-the-art optimization for modest-sized functions,
>> >> and it needs to be able to properly process very large
>> >> machine-generated functions using reasonable resources.
>> >>
>> >> I believe that the default behavior for the compiler should be
>> >> that certain non-essential passes are skipped when a very large
>> >> function is encountered.
>> >>
>> >> There are two problems here:
>> >>
>> >> 1) defining the set of optimizations that need to be skipped.
>> >> 2) defining the set of functions that trigger the special processing.
>> >>
>> >>
>> >> For (1) I would propose that three measures be made of each
>> >> function.  These measures should be made before inlining occurs.
>> >> The three measures are the number of variables, the number of
>> >> statements, and the number of basic blocks.
>> >
>> > Why before inlining?  These three numbers can change quite
>> > significantly as a function passes through the pass pipeline.  So
>> > we should try to keep them up-to-date to have an accurate
>> > measurement.
>> >
>> I am flexible here.  We may want inlining to be able to update the
>> numbers.  However, I think that we should drive the inlining
>> aggressiveness based on these numbers.
>
> Well, for example jump threading and tail duplication can cause these
> numbers to change significantly.  Also, CFG instrumentation and PRE
> can increase the BB count.  So we need to deal with cases where an
> optimization produces an overly large number of basic blocks or
> instructions (e.g., by throttling those passes properly).
>
I lean toward leaving the numbers static even if they do increase as
time goes by.  Otherwise you get two effects: the earlier
optimizations get to run more, and you get weird non-linear step
functions where small changes in some upstream function affect the
downstream ones.
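
To make the gating scheme under discussion concrete, here is a minimal
standalone C sketch: the three measures are recorded once at lowering
time and then compared against fixed thresholds to decide whether a
non-essential pass runs.  All identifiers and threshold values below
are invented for illustration; none of them are actual GCC names.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical per-function record; in GCC the data would live in
   struct function.  */
struct fn_measures
{
  int n_vars;         /* variables, counted once at lowering */
  int n_stmts;        /* statements, counted once at lowering */
  int n_basic_blocks; /* basic blocks, counted once at lowering */
};

/* Illustrative thresholds; in GCC these would presumably be --param
   knobs so that users can tune them.  */
#define MAX_VARS_FOR_FULL_OPT  10000
#define MAX_STMTS_FOR_FULL_OPT 50000
#define MAX_BBS_FOR_FULL_OPT   20000

/* Gate for a non-essential pass: run it only when every frozen
   measure is below its threshold.  */
static bool
gate_nonessential_pass (const struct fn_measures *m)
{
  return m->n_vars < MAX_VARS_FOR_FULL_OPT
         && m->n_stmts < MAX_STMTS_FOR_FULL_OPT
         && m->n_basic_blocks < MAX_BBS_FOR_FULL_OPT;
}

int
main (void)
{
  struct fn_measures small = { 50, 300, 40 };
  struct fn_measures huge = { 200000, 900000, 150000 };

  printf ("small function: %s\n",
          gate_nonessential_pass (&small) ? "run pass" : "skip pass");
  printf ("huge function:  %s\n",
          gate_nonessential_pass (&huge) ? "run pass" : "skip pass");
  return 0;
}

Because the measures are frozen at lowering, the gate's answer cannot
flip-flop as passes grow or shrink the function, which is exactly the
non-linearity the paragraph above worries about.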

Ok, I guess we can easily flag each function as having

 - many BBs
 - big BBs
 - complex CFG (many edges)

and set these flags at CFG construction time during the lowering phase
(which is after the early inlining pass, I believe).

The number of basic blocks is kept up-to-date during optimization; the
other numbers would need to be re-generated if we want to keep them
up-to-date.
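
As a sketch of what that re-generation involves, with invented
miniature CFG types standing in for GCC's real ones: the statement
total has to be recomputed by walking every block, whereas the block
count can simply be bumped as blocks are created and destroyed.

/* Invented miniature CFG types; nothing here is a real GCC
   structure.  */
struct bb
{
  int n_stmts;      /* statements in this block */
  struct bb *next;  /* next block in the function */
};

/* Recompute the per-function statement count from scratch.  */
static int
recount_stmts (const struct bb *first)
{
  int n = 0;
  for (const struct bb *b = first; b; b = b->next)
    n += b->n_stmts;
  return n;
}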

But using just three (or even only one?) flag bits, we can easily fit
this information in struct function.
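
A sketch of how those flags might be packed into a handful of bits;
the field names and thresholds are invented for illustration and are
not taken from GCC's real struct function:

/* Hypothetical flag bits, set once at CFG construction during
   lowering and consulted later by pass gates.  */
struct function_size_flags
{
  unsigned int many_bbs : 1;    /* CFG has very many basic blocks */
  unsigned int big_bbs : 1;     /* individual blocks are very large */
  unsigned int complex_cfg : 1; /* many edges relative to blocks */
};

static void
set_size_flags (struct function_size_flags *f,
                int n_bbs, int max_bb_stmts, int n_edges)
{
  /* Placeholder thresholds; real ones would be tunable.  */
  f->many_bbs = n_bbs > 20000;
  f->big_bbs = max_bb_stmts > 10000;
  f->complex_cfg = n_edges > 4 * n_bbs;
}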

I also like the idea of a "hot" function flag to be able to
dynamically switch between optimize_size and !optimize_size in the
tree optimizers.  Profile-based inlining already tries to follow that
route.
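
A sketch of the kind of per-function policy that flag suggests; the
helper and fields below are invented for illustration, not actual GCC
interfaces:

#include <stdbool.h>

/* Hypothetical per-function state: the global -Os setting plus a
   profile-derived hotness flag.  */
struct fn_opt_state
{
  bool optimize_size_globally; /* was -Os given? */
  bool is_hot;                 /* profile says this function is hot */
};

/* Purely illustrative policy: a hot function is optimized for speed
   even under -Os; everything else follows the global setting.  */
static bool
optimize_fn_for_size (const struct fn_opt_state *s)
{
  if (s->is_hot)
    return false;
  return s->optimize_size_globally;
}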

Richard.

