This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.



Re: Does gcc automatically lower optimization level for very large routines?


Trying to plan memory consumption ahead of the work conflicts with the nature
of graph traversal. Estimation may work very well for something simple like
linear or log-linear behavior, but many compiler algorithms are known to be
polynomial or exponential (or even worse in the case of bugs). So estimation
is a nice step forward, but only fallback & recovery can ultimately solve
the problem.
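
For illustration, a very rough, self-contained sketch of the fallback &
recovery idea (toy code only, not how GCC is currently structured; the pass
names and the quadratic cost model are invented for the example):

// Sketch of "fallback & recover": try the aggressive analysis under a
// memory budget, and if it cannot fit, recover by running a cheaper,
// less precise variant instead of aborting the whole compilation.
#include <cstddef>
#include <cstdio>
#include <new>
#include <vector>

// Pretend "aggressive" analysis whose memory use is quadratic in routine size.
static bool
aggressive_analysis (std::size_t n_insns, std::size_t budget_bytes)
{
  std::size_t need = n_insns * n_insns * sizeof (int);   // rough estimate
  if (need > budget_bytes)
    throw std::bad_alloc ();                              // give up cleanly
  std::vector<int> bitmaps (n_insns * n_insns, 0);        // do the real work...
  return true;
}

// Cheap analysis that always fits in memory.
static void
conservative_analysis (std::size_t n_insns)
{
  std::vector<int> summary (n_insns, 0);
}

static void
compile_routine (std::size_t n_insns)
{
  const std::size_t budget = 64u * 1024 * 1024;           // 64 MB cap
  try
    {
      aggressive_analysis (n_insns, budget);
      std::printf ("%zu insns: aggressive analysis used\n", n_insns);
      return;
    }
  catch (const std::bad_alloc &)
    {
      // Running out of budget means "too complex", not a fatal error.
    }
  conservative_analysis (n_insns);
  std::printf ("%zu insns: fell back to conservative analysis\n", n_insns);
}

int
main ()
{
  compile_routine (1000);      // small routine: aggressive pass fits
  compile_routine (100000);    // huge routine: quadratic cost blows the budget
}

The point is only that exhausting the budget in the aggressive pass is
treated as "this routine is too complex" rather than as a fatal error, so
compilation of the routine still succeeds, just with a weaker analysis.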

On Fri, Dec 20, 2019 at 02:32, David Edelsohn <dje.gcc@gmail.com> wrote:

> On Thu, Dec 19, 2019 at 7:41 PM Jeff Law <law@redhat.com> wrote:
> >
> > On Thu, 2019-12-19 at 17:06 -0600, Qing Zhao wrote:
> > > Hi, Dmitry,
> > >
> > > Thanks for the response.
> > >
> > > Yes, routine size alone cannot determine the complexity of the routine.
> Different compiler analyses might use different formulas with multiple
> parameters to compute their complexity.
> > >
> > > However, the common issue is: when the complexity of a specific
> routine for a specific compiler analysis exceeds a threshold, the compiler
> might consume all the available memory and abort the compilation.
> > >
> > > Therefore, in order to avoid a failed compilation due to running out of
> memory, some compilers set a threshold for the complexity of a specific
> compiler analysis (for example, the more aggressive data flow analysis);
> when the threshold is exceeded, that aggressive analysis is turned off for
> the routine, or the optimization level is lowered for the routine (with a
> warning issued at compile time about the adjustment).
> > >
> > > I am wondering whether GCC has such a capability, or any option to
> increase or decrease the threshold for some of the common analyses (for
> example, data flow)?
> > >
> > There are various places where if we hit a limit, then we throttle
> > optimization.  But it's not done consistently or pervasively.
> >
> > Those limits are typically around things like CFG complexity.
> >
> > We do _not_ try to recover after an out of memory error, or anything
> > like that.
>
> I have mentioned a few times before that IBM XL Compiler allows the
> user to specify the maximum memory utilization for the compiler
> (including "unlimited").  The compiler optimization passes estimate
> the memory usage for the data structures of each optimization pass.
> If the memory usage is too high, the pass attempts to sub-divide the
> region and calculates the estimated memory usage again, recursing
> until it can apply the optimization within the memory limit or the
> optimization would not be effective.  IBM XL Compiler does not try to
> recover from an out of memory error, but it explicitly considers
> memory use of optimization passes.  It does not adjust the complexity
> of the optimization, but it does adjust the size of the region or
> other parameters to reduce the memory usage of the data structures for
> an optimization.
>
> Thanks, David
>
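
For illustration, a schematic sketch of the estimate-then-sub-divide strategy
David describes might look like the following. Everything here is
hypothetical (the quadratic memory estimate, the profitability cutoff, the
region type); it is not based on XL or GCC source, and it only shows the
shape of the recursion: if the estimated footprint exceeds the user's limit,
split the region and retry on the halves.

// Sketch of "estimate, then sub-divide the region and retry"
// (hypothetical types and numbers throughout).
#include <cstddef>
#include <cstdio>

struct region
{
  std::size_t n_insns;   // size of the code region under consideration
};

// Assume the pass can estimate its data-structure footprint up front;
// here we pretend it is quadratic in the region size.
static std::size_t
estimated_bytes (const region &r)
{
  return r.n_insns * r.n_insns * sizeof (int);
}

// Pretend the optimization stops paying off below some region size.
static bool
worth_optimizing (const region &r)
{
  return r.n_insns >= 32;
}

static void
optimize_region (const region &r, std::size_t memory_limit)
{
  if (!worth_optimizing (r))
    {
      std::printf ("region of %zu insns: skipped (not profitable)\n",
                   r.n_insns);
      return;
    }
  if (estimated_bytes (r) <= memory_limit)
    {
      std::printf ("region of %zu insns: optimized within limit\n",
                   r.n_insns);
      return;                   // run the real transformation here
    }
  // Too big: split the region and recurse until the pieces fit the limit
  // or are no longer worth optimizing.
  region lo { r.n_insns / 2 };
  region hi { r.n_insns - lo.n_insns };
  optimize_region (lo, memory_limit);
  optimize_region (hi, memory_limit);
}

int
main ()
{
  const std::size_t memory_limit = 64u * 1024 * 1024;   // user-specified cap
  optimize_region (region { 100000 }, memory_limit);    // a very large routine
}

In GCC today, the closest analogue is the set of per-pass --param limits
(gcc --help=params lists them for a given release); for example,
max-gcse-memory caps the memory the GCSE pass is allowed to require before
it skips the optimization, though the exact parameters, units, and defaults
vary between releases.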

