
Re: Does gcc automatically lower optimization level for very large routines?


Thanks a lot for all this help.

So, currently, if a GCC compilation aborts for this reason, what's the best way for the user to resolve it?
I added #pragma GCC optimize ("O1") to the large routine to work around the issue.
Is there a better way to do it?
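
Concretely, the workaround looks like this (a minimal sketch; the routine name is made up, and the push_options/pop_options pair just keeps the override from leaking into the rest of the file):

  #pragma GCC push_options
  #pragma GCC optimize ("O1")          /* lower the optimization level for this routine only */
  void huge_generated_routine (void)   /* hypothetical name */
  {
    /* ... very large, machine-generated body ... */
  }
  #pragma GCC pop_options

The same effect can be had per function with __attribute__ ((optimize ("O1"))), which avoids the pragma bracketing.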

Is GCC planning to handle such issues better in the future?

thanks.

Qing

> On Dec 20, 2019, at 5:13 AM, Richard Biener <richard.guenther@gmail.com> wrote:
> 
> On December 20, 2019 1:41:19 AM GMT+01:00, Jeff Law <law@redhat.com> wrote:
>> On Thu, 2019-12-19 at 17:06 -0600, Qing Zhao wrote:
>>> Hi, Dmitry,
>>> 
>>> Thanks for the response.
>>> 
>>> Yes, routine size alone cannot determine the complexity of a
>>> routine. Different compiler analyses might use different formulas,
>>> with multiple parameters, to compute complexity.
>>> 
>>> However, the common issue is: when the complexity of a specific
>>> routine, for a specific compiler analysis, exceeds a threshold, the
>>> compiler might consume all the available memory and abort the
>>> compilation.
>>> 
>>> Therefore, in order to avoid a failed compilation due to running out
>>> of memory, some compilers set a threshold for the complexity of a
>>> specific analysis (for example, the more aggressive data-flow
>>> analyses); when the threshold is exceeded, that aggressive analysis
>>> is turned off for the routine, or the optimization level is lowered
>>> for the routine (with a warning issued at compile time about the
>>> adjustment).
>>> 
>>> I am wondering whether GCC has such a capability? Or is there any
>>> option provided to increase or decrease the threshold for some of the
>>> common analyses (for example, data flow)?
>>> 
>> There are various places where, if we hit a limit, we throttle
>> optimization.  But it's not done consistently or pervasively.
>> 
>> Those limits are typically around things like CFG complexity.
> 
> Note we also have (not consistently used) -Wdisabled-optimization, which is supposed to warn when we run into this kind of limiting, telling the user which knob they might be able to tune.
> 
> Richard. 
> 
>> We do _not_ try to recover after an out-of-memory error, or anything
>> like that.
>> 
>> jeff
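
For completeness, a hypothetical command line combining Richard's pointer with one of the existing --param knobs might look like the following. This is a sketch only: -Wdisabled-optimization is a real warning flag, but the parameter shown (max-gcse-memory) and its value were picked purely for illustration and are not a recommendation from this thread; gcc --help=params lists the available knobs and their defaults for a given release.

  # Warn whenever an optimization is skipped because a limit was hit,
  # and raise the cap on memory that GCSE may allocate before the pass
  # is skipped for a function.
  gcc -O2 -Wdisabled-optimization --param max-gcse-memory=262144 big_routine.c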

