This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.



Re: Does gcc automatically lower optimization level for very large routines?


Hi, Dmitry,

Thanks for the response.

Yes, routine size alone cannot determine the complexity of a routine. Different compiler analyses might use different formulas, with multiple parameters, to compute that complexity.

However, the common issue is the same: when the complexity of a routine, as seen by a specific compiler analysis, exceeds a threshold, the compiler might consume all the available memory and abort the compilation.

Therefore, to avoid compilations that fail from running out of memory, some compilers set a threshold on the complexity handled by a specific analysis (for example, the more aggressive data flow analysis). When the threshold is exceeded, that aggressive analysis is turned off for the routine, or the optimization level is lowered for that routine (with a warning issued at compile time about the adjustment).

I am wondering whether GCC has such a capability, or whether any option is provided to increase or decrease the threshold for some of the common analyses (for example, data flow)?
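
(For context, what I have in mind is something along the lines of GCC's existing --param knobs; I do not know whether any of them bounds the DF chain problem shown in the backtrace below. The inlining parameter in the second command is only there to illustrate the syntax, and big_file.c is a made-up file name:)

  # list the tunable thresholds GCC exposes, with their values
  gcc --help=params

  # generic form for raising or lowering one of them
  gcc -O2 --param max-inline-insns-single=100 -c big_file.c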

Thanks.

Qing

> On Dec 19, 2019, at 4:50 PM, Dmitry Mikushin <dmitry@kernelgen.org> wrote:
> 
> This issue is well-known in research/scientific software. The problem of
> a compiler hang or RAM overconsumption is actually not about the routine
> size, but about overly complicated control flow. When optimizing, the
> compiler traverses the control flow graph, which may have the misfortune
> of exploding in complexity. So you may want to check whether your routine
> relies heavily on nested cascades of "if ... else" or gotos. That is,
> routine size is not a good metric to catch this behavior. GCC could
> instead attempt a "reversible" strategy of optimizations, stopping and
> undoing those that go beyond a certain threshold.
> 
> Kind regards,
> - Dmitry.
> 
> 
> On Thu, Dec 19, 2019 at 17:38, Qing Zhao <QING.ZHAO@oracle.com> wrote:
> 
>> Hi,
>> 
>> When using GCC to compile a very large routine at -O2, the compiler ran
>> out of memory and the compilation failed.  (-O1 is okay.)
>> 
>> As I checked within gdb, when “cc1” was consuming around 95% of the
>> memory, it was at:
>> 
>> (gdb) where
>> #0  0x0000000000ddbcb3 in df_chain_create (src=0x631006480f08,
>>    dst=0x63100f306288) at ../../gcc-8.2.1-20180905/gcc/df-problems.c:2267
>> #1  0x0000000000dddd1a in df_chain_create_bb_process_use (
>>    local_rd=0x7ffc109bfaf0, use=0x63100f306288, top_flag=0)
>>    at ../../gcc-8.2.1-20180905/gcc/df-problems.c:2441
>> #2  0x0000000000dde5a7 in df_chain_create_bb (bb_index=16413)
>>    at ../../gcc-8.2.1-20180905/gcc/df-problems.c:2490
>> #3  0x0000000000ddeaa9 in df_chain_finalize (all_blocks=0x63100097ac28)
>>    at ../../gcc-8.2.1-20180905/gcc/df-problems.c:2519
>> #4  0x0000000000dbe95e in df_analyze_problem (dflow=0x60600027f740,
>>    blocks_to_consider=0x63100097ac28, postorder=0x7f23761f1800,
>>    n_blocks=40768) at ../../gcc-8.2.1-20180905/gcc/df-core.c:1179
>> #5  0x0000000000dbedac in df_analyze_1 ()
>> ….
>> 
>> The routine being compiled is very big, about 119258 lines of code.
>> I suspect that GCC’s data flow analysis might not handle very large
>> routines well: it consumes too much memory and therefore runs out of
>> memory on very big routines.
>> 
>> For now, I found one of GCC’s source-level pragmas,
>> 
>> #pragma GCC optimize ("O1")
>> 
>> and added it before the large routine (together with a #pragma GCC
>> reset_options after the routine); this works around the out-of-memory
>> issue for now.
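>> 
>> (For reference, the placement looks roughly like this; the routine name
>> below is made up:)
>> 
>>   #pragma GCC optimize ("O1")
>> 
>>   /* the very large routine, ~119258 lines */
>>   void huge_routine (void)
>>   {
>>     /* ... */
>>   }
>> 
>>   #pragma GCC reset_options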
>> 
>> However, manually locating large routines is time consuming. I am
>> wondering whether GCC can automatically detect large routines and lower
>> the optimization level for those routines. Or are there any internal
>> parameters inside GCC’s data flow analysis that compute the complexity
>> of a routine and, if it is very big, turn off the aggressive analysis
>> automatically?  Or is any option provided to the end user to control the
>> aggressive data flow analysis manually?
>> 
>> 
>> Thanks a lot for any help.
>> 
>> Qing

