This is the mail archive of the gcc@gcc.gnu.org
mailing list for the GCC project.
Re: Does gcc automatically lower optimization level for very large routines?
- From: Segher Boessenkool <segher at kernel dot crashing dot org>
- To: Andi Kleen <ak at linux dot intel dot com>
- Cc: Qing Zhao <QING dot ZHAO at ORACLE dot COM>, gcc at gcc dot gnu dot org
- Date: Wed, 1 Jan 2020 09:20:01 -0600
- Subject: Re: Does gcc automatically lower optimization level for very large routines?
- References: <9A7132EC-5B85-4AF1-A6B9-7DAB0FA11759@ORACLE.COM> <email@example.com>
On Tue, Dec 31, 2019 at 09:25:01PM -0800, Andi Kleen wrote:
> Would be useful to figure out in more details where the memory
> consumption goes in your test case.
> Unfortunately gcc doesn't have a good general heap profiler,
> but I usually do the following (if you're on Linux). Whoever causes
> the most page faults likely allocates the most memory.
> perf record --call-graph dwarf -e page-faults gcc ...
> perf report --no-children --percent-limit 5 --stdio > file.txt
> and attach file.txt to a bug in Bugzilla.
There is also the last column of -ftime-report (the amount of GC memory
allocated in each pass); it is often helpful.