[Bug rtl-optimization/80960] [7/8/9 Regression] Huge memory use when compiling a very large test case

rguenther at suse dot de gcc-bugzilla@gcc.gnu.org
Fri Apr 5 07:44:00 GMT 2019


https://gcc.gnu.org/bugzilla/show_bug.cgi?id=80960

--- Comment #18 from rguenther at suse dot de <rguenther at suse dot de> ---
On Thu, 4 Apr 2019, segher at gcc dot gnu.org wrote:

> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=80960
> 
> --- Comment #17 from Segher Boessenkool <segher at gcc dot gnu.org> ---
> (In reply to rguenther@suse.de from comment #16)
> 
> > Actually it already does two walks over the whole function in
> > combine_instructions it seems, so recording # insns per EBB should
> > be possible?  (if that's really the key metric causing the issue)
> 
> The average distance between a set and its first use is the key metric.
> The numbers make it feel like that is pretty constrained here still
> (I haven't run numbers on it), but 100 is already a lot if there are
> 1M insns in the block (or whatever).  None of the numbers are terrible
> on their own, but combine still takes up quite a chunk of time.

Hmm, so if we had numbered stmts in an EBB we could check the
distance between a set and its use and not combine when that gets too big?

> Combine also makes garbage for every try, and none of that is cleaned
> up during combine.  Maybe we should change that?  (I can try next week).

Not sure how easy that is but yes, it might help quite a bit due
to less churn on the cache.  Just ggc_free()ing the "toplevel"
RTX of failed attempts might already help a bit.  It's of course
kind of a hack then, but with an appropriate comment it would be
fine I guess (recursively ggc_free()ing might run into sharing
issues, so that probably won't work).
