optimization/10155: [3.3/3.4 regression] gcc -O2/-O3 uses excessive amount of memory
Zdenek Dvorak
rakdver@atrey.karlin.mff.cuni.cz
Fri May 2 17:38:00 GMT 2003
Hello,
> > http://gcc.gnu.org/cgi-bin/gnatsweb.pl?cmd=view%20audit-trail&database=gcc&pr=10155
> >
> > How much of this can be explained with Kaveh's physmem
> > patch? IIRC that patch is not in 3.2, and the increase
> > in memory consumption at -O2 may be a result of that
> > patch.
> >
> > The increase in memory at -O3 is a result of unit at a
> > time compilation (which is why I CC you, Honza). You
> > can check that by compiling with -O2 + all flags enabled
> > at -O3 except -funit-at-a-time:
> >
> > ./cc1 10155.c -quiet -ftime-report -O2
> > TOTAL : 24.74 0.74 26.24
> >
> > ./cc1 10155.c -quiet -ftime-report -O2 -funswitch-loops
> > -frename-registers -finline-functions
> > TOTAL : 31.49 0.59 33.87
> >
> > Loop unswitching is responsible for most of the compile
> > time increase.
> Zdenek, this really ought not to happen, what is going on?
I haven't tested the loop optimizer against a program consisting
of several thousand 3-line loops, so I am not that surprised
that something went wrong. I will check where the problem is.
Zdenek
> > Now add -funit-at-a-time, and kabooooom! you lose.
> >
> > Apparently unit-at-a-time should still honor some size
> > constraints, and it does not in its current form.
>
> It is more likely a problem of the inlining heuristics than of
> unit-at-a-time (i.e. unit-at-a-time exposes more inlining
> opportunities, but it is a mistake of the inlining heuristics to
> take so many of them).
> Or perhaps we manage to flatten functions called once into one
> extraordinarily large function body and give up on it. I will try to
> investigate it, but my current priority is to get unit-at-a-time working
> on C++. Fixing this testcase should be easy then :)
>
> Honza
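For reference, a testcase of the shape Zdenek describes (one translation unit with several thousand small loops) can be generated with a sketch like the following. The loop count, array size, and loop body here are assumptions for illustration, not the actual attachment from PR 10155:

```shell
#!/bin/sh
# Hypothetical generator for a stress testcase like the one discussed:
# a single function containing thousands of tiny loops. N and the loop
# body are made up; the real PR 10155 testcase may differ.
N=2000
OUT=loops.c
{
  echo 'int a[100];'
  echo ''
  echo 'int main (void)'
  echo '{'
  echo '  int i;'
  n=1
  while [ "$n" -le "$N" ]; do
    echo '  for (i = 0; i < 100; i++)'
    echo "    a[i] += $n;"
    n=$((n + 1))
  done
  echo '  return a[0];'
  echo '}'
} > "$OUT"
```

Compiling the generated file with plain -O2 versus -O2 plus -funswitch-loops -frename-registers -finline-functions -funit-at-a-time should show whether the time and memory blowup reproduces.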
> >
> > Greetz
> > Steven