This is the mail archive of the gcc-bugs@gcc.gnu.org mailing list for the GCC project.
Re: optimization/10155: [3.3/3.4 regression] gcc -O2/-O3 uses excessive amount of memory
- From: Zdenek Dvorak <rakdver at atrey dot karlin dot mff dot cuni dot cz>
- To: Jan Hubicka <jh at suse dot cz>
- Cc: Steven Bosscher <s dot bosscher at student dot tudelft dot nl>, p dot van-hoof at qub dot ac dot uk, gcc-gnats at gcc dot gnu dot org, gcc-bugs at gcc dot gnu dot org, nobody at gcc dot gnu dot org, gcc-prs at gcc dot gnu dot org, jh at suse dot de
- Date: Fri, 2 May 2003 19:38:39 +0200
- Subject: Re: optimization/10155: [3.3/3.4 regression] gcc -O2/-O3 uses excessive amount of memory
- References: <3EB24F67.8010207@student.tudelft.nl> <20030502132516.GC8780@kam.mff.cuni.cz>
Hello,
> > http://gcc.gnu.org/cgi-bin/gnatsweb.pl?cmd=view%20audit-trail&database=gcc&pr=10155
> >
> > How much of this can be explained with Kaveh's physmem
> > patch? IIRC that patch is not in 3.2, and the increase
> > in memory consumption at -O2 may be a result of that
> > patch.
> >
> > The increase in memory at -O3 is a result of unit at a
> > time compilation (which is why I CC you, Honza). You
> > can check that by compiling with -O2 + all flags enabled
> > at -O3 except -funit-at-a-time:
> >
> > ./cc1 10155.c -quiet -ftime-report -O2
> > TOTAL : 24.74 0.74 26.24
> >
> > ./cc1 10155.c -quiet -ftime-report -O2 -funswitch-loops
> > -frename-registers -finline-functions
> > TOTAL : 31.49 0.59 33.87
> >
> > Loop unswitching is responsible for most of the compile
> Zdenek, this really ought not to happen; what is going on?
I haven't tested the loop optimizer against a program consisting
of several thousand 3-line loops, so I am not that surprised
that something went wrong. I will check where the problem is.
Zdenek
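(For reference, a stress input of the shape described above can be generated with a short script. This is only a sketch; the loop count, array size, and file name are illustrative, not the original 10155 testcase.)

```shell
# Generate a C file containing thousands of small 3-line loops,
# the kind of input that stresses the loop optimizer and inliner.
# All names and counts here are illustrative assumptions.
{
  echo 'int a[100];'
  echo 'int main(void) {'
  echo '  int i, j;'
  for j in $(seq 1 2000); do
    echo "  for (i = 0; i < 100; i++)"
    echo "    a[i] += $j;"
  done
  echo '  return a[0];'
  echo '}'
} > loops.c
wc -l < loops.c
```

One would then feed loops.c to cc1 with -ftime-report, as in the commands quoted above, to compare -O2 against -O2 plus the -O3-only flags.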
> > time increase.
> > Now add -funit-at-a-time, and kabooooom! you lose.
> >
> > Apparently unit-at-a-time should still honor some size
> > constraints, and it does not in its current form.
>
> It is more likely a problem of inlining heuristics than of unit-at-a-time
> (i.e. unit-at-a-time exposes more inlining opportunities, but it is an
> inlining-heuristic mistake to take so many of them).
> Or perhaps we manage to flatten functions called once into one
> extraordinarily large function body and give up on it. I will try to
> investigate it, but my current priority is to get unit-at-a-time working
> on C++. Fixing this testcase should be easy then :)
>
> Honza
> >
> > Greetz
> > Steven
> >
> >