This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.
Re: [GCC 3.0] Bad regression, binary size
- To: <dewar at gnat dot com>
- Subject: Re: [GCC 3.0] Bad regression, binary size
- From: Linus Torvalds <torvalds at transmeta dot com>
- Date: Tue, 24 Jul 2001 09:25:33 -0700 (PDT)
- cc: <jthorn at galileo dot thp dot univie dot ac dot at>, <gcc at gcc dot gnu dot org>
On Mon, 23 Jul 2001 dewar@gnat.com wrote:
>
> <<My argument is really the same, but with a twist: keep it alive, but
> don't make it the default if it makes non-FP code bigger.
> >>
>
> It is always tricky to argue about defaults. One critical issue with defaults
> is to make benchmarks work better out of the box, but the default of -O0
> seriously undermines this design criterion in any case.
I suspect that everybody who does benchmarking is so aware of the -O flag
that the "default" of not optimizing is not really a default at all except
in a very theoretical sense. Although maybe gcc could make the default -O
a bit stronger.
But talking about the -O flag, I _really_ think that the fact that -O3
generally generates much worse code than -O2 is non-intuitive and can
really throw some people.
Why does -O3 imply "-finline-functions", when it's been shown again and
again to just make things worse? Now the current gcc seems to have fixed
this to some degree by just making "-finline-functions" much weaker
(good), but still, shouldn't we always have the rule that higher
optimization numbers tend to make code run faster for at least a
meaningful subset of the word "code" ;)
If somebody wants to pessimize their code by inlining everything, let them
use -O-1 ("inline everything, but don't bother with things like CSE" ;^),
or just explicitly say "-finline".
This certainly threw me the first time I used gcc. I remember how the
original Linux makefiles used "-O6" because some place had (incorrectly)
mentioned that -Ox modified gcc behaviour up to the value 6, and I assumed
that -O6 would be better than -O2.
I think it would make much more sense if -O3 meant everything that -O2
does, plus the things we don't do because it makes debugging harder (ie
-fomit-frame-pointer and similar). That actually speeds things up and
makes code visibly smaller at times. Unlike the current fairly strange
thing -O3 does...
(Yeah, I know, we already turn on -fomit-frame-pointer for -O, but only on
the few targets where it doesn't hurt debugging. I'm just saying that
maybe we should do it for everything once you hit -O3, and then gently
warn about the combination of -O3 and -g).
Linus