This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.
Re: changing "configure" to default to "gcc -g -O2 -fwrapv ..."
- From: Ian Lance Taylor <iant at google dot com>
- To: Paul Eggert <eggert at CS dot UCLA dot EDU>
- Cc: autoconf-patches at gnu dot org, bug-gnulib at gnu dot org, gcc at gcc dot gnu dot org
- Date: 31 Dec 2006 18:38:15 -0800
- Subject: Re: changing "configure" to default to "gcc -g -O2 -fwrapv ..."
- References: <200612300047.kBU0lFwk014817@localhost.localdomain> <firstname.lastname@example.org> <10612302258.AA24598@vlsi1.ultra.nyu.edu> <email@example.com> <firstname.lastname@example.org> <email@example.com> <firstname.lastname@example.org> <email@example.com> <firstname.lastname@example.org> <email@example.com> <firstname.lastname@example.org> <email@example.com>
Paul Eggert <eggert@CS.UCLA.EDU> writes:
> "Daniel Berlin" <firstname.lastname@example.org> writes:
> >> http://www.suse.de/~gcctest/SPEC/CFP/sb-vangelis-head-64/recent.html
> >> and
> >> http://www.suse.de/~gcctest/SPEC/CINT/sb-vangelis-head-64/recent.html
> > Note the distinct drop in performance across almost all the benchmarks
> > on Dec 30, including popular programs like bzip2 and gzip.
> That benchmark isn't that relevant, as it's using -O3, as is typical
> for SPEC benchmarks.
> This discussion is talking about -O2, not -O3. The proposed change
> (i.e., having -O2 imply -fwrapv) would not affect -O3 benchmark
> results.
I don't entirely understand where this particular proposal is coming
from. Historically in gcc we've used -O2 to mean turn on all
optimizations even if it means taking longer to compile code. We've used
-O3 to turn on optimizations that make riskier tradeoffs for compiled
code. Code compiled with -O3 may be slower than code compiled with -O2.
We've never said that -O3 causes the compiler to adhere more closely
to the language standard or that it should introduce riskier
optimizations. We could say that. But then we should discuss it in
those terms: a change in the meaning of -O3.
The obvious analogy to this discussion about whether signed overflow
should be defined is strict aliasing. Historically we've turned on
-fstrict-aliasing at -O2. I think it would take a very strong
argument to handle signed overflow differently from strict aliasing.
I also have to note that at least some people appear to be approaching
this discussion as though the only options are to require -fwrapv
or to assume that signed overflow should be treated as undefined in
all cases. Those are not the only options on the table, and in fact I
believe that neither option is best. I believe the best option is
going to be to take a case-by-case approach to selecting which
optimizations should be enabled by default, and which optimizations
should not be done except via a special extra option (by which I mean
not a -O option, but a -f option).
I appreciate your need to move this discussion along, but I'm not
entirely happy with what I take to be stampeding it by introducing
what I believe would be a completely inappropriate patch to autoconf,
rather than, say, opening a gcc bugzilla problem report for the cases
you feel gcc should handle differently.
You are asserting that most programmers assume -fwrapv, but, except
for your initial example, you are presenting examples which gcc
already does not change. You are not considering the examples for
which most programmers do not assume -fwrapv, examples like
#define MULT(X) ((X) * 10)

int
foo (int x)
{
  return MULT (x) / 5;
}
With -fwrapv, we must multiply by 10 and then divide by 5. Without
-fwrapv, we can simply multiply by 2. A comparison of the generated
assembly for this trivial example may be instructive.
I already took the time to go through all the cases for which gcc
relies on signed overflow being undefined. I also sent a very
preliminary patch providing warnings for those cases. I believe that
we will get the best results along those lines, not by introducing an
inappropriate patch into autoconf.
Do you disagree?