This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.



Re: [PATCH] Document arithmetic overflow semantics


Hi Richard,
> I think that's backwards.  If a program in C, C++, or Ada generates an
> overflow, that program is undefined.  That means the optimizer can do
> anything it wants to the program, which, in turn, means that the optimizer
> can assume that overflow *does not* occur, and that allows a lot more
> optimizations.

I completely disagree, and so do GCC's patch reviewers.  The behaviour
of a program with optimization should always be the same as its behaviour
without optimization.
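
To make the disputed optimization concrete, here is a sketch of my own
(not an example from Richard's mail).  Under the assume-no-overflow
model, a compiler may fold the comparison below to a constant 1, even
though with wrapping two's-complement arithmetic it is false when
x == INT_MAX:

/* Sketch: under the assume-no-overflow model a compiler may
   rewrite this function as "return 1".  With wrapping arithmetic
   it returns 0 when x == INT_MAX, so optimized and unoptimized
   behaviour can differ.  */
int always_true(int x)
{
  return x + 1 > x;
}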

Consider the case of x << y, where y is larger than the word size of
x.  Nobody would be prepared to constant fold "x << 98" to just zero.
It's true that the behaviour is undefined by the language specification,
but that still doesn't mean we can do anything.

Indeed, GCC refuses to constant fold "100 << 98" at compile time, and
performs the shift at run-time to preserve these semantics.  This case
is actually more difficult than two's-complement overflow, as different
processors generate different results for the above even when they all
use 32-bit operands.
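
For example (my sketch, not part of the original test-case), the
divergence is easy to observe with a run-time shift; the value printed
depends on the hardware:

#include <stdio.h>

int main(void)
{
  volatile int y = 98;  /* volatile forces the shift to happen at run time */
  /* On x86, 32-bit shifts mask the count mod 32, so this typically
     prints 400 (i.e. 100 << 2); PowerPC and ARM typically print 0.  */
  printf("%d\n", 100 << y);
  return 0;
}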

Hence

#include <stdlib.h>

int shift(int x, int y) { return x << y; }

int main()
{
  /* GCC performs both shifts at run time, so the two values agree.  */
  if (shift (100, 98) != (100 << 98))
    abort ();
  return 0;
}


is a test-case that demonstrates that GCC strives to produce identical
behaviour between optimized and unoptimized code, even in the presence
of "undefined behaviour" in the language specification.

I can submit a patch to constant fold x << y to zero, for suitably
large y, if anyone disagrees.  It will of course break the above
test-case, which might not be what a programmer expects.

Roger
--

