This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.


Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]
Other format: [Raw text]

The utility of the standard's semantics for overflow





After reading many messages on the subject of overflow
of signed integral values, I am inclined to think that the
standard was wrong to make signed int overflow undefined.

Of course this definition improves performance, but at
what cost? At the cost of program stability?

If the programmer wants a robust application, then
nearly every use of int must be guarded by casts to
unsigned. In the end, nobody will use int, because it
is very difficult to prove that a value never wraps for
all possible inputs. And as the compiler draws more
and more conclusions from "assume overflow never happens",
the effects will become more and more destructive.

Checking for int overflow after the fact is much simpler
than playing casting tricks before the fact. So people
will simply move to unsigned and implement two's-complement
arithmetic themselves.
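A minimal sketch of what "implementing two's complement themselves" could look like; wrap_add is a hypothetical helper name, not an existing API. Unsigned arithmetic is defined to wrap modulo 2^N, so the wrapping is done there and the result is folded back into the int range without relying on implementation-defined narrowing conversions:

```c
#include <limits.h>

/* Hypothetical helper: wrapping signed addition built from unsigned
   arithmetic, which the standard defines to wrap modulo 2^N. */
static int wrap_add(int a, int b)
{
    unsigned int u = (unsigned int)a + (unsigned int)b;

    /* Map the unsigned result back into [INT_MIN, INT_MAX] portably,
       instead of casting u straight to int (implementation-defined
       when u > INT_MAX). */
    if (u > (unsigned int)INT_MAX)
        return (int)(u - (unsigned int)INT_MAX - 1u) + INT_MIN;
    return (int)u;
}
```

For example, wrap_add(INT_MAX, 1) yields INT_MIN, the two's-complement wraparound that plain a + b is not guaranteed to produce.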

To let compilers gain the performance the standard
intended, it should have introduced a new type or a
type modifier. Then only

   for (nooverflow int i = 0; i <= x; ++i)
       ++count;

would be transformed into

   count += x + 1;


Plain 'int' would then have implementation-defined
overflow behavior. Unfortunately, the standard attempts
to speed up legacy code, and as a result breaks legacy
code.

This is unlike aliasing, where most existing code did
not break the aliasing rules (even before they were
introduced). The overflow rule, by contrast, is violated
by most lines of code I have seen (it is very uncommon to
find code that asserts no overflow before computing a+b).
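To illustrate what such an assertion would have to look like (the helper name add_would_overflow is mine, not from any library): the check must run *before* the addition, since a post-hoc test like a + b < a already relies on wraparound that the standard leaves undefined for signed int.

```c
#include <limits.h>
#include <stdbool.h>

/* Hypothetical pre-condition check: true iff a + b would fall
   outside [INT_MIN, INT_MAX]. Only defined arithmetic is used. */
static bool add_would_overflow(int a, int b)
{
    if (b > 0)
        return a > INT_MAX - b;   /* a + b > INT_MAX */
    return a < INT_MIN - b;       /* a + b < INT_MIN */
}
```

Guarding every a+b this way is exactly the burden the message describes: correct, but far more verbose than checking a wrapped result after the fact.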

Also, the compiler itself may introduce new cases of
overflow (e.g. after transforming a*b + a*c into a*(b+c),
when run with a == 0 and b, c == INT_MAX).
I am not sure whether this may create invalid assumptions
in later compiler passes (in today's gcc or later). I have
not tried to formally prove it either way. (I tend to think
that invalid assumptions would be introduced by, e.g., a
later VRP pass.)
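A small sketch of the concern, using a wider type only so the intermediate value can be observed safely (the helper names are mine): with a == 0 the written expression never overflows, but the factored form evaluates b + c first, and that intermediate int addition would be undefined.

```c
#include <limits.h>

/* The expression as written: a*b + a*c. With a == 0 both
   products are 0, so no step overflows. */
static long long written_form(int a, int b, int c)
{
    return (long long)a * b + (long long)a * c;
}

/* The intermediate sum b + c that the factored form a*(b+c)
   introduces, computed in long long so we can inspect it. */
static long long factored_intermediate(int b, int c)
{
    return (long long)b + (long long)c;
}
```

With b == c == INT_MAX, written_form(0, b, c) is 0, while factored_intermediate(b, c) exceeds INT_MAX, i.e. the int addition the optimizer would create is exactly the overflow a later pass might assume cannot happen.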

I don't know what gcc can do to improve the situation;
the standard is something gcc has to live with. Maybe
start by trying to influence C++0x?


   Michael

