This is the mail archive of the gcc-help@gcc.gnu.org
mailing list for the GCC project.
Re: Is it OK that gcc optimizes away overflow check?
- From: Ian Lance Taylor <iant at google dot com>
- To: Agner Fog <agner at agner dot org>
- Cc: gcc-help at gcc dot gnu dot org
- Date: Sun, 24 Jul 2011 23:04:04 -0700
- Subject: Re: Is it OK that gcc optimizes away overflow check?
- References: <4E2B2B72.firstname.lastname@example.org>
Agner Fog <email@example.com> writes:
> I have a program where I check for integer overflow. The program
> failed, and I found that gcc has optimized away the overflow check. I
> filed a bug report and got the answer:
>> Integer overflow is undefined. You have to check before the fact, or compile
>> with -fwrapv.
> ( http://gcc.gnu.org/bugzilla/show_bug.cgi?id=49820 )
> I disagree for several reasons:
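For context, the pattern at issue is a post-hoc overflow check on signed arithmetic. A hypothetical reconstruction (the function name is mine; the actual code is in the PR linked above):

```c
/* Hypothetical reconstruction of the kind of check discussed in the PR:
   testing for overflow *after* performing the signed addition.  If the
   sum overflows, the addition itself is undefined behaviour, so gcc at
   -O2 is entitled to assume it cannot wrap and delete the test
   entirely.  With non-overflowing operands everything is well defined
   and the check reports 0. */
static int add_checked(int a, int b, int *overflowed)
{
    int sum = a + b;                     /* UB if this overflows */
    *overflowed = (b > 0 && sum < a) ||  /* wrapped past INT_MAX? */
                  (b < 0 && sum > a);    /* wrapped past INT_MIN? */
    return sum;
}
```

Compiling this with -fwrapv (or -fno-strict-overflow) keeps the test; the default -O2 may remove it.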
I see that I've already been quoted in the bug report. Here I'll just
stress that I think it's important that gcc implement the relevant
standards. There are arguments on both sides of an issue like whether a
compiler should optimize based on strict overflow. When facing
arguments on both sides, which should we pick? When possible and
feasible, we pick the alternative which is written in the standard.
That seems to me to be the most reasonable solution to such a problem.
> 1). It is often easier and more logical to check for overflow after it
> happens than before. It can be quite complicated to write a code that
> predicts an overflow before it happens, in a portable way that works
> with all integer sizes. Checking for overflow after it happens is the
> only way that is sure to work in a hypothetical system that uses
> something else than 2's complement representation.
It's reasonably straightforward to check for overflow of any operation
by doing the arithmetic in unsigned types. By definition of the
language standard, unsigned types wrap rather than overflow.
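That unsigned-arithmetic approach can be sketched as follows (my own illustration, not code from the thread): convert to unsigned, add with well-defined wraparound, and compare the wrapped result against the signed limits.

```c
#include <limits.h>
#include <stdbool.h>

/* Detect whether a + b would overflow int, without ever executing a
   signed addition that overflows.  Unsigned arithmetic wraps modulo
   2^N by definition, so every operation here is defined behaviour. */
static bool add_overflows(int a, int b)
{
    unsigned int us = (unsigned int)a + (unsigned int)b;  /* wraps, never UB */
    if (a >= 0 && b >= 0)
        return us > (unsigned int)INT_MAX;  /* true sum exceeds INT_MAX */
    if (a < 0 && b < 0)
        return us < (unsigned int)INT_MIN;  /* true sum is below INT_MIN */
    return false;  /* operands of mixed sign can never overflow */
}
```

Note that this still requires reasoning about the signed range, which is part of Agner's complaint: it is defined, but not as direct as checking the result after the fact.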
> 2). This is a security problem. It takes a very twisted mind to
> predict that your code is not safe when you are actually checking for
> overflow.
I certainly recommend that the security conscious use
-fno-strict-overflow or -Wno-strict-overflow, along with a number of
other options such as -fstack-protector. gcc serves a number of
different communities, though. Many programmers have no reason to be
security conscious. Repeating myself rhetorically, what should be the
default behaviour? The one documented in the standard.
> 3). I think that you are interpreting the C/C++ standard in an
> over-pedantic way. There are good reasons why the standard says that
> the behavior in case of integer overflow is undefined. 2's complement
> wrap-around is not the only possible behavior in case of
> overflow. Other possibilities are: saturate, signed-magnitude
> wrap-around, reserve a bit pattern for overflow, throw an
> exception. If a future implementation uses internal floating point
> representation for integers then an overflow might variously cause
> loss of precision, INF, NAN, or throw an exception. I guess this is
> what is meant when the standard says the behavior is undefined. What
> the gcc compiler is doing is practically denying the existence of
> overflow to the point where it can optimize away an explicit check for
> overflow. I refuse to believe that this is what the standard-writers
> intended. There must be a sensible compromise that allows the
> optimizer to make certain assumptions that rely on overflow not
> occurring without going to the extreme of optimizing away an overflow
> check.
It would be interesting to try to write such a compromise.
> 4). The bug in my case disappears if I compile with -fwrapv or
> -fno-strict-overflow or without -O2, but this is not my point. My
> point is that gcc should be useful to a programmer with average
> skills.
There are many many ways to cut yourself when using C++. Personally I
suspect that a programmer with average skills should stick to Go or an