This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.



Re: PATCH RFC: -Wstrict-overflow


On Feb  1, 2007, Ian Lance Taylor <iant@google.com> wrote:

> calls.c:885: warning: assuming signed overflow is undefined when negating a division

> The code is:
>     bytes -= bitsize / BITS_PER_UNIT;
> The optimization is effectively changing this to
>     bytes += bitsize / - BITS_PER_UNIT;
> I think this is a little uglier.

Erhm...  I'd rather the warning were something along the lines of
"assuming absence of signed overflow" instead of "assuming signed
overflow is undefined", since the former is more meaningful; but now
I can't quite figure out whether the assumption actually holds for
the snippet above.

I can see that the transformation relies on signed overflow on the
target platform producing the same results as the input language
expresses, but I can't see any useful case of signed overflow in the
input that the compiler is taking advantage of here and that would
yield worse results after the transformation on the target platform.


If the division overflows, we have INT_MIN / -1, so negating the
divisor can only be an improvement in terms of the arithmetic result,
and we might avoid a crash.  Sounds good to me.

If the subtraction overflows, adding the negated value will yield the
same result, even if the negation overflows, so the transformation is
effectively a no-op in this case.

If the subtraction does not overflow, then adding the negated value
won't overflow either, and even if the negation overflows, the end
result will be the same.

Even if the original division does not overflow, the division with the
negated divisor might: if we start out with INT_MIN / 1, the
transformed INT_MIN / -1 overflows, so in this case the transformation
is harmful.


So, are we warning here that the compiler might have introduced a bug
in the program (I assume it is clever enough to avoid the
transformation in this case), or just that we may have (harmlessly?)
removed an unlikely arithmetic exception?

Am I missing anything in my reasoning?

> Another example:

> real.c:979: warning: assuming signed overflow is undefined when changing X +- C1 cmp C2 to X cmp C1 +- C2

> The code is:
>    if (REAL_EXP (r) <= 0)
> where REAL_EXP is

> #define REAL_EXP(REAL) \
>   ((int)((REAL)->uexp ^ (unsigned int)(1 << (EXP_BITS - 1))) \
>    - (1 << (EXP_BITS - 1)))

> Here the compiler is moving the constant (1 << (EXP_BITS - 1)) from
> one side of the comparison to the other.  Again, I don't see any clean
> way to fix this.

Presumably we could do a better job here with VRP or perhaps with
__builtin_assume()s that let the compiler know uexp's range, so as to
guarantee there is no possibility of overflow in the subtraction.

-- 
Alexandre Oliva         http://www.lsd.ic.unicamp.br/~oliva/
FSF Latin America Board Member         http://www.fsfla.org/
Red Hat Compiler Engineer   aoliva@{redhat.com, gcc.gnu.org}
Free Software Evangelist  oliva@{lsd.ic.unicamp.br, gnu.org}

