PATCH COMMITTED: gcc 4.2 support for -Wstrict-overflow/-fstrict-overflow

Ian Lance Taylor iant@google.com
Tue Mar 13 14:53:00 GMT 2007


"Richard Guenther" <richard.guenther@gmail.com> writes:

> As far as I understand we will get regressions whenever intermediate
> propagation causes an overflow.  Basically an overflowed infinity is
> "sticky" in the value ranges; we preserve it at all costs (or avoid
> using it by dropping to VARYING).  Consider:
> 
>   if (a < 0)
>     {
>        a = -a;     // now a has the range [1, +INF(OVF)]
>        a = a - 1;  // we now either get [0, +INF(OVF)] or drop to VARYING;
>                    // we don't get [0, INT_MAX - 1] as we did before the patch
>        if (a != INT_MAX)  // we cannot fold this anymore
> 
> Maybe Ian can clarify if I got things wrong.

Something like that, yes.  What I don't have is an example where this
actually leads to a significant change in the generated code.  There
were no significant code changes when compiling the cc1 .i files.
That particular example also happens to continue to work.
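For reference, here is Richard's fragment fleshed out into a compilable function (the function name and surrounding code are my own sketch, not taken from any real source).  The comments describe the ranges VRP can compute under -fstrict-overflow:

```c
#include <limits.h>

/* Hypothetical reconstruction of the quoted fragment.  For a < 0
   (and a != INT_MIN, where negation would be undefined), after
   "a = -a" the range is [1, INT_MAX] and after the decrement it is
   [0, INT_MAX - 1], so the comparison against INT_MAX is foldable
   to true.  */
int
check (int a)
{
  if (a < 0)
    {
      a = -a;            /* after the patch: [1, +INF(OVF)] */
      a = a - 1;         /* [0, +INF(OVF)], or VARYING */
      if (a != INT_MAX)  /* previously foldable to true */
        return 1;
    }
  return 0;
}
```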

There is another, more significant, case where we can get an
optimization regression relative to earlier versions of 4.2: the VRP
propagation will not propagate an overflow value through a
conditional.  You can see this in code like:

  for (i = 1; i > 0; i += i)
    if (foo ())
      break;
  if (i > 0)
    ...

Before my patch VRP would propagate through the conditional.  Now it
will not, and in the dump you will see:
    Ignoring predicate evaluation because it assumes that signed overflow is undefined

Again this doesn't lead to any significant code changes that I know
of.  In the cases which I've examined, the effect is masked by jump
threading and by the substitute_and_fold() which is run at the end of
the VRP pass.
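A compilable version of that loop, with a stub callback so it actually terminates (the stub and its break-on-third-call behavior are mine, purely for illustration):

```c
/* Hypothetical reconstruction of the loop example.  With signed
   overflow undefined, "i += i" starting from 1 can never make i
   non-positive without invoking undefined behavior, so the only
   defined way to leave the loop is the break; VRP could therefore
   fold the final "i > 0" test to true.  */

static int calls;

/* Stub standing in for foo (): breaks out on the third call.  */
static int
foo (void)
{
  return ++calls == 3;
}

int
loop_example (void)
{
  int i;
  for (i = 1; i > 0; i += i)
    if (foo ())
      break;
  return i > 0;  /* foldable to 1 under -fstrict-overflow */
}
```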


> The only way we can avoid this kind of issue is to track two value
> ranges, one where we assume wrapping semantics and one where we
> assume undefined overflow behavior.  Which comes at another cost,
> of course.

Note that the VRP code has to be careful about using wrapping
semantics, since it can lead to range reversal.  The earlier VRP code
already pegged to INF on overflow.  The main difference that my patch
introduced in this area is that when it pegs to INF, it stays pegged.
So I believe that the only code for which your proposal would make a
difference would be code like your example above: code which tests
against numbers very close to INT_MIN/INT_MAX.

Another approach we could use would be a separate bit in the
value_range_t structure to indicate whether overflow has occurred.
That would permit us to turn -[-INF, X] into [-X, INF](OVF), rather
than [-X, INF(OVF)].  That would slightly extend our ability to use
the resulting range.
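To make the distinction concrete, here is a sketch of negating a range with such a separate bit.  The struct and function names are hypothetical; this is not GCC's actual value_range_t, just an illustration of keeping the numeric bound exact while flagging overflow on the range as a whole:

```c
#include <limits.h>

/* Hypothetical range with a separate overflow bit, so that negating
   [-INF, x] can yield [-x, INF](OVF) instead of [-x, INF(OVF)]:
   the bound INT_MAX stays usable, and the overflow is recorded
   out of band.  */
struct range
{
  int min, max;
  int overflowed;
};

static struct range
negate_range (struct range r)
{
  struct range n = { 0, 0, r.overflowed };
  if (r.max == INT_MIN)  /* r is exactly [INT_MIN, INT_MIN] */
    {
      n.min = n.max = INT_MAX;
      n.overflowed = 1;
      return n;
    }
  n.min = -r.max;
  if (r.min == INT_MIN)
    {
      n.max = INT_MAX;   /* keep the numeric bound exact...  */
      n.overflowed = 1;  /* ...and flag the overflow separately */
    }
  else
    n.max = -r.min;
  return n;
}
```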

It's difficult to know whether either of these approaches makes sense
when we don't have any real test cases, by which I mean not so much a
test case from real code as simply a test case for which the generated
code is significantly different.  Any optimization can always be
polished indefinitely, but it would be nice to have some clear reason
to keep polishing.

Ian
