PATCH COMMITTED: gcc 4.2 support for -Wstrict-overflow/-fstrict-overflow

Richard Guenther richard.guenther@gmail.com
Tue Mar 13 15:31:00 GMT 2007


On 13 Mar 2007 07:52:43 -0700, Ian Lance Taylor <iant@google.com> wrote:
> "Richard Guenther" <richard.guenther@gmail.com> writes:
>
> > Sure, or something like
> >
> >   if (a < 0)
> >     {
> >       a = -a;       // [1, +INF]
> >       a = a - 10;   // [-9, +INF - 10]
> >       a = -a;       // [-INF + 9, 9]
> >       if (a > 10)
> >         ...
>
> Not a great example, because this one works fine today.  The sequence
> goes:
>     a = -a;      // [1, +INF(OVF)]
>     a = a - 10;  // [-9, +INF(OVF)]
>     a = -a;      // [-INF(OVF), 9]
>     if (a > 10)  // folded to 0 with a -Wstrict-overflow warning
>
>
> > > Another approach we could use would be a separate bit in the
> > > value_range_t structure to indicate whether overflow has occurred.
> > > That would permit us to turn -[-INF, X] into [-X, INF](OVF), rather
> > > than [-X, INF(OVF)].  That would slightly extend our ability to use
> > > the resulting range.
> >
> > But it doesn't provide more accuracy for the warning.  If you had "both"
> > value ranges you can compare the outcome of a transformation you do
> > based on either of the value ranges and warn if they differ.  So what
> > you would warn for is "warning: optimization result differs if overflow is
> > undefined compared to wrapping semantics" or something like that.
>
> That may be possible but I'm not convinced.  If the overflow bit is
> set for a range which we use to fold, then we would warn that we are
> relying on undefined signed overflow.  That is what we want to do
> anyhow if the range uses INF(OVF) anywhere.  So what is the
> difference?

The difference would show up in the above case, where we currently do
(I believe) warn when folding a > 10 to 0.  With the proposed scheme we
would avoid that, because the value range computed under wrapping
semantics is [-INF+9, +9] (if I computed it right, via the sequence
~[-INF+1,0], [-9,+INF-8], [-INF+9,+9]), so folding with either range
gives the same answer and no warning is needed.  The stickiness of the
overflow bit at the moment also means it doesn't go away again if we
"undo" the overflow.
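
For concreteness, here is a self-contained variant of the example (the
function name is purely illustrative, and whether the final comparison
actually gets folded depends on the flags and the exact compiler
revision):

int
f (int a)
{
  if (a < 0)
    {
      a = -a;       /* [1, +INF(OVF)]  -- overflows for a == INT_MIN  */
      a = a - 10;   /* [-9, +INF(OVF)]                                */
      a = -a;       /* [-INF(OVF), 9]                                 */
      if (a > 10)   /* candidate for folding to 0 under
                       -fstrict-overflow, with a -Wstrict-overflow
                       warning                                        */
        return 1;
    }
  return 0;
}

Compiling this with something like "gcc -O2 -fstrict-overflow
-Wstrict-overflow" should show the behaviour Ian describes above.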

> > > It's difficult to know whether either of these approaches makes sense
> > > when we don't have any real test cases, by which I mean not so much as
> > > a test case from real code but simply a test case for which the
> > > generated code is significantly different.  Any optimization can
> > > always be polished indefinitely but it would be nice to have some
> > > clear reason to keep polishing.
> >
> > The only reason I can see is to somehow separate the warning machinery
> > and the value range propagation.  If you'd propagate two ranges with just
> > the "normal" machinery you'd have to put the warning code at the folding
> > place(s) only.
>
> To be clear, the warning code is already only at the folding places.
> The code which is not at the folding places is the code which does
> infinity arithmetic, and that would be required under your proposal as
> well.

No, my proposal would not require special infinity arithmetic.  But it
would require doubling the number of value ranges we keep track of (and
doubling the propagation work).  It also requires the improvements to
how we handle wrapping semantics and anti-ranges that are pending in my
tree anyway.
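
To make the bookkeeping difference concrete, a rough sketch of the two
schemes (deliberately simplified, not the actual value_range_t layout):

/* One range per name; overflow from undefined semantics is recorded
   in the range itself, either as an OVF-marked endpoint (as today)
   or as a separate flag (as Ian suggests).  The folding code warns
   whenever it folds using a range tainted by overflow.  */
struct range_with_ovf
{
  long long min, max;
  int overflow;   /* set once an endpoint came from undefined overflow */
};

/* Two plain ranges per name, propagated side by side: one computed
   assuming signed overflow is undefined, one assuming wrapping.
   No special infinity arithmetic is needed, but the propagation work
   doubles.  The folding code warns only when folding with the first
   range gives a different result than folding with the second.  */
struct range_pair
{
  long long undef_min, undef_max;   /* overflow undefined  */
  long long wrap_min, wrap_max;     /* wrapping semantics  */
};

With the second scheme the warning condition at each folding site
becomes "the two foldings disagree" rather than "the range used is
tainted by overflow", which is what the "optimization result differs"
wording above refers to.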

Richard.


