PATCH RFC: Use explicit representation of infinity in VRP
Ian Lance Taylor
iant@google.com
Fri Mar 2 02:46:00 GMT 2007
Diego Novillo <dnovillo@redhat.com> writes:
> Ian Lance Taylor wrote on 02/27/07 10:50:
>
> > +/* Return whether TYPE can support our overflow infinity
> > + representation: we use the TREE_OVERFLOW flag, which only exists
> > + for consteants. If TYPE doesn't support this, we don't optimize
>
> s/consteants/constants/
Thanks, fixed.
> > @@ -1915,6 +2267,40 @@ extract_range_from_unary_expr (value_ran
> > /* Otherwise, operate on each end of the range. */
> > min = fold_unary_to_constant (code, TREE_TYPE (expr), vr0.min);
> > max = fold_unary_to_constant (code, TREE_TYPE (expr), vr0.max);
> > +
> > + if (uses_overflow_infinity (TREE_TYPE (expr)))
> > + {
> > + gcc_assert (code != NEGATE_EXPR && code != ABS_EXPR);
> > + if (is_overflow_infinity (vr0.min))
> > + min = vr0.min;
> > + else if (TREE_OVERFLOW (min))
> > + {
> > + if (supports_overflow_infinity (TREE_TYPE (expr)))
>
> I'm confused. If uses_overflow_infinity (TYPE) is true, why would
> support_overflow_infinity (TYPE) be false? (I'm going by the names of
> the predicates, it may be an indication that they need to be renamed).
Hmmm. The code is correct. uses_overflow_infinity means that TYPE is
an integer type which does not wrap on overflow. It returns true if we
want to use a special infinity to indicate that we've overflowed.
supports_overflow_infinity means that our particular representation of
overflow works for TYPE. This is false for types for which
TYPE_{MIN,MAX}_VALUE is not a constant. The problem is that we use
TREE_OVERFLOW to indicate an overflow, and that flag only exists for
constants. So it is perfectly possible, even normal, to have types
which pass uses_overflow_infinity but not supports_overflow_infinity.
I originally used a more complex scheme with a hash table to handle
such types. However, they are unusual--in fact I believe they only
occur for Ada. So I have now opted for a simpler scheme in which we
just punt on signed overflow for that sort of type.
Any suggestions for better names for the two functions?
> > + bool sop = false;
> > + tree val = vrp_evaluate_conditional (expr, false, &sop);
> > +
> > + /* A disadvantage of using a special infinity as an overflow
> > + representation is that we lose the ability to record overflow
> > + when we don't have an infinity. So we have to ignore a result
> > + which relies on overflow. */
> > +
>
> Sorry, I don't follow this one. You mean when the type does not have
> an infinity value? Show me an example of this?
The problem is that we can only use our overflow indicator when one of
the bounds is INF. If neither bound is INF, then we have no way to
indicate that the result depends on the assumption that signed
overflow does not occur. So in that (unusual) case, we punt.
In this case we could track this reliably by adding a new field to
value_range_t. I opted against that because of the increased
complexity and the fact that it rarely matters in practice.
> > @@ -2412,7 +2855,9 @@ dump_value_range (FILE *file, value_rang
> > if (INTEGRAL_TYPE_P (type)
> > && !TYPE_UNSIGNED (type)
> > - && vr->min == TYPE_MIN_VALUE (type))
> > + && (uses_overflow_infinity (type)
> > + ? is_negative_overflow_infinity (vr->min)
> > + : vr->min == TYPE_MIN_VALUE (type)))
> > fprintf (file, "-INF");
> > else
> > print_generic_expr (file, vr->min, 0);
>
> Here in dump_value_range we should distinguish INF from INF(OVF) (or some
> other overflow indicator).
Fixed.
> Any compile time effects of this? Changing the probability of getting
> VARYING values sometimes has a significant effect on simulation times.
No major compile time impact. The patched code does seem to run
faster on 20001226-1.c.
> > @@ -1112,7 +1113,8 @@ fold_predicate_in (tree stmt)
> > else
> > return false;
> > - val = vrp_evaluate_conditional (*pred_p, true);
> > + sop = false;
> > + val = vrp_evaluate_conditional (*pred_p, true, &sop);
>
> I guess you plan to use this in the follow up patch?
Yes.
> I find it a bit unfortunate that we have to munge up an optimization so much just
> to get better diagnostics. It's a slippery slope similar to the one we have
> with -Wuninitialized. I would love to see all the warning machinery use a
> totally separate and optimization-independent mechanism. But I realize that
> we are not quite there yet, so I don't have a problem with this idea.
Yes, it is a very tricky issue with warnings like -Wstrict-overflow
which are intended to warn about undefined behaviour. It's hard to
detect undefined behaviour. But I believe it's very important to warn
about cases where the compiler relies on the fact that the behaviour
is undefined, because otherwise our users, who do not in general
understand the language standard completely, start to disable
optimizations. As you may recall, this entire series of patches was
started in response to Paul Eggert's proposal to always compile with
-fwrapv. The only way to head that off is to insert warnings. And
the only way to make this sort of warning meaningful is to make it
dependent on the optimizers.
I agree that as a general rule our warnings should not depend on our
optimizations. And -Wuninitialized and -Wreturn-type should probably
change to be optimization independent. But for cases like
-Wstrict-overflow I believe that is impossible.
> Test cases coming in with the actual warning patch, right?
Yes. No test cases with this patch because there is no visible change
in functionality. In particular, all the existing VRP tests pass.
Ian