[PATCH] Fix folding of UBSAN_CHECK_SUB (x, 0) etc.
Richard Biener
rguenther@suse.de
Tue Mar 25 09:31:00 GMT 2014
On Tue, 25 Mar 2014, Richard Biener wrote:
> On Tue, 25 Mar 2014, Jakub Jelinek wrote:
>
> > Hi!
> >
> > While working on the previous patch, I've noticed a severe omission in
> > the -fsanitize=signed-integer-overflow support.  Nothing ever folds
> > addition/subtraction of 0 or multiplication by a value in [0, 1];
> > none of these can ever overflow, so we can lower them to plain
> > operations and let those be folded as usual.
> >
> > Bootstrapped/regtested on x86_64-linux and i686-linux, tested with
> > --with-build-config=bootstrap-ubsan bootstrap on i686-linux, ok for trunk?
>
> I think you only need to adjust gimple_fold_stmt_to_constant_1.
Actually, there it is only needed for a * 0 and a - a, as the others
don't result in constants.  Still, the other cases should be handled in
fold_stmt, not only in VRP.
Richard.
> Richard.
>
> > 2014-03-25 Jakub Jelinek <jakub@redhat.com>
> >
> > * tree-vrp.c (simplify_internal_call_using_ranges): If only
> > one range is range_int_cst_p, but not both, at least optimize
> > addition/subtraction of 0 and multiplication by 0 or 1.
> >
> > --- gcc/tree-vrp.c.jj 2014-03-12 10:14:18.000000000 +0100
> > +++ gcc/tree-vrp.c 2014-03-24 16:14:50.822674164 +0100
> > @@ -9336,31 +9336,58 @@ simplify_internal_call_using_ranges (gim
> > else if (TREE_CODE (op0) == INTEGER_CST)
> > set_value_range_to_value (&vr0, op0, NULL);
> > else
> > - return false;
> > + set_value_range_to_varying (&vr0);
> >
> > if (TREE_CODE (op1) == SSA_NAME)
> > vr1 = *get_value_range (op1);
> > else if (TREE_CODE (op1) == INTEGER_CST)
> > set_value_range_to_value (&vr1, op1, NULL);
> > else
> > - return false;
> > + set_value_range_to_varying (&vr1);
> >
> > - if (!range_int_cst_p (&vr0) || !range_int_cst_p (&vr1))
> > - return false;
> > -
> > - tree r1 = int_const_binop (subcode, vr0.min, vr1.min);
> > - tree r2 = int_const_binop (subcode, vr0.max, vr1.max);
> > - if (r1 == NULL_TREE || TREE_OVERFLOW (r1)
> > - || r2 == NULL_TREE || TREE_OVERFLOW (r2))
> > - return false;
> > - if (subcode == MULT_EXPR)
> > + if (!range_int_cst_p (&vr0))
> > + {
> > + /* If one range is VR_ANTI_RANGE, VR_VARYING etc.,
> > + optimize at least x = y + 0; x = y - 0; x = y * 0;
> > + and x = y * 1; which never overflow. */
> > + if (!range_int_cst_p (&vr1))
> > + return false;
> > + if (tree_int_cst_sgn (vr1.min) == -1)
> > + return false;
> > + if (compare_tree_int (vr1.max, subcode == MULT_EXPR) == 1)
> > + return false;
> > + }
> > + else if (!range_int_cst_p (&vr1))
> > + {
> > + /* If one range is VR_ANTI_RANGE, VR_VARYING etc.,
> > + optimize at least x = 0 + y; x = 0 * y; and x = 1 * y;
> > + which never overflow. */
> > + if (subcode == MINUS_EXPR)
> > + return false;
> > + if (!range_int_cst_p (&vr0))
> > + return false;
> > + if (tree_int_cst_sgn (vr0.min) == -1)
> > + return false;
> > + if (compare_tree_int (vr0.max, subcode == MULT_EXPR) == 1)
> > + return false;
> > + }
> > + else
> > {
> > - tree r3 = int_const_binop (subcode, vr0.min, vr1.max);
> > - tree r4 = int_const_binop (subcode, vr0.max, vr1.min);
> > - if (r3 == NULL_TREE || TREE_OVERFLOW (r3)
> > - || r4 == NULL_TREE || TREE_OVERFLOW (r4))
> > + tree r1 = int_const_binop (subcode, vr0.min, vr1.min);
> > + tree r2 = int_const_binop (subcode, vr0.max, vr1.max);
> > + if (r1 == NULL_TREE || TREE_OVERFLOW (r1)
> > + || r2 == NULL_TREE || TREE_OVERFLOW (r2))
> > return false;
> > + if (subcode == MULT_EXPR)
> > + {
> > + tree r3 = int_const_binop (subcode, vr0.min, vr1.max);
> > + tree r4 = int_const_binop (subcode, vr0.max, vr1.min);
> > + if (r3 == NULL_TREE || TREE_OVERFLOW (r3)
> > + || r4 == NULL_TREE || TREE_OVERFLOW (r4))
> > + return false;
> > + }
> > }
> > +
> > gimple g = gimple_build_assign_with_ops (subcode, gimple_call_lhs (stmt),
> > op0, op1);
> > gsi_replace (gsi, g, false);
> >
> > Jakub
> >
> >
>
>
--
Richard Biener <rguenther@suse.de>
SUSE / SUSE Labs
SUSE LINUX Products GmbH - Nuernberg - AG Nuernberg - HRB 16746
GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer