This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.
Re: Support <, <=, > and >= for offset_int and widest_int
- From: Richard Biener <richard dot guenther at gmail dot com>
- To: GCC Patches <gcc-patches at gcc dot gnu dot org>, richard dot sandiford at arm dot com
- Date: Mon, 2 May 2016 10:50:26 +0200
- Subject: Re: Support <, <=, > and >= for offset_int and widest_int
- Authentication-results: sourceware.org; auth=none
- References: <87pot8fvo5 dot fsf at e105548-lin dot cambridge dot arm dot com>
On Fri, Apr 29, 2016 at 2:26 PM, Richard Sandiford
<richard.sandiford@arm.com> wrote:
> offset_int and widest_int are supposed to be at least one bit wider
> than all the values they need to represent, with the extra bits
> being signs. Thus offset_int is effectively int128_t and widest_int
> is effectively intNNN_t, for target-dependent NNN.
>
> Because the types are signed, there's not really any need to specify
> a sign for operations like comparison. I think things would be clearer
> if we supported <, <=, > and >= for them (but not for wide_int, which
> doesn't have a sign).
>
> Tested on x86_64-linux-gnu and aarch64-linux-gnu. OK to install?
Ok.
Thanks,
Richard.
> Thanks,
> Richard
>
>
> gcc/
> * wide-int.h: Update offset_int and widest_int documentation.
> (WI_SIGNED_BINARY_PREDICATE_RESULT): New macro.
> (wi::binary_traits): Allow ordered comparisons between offset_int and
> offset_int, between widest_int and widest_int, and between either
> of these types and basic C types.
> (operator <, <=, >, >=): Define for the same combinations.
> * tree.h (tree_int_cst_lt): Use comparison operators instead
> of wi:: comparisons.
> (tree_int_cst_le): Likewise.
> * gimple-fold.c (fold_array_ctor_reference): Likewise.
> (fold_nonarray_ctor_reference): Likewise.
> * gimple-ssa-strength-reduction.c (record_increment): Likewise.
> * tree-affine.c (aff_comb_cannot_overlap_p): Likewise.
> * tree-parloops.c (try_transform_to_exit_first_loop_alt): Likewise.
> * tree-sra.c (completely_scalarize): Likewise.
> * tree-ssa-alias.c (stmt_kills_ref_p): Likewise.
> * tree-ssa-reassoc.c (extract_bit_test_mask): Likewise.
> * tree-vrp.c (extract_range_from_binary_expr_1): Likewise.
> (check_for_binary_op_overflow): Likewise.
> (search_for_addr_array): Likewise.
> * ubsan.c (ubsan_expand_objsize_ifn): Likewise.
>
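To make the call-site change concrete, here is the tree_int_cst_lt update from the tree.h hunk below, written out as two complete functions. This is a sketch only: const_tree, widest_int and wi::to_widest are GCC-internal, and the _old/_new names exist purely for the side-by-side view.

  /* Before the patch: a signed less-than on widest_int goes through the
     explicitly signed predicate wi::lts_p.  */
  inline bool
  tree_int_cst_lt_old (const_tree t1, const_tree t2)
  {
    return wi::lts_p (wi::to_widest (t1), wi::to_widest (t2));
  }

  /* After the patch: widest_int provides operator <, which forwards to
     wi::lts_p, so the comparison reads like ordinary integer code.  */
  inline bool
  tree_int_cst_lt_new (const_tree t1, const_tree t2)
  {
    return wi::to_widest (t1) < wi::to_widest (t2);
  }
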
> Index: gcc/wide-int.h
> ===================================================================
> --- gcc/wide-int.h
> +++ gcc/wide-int.h
> @@ -53,22 +53,26 @@ along with GCC; see the file COPYING3. If not see
> multiply, division, shifts, comparisons, and operations that need
> overflow detected), the signedness must be specified separately.
>
> - 2) offset_int. This is a fixed size representation that is
> - guaranteed to be large enough to compute any bit or byte sized
> - address calculation on the target. Currently the value is 64 + 4
> - bits rounded up to the next number even multiple of
> - HOST_BITS_PER_WIDE_INT (but this can be changed when the first
> - port needs more than 64 bits for the size of a pointer).
> -
> - This flavor can be used for all address math on the target. In
> - this representation, the values are sign or zero extended based
> - on their input types to the internal precision. All math is done
> - in this precision and then the values are truncated to fit in the
> - result type. Unlike most gimple or rtl intermediate code, it is
> - not useful to perform the address arithmetic at the same
> - precision in which the operands are represented because there has
> - been no effort by the front ends to convert most addressing
> - arithmetic to canonical types.
> + 2) offset_int. This is a fixed-precision integer that can hold
> + any address offset, measured in either bits or bytes, with at
> + least one extra sign bit. At the moment the maximum address
> + size GCC supports is 64 bits. With 8-bit bytes and an extra
> + sign bit, offset_int therefore needs to have at least 68 bits
> + of precision. We round this up to 128 bits for efficiency.
> + Values of type T are converted to this precision by sign- or
> + zero-extending them based on the signedness of T.
> +
> + The extra sign bit means that offset_int is effectively a signed
> + 128-bit integer, i.e. it behaves like int128_t.
> +
> + Since the values are logically signed, there is no need to
> + distinguish between signed and unsigned operations. Sign-sensitive
> + comparison operators <, <=, > and >= are therefore supported.
> +
> + [ Note that, even though offset_int is effectively int128_t,
> + it can still be useful to use unsigned comparisons like
> + wi::leu_p (a, b) as a more efficient short-hand for
> + "a >= 0 && a <= b". ]
>
> 3) widest_int. This representation is an approximation of
> infinite precision math. However, it is not really infinite
> @@ -76,9 +80,9 @@ along with GCC; see the file COPYING3. If not see
> precision math where the precision is 4 times the size of the
> largest integer that the target port can represent.
>
> - widest_int is supposed to be wider than any number that it needs to
> - store, meaning that there is always at least one leading sign bit.
> - All widest_int values are therefore signed.
> + Like offset_int, widest_int is wider than all the values that
> + it needs to represent, so the integers are logically signed.
> + Sign-sensitive comparison operators <, <=, > and >= are supported.
>
> There are several places in the GCC where this should/must be used:
>
> @@ -255,6 +259,12 @@ along with GCC; see the file COPYING3. If not see
> #define WI_BINARY_RESULT(T1, T2) \
> typename wi::binary_traits <T1, T2>::result_type
>
> +/* The type of result produced by a signed binary predicate on types T1 and T2.
> + This is bool if signed comparisons make sense for T1 and T2 and leads to
> + substitution failure otherwise. */
> +#define WI_SIGNED_BINARY_PREDICATE_RESULT(T1, T2) \
> + typename wi::binary_traits <T1, T2>::signed_predicate_result
> +
> /* The type of result produced by a unary operation on type T. */
> #define WI_UNARY_RESULT(T) \
> typename wi::unary_traits <T>::result_type
> @@ -316,7 +326,7 @@ namespace wi
> VAR_PRECISION,
>
> /* The integer has a constant precision (known at GCC compile time)
> - but no defined signedness. */
> + and is signed. */
> CONST_PRECISION
> };
>
> @@ -379,6 +389,7 @@ namespace wi
> so as not to confuse gengtype. */
> typedef generic_wide_int < fixed_wide_int_storage
> <int_traits <T2>::precision> > result_type;
> + typedef bool signed_predicate_result;
> };
>
> template <typename T1, typename T2>
> @@ -394,6 +405,7 @@ namespace wi
> so as not to confuse gengtype. */
> typedef generic_wide_int < fixed_wide_int_storage
> <int_traits <T1>::precision> > result_type;
> + typedef bool signed_predicate_result;
> };
>
> template <typename T1, typename T2>
> @@ -404,6 +416,7 @@ namespace wi
> STATIC_ASSERT (int_traits <T1>::precision == int_traits <T2>::precision);
> typedef generic_wide_int < fixed_wide_int_storage
> <int_traits <T1>::precision> > result_type;
> + typedef bool signed_predicate_result;
> };
>
> template <typename T1, typename T2>
> @@ -3050,6 +3063,21 @@ wi::min_precision (const T &x, signop sgn)
> return get_precision (x) - clz (x);
> }
>
> +#define SIGNED_BINARY_PREDICATE(OP, F) \
> + template <typename T1, typename T2> \
> + inline WI_SIGNED_BINARY_PREDICATE_RESULT (T1, T2) \
> + OP (const T1 &x, const T2 &y) \
> + { \
> + return wi::F (x, y); \
> + }
> +
> +SIGNED_BINARY_PREDICATE (operator <, lts_p)
> +SIGNED_BINARY_PREDICATE (operator <=, les_p)
> +SIGNED_BINARY_PREDICATE (operator >, gts_p)
> +SIGNED_BINARY_PREDICATE (operator >=, ges_p)
> +
> +#undef SIGNED_BINARY_PREDICATE
> +
> template<typename T>
> void
> gt_ggc_mx (generic_wide_int <T> *)
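The operators return WI_SIGNED_BINARY_PREDICATE_RESULT rather than plain bool so that SFINAE can remove them where they make no sense: signed_predicate_result is only defined in the binary_traits specializations whose operands are logically signed, so the templates drop out of overload resolution for wide_int. A stripped-down, self-contained analogue of the mechanism, with made-up my_* names rather than GCC's real classes:

  /* Stand-ins for the wide-int flavours; not GCC's real classes.  */
  struct my_offset_int { long long val; };
  struct my_wide_int { unsigned long long val; unsigned precision; };

  /* Traits: only the logically signed type gets the typedef.  */
  template <typename T1, typename T2> struct my_traits {};
  template <> struct my_traits <my_offset_int, my_offset_int>
  {
    typedef bool signed_predicate_result;
  };

  /* Counterpart of SIGNED_BINARY_PREDICATE: the return type is only
     well-formed when the traits specialization provides
     signed_predicate_result, so for my_wide_int this template silently
     disappears from overload resolution instead of being an error.  */
  template <typename T1, typename T2>
  inline typename my_traits <T1, T2>::signed_predicate_result
  operator < (const T1 &x, const T2 &y)
  {
    return x.val < y.val;
  }

  int main ()
  {
    my_offset_int a = { -1 }, b = { 2 };
    bool lt = a < b;      /* OK: resolves to the template above.  */
    /* my_wide_int c, d; ... c < d ...  would not compile, because
       my_traits <my_wide_int, my_wide_int> has no
       signed_predicate_result.  */
    return lt ? 0 : 1;
  }
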
> Index: gcc/tree.h
> ===================================================================
> --- gcc/tree.h
> +++ gcc/tree.h
> @@ -5318,7 +5318,7 @@ wi::max_value (const_tree type)
> inline bool
> tree_int_cst_lt (const_tree t1, const_tree t2)
> {
> - return wi::lts_p (wi::to_widest (t1), wi::to_widest (t2));
> + return wi::to_widest (t1) < wi::to_widest (t2);
> }
>
> /* Return true if INTEGER_CST T1 is less than or equal to INTEGER_CST T2,
> @@ -5327,7 +5327,7 @@ tree_int_cst_lt (const_tree t1, const_tree t2)
> inline bool
> tree_int_cst_le (const_tree t1, const_tree t2)
> {
> - return wi::les_p (wi::to_widest (t1), wi::to_widest (t2));
> + return wi::to_widest (t1) <= wi::to_widest (t2);
> }
>
> /* Returns -1 if T1 < T2, 0 if T1 == T2, and 1 if T1 > T2. T1 and T2
> Index: gcc/gimple-fold.c
> ===================================================================
> --- gcc/gimple-fold.c
> +++ gcc/gimple-fold.c
> @@ -5380,7 +5380,7 @@ fold_array_ctor_reference (tree type, tree ctor,
> be larger than size of array element. */
> if (!TYPE_SIZE_UNIT (type)
> || TREE_CODE (TYPE_SIZE_UNIT (type)) != INTEGER_CST
> - || wi::lts_p (elt_size, wi::to_offset (TYPE_SIZE_UNIT (type)))
> + || elt_size < wi::to_offset (TYPE_SIZE_UNIT (type))
> || elt_size == 0)
> return NULL_TREE;
>
> @@ -5457,7 +5457,7 @@ fold_nonarray_ctor_reference (tree type, tree ctor,
> fields. */
> if (wi::cmps (access_end, bitoffset_end) > 0)
> return NULL_TREE;
> - if (wi::lts_p (offset, bitoffset))
> + if (offset < bitoffset)
> return NULL_TREE;
> return fold_ctor_reference (type, cval,
> inner_offset.to_uhwi (), size,
> Index: gcc/gimple-ssa-strength-reduction.c
> ===================================================================
> --- gcc/gimple-ssa-strength-reduction.c
> +++ gcc/gimple-ssa-strength-reduction.c
> @@ -2506,8 +2506,7 @@ record_increment (slsr_cand_t c, widest_int increment, bool is_phi_adjust)
> if (c->kind == CAND_ADD
> && !is_phi_adjust
> && c->index == increment
> - && (wi::gts_p (increment, 1)
> - || wi::lts_p (increment, -1))
> + && (increment > 1 || increment < -1)
> && (gimple_assign_rhs_code (c->cand_stmt) == PLUS_EXPR
> || gimple_assign_rhs_code (c->cand_stmt) == POINTER_PLUS_EXPR))
> {
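The comparisons against plain ints above (increment > 1, increment < -1, where increment is a widest_int) work because binary_traits also pairs the fixed-precision types with basic C types: the int operand is extended to the widest_int precision according to its signedness and the comparison forwards to wi::gts_p / wi::lts_p. A minimal sketch of such a call site, assuming GCC's wide-int.h and a made-up function name:

  /* Sketch only: widest_int comes from wide-int.h; the function name is
     invented for illustration.  The old spelling was
     wi::gts_p (increment, 1) || wi::lts_p (increment, -1).  */
  static bool
  increment_is_candidate (const widest_int &increment)
  {
    return increment > 1 || increment < -1;
  }
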
> Index: gcc/tree-affine.c
> ===================================================================
> --- gcc/tree-affine.c
> +++ gcc/tree-affine.c
> @@ -929,7 +929,7 @@ aff_comb_cannot_overlap_p (aff_tree *diff, const widest_int &size1,
> else
> {
> /* We succeed if the second object starts after the first one ends. */
> - return wi::les_p (size1, diff->offset);
> + return size1 <= diff->offset;
> }
> }
>
> Index: gcc/tree-parloops.c
> ===================================================================
> --- gcc/tree-parloops.c
> +++ gcc/tree-parloops.c
> @@ -1868,7 +1868,7 @@ try_transform_to_exit_first_loop_alt (struct loop *loop,
>
> /* Check if nit + 1 overflows. */
> widest_int type_max = wi::to_widest (TYPE_MAXVAL (nit_type));
> - if (!wi::lts_p (nit_max, type_max))
> + if (nit_max >= type_max)
> return false;
>
> gimple *def = SSA_NAME_DEF_STMT (nit);
> Index: gcc/tree-sra.c
> ===================================================================
> --- gcc/tree-sra.c
> +++ gcc/tree-sra.c
> @@ -1055,7 +1055,7 @@ completely_scalarize (tree base, tree decl_type, HOST_WIDE_INT offset, tree ref)
> idx = wi::sext (idx, TYPE_PRECISION (domain));
> max = wi::sext (max, TYPE_PRECISION (domain));
> }
> - for (int el_off = offset; wi::les_p (idx, max); ++idx)
> + for (int el_off = offset; idx <= max; ++idx)
> {
> tree nref = build4 (ARRAY_REF, elemtype,
> ref,
> Index: gcc/tree-ssa-alias.c
> ===================================================================
> --- gcc/tree-ssa-alias.c
> +++ gcc/tree-ssa-alias.c
> @@ -2380,10 +2380,10 @@ stmt_kills_ref_p (gimple *stmt, ao_ref *ref)
> rbase = TREE_OPERAND (rbase, 0);
> }
> if (base == rbase
> - && wi::les_p (offset, roffset)
> - && wi::les_p (roffset + ref->max_size,
> - offset + wi::lshift (wi::to_offset (len),
> - LOG2_BITS_PER_UNIT)))
> + && offset <= roffset
> + && (roffset + ref->max_size
> + <= offset + wi::lshift (wi::to_offset (len),
> + LOG2_BITS_PER_UNIT)))
> return true;
> break;
> }
> Index: gcc/tree-ssa-reassoc.c
> ===================================================================
> --- gcc/tree-ssa-reassoc.c
> +++ gcc/tree-ssa-reassoc.c
> @@ -2464,7 +2464,7 @@ extract_bit_test_mask (tree exp, int prec, tree totallow, tree low, tree high,
> return NULL_TREE;
> bias = wi::to_widest (tbias);
> bias -= wi::to_widest (totallow);
> - if (wi::ges_p (bias, 0) && wi::lts_p (bias, prec - max))
> + if (bias >= 0 && bias < prec - max)
> {
> *mask = wi::lshift (*mask, bias);
> return ret;
> Index: gcc/tree-vrp.c
> ===================================================================
> --- gcc/tree-vrp.c
> +++ gcc/tree-vrp.c
> @@ -2749,17 +2749,17 @@ extract_range_from_binary_expr_1 (value_range *vr,
> /* Sort the 4 products so that min is in prod0 and max is in
> prod3. */
> /* min0min1 > max0max1 */
> - if (wi::gts_p (prod0, prod3))
> + if (prod0 > prod3)
> std::swap (prod0, prod3);
>
> /* min0max1 > max0min1 */
> - if (wi::gts_p (prod1, prod2))
> + if (prod1 > prod2)
> std::swap (prod1, prod2);
>
> - if (wi::gts_p (prod0, prod1))
> + if (prod0 > prod1)
> std::swap (prod0, prod1);
>
> - if (wi::gts_p (prod2, prod3))
> + if (prod2 > prod3)
> std::swap (prod2, prod3);
>
> /* diff = max - min. */
> @@ -3775,7 +3775,7 @@ check_for_binary_op_overflow (enum tree_code subcode, tree type,
> /* If all values in [wmin, wmax] are smaller than
> [wtmin, wtmax] or all are larger than [wtmin, wtmax],
> the arithmetic operation will always overflow. */
> - if (wi::lts_p (wmax, wtmin) || wi::gts_p (wmin, wtmax))
> + if (wmax < wtmin || wmin > wtmax)
> return true;
> return false;
> }
> @@ -6587,7 +6587,7 @@ search_for_addr_array (tree t, location_t location)
>
> idx = mem_ref_offset (t);
> idx = wi::sdiv_trunc (idx, wi::to_offset (el_sz));
> - if (wi::lts_p (idx, 0))
> + if (idx < 0)
> {
> if (dump_file && (dump_flags & TDF_DETAILS))
> {
> @@ -6599,8 +6599,8 @@ search_for_addr_array (tree t, location_t location)
> "array subscript is below array bounds");
> TREE_NO_WARNING (t) = 1;
> }
> - else if (wi::gts_p (idx, (wi::to_offset (up_bound)
> - - wi::to_offset (low_bound) + 1)))
> + else if (idx > (wi::to_offset (up_bound)
> + - wi::to_offset (low_bound) + 1))
> {
> if (dump_file && (dump_flags & TDF_DETAILS))
> {
> Index: gcc/ubsan.c
> ===================================================================
> --- gcc/ubsan.c
> +++ gcc/ubsan.c
> @@ -911,8 +911,8 @@ ubsan_expand_objsize_ifn (gimple_stmt_iterator *gsi)
> /* Yes, __builtin_object_size couldn't determine the
> object size. */;
> else if (TREE_CODE (offset) == INTEGER_CST
> - && wi::ges_p (wi::to_widest (offset), -OBJSZ_MAX_OFFSET)
> - && wi::les_p (wi::to_widest (offset), -1))
> + && wi::to_widest (offset) >= -OBJSZ_MAX_OFFSET
> + && wi::to_widest (offset) <= -1)
> /* The offset is in range [-16K, -1]. */;
> else
> {
> @@ -928,8 +928,8 @@ ubsan_expand_objsize_ifn (gimple_stmt_iterator *gsi)
> /* If the offset is small enough, we don't need the second
> run-time check. */
> if (TREE_CODE (offset) == INTEGER_CST
> - && wi::ges_p (wi::to_widest (offset), 0)
> - && wi::les_p (wi::to_widest (offset), OBJSZ_MAX_OFFSET))
> + && wi::to_widest (offset) >= 0
> + && wi::to_widest (offset) <= OBJSZ_MAX_OFFSET)
> *gsi = gsi_after_labels (then_bb);
> else
> {