signed multiplication for pointer offsets
- From: Marc Glisse <marc dot glisse at inria dot fr>
- To: gcc-patches at gcc dot gnu dot org
- Date: Sun, 21 May 2017 21:45:09 +0200 (CEST)
- Subject: signed multiplication for pointer offsets
Hello,
when we have a double *p and compute p+n, we multiply n by 8 (the size of
double) and then add the result to p. According to the comment just below the
modified lines in the attached patch, the multiplication is supposed to happen
in a signed type. However, that does not match the current code, which uses
the unsigned sizetype. This causes missed optimizations: for instance, when
comparing p+m and p+n, we may end up comparing 8*m to 8*n, which is not the
same as comparing m and n once the multiplication can wrap.
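
To make the missed optimization concrete, here is a minimal example (the
function name is made up for illustration):

  int same_element (double *p, long m, long n)
  {
    return p + m == p + n;
  }

With the offsets computed in sizetype, the comparison ends up being between
m*8 and n*8 as unsigned values, which may have wrapped, so the compiler
cannot fold the test down to m == n; with the multiplication done in a
signed type, where overflow is undefined, it could.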
I tried this patch to see how much breaks, and actually not that much does.
There are a few fragile testcases that want to match "q_. " but now get
"q_10 ", or that expect 3 simplifications and get 5 (the final code is the
same). One was confused by a cast in the middle of x+cst1+cst2. A couple were
hit by the lack of CSE between (x+1)*8, x*8+8 and some variants with casts in
the middle, but that is already an issue without the patch. A few
vectorization testcases failed because SCEV did not recognize a simple
increment of a variable with NOP casts in the middle; that would be worth
fixing. The patch actually fixed another vectorization testcase.
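
As a rough illustration of the missing CSE mentioned above (names invented
for the example), the two loads below use the same address, computed once as
(x + 1) * 8 and once as x * 8 plus 8 with casts in between; this is the kind
of pattern that may not get commoned:

  double f (double *p, long x)
  {
    double a = p[x + 1];
    double *q = (double *) ((char *) p + x * 8);
    double b = q[1];
    return a + b;
  }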
I guess it would help if pointer_plus switched to a signed second
argument, to be consistent with this and reduce the number of casts.
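
For illustration (written as C source, while the real lowering happens on
trees), p + n for a long n is currently built roughly as

  double *q = (double *) ((char *) p + (size_t) n * sizeof (double));

i.e. the index is first converted to the unsigned sizetype and the multiply
is done there; a signed offset operand would avoid that conversion and let
the multiplication stay signed, matching the comment in pointer_int_sum.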
I am not proposing the patch as is, but is this the direction we want to
follow, or should I just fix the comment to match what the code does?
--
Marc Glisse
Index: gcc/c-family/c-common.c
===================================================================
--- gcc/c-family/c-common.c (revision 248308)
+++ gcc/c-family/c-common.c (working copy)
@@ -3106,24 +3106,24 @@ pointer_int_sum (location_t loc, enum tr
 	 because weird cases involving pointer arithmetic
 	 can result in a sum or difference with different type args.  */
       ptrop = build_binary_op (EXPR_LOCATION (TREE_OPERAND (intop, 1)),
			       subcode, ptrop,
			       convert (int_type, TREE_OPERAND (intop, 1)), 1);
       intop = convert (int_type, TREE_OPERAND (intop, 0));
     }
 
   /* Convert the integer argument to a type the same size as sizetype
      so the multiply won't overflow spuriously.  */
-  if (TYPE_PRECISION (TREE_TYPE (intop)) != TYPE_PRECISION (sizetype)
-      || TYPE_UNSIGNED (TREE_TYPE (intop)) != TYPE_UNSIGNED (sizetype))
-    intop = convert (c_common_type_for_size (TYPE_PRECISION (sizetype),
-					     TYPE_UNSIGNED (sizetype)), intop);
+  if (TYPE_PRECISION (TREE_TYPE (intop)) != TYPE_PRECISION (ssizetype)
+      || TYPE_UNSIGNED (TREE_TYPE (intop)) != TYPE_UNSIGNED (ssizetype))
+    intop = convert (c_common_type_for_size (TYPE_PRECISION (ssizetype),
+					     TYPE_UNSIGNED (ssizetype)), intop);
 
   /* Replace the integer argument with a suitable product by the object size.
      Do this multiplication as signed, then convert to the appropriate type
      for the pointer operation and disregard an overflow that occurred only
      because of the sign-extension change in the latter conversion.  */
   {
     tree t = fold_build2_loc (loc, MULT_EXPR, TREE_TYPE (intop), intop,
			      convert (TREE_TYPE (intop), size_exp));
     intop = convert (sizetype, t);
     if (TREE_OVERFLOW_P (intop) && !TREE_OVERFLOW (t))