
[PATCH] Fix (X >> C1) & C2 folding (PR tree-optimization/61158)


Hi!

fold_binary_loc for (X >> C1) & C2, if X is a zero-extended narrower
value, decreases prec, but if the shift count is bigger than the
narrower prec, we then attempt zerobits <<= negative_value, i.e. a
shift by a negative amount, which is undefined behavior.

In that case the result is necessarily zero, since all possibly non-zero
bits are shifted away, so this patch handles it by making zerobits all ones
in that case, which results in the (X, 0) folding a few lines below.

Bootstrapped/regtested on x86_64-linux and i686-linux, ok for trunk/4.9/4.8?

2014-05-15  Jakub Jelinek  <jakub@redhat.com>

	PR tree-optimization/61158
	* fold-const.c (fold_binary_loc): If X is zero-extended and
	shiftc >= prec, make sure zerobits is all ones instead of
	invoking undefined behavior.

	* gcc.dg/pr61158.c: New test.

--- gcc/fold-const.c.jj	2014-05-14 09:46:07.000000000 +0200
+++ gcc/fold-const.c	2014-05-14 11:35:19.593730226 +0200
@@ -11972,11 +11972,17 @@ fold_binary_loc (location_t loc,
 		      /* See if we can shorten the right shift.  */
 		      if (shiftc < prec)
 			shift_type = inner_type;
+		      /* Otherwise X >> C1 is all zeros, so we'll optimize
+			 it into (X, 0) later on by making sure zerobits
+			 is all ones.  */
 		    }
 		}
 	      zerobits = ~(unsigned HOST_WIDE_INT) 0;
-	      zerobits >>= HOST_BITS_PER_WIDE_INT - shiftc;
-	      zerobits <<= prec - shiftc;
+	      if (shiftc < prec)
+		{
+		  zerobits >>= HOST_BITS_PER_WIDE_INT - shiftc;
+		  zerobits <<= prec - shiftc;
+		}
 	      /* For arithmetic shift if sign bit could be set, zerobits
 		 can contain actually sign bits, so no transformation is
 		 possible, unless MASK masks them all away.  In that
@@ -11994,7 +12000,7 @@ fold_binary_loc (location_t loc,
 	  /* ((X << 16) & 0xff00) is (X, 0).  */
 	  if ((mask & zerobits) == mask)
 	    return omit_one_operand_loc (loc, type,
-				     build_int_cst (type, 0), arg0);
+					 build_int_cst (type, 0), arg0);
 
 	  newmask = mask | zerobits;
 	  if (newmask != mask && (newmask & (newmask + 1)) == 0)
--- gcc/testsuite/gcc.dg/pr61158.c.jj	2014-05-14 12:18:06.066817887 +0200
+++ gcc/testsuite/gcc.dg/pr61158.c	2014-05-14 12:18:48.046598622 +0200
@@ -0,0 +1,12 @@
+/* PR tree-optimization/61158 */
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-tree-original" } */
+
+unsigned long long
+foo (unsigned int x)
+{
+  return ((unsigned long long) x & 0x00ff000000000000ULL) >> 40;
+}
+
+/* { dg-final { scan-tree-dump "return 0;" "original" { target { ilp32 || lp64 } } } } */
+/* { dg-final { cleanup-tree-dump "original" } } */

	Jakub

