[PATCH] Fix handling of out of range shifts in cse and rtlanal (PR bootstrap/39454)

Jakub Jelinek <jakub@redhat.com>
Fri Mar 13 13:57:00 GMT 2009


Hi!

As explained in greater detail in the PR, fold_rtx sometimes creates
shifts whose CONST_INT second argument is out of range on
SHIFT_COUNT_TRUNCATED targets.  Normally simplify_binary_operation
canonicalizes them, but here fold_rtx performed that canonicalization on
const_arg1 without updating the original x, so simplify_binary_operation
returned NULL (its arguments were already canonical), yet x itself was
not canonical.  Fixed by canonicalizing into a separate variable instead
of the original const_arg1, so that if we fall through to
simplify_binary_operation, we call it with the arguments x actually has
and it can do the canonicalization for us.
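
For illustration, here is a minimal standalone sketch, not GCC code, of
what the SHIFT_COUNT_TRUNCATED canonicalization amounts to; the 32-bit
mode width and the variable names are assumptions made for this example
only:

  #include <stdio.h>

  int
  main (void)
  {
    unsigned int mode_bitsize = 32;   /* stands in for GET_MODE_BITSIZE (mode) */
    long long count = 37;             /* out of range count, as in INTVAL (const_arg1) */
    long long canon_count = count & (mode_bitsize - 1);  /* canonical count: 5 */
    unsigned int x = 0x12345678u;

    printf ("count %lld is canonicalized to %lld\n", count, canon_count);
    printf ("0x%x << %lld = 0x%x\n", x, canon_count, x << canon_count);
    return 0;
  }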

The rtlanal.c change is a safeguard in case some place, like fold_rtx,
fails to canonicalize such shifts: it avoids drawing any conclusions
from out of range shift counts.  While we could optimize more
aggressively for SHIFT_COUNT_TRUNCATED (see the first patch in the PR),
out of range shifts should be rare enough that it isn't worth the
trouble.
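
As a sketch of that guard, the standalone function below (not GCC code;
the names copies_after_ashiftrt, mode_bitsize and bitwidth are made up
for this example) only adds sign-bit copies for an ASHIFTRT count that
is positive and smaller than the mode width, and otherwise keeps the
conservative answer:

  #include <stdio.h>

  static unsigned int
  copies_after_ashiftrt (unsigned int copies_before, long long count,
                         unsigned int mode_bitsize, unsigned int bitwidth)
  {
    /* An out of range count tells us nothing we can rely on, so return
       the number of copies we already knew about.  */
    if (count <= 0 || count >= (long long) mode_bitsize)
      return copies_before;
    unsigned int copies = copies_before + (unsigned int) count;
    return copies < bitwidth ? copies : bitwidth;
  }

  int
  main (void)
  {
    /* An in-range count adds copies; an out of range count (40 in a
       32-bit mode) is ignored.  */
    printf ("%u %u\n",
            copies_after_ashiftrt (3, 5, 32, 32),    /* prints 8 */
            copies_after_ashiftrt (3, 40, 32, 32));  /* prints 3 */
    return 0;
  }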

Bootstrapped/regtested on x86_64-linux (as an example of a
!SHIFT_COUNT_TRUNCATED arch); bootstrap/regtest pending on sparcv9-linux.
Ok for trunk if it succeeds?

2009-03-13  Jakub Jelinek  <jakub@redhat.com>

	PR bootstrap/39454
	* cse.c (fold_rtx): Don't modify original const_arg1 when
	canonicalizing SHIFT_COUNT_TRUNCATED shift count, do it on a
	separate variable instead.
	* rtlanal.c (nonzero_bits1) <case ASHIFTRT>: Don't assume anything
	from out of range shift counts.
	(num_sign_bit_copies1) <case ASHIFTRT, case ASHIFT>: Similarly.

--- gcc/cse.c.jj	2009-03-06 20:29:07.000000000 +0100
+++ gcc/cse.c	2009-03-13 11:44:11.000000000 +0100
@@ -3464,6 +3464,7 @@ fold_rtx (rtx x, rtx insn)
 	      int is_shift
 		= (code == ASHIFT || code == ASHIFTRT || code == LSHIFTRT);
 	      rtx y, inner_const, new_const;
+	      rtx canon_const_arg1 = const_arg1;
 	      enum rtx_code associate_code;
 
 	      if (is_shift
@@ -3471,8 +3472,9 @@ fold_rtx (rtx x, rtx insn)
 		      || INTVAL (const_arg1) < 0))
 		{
 		  if (SHIFT_COUNT_TRUNCATED)
-		    const_arg1 = GEN_INT (INTVAL (const_arg1)
-					  & (GET_MODE_BITSIZE (mode) - 1));
+		    canon_const_arg1 = GEN_INT (INTVAL (const_arg1)
+						& (GET_MODE_BITSIZE (mode)
+						   - 1));
 		  else
 		    break;
 		}
@@ -3531,7 +3533,8 @@ fold_rtx (rtx x, rtx insn)
 	      associate_code = (is_shift || code == MINUS ? PLUS : code);
 
 	      new_const = simplify_binary_operation (associate_code, mode,
-						     const_arg1, inner_const);
+						     canon_const_arg1,
+						     inner_const);
 
 	      if (new_const == 0)
 		break;
--- gcc/rtlanal.c.jj	2009-02-20 15:55:28.000000000 +0100
+++ gcc/rtlanal.c	2009-03-13 13:30:55.000000000 +0100
@@ -4061,7 +4061,8 @@ nonzero_bits1 (const_rtx x, enum machine
 	 low-order bits by left shifts.  */
       if (GET_CODE (XEXP (x, 1)) == CONST_INT
 	  && INTVAL (XEXP (x, 1)) >= 0
-	  && INTVAL (XEXP (x, 1)) < HOST_BITS_PER_WIDE_INT)
+	  && INTVAL (XEXP (x, 1)) < HOST_BITS_PER_WIDE_INT
+	  && INTVAL (XEXP (x, 1)) < GET_MODE_BITSIZE (GET_MODE (x)))
 	{
 	  enum machine_mode inner_mode = GET_MODE (x);
 	  unsigned int width = GET_MODE_BITSIZE (inner_mode);
@@ -4542,7 +4543,8 @@ num_sign_bit_copies1 (const_rtx x, enum 
       num0 = cached_num_sign_bit_copies (XEXP (x, 0), mode,
 					 known_x, known_mode, known_ret);
       if (GET_CODE (XEXP (x, 1)) == CONST_INT
-	  && INTVAL (XEXP (x, 1)) > 0)
+	  && INTVAL (XEXP (x, 1)) > 0
+	  && INTVAL (XEXP (x, 1)) < GET_MODE_BITSIZE (GET_MODE (x)))
 	num0 = MIN ((int) bitwidth, num0 + INTVAL (XEXP (x, 1)));
 
       return num0;
@@ -4551,7 +4553,8 @@ num_sign_bit_copies1 (const_rtx x, enum 
       /* Left shifts destroy copies.  */
       if (GET_CODE (XEXP (x, 1)) != CONST_INT
 	  || INTVAL (XEXP (x, 1)) < 0
-	  || INTVAL (XEXP (x, 1)) >= (int) bitwidth)
+	  || INTVAL (XEXP (x, 1)) >= (int) bitwidth
+	  || INTVAL (XEXP (x, 1)) >= GET_MODE_BITSIZE (GET_MODE (x)))
 	return 1;
 
       num0 = cached_num_sign_bit_copies (XEXP (x, 0), mode,

	Jakub


