This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.



Canonicalising truncated shift counts


gcc.c-torture/execute/builtin-bitops-1.c is failing for -O3 -funroll-loops
on mipsisa32-elf.  The problem is a miscompilation of the inlined my_clzll:

int my_clz##suffix(type x) {						\
    int i;								\
    for (i = 0; i < CHAR_BIT * sizeof (type); i++)			\
	if (x & ((type) 1 << ((CHAR_BIT * sizeof (type)) - i - 1)))	\
	    break;							\
    return i;								\
}									\

The rtl loop optimiser unrolls the loop and creates a prologue that
performs the i == 0 and i == 1 cases.  Because of the way doubleword
shifts are expanded on SHIFT_COUNT_TRUNCATED targets, we end up with
lshiftrt:SIs of 63 and 62 respectively, each followed by an AND with
the constant 1.  Combine then wrongly optimises the result of these
ANDs to zero.

The bug comes from two separate places.  First, nonzero_bits
unconditionally applies the full shift count, regardless of the
shift mode:

      if (GET_CODE (XEXP (x, 1)) == CONST_INT
	  && INTVAL (XEXP (x, 1)) >= 0
	  && INTVAL (XEXP (x, 1)) < HOST_BITS_PER_WIDE_INT)
	{
	  int count = INTVAL (XEXP (x, 1));
          [...]
	  if (code == LSHIFTRT)
	    inner >>= count;
          [...etc...]

And combine.c:force_to_mode does the same:

    case LSHIFTRT:
      [...]
      if (GET_CODE (XEXP (x, 1)) == CONST_INT
	  && INTVAL (XEXP (x, 1)) < HOST_BITS_PER_WIDE_INT
	  && GET_MODE_BITSIZE (op_mode) <= HOST_BITS_PER_WIDE_INT)
	{
          [...]
	  /* Select the mask of the bits we need for the shift operand.  */
	  inner_mask = mask << INTVAL (XEXP (x, 1));

(with ASHIFT and ASHIFTRT cases being similarly affected).

I was going to add special SHIFT_COUNT_TRUNCATED handling to both
functions, such as by creating a new helper function that says
whether a shift count is constant and if so provides its value.
However, in the force_to_mode case -- where we're converting a shift
in one mode to a shift in another -- we'd then have to make sure that
we use the truncated shift count in the new shift.  It seems easier
to just declare that, on SHIFT_COUNT_TRUNCATED targets, the canonical
form of a constant shift count is to be truncated.  The patch below
adds this case to simplify_binary_operation_1.

The treatment for shift counts outside a word is probably still wrong
in the functions above.  My understanding is that such shifts are
target-dependent rather than undefined if !SHIFT_COUNT_TRUNCATED
(in particular, !SHIFT_COUNT_TRUNCATED doesn't imply that no shifts
are truncated).  However, I think the patch below is the correct
fix for SHIFT_COUNT_TRUNCATED targets like MIPS.

Also, if we enforce the proposed canonical form aggressively,
there is probably some target-specific code that should become
redundant, such as:

  if (GET_CODE (operands[2]) == CONST_INT)
    operands[2] = GEN_INT (INTVAL (operands[2])
			   & (GET_MODE_BITSIZE (<MODE>mode) - 1));

in the MIPS shift patterns.  If I'm brave enough, I might try
removing that, or turning it into a gcc_assert...

Regression-tested on mipsisa32-elf.  OK to install?

Richard


gcc/
	* simplify-rtx.c (simplify_binary_operation_1): Canonicalize
	truncated shift counts.

Index: gcc/simplify-rtx.c
===================================================================
--- gcc/simplify-rtx.c	2007-10-02 13:05:41.000000000 +0100
+++ gcc/simplify-rtx.c	2007-10-02 13:05:44.000000000 +0100
@@ -2562,7 +2562,7 @@ simplify_binary_operation_1 (enum rtx_co
 	  && (unsigned HOST_WIDE_INT) INTVAL (trueop0) == GET_MODE_MASK (mode)
 	  && ! side_effects_p (op1))
 	return op0;
-      break;
+      goto check_truncated_shift;
 
     case ASHIFT:
     case SS_ASHIFT:
@@ -2571,6 +2571,13 @@ simplify_binary_operation_1 (enum rtx_co
 	return op0;
       if (trueop0 == CONST0_RTX (mode) && ! side_effects_p (op1))
 	return op0;
+    check_truncated_shift:
+      if (SHIFT_COUNT_TRUNCATED && GET_CODE (op1) == CONST_INT)
+	{
+	  val = INTVAL (op1) & (GET_MODE_BITSIZE (mode) - 1);
+	  if (val != INTVAL (op1))
+	    return simplify_gen_binary (code, mode, op0, GEN_INT (val));
+	}
       break;
 
     case LSHIFTRT:
@@ -2593,7 +2600,7 @@ simplify_binary_operation_1 (enum rtx_co
 	    return simplify_gen_relational (EQ, mode, imode,
 					    XEXP (op0, 0), const0_rtx);
 	}
-      break;
+      goto check_truncated_shift;
 
     case SMIN:
       if (width <= HOST_BITS_PER_WIDE_INT

