
Further emit_group_store patch


This further patch fixes the compat test failures seen on SPARC64
following HJ's patches for PRs 36701 and 37318.

A struct containing a _Complex char was passed to a function in the
most significant two bytes of a 64-bit register.  The code in
emit_group_store that I move in this patch shifted the value down to
the least significant two bytes, with a comment justifying this by
reference to store_bit_field.  However, the store did not in fact go
through store_bit_field: instead, the whole DImode register was stored
to an intermediate DImode memory location, which was then moved into
the CQImode CONCAT destination.
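
For reference, a minimal example of the kind of argument involved (the
exact structure types in the failing compat tests may differ; this is
only an illustration):

  struct s { _Complex char c; };

  extern void callee (struct s);

  void
  caller (struct s x)
  {
    /* On SPARC64, the two bytes of x are passed in the most
       significant bytes of a 64-bit register.  */
    callee (x);
  }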

When going via memory like this, the value should always be aligned to
the lowest-addressed bytes - most significant if big-endian, least
significant if little-endian.  As far as I can tell, there are at
present no little-endian targets where BLOCK_REG_PADDING would require
a shift to move a structure, or the trailing part of one, from the
most significant to the least significant bits of a register, so the
shift is never correct when going via memory.  I have therefore moved
the shifting code after the code that may go via memory and then
"break;", so that it is not used in the cases where it is incorrect.
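
As a sketch of the endianness argument (illustrative only, not part of
the patch): if the whole 64-bit register is spilled to a stack slot
and the destination then reads the lowest-addressed bytes of that
slot, a fragment sitting in the most significant bytes of the register
is already in the right place on a big-endian host, and shifting it
down first would move it away from the bytes that are read back:

  #include <stdio.h>
  #include <stdint.h>
  #include <string.h>

  int
  main (void)
  {
    /* Pretend the two-byte fragment 0x1122 sits in the most
       significant bytes of a 64-bit register, as on SPARC64.  */
    uint64_t reg = 0x1122000000000000ULL;
    unsigned char slot[8];
    uint16_t frag;

    memcpy (slot, &reg, 8);   /* Spill the whole register to a stack slot.  */
    memcpy (&frag, slot, 2);  /* The destination reads the lowest-addressed bytes.  */
    printf ("fragment read back: 0x%04x\n", (unsigned int) frag);
    /* On a big-endian host this prints 0x1122: the fragment is
       recovered with no shift.  Shifting it to the least significant
       bytes first would instead leave 0x0000 at the low addresses.  */
    return 0;
  }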

Bootstrapped with no regressions on i686-pc-linux-gnu.  Tested with no
regressions with a cross to sparc64-linux-gnu, where it fixes

FAIL: gcc.dg/compat/struct-by-value-11 c_compat_x_tst.o-c_compat_y_tst.o execute
FAIL: gcc.dg/compat/struct-by-value-12 c_compat_x_tst.o-c_compat_y_tst.o execute
FAIL: tmpdir-gcc.dg-struct-layout-1/t027 c_compat_x_tst.o-c_compat_y_tst.o execute
FAIL: 27_io/basic_filebuf/close/char/4879.cc execution test

(I don't know why that libstdc++ test would be fixed, but repeated
testing makes me confident that it is not a test that passes or fails
at random; it was reliably failing before this patch.)  Also tested
with no regressions with a cross to arm-none-eabi.  OK to commit?

Testing on other targets would also be useful, especially SPU and PA,
where compat failures in this general area have also been reported.

2008-09-17  Joseph Myers  <joseph@codesourcery.com>

	* expr.c (emit_group_store): Do not shift before moving via a
	stack slot.

Index: expr.c
===================================================================
--- expr.c	(revision 139909)
+++ expr.c	(working copy)
@@ -2039,33 +2039,17 @@
       HOST_WIDE_INT bytepos = INTVAL (XEXP (XVECEXP (src, 0, i), 1));
       enum machine_mode mode = GET_MODE (tmps[i]);
       unsigned int bytelen = GET_MODE_SIZE (mode);
+      unsigned int adj_bytelen = bytelen;
       rtx dest = dst;
 
       /* Handle trailing fragments that run over the size of the struct.  */
       if (ssize >= 0 && bytepos + (HOST_WIDE_INT) bytelen > ssize)
-	{
-	  /* store_bit_field always takes its value from the lsb.
-	     Move the fragment to the lsb if it's not already there.  */
-	  if (
-#ifdef BLOCK_REG_PADDING
-	      BLOCK_REG_PADDING (GET_MODE (orig_dst), type, i == start)
-	      == (BYTES_BIG_ENDIAN ? upward : downward)
-#else
-	      BYTES_BIG_ENDIAN
-#endif
-	      )
-	    {
-	      int shift = (bytelen - (ssize - bytepos)) * BITS_PER_UNIT;
-	      tmps[i] = expand_shift (RSHIFT_EXPR, mode, tmps[i],
-				      build_int_cst (NULL_TREE, shift),
-				      tmps[i], 0);
-	    }
-	  bytelen = ssize - bytepos;
-	}
+	adj_bytelen = ssize - bytepos;
 
       if (GET_CODE (dst) == CONCAT)
 	{
-	  if (bytepos + bytelen <= GET_MODE_SIZE (GET_MODE (XEXP (dst, 0))))
+	  if (bytepos + adj_bytelen
+	      <= GET_MODE_SIZE (GET_MODE (XEXP (dst, 0))))
 	    dest = XEXP (dst, 0);
 	  else if (bytepos >= GET_MODE_SIZE (GET_MODE (XEXP (dst, 0))))
 	    {
@@ -2103,6 +2087,27 @@
 	    }
 	}
 
+      if (ssize >= 0 && bytepos + (HOST_WIDE_INT) bytelen > ssize)
+	{
+	  /* store_bit_field always takes its value from the lsb.
+	     Move the fragment to the lsb if it's not already there.  */
+	  if (
+#ifdef BLOCK_REG_PADDING
+	      BLOCK_REG_PADDING (GET_MODE (orig_dst), type, i == start)
+	      == (BYTES_BIG_ENDIAN ? upward : downward)
+#else
+	      BYTES_BIG_ENDIAN
+#endif
+	      )
+	    {
+	      int shift = (bytelen - (ssize - bytepos)) * BITS_PER_UNIT;
+	      tmps[i] = expand_shift (RSHIFT_EXPR, mode, tmps[i],
+				      build_int_cst (NULL_TREE, shift),
+				      tmps[i], 0);
+	    }
+	  bytelen = adj_bytelen;
+	}
+
       /* Optimize the access just a bit.  */
       if (MEM_P (dest)
 	  && (! SLOW_UNALIGNED_ACCESS (mode, MEM_ALIGN (dest))

-- 
Joseph S. Myers
joseph@codesourcery.com

