
[PATCH, LRA] PR71680, Reload of slow mems


This is a patch for a problem in lra, triggered by the rs6000
backend not allowing SImode in floating point registers.  First, some
analysis.
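
For context, the triggering source is essentially the reduced testcase added
at the end of the patch (a trimmed sketch, with the loop and call stripped):
the packed struct leaves the float field byte-aligned, so its bits are read
with a slow unaligned SImode mem access, while the (int) conversion wants
them in a floating point register for fctiwz.

#pragma pack(1)
struct
{
  float f0;
} a;

int
bar (void)		/* hypothetical wrapper; the real testcase loops calling foo.  */
{
  return (int) a.f0;	/* SImode load of a.f0 feeds fix -> fctiwz.  */
}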

pr71680.c compiled with -m64 -mcpu=power8 -O1 -mlra; ira dump showing the two
problem insns:
(insn 7 5 26 3 (set (reg:SI 159 [ a ])
        (mem/c:SI (reg/f:DI 158) [1 a+0 S4 A8])) pr71680.c:18 464 {*movsi_internal1}
     (expr_list:REG_EQUIV (mem/c:SI (reg/f:DI 158) [1 a+0 S4 A8])
        (nil)))
(insn 26 7 27 3 (set (reg:DI 162)
        (unspec:DI [
                (fix:SI (subreg:SF (reg:SI 159 [ a ]) 0))
            ] UNSPEC_FCTIWZ)) pr71680.c:18 372 {fctiwz_sf}
     (expr_list:REG_DEAD (reg:SI 159 [ a ])
        (nil)))
Insn 26 requires that reg 159 be of class FLOAT_REGS.

first lra action:
deleting insn with uid = 7.
Changing pseudo 159 in operand 1 of insn 26 on equiv [r158:DI]
      Creating newreg=164, assigning class ALL_REGS to subreg reg r164
   26: r162:DI=unspec[fix(r164:SI#0)] 7
      REG_DEAD r159:SI
    Inserting subreg reload before:
   30: r164:SI=[r158:DI]
[snip]
      Change to class FLOAT_REGS for r164

Well, that didn't do much.  lra tried the equiv mem, found that it didn't
work, and had to reload, effectively getting back to the two original
insns but with r159 replaced by r164.  simplify_operand_subreg did not do
anything in this case because SLOW_UNALIGNED_ACCESS was true (wrongly so
for power8, but that's beside the point); the guard involved is quoted
below.  So now we have, using abbreviated rtl notation:
r164:SI=[r158:DI]
r162:DI=unspec[fix(r164:SI)]
The problem here is that the first insn isn't valid, due to the rs6000
backend not supporting SImode in fprs, and r164 must be an fpr to make
the second insn valid.
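
For reference, the guard in simplify_operand_subreg that declines to touch
the mem subreg is the condition removed by the first hunk below:

  if (MEM_P (reg)
      && (! SLOW_UNALIGNED_ACCESS (mode, MEM_ALIGN (reg))
          || MEM_ALIGN (reg) >= GET_MODE_ALIGNMENT (mode)))

Here mode is SFmode (the subreg's outer mode) and MEM_ALIGN (reg) is only
8 bits (the A8 above), so neither alternative holds, the block is skipped,
and lra falls back to the piecemeal reloads shown above.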

next lra action:
      Creating newreg=165 from oldreg=164, assigning class GENERAL_REGS to r165
   30: r165:SI=[r158:DI]
    Inserting insn reload after:
   31: r164:SI=r165:SI
so now we have
r165:SI=[r158:DI]
r164:SI=r165:SI
r162:DI=unspec[fix(r164:SI)]

This ought to be good on power8, except for one little thing.
r165 is GENERAL_REGS, so the first insn is good, a gpr load from mem.
r164 is FLOAT_REGS, making the last insn good, a fctiwz.
The second insn ought to be a sldi, mtvsrd, xscvspdpn combination
(sketched below), but that is only supported for SFmode.  So lra keeps
reloading the second insn, but in vain, because it never tries anything
other than SImode and, as noted above, SImode is not valid in fprs.
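
For the curious, the SFmode gpr -> fpr move that does exist on power8 is
roughly the following (the instruction names are the ones above; the
operand details are my sketch):

	sldi 9,9,32		# shift the SP bits into the high word
	mtvsrd 0,9		# direct move from gpr to vsx register
	xscvspdpn 0,0		# convert SP bits in word 0 to DP format

and since the backend does not allow SImode in fprs at all, there is
nothing equivalent for r164:SI=r165:SI to use.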

What this patch does is arrange to emit the two reloads needed for the
SLOW_UNALIGNED_ACCESS case at once, moving the subreg to the second
insn in order to switch modes, producing:

r164:SI=[r158:DI]
r165:SF=r164:SI#0
r162:DI=unspec[fix(r165:SF)]

I've also tidied a couple of other things:
1) "old" was unnecessary, as it merely duplicated "operand".
2) Rejecting mem subregs due to SLOW_UNALIGNED_ACCESS only makes sense
if the access in the original (inner) mode was fast.

Bootstrapped and regression tested powerpc64le-linux and
powerpc64-linux.  OK to apply?

	PR target/71680
	* lra-constraints.c (simplify_operand_subreg): Allow subreg
	mode for mem when SLOW_UNALIGNED_ACCESS if inner mode is also
	slow.  Emit two reloads for slow mem case, first loading in
	fast innermode, then converting to required mode.
testsuite/
	* gcc.target/powerpc/pr71680.c: New.

diff --git a/gcc/lra-constraints.c b/gcc/lra-constraints.c
index 45b6506..b7b30b1 100644
--- a/gcc/lra-constraints.c
+++ b/gcc/lra-constraints.c
@@ -1462,19 +1462,9 @@ simplify_operand_subreg (int nop, machine_mode reg_mode)
   reg = SUBREG_REG (operand);
   innermode = GET_MODE (reg);
   type = curr_static_id->operand[nop].type;
-  /* If we change address for paradoxical subreg of memory, the
-     address might violate the necessary alignment or the access might
-     be slow.  So take this into consideration.  We should not worry
-     about access beyond allocated memory for paradoxical memory
-     subregs as we don't substitute such equiv memory (see processing
-     equivalences in function lra_constraints) and because for spilled
-     pseudos we allocate stack memory enough for the biggest
-     corresponding paradoxical subreg.  */
-  if (MEM_P (reg)
-      && (! SLOW_UNALIGNED_ACCESS (mode, MEM_ALIGN (reg))
-	  || MEM_ALIGN (reg) >= GET_MODE_ALIGNMENT (mode)))
+  if (MEM_P (reg))
     {
-      rtx subst, old = *curr_id->operand_loc[nop];
+      rtx subst;
 
       alter_subreg (curr_id->operand_loc[nop], false);
       subst = *curr_id->operand_loc[nop];
@@ -1482,27 +1472,78 @@ simplify_operand_subreg (int nop, machine_mode reg_mode)
       if (! valid_address_p (innermode, XEXP (reg, 0),
 			     MEM_ADDR_SPACE (reg))
 	  || valid_address_p (GET_MODE (subst), XEXP (subst, 0),
-			      MEM_ADDR_SPACE (subst)))
-	return true;
-      else if ((get_constraint_type (lookup_constraint
-				     (curr_static_id->operand[nop].constraint))
-		!= CT_SPECIAL_MEMORY)
-	       /* We still can reload address and if the address is
-		  valid, we can remove subreg without reloading its
-		  inner memory.  */
-	       && valid_address_p (GET_MODE (subst),
-				   regno_reg_rtx
-				   [ira_class_hard_regs
-				    [base_reg_class (GET_MODE (subst),
-						     MEM_ADDR_SPACE (subst),
-						     ADDRESS, SCRATCH)][0]],
-				   MEM_ADDR_SPACE (subst)))
-	return true;
+			      MEM_ADDR_SPACE (subst))
+	  || ((get_constraint_type (lookup_constraint
+				    (curr_static_id->operand[nop].constraint))
+	       != CT_SPECIAL_MEMORY)
+	      /* We still can reload address and if the address is
+		 valid, we can remove subreg without reloading its
+		 inner memory.  */
+	      && valid_address_p (GET_MODE (subst),
+				  regno_reg_rtx
+				  [ira_class_hard_regs
+				   [base_reg_class (GET_MODE (subst),
+						    MEM_ADDR_SPACE (subst),
+						    ADDRESS, SCRATCH)][0]],
+				  MEM_ADDR_SPACE (subst))))
+	{
+	  /* If we change address for paradoxical subreg of memory, the
+	     address might violate the necessary alignment or the access might
+	     be slow.  So take this into consideration.  We should not worry
+	     about access beyond allocated memory for paradoxical memory
+	     subregs as we don't substitute such equiv memory (see processing
+	     equivalences in function lra_constraints) and because for spilled
+	     pseudos we allocate stack memory enough for the biggest
+	     corresponding paradoxical subreg.  */
+	  if (!SLOW_UNALIGNED_ACCESS (mode, MEM_ALIGN (reg))
+	      || SLOW_UNALIGNED_ACCESS (innermode, MEM_ALIGN (reg))
+	      || MEM_ALIGN (reg) >= GET_MODE_ALIGNMENT (mode))
+	    return true;
+
+	  /* INNERMODE is fast, MODE slow.  Reload the mem in INNERMODE.  */
+	  enum reg_class rclass
+	    = (enum reg_class) targetm.preferred_reload_class (reg, ALL_REGS);
+	  if (get_reload_reg (curr_static_id->operand[nop].type, innermode, reg,
+			      rclass, TRUE, "slow mem", &new_reg))
+	    {
+	      bool insert_before, insert_after;
+	      bitmap_set_bit (&lra_subreg_reload_pseudos, REGNO (new_reg));
+
+	      insert_before = (type != OP_OUT
+			       || GET_MODE_SIZE (innermode) > GET_MODE_SIZE (mode));
+	      insert_after = type != OP_IN;
+	      insert_move_for_subreg (insert_before ? &before : NULL,
+				      insert_after ? &after : NULL,
+				      reg, new_reg);
+	    }
+	  *curr_id->operand_loc[nop] = operand;
+	  SUBREG_REG (operand) = new_reg;
+
+	  /* Convert to MODE.  */
+	  reg = operand;
+	  rclass = (enum reg_class) targetm.preferred_reload_class (reg, ALL_REGS);
+	  if (get_reload_reg (curr_static_id->operand[nop].type, mode, reg,
+			      rclass, TRUE, "slow mem", &new_reg))
+	    {
+	      bool insert_before, insert_after;
+	      bitmap_set_bit (&lra_subreg_reload_pseudos, REGNO (new_reg));
+
+	      insert_before = type != OP_OUT;
+	      insert_after = type != OP_IN;
+	      insert_move_for_subreg (insert_before ? &before : NULL,
+				      insert_after ? &after : NULL,
+				      reg, new_reg);
+	    }
+	  *curr_id->operand_loc[nop] = new_reg;
+	  lra_process_new_insns (curr_insn, before, after,
+				 "Inserting slow mem reload");
+	  return true;
+	}
 
       /* If the address was valid and became invalid, prefer to reload
 	 the memory.  Typical case is when the index scale should
 	 correspond the memory.  */
-      *curr_id->operand_loc[nop] = old;
+      *curr_id->operand_loc[nop] = operand;
     }
   else if (REG_P (reg) && REGNO (reg) < FIRST_PSEUDO_REGISTER)
     {
diff --git a/gcc/testsuite/gcc.target/powerpc/pr71680.c b/gcc/testsuite/gcc.target/powerpc/pr71680.c
new file mode 100644
index 0000000..fe5260f
--- /dev/null
+++ b/gcc/testsuite/gcc.target/powerpc/pr71680.c
@@ -0,0 +1,19 @@
+/* { dg-do compile { target { powerpc*-*-* } } } */
+/* { dg-require-effective-target powerpc_vsx_ok } */
+/* { dg-skip-if "do not override -mcpu" { powerpc*-*-* } { "-mcpu=*" } { "-mcpu=power8" } } */
+/* { dg-options "-mcpu=power8 -O1 -mlra" } */
+
+#pragma pack(1)
+struct
+{
+  float f0;
+} a;
+
+extern void foo (int);
+
+int
+main (void)
+{
+  for (;;)
+    foo ((int) a.f0);
+}

-- 
Alan Modra
Australia Development Lab, IBM

