This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.


Re: RFA: Fix rtl-optimization/22258


Richard Henderson wrote:

> On Thu, Jun 30, 2005 at 08:25:29PM +0100, Joern RENNECKE wrote:
>
>> + /* Disregard parts of the return value that are set later.  */
>> + for (p = PREV_INSN (use); p != insn; p = PREV_INSN (p))
>> +   {
>> +     set = single_set (p);
>> +     if (!set || !REG_P (SET_DEST (set)))
>> +       continue;
>
> I really don't like you creating a specialized lifetime analyzer.
> Especially one that's so blatantly wrong, completely ignoring
> anything that's not single-set.


Well, return value copies are generally made up of single-sets.  And if we
miss a part (or even all of them), all that could happen is that we suppress
an instruction combination.
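
To make the scenario concrete, here is a minimal sketch.  The function is
hypothetical (it is not taken from the PR), and the comment's description of
the expanded RTL is an assumption based on the discussion in this thread; it
merely shows the kind of source whose return value needs more than one copy
insn:

/* Hypothetical example: a return value wider than one hard register.
   On a target that returns _Complex double in a group of word-sized hard
   registers, the return value copy is assumed to expand to a CLOBBER of
   the whole register group followed by one single-set copy per part --
   the multi-insn return value copy discussed here, into whose second or
   later copy insn combine must not substitute.  */
_Complex double
negate (_Complex double z)
{
  return -z;
}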


> Even if we decide to go with a function like you suggest, you should
> use one of the existing analyzers: note_stores or propagate_one_insn.


Although I don't think this is necessary, it is not too expensive to use
note_stores: the patch grows by a mere 20%.  The way I used it will also
consider CLOBBERs, like the ones you get at the start of a complex return
value copy.  This might be useful because then the mask gets cleared even
if we couldn't grok the partial copies because of funny SUBREGs.
I made two other modifications:
- The masks are calculated so that a value occupying 32 hard registers only
  gives rise to a shift count of 31 (see the sketch after this list).
- The backwards search stops when the mask becomes empty.  That ensures
  that we don't spend lots of time scanning from the end of a large basic
  block towards an instruction that is actually well before the start of
  the return value copy.
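
Since the mask arithmetic is easy to get wrong, here is a standalone sketch
of it; the helper names are made up for illustration, the real code is
likely_spilled_retval_1 / likely_spilled_retval_p in the patch below.
(2U << (nregs - 1)) - 1 produces the low nregs bits with a shift count of at
most 31, whereas (1U << nregs) - 1 would shift by 32 for a value occupying
32 hard registers, which is undefined in C; clearing the bits for a stored
sub-range mirrors the note_stores callback:

#include <assert.h>

/* Hypothetical standalone model of the mask handling.  Bit i of the mask
   stands for hard register info_regno + i of the return value.  */

/* Mask with the low NREGS bits set; nregs == 32 gives a shift count of 31,
   never an undefined shift by 32.  */
static unsigned
retval_mask (unsigned nregs)
{
  return (2U << (nregs - 1)) - 1;
}

/* Clear from MASK the bits for a store to NREGS hard registers starting at
   REGNO, where the mask describes the registers starting at INFO_REGNO.  */
static unsigned
clear_stored_regs (unsigned mask, unsigned info_regno,
                   unsigned regno, unsigned nregs)
{
  unsigned new_mask = retval_mask (nregs);
  if (regno < info_regno)
    new_mask >>= info_regno - regno;
  else
    new_mask <<= regno - info_regno;
  return mask & ~new_mask;
}

int
main (void)
{
  /* A value occupying 32 hard registers gives an all-ones mask.  */
  assert (retval_mask (32) == 0xffffffffU);
  /* Return value in hard regs 4..7; regs 6 and 7 are set later, so only
     the bits for regs 4 and 5 stay in the mask.  */
  assert (clear_stored_regs (retval_mask (4), 4, 6, 2) == 0x3U);
  return 0;
}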


Bootstrapping / regtesting on i686-pc-linux-gnu.
2005-06-30  Jörn Rennecke  <joern.rennecke@st.com>

	PR rtl-optimization/22258
	* combine.c (likely_spilled_retval_1, likely_spilled_retval_p):
	New functions.
	(try_combine): Use likely_spilled_retval_p.

Index: combine.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/combine.c,v
retrieving revision 1.495
diff -p -r1.495 combine.c
*** combine.c	25 Jun 2005 01:59:32 -0000	1.495
--- combine.c	1 Jul 2005 16:42:00 -0000
*************** cant_combine_insn_p (rtx insn)
*** 1555,1560 ****
--- 1555,1639 ----
    return 0;
  }
  
+ struct likely_spilled_retval_info
+ {
+   unsigned regno, nregs;
+   unsigned mask;
+ };
+ 
+ /* Called via note_stores by likely_spilled_retval_p.  Remove from info->mask
+    hard registers that are known to be written to / clobbered in full.  */
+ static void
+ likely_spilled_retval_1 (rtx x, rtx set, void *data)
+ {
+   struct likely_spilled_retval_info *info = data;
+   unsigned regno, nregs;
+   unsigned new_mask;
+ 
+   if (!REG_P (XEXP (set, 0)))
+     return;
+   regno = REGNO (x);
+   if (regno >= info->regno + info->nregs)
+     return;
+   nregs = hard_regno_nregs[regno][GET_MODE (x)];
+   if (regno + nregs <= info->regno)
+     return;
+   new_mask = (2U << (nregs - 1)) - 1;
+   if (regno < info->regno)
+     new_mask >>= info->regno - regno;
+   else
+     new_mask <<= regno - info->regno;
+   info->mask &= ~new_mask;
+ }
+ 
+ /* Return nonzero iff part of the return value is live during INSN, and
+    it is likely spilled.  This can happen when more than one insn is needed
+    to copy the return value, e.g. when we consider combining into the
+    second copy insn for a complex value.  */
+ 
+ static int
+ likely_spilled_retval_p (rtx insn)
+ {
+   rtx use = BB_END (this_basic_block);
+   rtx reg, p;
+   unsigned regno, nregs;
+   /* We assume here that no machine mode needs more than
+      32 hard registers when the value overlaps with a register
+      for which FUNCTION_VALUE_REGNO_P is true.  */
+   unsigned mask;
+   struct likely_spilled_retval_info info;
+ 
+   if (!NONJUMP_INSN_P (use) || GET_CODE (PATTERN (use)) != USE)
+     return 0;
+   reg = XEXP (PATTERN (use), 0);
+   if (!REG_P (reg) || !FUNCTION_VALUE_REGNO_P (REGNO (reg)))
+     return 0;
+   regno = REGNO (reg);
+   nregs = hard_regno_nregs[regno][GET_MODE (reg)];
+   if (nregs == 1)
+     return 0;
+   mask = (2U << (nregs - 1)) - 1;
+ 
+   /* Disregard parts of the return value that are set later.  */
+   info.regno = regno;
+   info.nregs = nregs;
+   info.mask = mask;
+   for (p = PREV_INSN (use); info.mask && p != insn; p = PREV_INSN (p))
+     note_stores (PATTERN (p), likely_spilled_retval_1, &info);
+   mask = info.mask;
+ 
+   /* Check if any of the (probably) live return value registers is
+      likely spilled.  */
+   nregs --;
+   do
+     {
+       if ((mask & 1 << nregs)
+ 	  && CLASS_LIKELY_SPILLED_P (REGNO_REG_CLASS (regno + nregs)))
+ 	return 1;
+     } while (nregs--);
+   return 0;
+ }
+ 
  /* Adjust INSN after we made a change to its destination.
  
     Changing the destination can invalidate notes that say something about
*************** try_combine (rtx i3, rtx i2, rtx i1, int
*** 1642,1647 ****
--- 1721,1727 ----
    if (cant_combine_insn_p (i3)
        || cant_combine_insn_p (i2)
        || (i1 && cant_combine_insn_p (i1))
+       || likely_spilled_retval_p (i3)
        /* We also can't do anything if I3 has a
  	 REG_LIBCALL note since we don't want to disrupt the contiguity of a
  	 libcall.  */
