This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.
RFA: Fix rtl-optimization/22258
- From: Joern Rennecke <joern.rennecke@st.com>
- To: gcc-patches@gcc.gnu.org
- Date: Thu, 30 Jun 2005 20:25:29 +0100
- Subject: RFA: Fix rtl-optimization/22258
The problem in rtl-optimization/22258 is that we combine an instruction that
requires a spill into a return-value copy, i.e. into a copy to a likely-spilled
register.
If any part of the return value is live during i1, i2 and/or i3, it will be
live in i3. It is therefore sufficient to check liveness for i3.
For a value that can be held in a single machine mode, a HARD_REG_SET
would be overkill, so I use a bit mask in a single int; the GNU coding
standards say we can assume an int contains at least 32 bits on any of our
host platforms.
Currently bootstrapping / regtesting on i686-pc-linux-gnu.
2005-06-30  Jörn Rennecke  <joern.rennecke@st.com>
PR rtl-optimization/22258
* combine.c (likely_spilled_retval_p): New function.
(try_combine): Use it.
Index: combine.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/combine.c,v
retrieving revision 1.495
diff -p -r1.495 combine.c
*** combine.c 25 Jun 2005 01:59:32 -0000 1.495
--- combine.c 30 Jun 2005 19:09:57 -0000
*************** cant_combine_insn_p (rtx insn)
*** 1555,1560 ****
--- 1555,1615 ----
return 0;
}
+ /* Return nonzero iff part of the return value is live during INSN, and
+ it is likely spilled. This can happen when more than one insn is needed
+ to copy the return value, e.g. when we consider combining into the
+ second copy insn for a complex value. */
+
+ static int
+ likely_spilled_retval_p (rtx insn)
+ {
+ rtx use = BB_END (this_basic_block);
+ rtx reg, p, set;
+ unsigned regno, nregs, p_regno, p_nregs;
+ /* We assume here that no machine mode needs more than 32 hard registers. */
+ unsigned mask, p_mask;
+
+ if (!NONJUMP_INSN_P (use) || GET_CODE (PATTERN (use)) != USE)
+ return 0;
+ reg = XEXP (PATTERN (use), 0);
+ if (!REG_P (reg) || !FUNCTION_VALUE_REGNO_P (REGNO (reg)))
+ return 0;
+ regno = REGNO (reg);
+ nregs = hard_regno_nregs[regno][GET_MODE (reg)];
+ if (nregs == 1)
+ return 0;
+ mask = (1U << nregs) - 1;
+ /* Disregard parts of the return value that are set later. */
+ for (p = PREV_INSN (use); p != insn; p = PREV_INSN (p))
+ {
+ set = single_set (p);
+ if (!set || !REG_P (SET_DEST (set)))
+ continue;
+ p_regno = REGNO (SET_DEST (set));
+ if (p_regno >= regno + nregs)
+ continue;
+ p_nregs = hard_regno_nregs[p_regno][GET_MODE (SET_DEST (set))];
+ if (p_regno + p_nregs <= regno)
+ continue;
+ p_mask = (1U << p_nregs) - 1;
+ if (p_regno < regno)
+ p_mask >>= regno - p_regno;
+ else
+ p_mask <<= p_regno - regno;
+ mask &= ~p_mask;
+ }
+ /* Check if any of the (probably) live return value registers is
+ likely spilled. */
+ nregs --;
+ do
+ {
+ if ((mask & 1 << nregs)
+ && CLASS_LIKELY_SPILLED_P (REGNO_REG_CLASS (regno + nregs)))
+ return 1;
+ } while (nregs--);
+ return 0;
+ }
+
/* Adjust INSN after we made a change to its destination.
Changing the destination can invalidate notes that say something about
*************** try_combine (rtx i3, rtx i2, rtx i1, int
*** 1642,1647 ****
--- 1697,1703 ----
if (cant_combine_insn_p (i3)
|| cant_combine_insn_p (i2)
|| (i1 && cant_combine_insn_p (i1))
+ || likely_spilled_retval_p (i3)
/* We also can't do anything if I3 has a
REG_LIBCALL note since we don't want to disrupt the contiguity of a
libcall. */