- From: Kenneth Zadeck <zadeck at naturalbridge dot com>
- To: Ian Lance Taylor <iant at google dot com>, Vladimir Makarov <vmakarov at redhat dot com>, "Park, Seongbae" <seongbae dot park at gmail dot com>, "Bonzini, Paolo" <bonzini at gnu dot org>, "Bergner, Peter" <bergner at vnet dot ibm dot com>, gcc-patches <gcc-patches at gcc dot gnu dot org>, "Zadeck, Kenneth" <zadeck at naturalbridge dot com>, "Pinski, Andrew" <andrew_pinski at playstation dot sony dot com>
- Date: Sat, 25 Aug 2007 21:31:03 -0400
- Subject: new interference graph builder.
This patch replaces the code in global.c that builds the interference
graph. The new technique differs from the old technique in several ways:
1) The new technique uses the DF_LIVE problem, which is the intersection
of the forwards live information and the backwards live information.
This is the dataflow problem that the rest of the back end of the
compiler uses. The existing implementation is based on a custom problem
(UREC) that was "converted" from global.c:make_accurate_live_analysis to
use the df framework, but was still a bastard problem.
A significant amount of this patch is the removal of the UREC dataflow
problem.
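As a rough picture of what that intersection buys (a minimal standalone
toy, not the df-problems.c code; the bitmasks, the regno assignments and
the name of the forward set are made up for illustration):

  #include <stdio.h>

  int
  main (void)
  {
    /* Backward liveness (LR) at the top of a block, and the forward
       information (roughly, which regs may have been initialized on
       some path reaching this point), as per-reg bitmasks.  */
    unsigned lr_in          = 0x2e;
    unsigned initialized_in = 0x3a;

    /* DF_LIVE is the per-block intersection of the two: a reg whose
       uses can never see an initialized value does not get dragged
       live all the way back to the block start.  */
    unsigned live_in = lr_in & initialized_in;

    printf ("DF_LIVE_IN = 0x%x\n", live_in);  /* 0x2a */
    return 0;
  }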
2) The new formulation uses a backwards scan rather than a forwards
scan. This is the way that every other compiler in the world builds the
interference graph, and for good reason. There is no use of REG_DEAD or
REG_UNUSED notes when building the graph. By using a backwards scan
this technique actually discovers fewer interferences:
consider the case where a multiword reg gets mapped into a series of
hard regs but only one of those hard regs is actually used. In a
forwards scan, any live register will interfere with ALL of those hard
regs. In a backwards scan, the uses are what cause regs to go live, so
only the hard regs actually used will create interferences.
There have been proposals to generate SUBREG_DEAD and SUBREG_UNUSED
notes, but the proper thing to do is a backwards scan. It may be that
the current reload/global stack is not up to taking advantage of this
yet, but vlad has plans for replacing the ra, and hopefully the new one
will be equipped to use this.
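For anyone who has not seen the scheme before, the shape of the backward
scan can be shown with a tiny standalone toy (nothing below is GCC code;
the miniature IR, the register numbers and the dense conflict matrix are
all made up):

  #include <stdio.h>

  #define NREGS  8
  #define NINSNS 4

  struct insn { int ndefs, defs[2]; int nuses, uses[2]; };

  static int conflict[NREGS][NREGS];

  int
  main (void)
  {
    /* r3 = r1 + r2;  r4 = r3;  r5 = r4 + r1;  use r5  */
    struct insn insns[NINSNS] = {
      { 1, {3}, 2, {1, 2} },
      { 1, {4}, 1, {3} },
      { 1, {5}, 2, {4, 1} },
      { 0, {0}, 1, {5} },
    };
    int live[NREGS] = { 0 };
    int i, j, k;

    for (i = NINSNS - 1; i >= 0; i--)
      {
        /* Each def conflicts with everything live across the insn.  */
        for (j = 0; j < insns[i].ndefs; j++)
          for (k = 0; k < NREGS; k++)
            if (live[k] && k != insns[i].defs[j])
              conflict[insns[i].defs[j]][k]
                = conflict[k][insns[i].defs[j]] = 1;

        /* Defs die going upwards ...  */
        for (j = 0; j < insns[i].ndefs; j++)
          live[insns[i].defs[j]] = 0;

        /* ... and uses become live, which is exactly where a dying
           reg is discovered without any REG_DEAD notes.  */
        for (j = 0; j < insns[i].nuses; j++)
          live[insns[i].uses[j]] = 1;
      }

    for (j = 0; j < NREGS; j++)   /* prints r1 <=> r3, r1 <=> r4 */
      for (k = j + 1; k < NREGS; k++)
        if (conflict[j][k])
          printf ("r%d <=> r%d\n", j, k);
    return 0;
  }

The real walk in ra-conflict.c:global_conflicts has the same structure,
only over df_scan's defs and uses, and with the subreg, clobber and
early clobber handling described here.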
3) Early clobbers are handled more precisely. In the UREC problem used
in the existing implementation, as well as in
global.c:make_accurate_live_analysis before the df commit, an early
clobber forces any input reg that could be allocated to the
early-clobbering output reg to be live starting at the beginning of the
basic block. This is insane, especially when optimizations have been
performed to try to create large basic blocks.
The new code implements the exact definition of early clobber: any
input reg that dies in the insn and could be allocated to the same
register as the early-clobber output is made to interfere with that
output.
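To make the rule concrete, here is a toy example of the situation an
early clobber constraint describes (not code from the patch; it assumes
an x86 target and GCC extended asm):

  #include <stdio.h>

  int
  main (void)
  {
    int in = 5, out;

    /* "=&r" marks OUT as early clobbered: it is written before IN has
       been read for the last time, so IN, which dies in this insn,
       must not be given the same hard register as OUT.  That single
       interference is what the new builder records, instead of
       extending IN's lifetime back to the top of the block.  */
    asm ("movl %1, %0\n\taddl %1, %0" : "=&r" (out) : "r" (in));

    printf ("%d\n", out);  /* prints 10 */
    return 0;
  }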
4) The interference graph builder does not scan insns itself; it uses the
df_scan information.
5) There is nothing like the record_conflicts call in the new code.
Chaitin showed that this was useless 20 years ago.
6) The code that sets the preferences has been moved into global.c itself,
since this will most likely die in vlad's new implementation.
I put the new conflict builder in a new file, ra-conflict.c, since this
code is likely to survive vlad's new ra; it seemed like this would make
that integration easier.
This code has been bootstrapped and regression tested on ppc-32, x86-64,
and ia-64.
Is this OK to commit?
Kenny
2007-08-25 Kenneth Zadeck <zadeck@naturalbridge.com>
* ra-conflict.c: New file.
* reload.c (push_reload, find_dummy_reload): Change DF_RA_LIVE
usage to DF_LIVE usage.
* rtlanal.c (subreg_nregs_with_regno): New function.
* regs.h (allocno, max_allocno, conflicts, allocno_row_words,
reg_allocno, EXECUTE_IF_SET_IN_ALLOCNO_SET): Moved from global.c.
(conflicts): Changed to be HOST_WIDE_INT.
* global.c (allocno, max_allocno, conflicts, allocno_row_words,
reg_allocno, EXECUTE_IF_SET_IN_ALLOCNO_SET): Moved to regs.h.
(SET_ALLOCNO_LIVE, CLEAR_ALLOCNO_LIVE): Moved to ra-conflict.c.
(regs_set, record_one_conflict, record_conflicts, mark_reg_store,
mark_reg_clobber, mark_reg_conflicts, mark_reg_death): Deleted.
(global_alloc): Turn off rescanning insns after call to
global_conflicts and added call to set_preferences.
(global_conflicts): Moved to ra-conflict.c.
(set_preferences_1, set_preferences): New functions.
(mirror_conflicts): Changed types for various variables.
(mark_elimination): Change DF_RA_LIVE
usage to DF_LIVE usage.
* local-alloc.c (update_equiv_regs): Change DF_RA_LIVE
usage to DF_LIVE usage.
(rest_of_handle_local_alloc): Changed urec problem to live
problem and do not turn off df rescanning.
* df.h (DF_UREC, DF_UREC_BB_INFO, DF_LIVE_TOP, DF_RA_LIVE_IN,
DF_RA_LIVE_TOP, DF_RA_LIVE_OUT, df_urec_bb_info, df_urec,
df_urec_add_problem, df_urec_get_bb_info): Removed.
(DF_CHAIN, DF_NOTE): Renumbered.
* init-regs.c (initialize_uninitialized_regs): Enhanced debugging
at -O1.
* rtl.h (subreg_nregs_with_regno): New function.
* df-problems.c (df_get_live_out, df_get_live_in,
df_get_live_top): Removed reference to DF_RA_LIVE.
(df_lr_reset, df_lr_transfer_function, df_live_free_bb_info,
df_live_alloc, df_live_reset, df_live_local_finalize,
df_live_free): Make top set only if different from in set.
(df_lr_top_dump, df_live_top_dump): Only print top set if
different from in set.
(df_lr_bb_local_compute): Removed unnecessary check.
(df_urec_problem_data, df_urec_set_bb_info, df_urec_free_bb_info,
df_urec_alloc, df_urec_mark_reg_change, earlyclobber_regclass,
df_urec_check_earlyclobber, df_urec_mark_reg_use_for_earlyclobber,
df_urec_mark_reg_use_for_earlyclobber_1, df_urec_bb_local_compute,
df_urec_local_compute, df_urec_init, df_urec_local_finalize,
df_urec_confluence_n, df_urec_transfer_function, df_urec_free,
df_urec_top_dump, df_urec_bottom_dump, problem_UREC,
df_urec_add_problem): Removed.
* Makefile.in (ra-conflict.o): New dependencies.
* reload1.c (compute_use_by_pseudos): Change DF_RA_LIVE
usage to DF_LIVE usage.
Index: ra-conflict.c
===================================================================
--- ra-conflict.c (revision 0)
+++ ra-conflict.c (revision 0)
@@ -0,0 +1,789 @@
+/* Allocate registers for pseudo-registers that span basic blocks.
+ Copyright (C) 2007 Free Software Foundation, Inc.
+ Contributed by Kenneth Zadeck <zadeck@naturalbridge.com>
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify it under
+the terms of the GNU General Public License as published by the Free
+Software Foundation; either version 3, or (at your option) any later
+version.
+
+GCC is distributed in the hope that it will be useful, but WITHOUT ANY
+WARRANTY; without even the implied warranty of MERCHANTABILITY or
+FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3. If not see
+<http://www.gnu.org/licenses/>. */
+
+
+#include "config.h"
+#include "system.h"
+#include "coretypes.h"
+#include "tm.h"
+#include "machmode.h"
+#include "hard-reg-set.h"
+#include "rtl.h"
+#include "tm_p.h"
+#include "flags.h"
+#include "regs.h"
+#include "function.h"
+#include "insn-config.h"
+#include "recog.h"
+#include "reload.h"
+#include "output.h"
+#include "toplev.h"
+#include "tree-pass.h"
+#include "timevar.h"
+#include "df.h"
+#include "vecprim.h"
+
+/* Test, set or clear bit number I in allocnos_live,
+ a bit vector indexed by allocno. */
+
+#define SET_ALLOCNO_LIVE(A, I) \
+ (A[(unsigned) (I) / HOST_BITS_PER_WIDE_INT] \
+ |= ((HOST_WIDE_INT) 1 << ((unsigned) (I) % HOST_BITS_PER_WIDE_INT)))
+
+#define CLEAR_ALLOCNO_LIVE(A, I) \
+ (A [(unsigned) (I) / HOST_BITS_PER_WIDE_INT] \
+ &= ~((HOST_WIDE_INT) 1 << ((unsigned) (I) % HOST_BITS_PER_WIDE_INT)))
+
+#define GET_ALLOCNO_LIVE(A, I) \
+ (A [(unsigned) (I) / HOST_BITS_PER_WIDE_INT] \
+ & ((HOST_WIDE_INT) 1 << ((unsigned) (I) % HOST_BITS_PER_WIDE_INT)))
+
+/* Externs defined in regs.h. */
+
+int max_allocno;
+struct allocno *allocno;
+HOST_WIDE_INT *conflicts;
+int allocno_row_words;
+int *reg_allocno;
+
+typedef struct df_ref * df_ref_t;
+DEF_VEC_P(df_ref_t);
+DEF_VEC_ALLOC_P(df_ref_t,heap);
+
+/* Add a conflict between R1 and R2. */
+
+static void
+record_one_conflict_between_regnos (enum machine_mode mode1, int r1,
+ enum machine_mode mode2, int r2)
+{
+ if (dump_file)
+ fprintf (dump_file, " rocbr adding %d<=>%d\n", r1, r2);
+ if (reg_allocno[r1] >= 0 && reg_allocno[r2] >= 0)
+ {
+ int tr1 = reg_allocno[r1];
+ int tr2 = reg_allocno[r2];
+ int ialloc_prod = tr1 * allocno_row_words;
+
+ SET_ALLOCNO_LIVE ((&conflicts[ialloc_prod]), tr2);
+ }
+ else if (reg_allocno[r1] >= 0)
+ {
+ int tr1 = reg_allocno[r1];
+
+ if (r2 < FIRST_PSEUDO_REGISTER)
+ add_to_hard_reg_set (&allocno[tr1].hard_reg_conflicts, mode2, r2);
+ }
+ else if (reg_allocno[r2] >= 0)
+ {
+ int tr2 = reg_allocno[r2];
+
+ if (r1 < FIRST_PSEUDO_REGISTER)
+ add_to_hard_reg_set (&allocno[tr2].hard_reg_conflicts, mode1, r1);
+ }
+
+ /* Now, recursively handle the reg_renumber cases. */
+ if (reg_renumber[r1] >= 0)
+ record_one_conflict_between_regnos (mode1, reg_renumber[r1], mode2, r2);
+
+ if (reg_renumber[r2] >= 0)
+ record_one_conflict_between_regnos (mode1, r1, mode2, reg_renumber[r2]);
+}
+
+
+/* Record a conflict between register REGNO and everything currently
+ live. REGNO must not be a pseudo reg that was allocated by
+ local_alloc; such numbers must be translated through reg_renumber
+ before calling here. */
+
+static void
+record_one_conflict (HOST_WIDE_INT *allocnos_live,
+ HARD_REG_SET *hard_regs_live, int regno)
+{
+ int i;
+
+ if (regno < FIRST_PSEUDO_REGISTER)
+ /* When a hard register becomes live, record conflicts with live
+ pseudo regs. */
+ EXECUTE_IF_SET_IN_ALLOCNO_SET (allocnos_live, i,
+ {
+ SET_HARD_REG_BIT (allocno[i].hard_reg_conflicts, regno);
+ if (dump_file)
+ fprintf (dump_file, " roc adding %d<=>%d\n", allocno[i].reg, regno);
+ });
+ else
+ /* When a pseudo-register becomes live, record conflicts first
+ with hard regs, then with other pseudo regs. */
+ {
+ int ialloc = reg_allocno[regno];
+ int ialloc_prod = ialloc * allocno_row_words;
+
+ if (dump_file)
+ {
+ fprintf (dump_file, " roc adding %d<=>(", regno);
+ for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
+ if (TEST_HARD_REG_BIT (*hard_regs_live, i)
+ && !TEST_HARD_REG_BIT (allocno[ialloc].hard_reg_conflicts, i))
+ fprintf (dump_file, "%d ", i);
+ fprintf (dump_file, ")\n");
+ }
+
+ IOR_HARD_REG_SET (allocno[ialloc].hard_reg_conflicts, *hard_regs_live);
+
+ for (i = allocno_row_words - 1; i >= 0; i--)
+ conflicts[ialloc_prod + i] |= allocnos_live[i];
+ }
+}
+
+
+/* Handle the case where REG is set by the insn being scanned, during
+ the backward scan to accumulate conflicts. Record a conflict with
+ all other registers already live.
+
+ REG might actually be something other than a register; if so, we do
+ nothing. */
+
+static void
+mark_reg_store (HOST_WIDE_INT *allocnos_live,
+ HARD_REG_SET *hard_regs_live, struct df_ref *ref)
+{
+ rtx reg = DF_REF_REG (ref);
+ unsigned int regno = DF_REF_REGNO (ref);
+ enum machine_mode mode = GET_MODE (reg);
+
+ /* Either this is one of the max_allocno pseudo regs not allocated,
+ or it is or has a hardware reg. First handle the pseudo-regs. */
+ if (regno >= FIRST_PSEUDO_REGISTER && reg_allocno[regno] >= 0)
+ record_one_conflict (allocnos_live, hard_regs_live, regno);
+
+ if (reg_renumber[regno] >= 0)
+ regno = reg_renumber[regno];
+
+ /* Handle hardware regs (and pseudos allocated to hard regs). */
+ if (regno < FIRST_PSEUDO_REGISTER && ! fixed_regs[regno])
+ {
+ unsigned int start = regno;
+ unsigned int last = end_hard_regno (mode, regno);
+ if (GET_CODE (reg) == SUBREG)
+ {
+ start += subreg_regno_offset (regno, GET_MODE (SUBREG_REG (reg)),
+ SUBREG_BYTE (reg), GET_MODE (reg));
+ last = start + subreg_nregs_with_regno (regno, reg);
+ }
+
+ regno = start;
+ while (regno < last)
+ record_one_conflict (allocnos_live, hard_regs_live, regno++);
+ }
+}
+
+
+/* This function finds and stores register classes that could be early
+ clobbered for INSN in EARLYCLOBBER_REGCLASS. If any earlyclobber
+ classes are found, the function returns TRUE; in all other cases it
+ returns FALSE. */
+
+static bool
+check_insn_for_earlyclobber (bitmap earlyclobber_regclass, rtx insn)
+{
+ int opno;
+ bool found = false;
+
+ bitmap_clear (earlyclobber_regclass);
+ extract_insn (insn);
+
+ for (opno = 0; opno < recog_data.n_operands; opno++)
+ {
+ char c;
+ bool amp_p;
+ enum reg_class class;
+ const char *p = recog_data.constraints[opno];
+
+ class = NO_REGS;
+ amp_p = false;
+ for (;;)
+ {
+ c = *p;
+ switch (c)
+ {
+ case '=': case '+': case '?':
+ case '#': case '!':
+ case '*': case '%':
+ case 'm': case '<': case '>': case 'V': case 'o':
+ case 'E': case 'F': case 'G': case 'H':
+ case 's': case 'i': case 'n':
+ case 'I': case 'J': case 'K': case 'L':
+ case 'M': case 'N': case 'O': case 'P':
+ case 'X':
+ case '0': case '1': case '2': case '3': case '4':
+ case '5': case '6': case '7': case '8': case '9':
+ /* These don't say anything we care about. */
+ break;
+
+ case '&':
+ amp_p = true;
+ break;
+ case '\0':
+ case ',':
+ if (amp_p && class != NO_REGS)
+ {
+ found = true;
+ bitmap_set_bit (earlyclobber_regclass, (int)class);
+ if (dump_file)
+ fprintf (dump_file, " found early clobber class %d\n", (int)class);
+ }
+
+ amp_p = false;
+ class = NO_REGS;
+ break;
+
+ case 'r':
+ class = GENERAL_REGS;
+ break;
+
+ default:
+ class = REG_CLASS_FROM_CONSTRAINT (c, p);
+ break;
+ }
+ if (c == '\0')
+ break;
+ p += CONSTRAINT_LEN (c, p);
+ }
+ }
+
+ return found;
+}
+
+
+/* For each regclass RC in EARLYCLOBBER_REGCLASS, add an interference
+ between regs in DEF_REC (that may use RC), and regs in DYING_REGS
+ (that may use RC). */
+
+static void
+set_conflicts_for_earlyclobber (struct df_ref **def_rec,
+ bitmap earlyclobber_regclass,
+ VEC (df_ref_t, heap) *dying_regs)
+{
+ unsigned int clobberclass_i;
+ bitmap_iterator bi;
+
+ /* First iterate over the set of reg_classes that are marked as
+ earlyclobber in the md. */
+
+ EXECUTE_IF_SET_IN_BITMAP (earlyclobber_regclass, 0, clobberclass_i, bi)
+ {
+ int j;
+ enum reg_class rc = (enum reg_class)clobberclass_i;
+
+ /* Now, check each of the dying_regs to see which ones might be
+ assigned to a regclass that has been marked as
+ earlyclobber. */
+ for (j = VEC_length (df_ref_t, dying_regs) - 1; j >= 0; j--)
+ {
+ struct df_ref *use = VEC_index (df_ref_t, dying_regs, j);
+ unsigned int uregno = DF_REF_REGNO (use);
+ enum machine_mode umode = GET_MODE (DF_REF_REG (use));
+ enum reg_class pref_class, alt_class;
+
+ if (uregno >= FIRST_PSEUDO_REGISTER)
+ {
+ pref_class = reg_preferred_class (uregno);
+ alt_class = reg_alternate_class (uregno);
+ if (reg_classes_intersect_p (rc, pref_class)
+ || (rc != NO_REGS && reg_classes_intersect_p (rc, alt_class)))
+ {
+ /* Now search the defs to see which one(s) match the
+ earlyclobber regclass and add an interference between
+ that reg and the dying reg. */
+ for (; *def_rec; def_rec++)
+ {
+ struct df_ref *def = *def_rec;
+ unsigned int dregno = DF_REF_REGNO (def);
+ enum machine_mode dmode = GET_MODE (DF_REF_REG (def));
+
+ if (dregno >= FIRST_PSEUDO_REGISTER)
+ {
+ pref_class = reg_preferred_class (dregno);
+ alt_class = reg_alternate_class (dregno);
+ if (reg_classes_intersect_p (rc, pref_class)
+ || (rc != NO_REGS && reg_classes_intersect_p (rc, alt_class)))
+ record_one_conflict_between_regnos (dmode, dregno, umode, uregno);
+ }
+ }
+ }
+ }
+ }
+ }
+ if (dump_file)
+ fprintf (dump_file, "finished early clobber conflicts.\n");
+}
+
+
+/* Clear bit REGNO with MODE from LIVE and HARD_REGS_LIVE. Regno may
+ get renumbered into a hard reg. */
+
+inline static void
+clear_reg_in_live (HOST_WIDE_INT *allocnos_live,
+ HARD_REG_SET *hard_regs_live,
+ rtx reg)
+{
+ unsigned int regno = (GET_CODE (reg) == SUBREG)
+ ? REGNO (SUBREG_REG (reg)): REGNO (reg);
+
+ if (reg_allocno[regno] >= 0)
+ CLEAR_ALLOCNO_LIVE (allocnos_live, reg_allocno[regno]);
+
+ /* For pseudo reg, see if it has been assigned a hardware reg. */
+ if (reg_renumber[regno] >= 0)
+ regno = reg_renumber[regno];
+
+ /* Handle hardware regs (and pseudos allocated to hard regs). */
+ if (regno < FIRST_PSEUDO_REGISTER && ! fixed_regs[regno])
+ {
+ unsigned int start = regno;
+ if (GET_CODE (reg) == SUBREG)
+ {
+ unsigned int last;
+ start += subreg_regno_offset (regno, GET_MODE (SUBREG_REG (reg)),
+ SUBREG_BYTE (reg), GET_MODE (reg));
+ last = start + subreg_nregs_with_regno (regno, reg);
+ regno = start;
+
+ while (regno < last)
+ {
+ CLEAR_HARD_REG_BIT (*hard_regs_live, regno);
+ regno++;
+ }
+ }
+ else
+ remove_from_hard_reg_set (hard_regs_live, GET_MODE (reg), regno);
+ }
+}
+
+
+
+/* Set bit REGNO from LIVE and HARD_REGS_LIVE. Regno may get
+ renumbered into a hard reg. */
+
+inline static void
+set_reg_in_live (HOST_WIDE_INT *allocnos_live,
+ HARD_REG_SET *hard_regs_live,
+ rtx reg)
+{
+ unsigned int regno = (GET_CODE (reg) == SUBREG)
+ ? REGNO (SUBREG_REG (reg)): REGNO (reg);
+
+ if (reg_allocno[regno] >= 0)
+ SET_ALLOCNO_LIVE (allocnos_live, reg_allocno[regno]);
+
+ /* For pseudo reg, see if it has been assigned a hardware reg. */
+ if (reg_renumber[regno] >= 0)
+ regno = reg_renumber[regno];
+
+ /* Handle hardware regs (and pseudos allocated to hard regs). */
+ if (regno < FIRST_PSEUDO_REGISTER)
+ {
+ if (GET_CODE (reg) == SUBREG)
+ {
+ unsigned int start = regno;
+ unsigned int last;
+
+ start += subreg_regno_offset (regno, GET_MODE (SUBREG_REG (reg)),
+ SUBREG_BYTE (reg), GET_MODE (reg));
+ last = start + subreg_nregs_with_regno (regno, reg);
+ regno = start;
+
+ while (regno < last)
+ {
+ SET_HARD_REG_BIT (*hard_regs_live, regno);
+ regno++;
+ }
+ }
+ else
+ add_to_hard_reg_set (hard_regs_live, GET_MODE (reg), regno);
+ }
+}
+
+
+/* Dump out a REF with its reg_renumber range to FILE using
+ PREFIX. */
+
+static void
+dump_ref (FILE *file, const char * prefix, struct df_ref *ref)
+{
+ rtx reg = DF_REF_REG (ref);
+ unsigned int regno = DF_REF_REGNO (ref);
+
+ fprintf (file, "%s %d", prefix, regno);
+ if (reg_renumber[regno] >= 0)
+ {
+ enum machine_mode mode = GET_MODE (reg);
+ unsigned int start;
+ unsigned int last;
+
+ regno = reg_renumber[regno];
+
+ start = regno;
+ last = end_hard_regno (mode, regno);
+ if (GET_CODE (reg) == SUBREG)
+ {
+ start += subreg_regno_offset (regno, GET_MODE (SUBREG_REG (reg)),
+ SUBREG_BYTE (reg), GET_MODE (reg));
+ last = start + subreg_nregs_with_regno (regno, reg);
+ }
+
+ if (start == last - 1)
+ fprintf (file, "(%d)", start);
+ else
+ fprintf (file, "([%d]%d..%d)", regno, start, last-1);
+ }
+ fprintf (file, "\n");
+}
+
+
+/* Scan the rtl code and record all conflicts and register preferences in the
+ conflict matrices and preference tables. */
+
+void
+global_conflicts (void)
+{
+ unsigned int i;
+ basic_block bb;
+ rtx insn;
+
+ /* Regs that have allocnos can be in either
+ hard_regs_live (if regno < FIRST_PSEUDO_REGISTER) or
+ allocnos_live (if regno >= FIRST_PSEUDO_REGISTER) or
+ both if local_alloc has preallocated it and reg_renumber >= 0. */
+
+ HARD_REG_SET hard_regs_live;
+ HOST_WIDE_INT *allocnos_live;
+ bitmap tmp = BITMAP_ALLOC (NULL);
+ VEC (df_ref_t, heap) *clobbers = NULL;
+ VEC (df_ref_t, heap) *dying_regs = NULL;
+ bitmap earlyclobber_regclass = BITMAP_ALLOC (NULL);
+
+ allocnos_live = XNEWVEC (HOST_WIDE_INT, allocno_row_words);
+
+ FOR_EACH_BB (bb)
+ {
+ bitmap_iterator bi;
+
+ bitmap_copy (tmp, DF_LIVE_OUT (bb));
+ df_simulate_artificial_refs_at_end (bb, tmp);
+
+ memset (allocnos_live, 0, allocno_row_words * sizeof (HOST_WIDE_INT));
+
+ /* Initialize allocnos_live and hard_regs_live for bottom of block. */
+ REG_SET_TO_HARD_REG_SET (hard_regs_live, tmp);
+ EXECUTE_IF_SET_IN_BITMAP (tmp, FIRST_PSEUDO_REGISTER, i, bi)
+ set_reg_in_live (allocnos_live, &hard_regs_live, regno_reg_rtx[i]);
+
+ if (dump_file)
+ fprintf (dump_file, "\nstarting basic block %d\n\n", bb->index);
+
+ FOR_BB_INSNS_REVERSE (bb, insn)
+ {
+ unsigned int uid = INSN_UID (insn);
+ struct df_ref **def_rec;
+ struct df_ref **use_rec;
+
+ if (!INSN_P (insn))
+ continue;
+
+ if (dump_file)
+ {
+ fprintf (dump_file, "insn = %d live =", uid);
+
+ for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
+ if (TEST_HARD_REG_BIT (hard_regs_live, i))
+ fprintf (dump_file, "%d ", i);
+
+ EXECUTE_IF_SET_IN_ALLOCNO_SET (allocnos_live, i,
+ {
+ fprintf (dump_file, "%d ", allocno[i].reg);
+ if (reg_renumber[allocno[i].reg] >= 0)
+ fprintf (dump_file, "(%d) ", reg_renumber[allocno[i].reg]);
+ });
+
+ fprintf (dump_file, "\n");
+ }
+
+ /* Add the defs into live. Most of them will already be
+ there; the ones that are missing are the unused ones and
+ the clobbers. */
+ for (def_rec = DF_INSN_UID_DEFS (uid); *def_rec; def_rec++)
+ {
+ struct df_ref *def = *def_rec;
+
+ /* FIXME: Ignoring may clobbers is technically the wrong
+ thing to do. However, the old version of this
+ code ignores may clobbers (and instead has many
+ places in the register allocator to handle these
+ constraints). It is quite likely that with a new
+ allocator, the correct thing to do is to not ignore
+ the constraints and then do not put in the large
+ number of special checks. */
+ if (!DF_REF_FLAGS_IS_SET (def, DF_REF_MAY_CLOBBER))
+ {
+ set_reg_in_live (allocnos_live, &hard_regs_live,
+ DF_REF_REG (def));
+ if (dump_file)
+ dump_ref (dump_file, " adding def", def);
+ }
+ }
+
+ /* Add the interferences for the defs. */
+ for (def_rec = DF_INSN_UID_DEFS (uid); *def_rec; def_rec++)
+ {
+ struct df_ref *def = *def_rec;
+ if (!DF_REF_FLAGS_IS_SET (def, DF_REF_MAY_CLOBBER))
+ mark_reg_store (allocnos_live, &hard_regs_live, def);
+ }
+
+ /* Remove the defs from the live sets. Leave the partial
+ and conditional defs in the set because they do not
+ kill. */
+ VEC_truncate (df_ref_t, clobbers, 0);
+ for (def_rec = DF_INSN_UID_DEFS (uid); *def_rec; def_rec++)
+ {
+ struct df_ref *def = *def_rec;
+
+ if (!DF_REF_FLAGS_IS_SET (def, DF_REF_PARTIAL
+ | DF_REF_CONDITIONAL | DF_REF_MAY_CLOBBER))
+ {
+ clear_reg_in_live (allocnos_live, &hard_regs_live,
+ DF_REF_REG (def));
+ if (dump_file)
+ dump_ref (dump_file, " clearing def", def);
+ }
+
+ if (DF_REF_FLAGS_IS_SET (def, DF_REF_MUST_CLOBBER))
+ VEC_safe_push (df_ref_t, heap, clobbers, def);
+ }
+
+ /* Add the uses to the live sets. Keep track of the regs
+ that are dying inside the insn; this set will be useful
+ later. */
+ VEC_truncate (df_ref_t, dying_regs, 0);
+ for (use_rec = DF_INSN_UID_USES (uid); *use_rec; use_rec++)
+ {
+ struct df_ref *use = *use_rec;
+ unsigned int regno = DF_REF_REGNO (use);
+ bool added = false;
+
+ if (dump_file)
+ dump_ref (dump_file, " seeing use", use);
+
+ if ((reg_allocno[regno] >= 0)
+ && (GET_ALLOCNO_LIVE (allocnos_live, reg_allocno[regno]) == 0))
+ {
+ if (dump_file)
+ fprintf (dump_file, " dying pseudo\n");
+
+ SET_ALLOCNO_LIVE (allocnos_live, reg_allocno[regno]);
+ added = true;
+ }
+
+ if (reg_renumber[regno] >= 0)
+ regno = reg_renumber[regno];
+
+ if (regno < FIRST_PSEUDO_REGISTER)
+ {
+ rtx reg = DF_REF_REG (use);
+ unsigned int start = regno;
+ unsigned int last;
+ if (GET_CODE (reg) == SUBREG)
+ {
+ start += subreg_regno_offset (regno, GET_MODE (SUBREG_REG (reg)),
+ SUBREG_BYTE (reg), GET_MODE (reg));
+ last = start + subreg_nregs_with_regno (regno, reg);
+ }
+ else
+ last = end_hard_regno (GET_MODE (reg), regno);
+
+ regno = start;
+ while (regno < last)
+ {
+ if (dump_file)
+ fprintf (dump_file, " dying hard reg\n");
+
+ if (!TEST_HARD_REG_BIT (hard_regs_live, regno))
+ {
+ SET_HARD_REG_BIT (hard_regs_live, regno);
+ added = true;
+ }
+ regno++;
+ }
+ }
+ if (added)
+ VEC_safe_push (df_ref_t, heap, dying_regs, use);
+ }
+
+ /* These three cases are all closely related: they all deal
+ with some set of outputs of the insn that need to conflict
+ with some of the registers that are used by the insn but
+ die within the insn. If no registers die within the insn,
+ the tests can be skipped. */
+
+ if (VEC_length (df_ref_t, dying_regs) > 0)
+ {
+ int k;
+ /* Real clobbers need to clobber the dying input
+ registers. */
+ for (k = VEC_length (df_ref_t, clobbers) - 1; k >= 0; k--)
+ {
+ struct df_ref *def = VEC_index (df_ref_t, clobbers, k);
+ int j;
+
+ for (j = VEC_length (df_ref_t, dying_regs) - 1; j >= 0; j--)
+ {
+ struct df_ref *use = VEC_index (df_ref_t, dying_regs, j);
+ record_one_conflict_between_regnos (GET_MODE (DF_REF_REG (def)),
+ DF_REF_REGNO (def),
+ GET_MODE (DF_REF_REG (use)),
+ DF_REF_REGNO (use));
+ }
+ }
+
+ /* Early clobbers, by definition, need to clobber the
+ dying registers. */
+ if (check_insn_for_earlyclobber (earlyclobber_regclass, insn))
+ set_conflicts_for_earlyclobber (DF_INSN_UID_DEFS (uid),
+ earlyclobber_regclass, dying_regs);
+
+ /* If INSN is a store with multiple outputs, then any
+ reg that dies here and is used inside of the address
+ of the output must conflict with the other outputs.
+
+ FIXME: There has been some discussion as to whether
+ this is the right place to handle this issue. This is a
+ holdover from an early version of global_conflicts.
+
+ 1) There is some evidence that this code only deals with a
+ bug that occurs only on the m68k. The conditions of this
+ test are such that this case only triggers for a very
+ peculiar insn, one that is a parallel where one of
+ the sets is a store and the other sets a reg that is
+ used in the address of the store. See
+ http://gcc.gnu.org/ml/gcc-patches/1998-12/msg00259.html
+
+ 2) The situation that this is addressing is a bug in
+ the part of reload that handles stores; adding this
+ conflict only hides the problem. (Of course no one
+ really wants to fix reload so it is understandable
+ why a bandaid was just added here.)
+
+ Just because an output is unused does not mean the
+ compiler can assume the side effect will not occur.
+ Consider if REG appears in the address of an output
+ and we reload the output. If we allocate REG to the
+ same hard register as an unused output we could set
+ the hard register before the output reload insn.
+
+ 3) This could actually be handled by making the other
+ (non store) operand of the insn be an early clobber.
+ This would insert the same conflict, even if it is
+ not technically an early clobber. */
+
+ /* It is unsafe to use !single_set here since it will ignore an
+ unused output. */
+ if (GET_CODE (PATTERN (insn)) == PARALLEL && multiple_sets (insn))
+ {
+ int j;
+ for (j = VEC_length (df_ref_t, dying_regs) - 1; j >= 0; j--)
+ {
+ int used_in_output = 0;
+ struct df_ref *use = VEC_index (df_ref_t, dying_regs, j);
+ rtx reg = DF_REF_REG (use);
+ int uregno = DF_REF_REGNO (use);
+ enum machine_mode umode = GET_MODE (DF_REF_REG (use));
+ int k;
+
+ for (k = XVECLEN (PATTERN (insn), 0) - 1; k >= 0; k--)
+ {
+ rtx set = XVECEXP (PATTERN (insn), 0, k);
+ if (GET_CODE (set) == SET
+ && !REG_P (SET_DEST (set))
+ && !rtx_equal_p (reg, SET_DEST (set))
+ && reg_overlap_mentioned_p (reg, SET_DEST (set)))
+ used_in_output = 1;
+ }
+ if (used_in_output)
+ for (k = XVECLEN (PATTERN (insn), 0) - 1; k >= 0; k--)
+ {
+ rtx set = XVECEXP (PATTERN (insn), 0, k);
+ if (GET_CODE (set) == SET
+ && REG_P (SET_DEST (set))
+ && !rtx_equal_p (reg, SET_DEST (set)))
+ record_one_conflict_between_regnos (GET_MODE (SET_DEST (set)),
+ REGNO (SET_DEST (set)),
+ umode, uregno);
+ }
+ }
+ }
+ }
+ }
+
+ if (bb_has_eh_pred (bb))
+ {
+ unsigned int i;
+
+#ifdef EH_RETURN_DATA_REGNO
+ for (i = 0; ; ++i)
+ {
+ unsigned int regno = EH_RETURN_DATA_REGNO (i);
+ if (regno == INVALID_REGNUM)
+ break;
+ record_one_conflict (allocnos_live, &hard_regs_live, regno);
+ }
+#endif
+
+#ifdef STACK_REGS
+ /* Pseudos can't go in stack regs at the start of a basic block that
+ is reached by an abnormal edge. Likewise for call clobbered regs,
+ because caller-save, fixup_abnormal_edges and possibly the table
+ driven EH machinery are not quite ready to handle such regs live
+ across such edges. */
+ EXECUTE_IF_SET_IN_ALLOCNO_SET (allocnos_live, i,
+ {
+ allocno[i].no_stack_reg = 1;
+ });
+
+ for (i = FIRST_STACK_REG; i <= LAST_STACK_REG; i++)
+ record_one_conflict (allocnos_live, &hard_regs_live, i);
+#endif
+
+ /* No need to record conflicts for call clobbered regs if we have
+ nonlocal labels around, as we don't ever try to allocate such
+ regs in this case. */
+ if (! current_function_has_nonlocal_label)
+ for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
+ if (call_used_regs [i])
+ record_one_conflict (allocnos_live, &hard_regs_live, i);
+ }
+ }
+
+ /* Clean up. */
+ free (allocnos_live);
+ VEC_free (df_ref_t, heap, dying_regs);
+ VEC_free (df_ref_t, heap, clobbers);
+ BITMAP_FREE (tmp);
+ BITMAP_FREE (earlyclobber_regclass);
+}
Index: reload.c
===================================================================
--- reload.c (revision 127514)
+++ reload.c (working copy)
@@ -1521,7 +1521,7 @@ push_reload (rtx in, rtx out, rtx *inloc
/* Check that we don't use a hardreg for an uninitialized
pseudo. See also find_dummy_reload(). */
&& (ORIGINAL_REGNO (XEXP (note, 0)) < FIRST_PSEUDO_REGISTER
- || ! bitmap_bit_p (DF_RA_LIVE_OUT (ENTRY_BLOCK_PTR),
+ || ! bitmap_bit_p (DF_LIVE_OUT (ENTRY_BLOCK_PTR),
ORIGINAL_REGNO (XEXP (note, 0))))
&& ! refers_to_regno_for_reload_p (regno,
end_hard_regno (rel_mode,
@@ -2000,7 +2000,7 @@ find_dummy_reload (rtx real_in, rtx real
as they would clobber the other live pseudo using the same.
See also PR20973. */
&& (ORIGINAL_REGNO (in) < FIRST_PSEUDO_REGISTER
- || ! bitmap_bit_p (DF_RA_LIVE_OUT (ENTRY_BLOCK_PTR),
+ || ! bitmap_bit_p (DF_LIVE_OUT (ENTRY_BLOCK_PTR),
ORIGINAL_REGNO (in))))
{
unsigned int regno = REGNO (in) + in_offset;
Index: rtlanal.c
===================================================================
--- rtlanal.c (revision 127514)
+++ rtlanal.c (working copy)
@@ -3254,6 +3254,22 @@ subreg_nregs (const_rtx x)
return info.nregs;
}
+/* Return the number of registers that the subreg expression X with
+ inner register number REGNO refers to. This is a copy of
+ rtlanal.c:subreg_nregs, changed so that the regno can be passed in. */
+
+unsigned int
+subreg_nregs_with_regno (unsigned int regno, const_rtx x)
+{
+ struct subreg_info info;
+ rtx subreg = SUBREG_REG (x);
+
+ subreg_get_info (regno, GET_MODE (subreg), SUBREG_BYTE (x), GET_MODE (x),
+ &info);
+ return info.nregs;
+}
+
+
struct parms_set_data
{
int nregs;
Index: regs.h
===================================================================
--- regs.h (revision 127514)
+++ regs.h (working copy)
@@ -346,4 +346,110 @@ overlaps_hard_reg_set_p (const HARD_REG_
return false;
}
+struct allocno
+{
+ int reg;
+ /* Gives the number of consecutive hard registers needed by that
+ pseudo reg. */
+ int size;
+
+ /* Number of calls crossed by each allocno. */
+ int calls_crossed;
+
+ /* Number of calls that might throw crossed by each allocno. */
+ int throwing_calls_crossed;
+
+ /* Number of refs to each allocno. */
+ int n_refs;
+
+ /* Frequency of uses of each allocno. */
+ int freq;
+
+ /* Guess at live length of each allocno.
+ This is actually the max of the live lengths of the regs. */
+ int live_length;
+
+ /* Set of hard regs conflicting with allocno N. */
+
+ HARD_REG_SET hard_reg_conflicts;
+
+ /* Set of hard regs preferred by allocno N.
+ This is used to make allocnos go into regs that are copied to or from them,
+ when possible, to reduce register shuffling. */
+
+ HARD_REG_SET hard_reg_preferences;
+
+ /* Similar, but just counts register preferences made in simple copy
+ operations, rather than arithmetic. These are given priority because
+ we can always eliminate an insn by using these, but using a register
+ in the above list won't always eliminate an insn. */
+
+ HARD_REG_SET hard_reg_copy_preferences;
+
+ /* Similar to hard_reg_preferences, but includes bits for subsequent
+ registers when an allocno is multi-word. The above variable is used for
+ allocation while this is used to build reg_someone_prefers, below. */
+
+ HARD_REG_SET hard_reg_full_preferences;
+
+ /* Set of hard registers that some later allocno has a preference for. */
+
+ HARD_REG_SET regs_someone_prefers;
+
+#ifdef STACK_REGS
+ /* Set to true if allocno can't be allocated in the stack register. */
+ bool no_stack_reg;
+#endif
+};
+extern struct allocno *allocno;
+
+/* In ra-conflict.c */
+
+/* Number of pseudo-registers which are candidates for allocation. */
+
+extern int max_allocno;
+
+/* max_allocno by max_allocno array of bits, recording whether two
+ allocno's conflict (can't go in the same hardware register).
+
+ `conflicts' is symmetric after the call to mirror_conflicts. */
+
+extern HOST_WIDE_INT *conflicts;
+
+/* Number of ints required to hold max_allocno bits.
+ This is the length of a row in `conflicts'. */
+
+extern int allocno_row_words;
+
+/* Indexed by (pseudo) reg number, gives the allocno, or -1
+ for pseudo registers which are not to be allocated. */
+
+extern int *reg_allocno;
+
+extern void global_conflicts (void);
+
+/* In global.c */
+
+/* For any allocno set in ALLOCNO_SET, set ALLOCNO to that allocno,
+ and execute CODE. */
+#define EXECUTE_IF_SET_IN_ALLOCNO_SET(ALLOCNO_SET, ALLOCNO, CODE) \
+do { \
+ int i_; \
+ int allocno_; \
+ HOST_WIDE_INT *p_ = (ALLOCNO_SET); \
+ \
+ for (i_ = allocno_row_words - 1, allocno_ = 0; i_ >= 0; \
+ i_--, allocno_ += HOST_BITS_PER_WIDE_INT) \
+ { \
+ unsigned HOST_WIDE_INT word_ = (unsigned HOST_WIDE_INT) *p_++; \
+ \
+ for ((ALLOCNO) = allocno_; word_; word_ >>= 1, (ALLOCNO)++) \
+ { \
+ if (word_ & 1) \
+ {CODE;} \
+ } \
+ } \
+} while (0)
+
+
#endif /* GCC_REGS_H */
Index: global.c
===================================================================
--- global.c (revision 127514)
+++ global.c (working copy)
@@ -62,10 +62,13 @@ along with GCC; see the file COPYING3.
reg numbers to allocnos and vice versa.
max_allocno gets the number of allocnos in use.
- 2. Allocate a max_allocno by max_allocno conflict bit matrix and clear it.
- Allocate a max_allocno by FIRST_PSEUDO_REGISTER conflict matrix
- for conflicts between allocnos and explicit hard register use
- (which includes use of pseudo-registers allocated by local_alloc).
+ 2. Allocate a max_allocno by max_allocno conflict bit matrix and
+ clear it. This is called "conflict".
+
+ Allocate a max_allocno by FIRST_PSEUDO_REGISTER conflict matrix for
+ conflicts between allocnos and explicit hard register use (which
+ includes use of pseudo-registers allocated by local_alloc). This
+ is the hard_reg_conflicts inside each allocno.
3. For each basic block
walk forward through the block, recording which
@@ -81,128 +84,16 @@ along with GCC; see the file COPYING3.
5. Allocate the variables in that order; each if possible into
a preferred register, else into another register. */
-/* Number of pseudo-registers which are candidates for allocation. */
-
-static int max_allocno;
-
-/* Indexed by (pseudo) reg number, gives the allocno, or -1
- for pseudo registers which are not to be allocated. */
-
-static int *reg_allocno;
-
-struct allocno
-{
- int reg;
- /* Gives the number of consecutive hard registers needed by that
- pseudo reg. */
- int size;
-
- /* Number of calls crossed by each allocno. */
- int calls_crossed;
-
- /* Number of calls that might throw crossed by each allocno. */
- int throwing_calls_crossed;
-
- /* Number of refs to each allocno. */
- int n_refs;
-
- /* Frequency of uses of each allocno. */
- int freq;
-
- /* Guess at live length of each allocno.
- This is actually the max of the live lengths of the regs. */
- int live_length;
-
- /* Set of hard regs conflicting with allocno N. */
-
- HARD_REG_SET hard_reg_conflicts;
-
- /* Set of hard regs preferred by allocno N.
- This is used to make allocnos go into regs that are copied to or from them,
- when possible, to reduce register shuffling. */
-
- HARD_REG_SET hard_reg_preferences;
-
- /* Similar, but just counts register preferences made in simple copy
- operations, rather than arithmetic. These are given priority because
- we can always eliminate an insn by using these, but using a register
- in the above list won't always eliminate an insn. */
-
- HARD_REG_SET hard_reg_copy_preferences;
-
- /* Similar to hard_reg_preferences, but includes bits for subsequent
- registers when an allocno is multi-word. The above variable is used for
- allocation while this is used to build reg_someone_prefers, below. */
-
- HARD_REG_SET hard_reg_full_preferences;
-
- /* Set of hard registers that some later allocno has a preference for. */
-
- HARD_REG_SET regs_someone_prefers;
-
-#ifdef STACK_REGS
- /* Set to true if allocno can't be allocated in the stack register. */
- bool no_stack_reg;
-#endif
-};
-
-static struct allocno *allocno;
-
/* A vector of the integers from 0 to max_allocno-1,
sorted in the order of first-to-be-allocated first. */
static int *allocno_order;
-/* Define the number of bits in each element of `conflicts' and what
- type that element has. We use the largest integer format on the
- host machine. */
-
-#define INT_BITS HOST_BITS_PER_WIDE_INT
-#define INT_TYPE HOST_WIDE_INT
-
-/* max_allocno by max_allocno array of bits,
- recording whether two allocno's conflict (can't go in the same
- hardware register).
-
- `conflicts' is symmetric after the call to mirror_conflicts. */
-
-static INT_TYPE *conflicts;
-
-/* Number of ints required to hold max_allocno bits.
- This is the length of a row in `conflicts'. */
-
-static int allocno_row_words;
-
/* Two macros to test or store 1 in an element of `conflicts'. */
#define CONFLICTP(I, J) \
- (conflicts[(I) * allocno_row_words + (unsigned) (J) / INT_BITS] \
- & ((INT_TYPE) 1 << ((unsigned) (J) % INT_BITS)))
-
-/* For any allocno set in ALLOCNO_SET, set ALLOCNO to that allocno,
- and execute CODE. */
-#define EXECUTE_IF_SET_IN_ALLOCNO_SET(ALLOCNO_SET, ALLOCNO, CODE) \
-do { \
- int i_; \
- int allocno_; \
- INT_TYPE *p_ = (ALLOCNO_SET); \
- \
- for (i_ = allocno_row_words - 1, allocno_ = 0; i_ >= 0; \
- i_--, allocno_ += INT_BITS) \
- { \
- unsigned INT_TYPE word_ = (unsigned INT_TYPE) *p_++; \
- \
- for ((ALLOCNO) = allocno_; word_; word_ >>= 1, (ALLOCNO)++) \
- { \
- if (word_ & 1) \
- {CODE;} \
- } \
- } \
-} while (0)
-
-/* Set of hard regs currently live (during scan of all insns). */
-
-static HARD_REG_SET hard_regs_live;
+ (conflicts[(I) * allocno_row_words + (unsigned) (J) / HOST_BITS_PER_WIDE_INT] \
+ & ((HOST_WIDE_INT) 1 << ((unsigned) (J) % HOST_BITS_PER_WIDE_INT)))
/* Set of registers that global-alloc isn't supposed to use. */
@@ -230,21 +121,6 @@ static int local_reg_live_length[FIRST_P
#define SET_REGBIT(TABLE, I, J) SET_HARD_REG_BIT (allocno[I].TABLE, J)
-/* Bit mask for allocnos live at current point in the scan. */
-
-static INT_TYPE *allocnos_live;
-
-/* Test, set or clear bit number I in allocnos_live,
- a bit vector indexed by allocno. */
-
-#define SET_ALLOCNO_LIVE(I) \
- (allocnos_live[(unsigned) (I) / INT_BITS] \
- |= ((INT_TYPE) 1 << ((unsigned) (I) % INT_BITS)))
-
-#define CLEAR_ALLOCNO_LIVE(I) \
- (allocnos_live[(unsigned) (I) / INT_BITS] \
- &= ~((INT_TYPE) 1 << ((unsigned) (I) % INT_BITS)))
-
/* This is turned off because it doesn't work right for DImode.
(And it is only used for DImode, so the other cases are worthless.)
The problem is that it isn't true that there is NO possibility of conflict;
@@ -270,12 +146,6 @@ static struct { int allocno1, allocno2;}
no_conflict_pairs[NUM_NO_CONFLICT_PAIRS];
#endif /* 0 */
-/* Record all regs that are set in any one insn.
- Communication from mark_reg_{store,clobber} and global_conflicts. */
-
-static VEC(rtx, heap) *regs_set;
-
-
/* Return true if *LOC contains an asm. */
static int
@@ -336,18 +206,11 @@ compute_regs_asm_clobbered (char *regs_a
static HARD_REG_SET eliminable_regset;
static int allocno_compare (const void *, const void *);
-static void global_conflicts (void);
static void mirror_conflicts (void);
static void expand_preferences (void);
static void prune_preferences (void);
+static void set_preferences (void);
static void find_reg (int, HARD_REG_SET, int, int, int);
-static void record_one_conflict (int);
-static void record_conflicts (int *, int);
-static void mark_reg_store (rtx, const_rtx, void *);
-static void mark_reg_clobber (rtx, const_rtx, void *);
-static void mark_reg_conflicts (rtx);
-static void mark_reg_death (rtx);
-static void set_preference (rtx, rtx);
static void dump_conflicts (FILE *);
static void reg_becomes_live (rtx, const_rtx, void *);
static void reg_dies (int, enum machine_mode, struct insn_chain *);
@@ -572,29 +435,33 @@ global_alloc (void)
fprintf (dump_file, " %d", (int)i);
fprintf (dump_file, "\n");
}
- allocno_row_words = (max_allocno + INT_BITS - 1) / INT_BITS;
+ allocno_row_words = (max_allocno + HOST_BITS_PER_WIDE_INT - 1) / HOST_BITS_PER_WIDE_INT;
/* We used to use alloca here, but the size of what it would try to
allocate would occasionally cause it to exceed the stack limit and
cause unpredictable core dumps. Some examples were > 2Mb in size. */
- conflicts = XCNEWVEC (INT_TYPE, max_allocno * allocno_row_words);
-
- allocnos_live = XNEWVEC (INT_TYPE, allocno_row_words);
+ conflicts = XCNEWVEC (HOST_WIDE_INT, max_allocno * allocno_row_words);
/* If there is work to be done (at least one reg to allocate),
perform global conflict analysis and allocate the regs. */
if (max_allocno > 0)
{
- /* Make a vector that mark_reg_{store,clobber} will store in. */
- if (!regs_set)
- regs_set = VEC_alloc (rtx, heap, 10);
-
/* Scan all the insns and compute the conflicts among allocnos
and between allocnos and hard regs. */
global_conflicts ();
+ /* There is just too much going on in the register allocators to
+ keep things up to date. At the end we have to rescan anyway
+ because things change when the reload_completed flag is set.
+ So we just turn off scanning and we will rescan by hand.
+
+ However, we needed to do the rescanning before this point to
+ get the new insns scanned inserted by local_alloc scanned for
+ global_conflicts. */
+ df_set_flags (DF_NO_INSN_RESCAN);
+
mirror_conflicts ();
/* Eliminate conflicts between pseudos and eliminable registers. If
@@ -604,6 +471,8 @@ global_alloc (void)
So in either case, we can ignore the conflict. Likewise for
preferences. */
+ set_preferences ();
+
for (i = 0; i < (size_t) max_allocno; i++)
{
AND_COMPL_HARD_REG_SET (allocno[i].hard_reg_conflicts,
@@ -687,7 +556,6 @@ global_alloc (void)
free (reg_allocno);
free (allocno);
free (conflicts);
- free (allocnos_live);
return retval;
}
@@ -720,238 +588,6 @@ allocno_compare (const void *v1p, const
return v1 - v2;
}
-/* Scan the rtl code and record all conflicts and register preferences in the
- conflict matrices and preference tables. */
-
-static void
-global_conflicts (void)
-{
- unsigned i;
- basic_block b;
- rtx insn;
- int *block_start_allocnos;
-
- block_start_allocnos = XNEWVEC (int, max_allocno);
-
- FOR_EACH_BB (b)
- {
- memset (allocnos_live, 0, allocno_row_words * sizeof (INT_TYPE));
-
- /* Initialize table of registers currently live
- to the state at the beginning of this basic block.
- This also marks the conflicts among hard registers
- and any allocnos that are live.
-
- For pseudo-regs, there is only one bit for each one
- no matter how many hard regs it occupies.
- This is ok; we know the size from PSEUDO_REGNO_SIZE.
- For explicit hard regs, we cannot know the size that way
- since one hard reg can be used with various sizes.
- Therefore, we must require that all the hard regs
- implicitly live as part of a multi-word hard reg
- be explicitly marked in basic_block_live_at_start. */
-
- {
- int ax = 0;
- reg_set_iterator rsi;
-
- REG_SET_TO_HARD_REG_SET (hard_regs_live, DF_RA_LIVE_TOP (b));
- EXECUTE_IF_SET_IN_REG_SET (DF_RA_LIVE_TOP (b), FIRST_PSEUDO_REGISTER, i, rsi)
- {
- int a = reg_allocno[i];
- if (a >= 0)
- {
- SET_ALLOCNO_LIVE (a);
- block_start_allocnos[ax++] = a;
- }
- else if ((a = reg_renumber[i]) >= 0)
- add_to_hard_reg_set (&hard_regs_live, PSEUDO_REGNO_MODE (i), a);
- }
-
- /* Record that each allocno now live conflicts with each hard reg
- now live.
-
- It is not necessary to mark any conflicts between pseudos at
- this point, even for pseudos which are live at the start of
- the basic block.
-
- Given two pseudos X and Y and any point in the CFG P.
-
- On any path to point P where X and Y are live one of the
- following conditions must be true:
-
- 1. X is live at some instruction on the path that
- evaluates Y.
-
- 2. Y is live at some instruction on the path that
- evaluates X.
-
- 3. Either X or Y is not evaluated on the path to P
- (i.e. it is used uninitialized) and thus the
- conflict can be ignored.
-
- In cases #1 and #2 the conflict will be recorded when we
- scan the instruction that makes either X or Y become live. */
- record_conflicts (block_start_allocnos, ax);
-
-#ifdef EH_RETURN_DATA_REGNO
- if (bb_has_eh_pred (b))
- {
- unsigned int i;
-
- for (i = 0; ; ++i)
- {
- unsigned int regno = EH_RETURN_DATA_REGNO (i);
- if (regno == INVALID_REGNUM)
- break;
- record_one_conflict (regno);
- }
- }
-#endif
-
- /* Pseudos can't go in stack regs at the start of a basic block that
- is reached by an abnormal edge. Likewise for call clobbered regs,
- because caller-save, fixup_abnormal_edges and possibly the table
- driven EH machinery are not quite ready to handle such regs live
- across such edges. */
- {
- edge e;
- edge_iterator ei;
-
- FOR_EACH_EDGE (e, ei, b->preds)
- if (e->flags & EDGE_ABNORMAL)
- break;
-
- if (e != NULL)
- {
-#ifdef STACK_REGS
- EXECUTE_IF_SET_IN_ALLOCNO_SET (allocnos_live, ax,
- {
- allocno[ax].no_stack_reg = 1;
- });
- for (ax = FIRST_STACK_REG; ax <= LAST_STACK_REG; ax++)
- record_one_conflict (ax);
-#endif
-
- /* No need to record conflicts for call clobbered regs if we have
- nonlocal labels around, as we don't ever try to allocate such
- regs in this case. */
- if (! current_function_has_nonlocal_label)
- for (ax = 0; ax < FIRST_PSEUDO_REGISTER; ax++)
- if (call_used_regs [ax])
- record_one_conflict (ax);
- }
- }
- }
-
- insn = BB_HEAD (b);
-
- /* Scan the code of this basic block, noting which allocnos
- and hard regs are born or die. When one is born,
- record a conflict with all others currently live. */
-
- while (1)
- {
- RTX_CODE code = GET_CODE (insn);
- rtx link;
-
- gcc_assert (VEC_empty (rtx, regs_set));
- if (code == INSN || code == CALL_INSN || code == JUMP_INSN)
- {
-#if 0
- int i = 0;
- for (link = REG_NOTES (insn);
- link && i < NUM_NO_CONFLICT_PAIRS;
- link = XEXP (link, 1))
- if (REG_NOTE_KIND (link) == REG_NO_CONFLICT)
- {
- no_conflict_pairs[i].allocno1
- = reg_allocno[REGNO (SET_DEST (PATTERN (insn)))];
- no_conflict_pairs[i].allocno2
- = reg_allocno[REGNO (XEXP (link, 0))];
- i++;
- }
-#endif /* 0 */
-
- /* Mark any registers clobbered by INSN as live,
- so they conflict with the inputs. */
-
- note_stores (PATTERN (insn), mark_reg_clobber, NULL);
-
-#ifdef AUTO_INC_DEC
- /* Auto-increment instructions clobber the base
- register. */
- for (link = REG_NOTES (insn); link; link = XEXP (link, 1))
- if (REG_NOTE_KIND (link) == REG_INC)
- mark_reg_store (XEXP (link, 0), NULL_RTX, NULL);
-#endif
- /* Mark any registers dead after INSN as dead now. */
-
- for (link = REG_NOTES (insn); link; link = XEXP (link, 1))
- if (REG_NOTE_KIND (link) == REG_DEAD)
- mark_reg_death (XEXP (link, 0));
-
- /* Mark any registers set in INSN as live,
- and mark them as conflicting with all other live regs.
- Clobbers are processed again, so they conflict with
- the registers that are set. */
-
- note_stores (PATTERN (insn), mark_reg_store, NULL);
-
- /* If INSN has multiple outputs, then any reg that dies here
- and is used inside of an output
- must conflict with the other outputs.
-
- It is unsafe to use !single_set here since it will ignore an
- unused output. Just because an output is unused does not mean
- the compiler can assume the side effect will not occur.
- Consider if REG appears in the address of an output and we
- reload the output. If we allocate REG to the same hard
- register as an unused output we could set the hard register
- before the output reload insn. */
- if (GET_CODE (PATTERN (insn)) == PARALLEL && multiple_sets (insn))
- for (link = REG_NOTES (insn); link; link = XEXP (link, 1))
- if (REG_NOTE_KIND (link) == REG_DEAD)
- {
- int used_in_output = 0;
- int i;
- rtx reg = XEXP (link, 0);
-
- for (i = XVECLEN (PATTERN (insn), 0) - 1; i >= 0; i--)
- {
- rtx set = XVECEXP (PATTERN (insn), 0, i);
- if (GET_CODE (set) == SET
- && !REG_P (SET_DEST (set))
- && !rtx_equal_p (reg, SET_DEST (set))
- && reg_overlap_mentioned_p (reg, SET_DEST (set)))
- used_in_output = 1;
- }
- if (used_in_output)
- mark_reg_conflicts (reg);
- }
-
- /* Mark any registers set in INSN and then never used. */
-
- while (!VEC_empty (rtx, regs_set))
- {
- rtx reg = VEC_pop (rtx, regs_set);
- rtx note = find_regno_note (insn, REG_UNUSED,
- REGNO (reg));
- if (note)
- mark_reg_death (XEXP (note, 0));
- }
- }
-
- if (insn == BB_END (b))
- break;
- insn = NEXT_INSN (insn);
- }
- }
-
- /* Clean up. */
- free (block_start_allocnos);
-}
-
/* Expand the preference information by looking for cases where one allocno
dies in an insn that sets an allocno. If those two allocnos don't conflict,
merge any preferences between those allocnos. */
@@ -999,6 +635,150 @@ expand_preferences (void)
allocno[a1].hard_reg_full_preferences);
}
}
+
+
+/* Try to set a preference for an allocno to a hard register.
+ We are passed DEST and SRC which are the operands of a SET. It is known
+ that SRC is a register. If SRC or the first operand of SRC is a register,
+ try to set a preference. If one of the two is a hard register and the other
+ is a pseudo-register, mark the preference.
+
+ Note that we are not as aggressive as local-alloc in trying to tie a
+ pseudo-register to a hard register. */
+
+static void
+set_preference (rtx dest, rtx src)
+{
+ unsigned int src_regno, dest_regno, end_regno;
+ /* Amount to add to the hard regno for SRC, or subtract from that for DEST,
+ to compensate for subregs in SRC or DEST. */
+ int offset = 0;
+ unsigned int i;
+ int copy = 1;
+
+ if (GET_RTX_FORMAT (GET_CODE (src))[0] == 'e')
+ src = XEXP (src, 0), copy = 0;
+
+ /* Get the reg number for both SRC and DEST.
+ If neither is a reg, give up. */
+
+ if (REG_P (src))
+ src_regno = REGNO (src);
+ else if (GET_CODE (src) == SUBREG && REG_P (SUBREG_REG (src)))
+ {
+ src_regno = REGNO (SUBREG_REG (src));
+
+ if (REGNO (SUBREG_REG (src)) < FIRST_PSEUDO_REGISTER)
+ offset += subreg_regno_offset (REGNO (SUBREG_REG (src)),
+ GET_MODE (SUBREG_REG (src)),
+ SUBREG_BYTE (src),
+ GET_MODE (src));
+ else
+ offset += (SUBREG_BYTE (src)
+ / REGMODE_NATURAL_SIZE (GET_MODE (src)));
+ }
+ else
+ return;
+
+ if (REG_P (dest))
+ dest_regno = REGNO (dest);
+ else if (GET_CODE (dest) == SUBREG && REG_P (SUBREG_REG (dest)))
+ {
+ dest_regno = REGNO (SUBREG_REG (dest));
+
+ if (REGNO (SUBREG_REG (dest)) < FIRST_PSEUDO_REGISTER)
+ offset -= subreg_regno_offset (REGNO (SUBREG_REG (dest)),
+ GET_MODE (SUBREG_REG (dest)),
+ SUBREG_BYTE (dest),
+ GET_MODE (dest));
+ else
+ offset -= (SUBREG_BYTE (dest)
+ / REGMODE_NATURAL_SIZE (GET_MODE (dest)));
+ }
+ else
+ return;
+
+ /* Convert either or both to hard reg numbers. */
+
+ if (reg_renumber[src_regno] >= 0)
+ src_regno = reg_renumber[src_regno];
+
+ if (reg_renumber[dest_regno] >= 0)
+ dest_regno = reg_renumber[dest_regno];
+
+ /* Now if one is a hard reg and the other is a global pseudo
+ then give the other a preference. */
+
+ if (dest_regno < FIRST_PSEUDO_REGISTER && src_regno >= FIRST_PSEUDO_REGISTER
+ && reg_allocno[src_regno] >= 0)
+ {
+ dest_regno -= offset;
+ if (dest_regno < FIRST_PSEUDO_REGISTER)
+ {
+ if (copy)
+ SET_REGBIT (hard_reg_copy_preferences,
+ reg_allocno[src_regno], dest_regno);
+
+ SET_REGBIT (hard_reg_preferences,
+ reg_allocno[src_regno], dest_regno);
+ end_regno = end_hard_regno (GET_MODE (dest), dest_regno);
+ for (i = dest_regno; i < end_regno; i++)
+ SET_REGBIT (hard_reg_full_preferences, reg_allocno[src_regno], i);
+ }
+ }
+
+ if (src_regno < FIRST_PSEUDO_REGISTER && dest_regno >= FIRST_PSEUDO_REGISTER
+ && reg_allocno[dest_regno] >= 0)
+ {
+ src_regno += offset;
+ if (src_regno < FIRST_PSEUDO_REGISTER)
+ {
+ if (copy)
+ SET_REGBIT (hard_reg_copy_preferences,
+ reg_allocno[dest_regno], src_regno);
+
+ SET_REGBIT (hard_reg_preferences,
+ reg_allocno[dest_regno], src_regno);
+ end_regno = end_hard_regno (GET_MODE (src), src_regno);
+ for (i = src_regno; i < end_regno; i++)
+ SET_REGBIT (hard_reg_full_preferences, reg_allocno[dest_regno], i);
+ }
+ }
+}
+
+/* Helper function for set_preferences. */
+static void
+set_preferences_1 (rtx reg, const_rtx setter, void *data ATTRIBUTE_UNUSED)
+{
+ if (GET_CODE (reg) == SUBREG)
+ reg = SUBREG_REG (reg);
+
+ if (!REG_P (reg))
+ return;
+
+ gcc_assert (setter);
+ if (GET_CODE (setter) != CLOBBER)
+ set_preference (reg, SET_SRC (setter));
+}
+
+/* Scan all of the insns and initialize the preferences. */
+
+static void
+set_preferences (void)
+{
+ basic_block bb;
+ rtx insn;
+ FOR_EACH_BB (bb)
+ FOR_BB_INSNS_REVERSE (bb, insn)
+ {
+ if (!INSN_P (insn))
+ continue;
+
+ note_stores (PATTERN (insn), set_preferences_1, NULL);
+ }
+}
+
+
/* Prune the preferences for global registers to exclude registers that cannot
be used.
@@ -1446,62 +1226,16 @@ retry_global_alloc (int regno, HARD_REG_
}
}
-/* Record a conflict between register REGNO
- and everything currently live.
- REGNO must not be a pseudo reg that was allocated
- by local_alloc; such numbers must be translated through
- reg_renumber before calling here. */
-
-static void
-record_one_conflict (int regno)
-{
- int j;
-
- if (regno < FIRST_PSEUDO_REGISTER)
- /* When a hard register becomes live,
- record conflicts with live pseudo regs. */
- EXECUTE_IF_SET_IN_ALLOCNO_SET (allocnos_live, j,
- {
- SET_HARD_REG_BIT (allocno[j].hard_reg_conflicts, regno);
- });
- else
- /* When a pseudo-register becomes live,
- record conflicts first with hard regs,
- then with other pseudo regs. */
- {
- int ialloc = reg_allocno[regno];
- int ialloc_prod = ialloc * allocno_row_words;
-
- IOR_HARD_REG_SET (allocno[ialloc].hard_reg_conflicts, hard_regs_live);
- for (j = allocno_row_words - 1; j >= 0; j--)
- conflicts[ialloc_prod + j] |= allocnos_live[j];
- }
-}
-
-/* Record all allocnos currently live as conflicting
- with all hard regs currently live.
-
- ALLOCNO_VEC is a vector of LEN allocnos, all allocnos that
- are currently live. Their bits are also flagged in allocnos_live. */
-
-static void
-record_conflicts (int *allocno_vec, int len)
-{
- while (--len >= 0)
- IOR_HARD_REG_SET (allocno[allocno_vec[len]].hard_reg_conflicts,
- hard_regs_live);
-}
-
/* If CONFLICTP (i, j) is true, make sure CONFLICTP (j, i) is also true. */
static void
mirror_conflicts (void)
{
int i, j;
int rw = allocno_row_words;
- int rwb = rw * INT_BITS;
- INT_TYPE *p = conflicts;
- INT_TYPE *q0 = conflicts, *q1, *q2;
- unsigned INT_TYPE mask;
+ int rwb = rw * HOST_BITS_PER_WIDE_INT;
+ HOST_WIDE_INT *p = conflicts;
+ HOST_WIDE_INT *q0 = conflicts, *q1, *q2;
+ unsigned HOST_WIDE_INT mask;
for (i = max_allocno - 1, mask = 1; i >= 0; i--, mask <<= 1)
{
@@ -1512,9 +1246,9 @@ mirror_conflicts (void)
}
for (j = allocno_row_words - 1, q1 = q0; j >= 0; j--, q1 += rwb)
{
- unsigned INT_TYPE word;
+ unsigned HOST_WIDE_INT word;
- for (word = (unsigned INT_TYPE) *p++, q2 = q1; word;
+ for (word = (unsigned HOST_WIDE_INT) *p++, q2 = q1; word;
word >>= 1, q2 += rw)
{
if (word & 1)
@@ -1524,252 +1258,6 @@ mirror_conflicts (void)
}
}
-/* Handle the case where REG is set by the insn being scanned,
- during the forward scan to accumulate conflicts.
- Store a 1 in regs_live or allocnos_live for this register, record how many
- consecutive hardware registers it actually needs,
- and record a conflict with all other registers already live.
-
- Note that even if REG does not remain alive after this insn,
- we must mark it here as live, to ensure a conflict between
- REG and any other regs set in this insn that really do live.
- This is because those other regs could be considered after this.
-
- REG might actually be something other than a register;
- if so, we do nothing.
-
- SETTER is 0 if this register was modified by an auto-increment (i.e.,
- a REG_INC note was found for it). */
-
-static void
-mark_reg_store (rtx reg, const_rtx setter, void *data ATTRIBUTE_UNUSED)
-{
- int regno;
-
- if (GET_CODE (reg) == SUBREG)
- reg = SUBREG_REG (reg);
-
- if (!REG_P (reg))
- return;
-
- VEC_safe_push (rtx, heap, regs_set, reg);
-
- if (setter && GET_CODE (setter) != CLOBBER)
- set_preference (reg, SET_SRC (setter));
-
- regno = REGNO (reg);
-
- /* Either this is one of the max_allocno pseudo regs not allocated,
- or it is or has a hardware reg. First handle the pseudo-regs. */
- if (regno >= FIRST_PSEUDO_REGISTER)
- {
- if (reg_allocno[regno] >= 0)
- {
- SET_ALLOCNO_LIVE (reg_allocno[regno]);
- record_one_conflict (regno);
- }
- }
-
- if (reg_renumber[regno] >= 0)
- regno = reg_renumber[regno];
-
- /* Handle hardware regs (and pseudos allocated to hard regs). */
- if (regno < FIRST_PSEUDO_REGISTER && ! fixed_regs[regno])
- {
- int last = end_hard_regno (GET_MODE (reg), regno);
- while (regno < last)
- {
- record_one_conflict (regno);
- SET_HARD_REG_BIT (hard_regs_live, regno);
- regno++;
- }
- }
-}
-
-/* Like mark_reg_store except notice just CLOBBERs; ignore SETs. */
-
-static void
-mark_reg_clobber (rtx reg, const_rtx setter, void *data)
-{
- if (GET_CODE (setter) == CLOBBER)
- mark_reg_store (reg, setter, data);
-}
-
-/* Record that REG has conflicts with all the regs currently live.
- Do not mark REG itself as live. */
-
-static void
-mark_reg_conflicts (rtx reg)
-{
- int regno;
-
- if (GET_CODE (reg) == SUBREG)
- reg = SUBREG_REG (reg);
-
- if (!REG_P (reg))
- return;
-
- regno = REGNO (reg);
-
- /* Either this is one of the max_allocno pseudo regs not allocated,
- or it is or has a hardware reg. First handle the pseudo-regs. */
- if (regno >= FIRST_PSEUDO_REGISTER)
- {
- if (reg_allocno[regno] >= 0)
- record_one_conflict (regno);
- }
-
- if (reg_renumber[regno] >= 0)
- regno = reg_renumber[regno];
-
- /* Handle hardware regs (and pseudos allocated to hard regs). */
- if (regno < FIRST_PSEUDO_REGISTER && ! fixed_regs[regno])
- {
- int last = end_hard_regno (GET_MODE (reg), regno);
- while (regno < last)
- {
- record_one_conflict (regno);
- regno++;
- }
- }
-}
-
-/* Mark REG as being dead (following the insn being scanned now).
- Store a 0 in regs_live or allocnos_live for this register. */
-
-static void
-mark_reg_death (rtx reg)
-{
- int regno = REGNO (reg);
-
- /* Either this is one of the max_allocno pseudo regs not allocated,
- or it is a hardware reg. First handle the pseudo-regs. */
- if (regno >= FIRST_PSEUDO_REGISTER)
- {
- if (reg_allocno[regno] >= 0)
- CLEAR_ALLOCNO_LIVE (reg_allocno[regno]);
- }
-
- /* For pseudo reg, see if it has been assigned a hardware reg. */
- if (reg_renumber[regno] >= 0)
- regno = reg_renumber[regno];
-
- /* Handle hardware regs (and pseudos allocated to hard regs). */
- if (regno < FIRST_PSEUDO_REGISTER && ! fixed_regs[regno])
- /* Pseudo regs already assigned hardware regs are treated
- almost the same as explicit hardware regs. */
- remove_from_hard_reg_set (&hard_regs_live, GET_MODE (reg), regno);
-}
-
-/* Try to set a preference for an allocno to a hard register.
- We are passed DEST and SRC which are the operands of a SET. It is known
- that SRC is a register. If SRC or the first operand of SRC is a register,
- try to set a preference. If one of the two is a hard register and the other
- is a pseudo-register, mark the preference.
-
- Note that we are not as aggressive as local-alloc in trying to tie a
- pseudo-register to a hard register. */
-
-static void
-set_preference (rtx dest, rtx src)
-{
- unsigned int src_regno, dest_regno, end_regno;
- /* Amount to add to the hard regno for SRC, or subtract from that for DEST,
- to compensate for subregs in SRC or DEST. */
- int offset = 0;
- unsigned int i;
- int copy = 1;
-
- if (GET_RTX_FORMAT (GET_CODE (src))[0] == 'e')
- src = XEXP (src, 0), copy = 0;
-
- /* Get the reg number for both SRC and DEST.
- If neither is a reg, give up. */
-
- if (REG_P (src))
- src_regno = REGNO (src);
- else if (GET_CODE (src) == SUBREG && REG_P (SUBREG_REG (src)))
- {
- src_regno = REGNO (SUBREG_REG (src));
-
- if (REGNO (SUBREG_REG (src)) < FIRST_PSEUDO_REGISTER)
- offset += subreg_regno_offset (REGNO (SUBREG_REG (src)),
- GET_MODE (SUBREG_REG (src)),
- SUBREG_BYTE (src),
- GET_MODE (src));
- else
- offset += (SUBREG_BYTE (src)
- / REGMODE_NATURAL_SIZE (GET_MODE (src)));
- }
- else
- return;
-
- if (REG_P (dest))
- dest_regno = REGNO (dest);
- else if (GET_CODE (dest) == SUBREG && REG_P (SUBREG_REG (dest)))
- {
- dest_regno = REGNO (SUBREG_REG (dest));
-
- if (REGNO (SUBREG_REG (dest)) < FIRST_PSEUDO_REGISTER)
- offset -= subreg_regno_offset (REGNO (SUBREG_REG (dest)),
- GET_MODE (SUBREG_REG (dest)),
- SUBREG_BYTE (dest),
- GET_MODE (dest));
- else
- offset -= (SUBREG_BYTE (dest)
- / REGMODE_NATURAL_SIZE (GET_MODE (dest)));
- }
- else
- return;
-
- /* Convert either or both to hard reg numbers. */
-
- if (reg_renumber[src_regno] >= 0)
- src_regno = reg_renumber[src_regno];
-
- if (reg_renumber[dest_regno] >= 0)
- dest_regno = reg_renumber[dest_regno];
-
- /* Now if one is a hard reg and the other is a global pseudo
- then give the other a preference. */
-
- if (dest_regno < FIRST_PSEUDO_REGISTER && src_regno >= FIRST_PSEUDO_REGISTER
- && reg_allocno[src_regno] >= 0)
- {
- dest_regno -= offset;
- if (dest_regno < FIRST_PSEUDO_REGISTER)
- {
- if (copy)
- SET_REGBIT (hard_reg_copy_preferences,
- reg_allocno[src_regno], dest_regno);
-
- SET_REGBIT (hard_reg_preferences,
- reg_allocno[src_regno], dest_regno);
- end_regno = end_hard_regno (GET_MODE (dest), dest_regno);
- for (i = dest_regno; i < end_regno; i++)
- SET_REGBIT (hard_reg_full_preferences, reg_allocno[src_regno], i);
- }
- }
-
- if (src_regno < FIRST_PSEUDO_REGISTER && dest_regno >= FIRST_PSEUDO_REGISTER
- && reg_allocno[dest_regno] >= 0)
- {
- src_regno += offset;
- if (src_regno < FIRST_PSEUDO_REGISTER)
- {
- if (copy)
- SET_REGBIT (hard_reg_copy_preferences,
- reg_allocno[dest_regno], src_regno);
-
- SET_REGBIT (hard_reg_preferences,
- reg_allocno[dest_regno], src_regno);
- end_regno = end_hard_regno (GET_MODE (src), src_regno);
- for (i = src_regno; i < end_regno; i++)
- SET_REGBIT (hard_reg_full_preferences, reg_allocno[dest_regno], i);
- }
- }
-}
-
/* Indicate that hard register number FROM was eliminated and replaced with
an offset from hard register number TO. The status of hard registers live
at the start of a basic block is updated by replacing a use of FROM with
@@ -1782,7 +1270,7 @@ mark_elimination (int from, int to)
FOR_EACH_BB (bb)
{
- regset r = DF_RA_LIVE_IN (bb);
+ regset r = DF_LIVE_IN (bb);
if (REGNO_REG_SET_P (r, from))
{
CLEAR_REGNO_REG_SET (r, from);
@@ -2055,6 +1543,11 @@ rest_of_handle_global_alloc (void)
failure = global_alloc ();
else
{
+ /* There is just too much going on in the register allocators to
+ keep things up to date. At the end we have to rescan anyway
+ because things change when the reload_completed flag is set.
+ So we just turn off scanning and we will rescan by hand. */
+ df_set_flags (DF_NO_INSN_RESCAN);
compute_regsets (&eliminable_regset, &no_global_alloc_regs);
build_insn_chain (get_insns ());
df_set_flags (DF_NO_INSN_RESCAN);
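[ For readers not familiar with the flag: the "rescan by hand" that the new
  comment promises is not visible in this hunk.  The usual shape of that
  protocol is sketched below; this is illustrative only, not a hunk of the
  patch, and assumes the standard df-core entry points df_set_flags,
  df_clear_flags and df_insn_rescan_all. ]

    /* Suppress automatic rescanning while the allocators rewrite insns.  */
    df_set_flags (DF_NO_INSN_RESCAN);

    /* ... global allocation and reload run here ... */

    /* Re-enable scanning and do one explicit rescan at the end, after
       reload_completed has been set.  */
    df_clear_flags (DF_NO_INSN_RESCAN);
    df_insn_rescan_all ();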
Index: local-alloc.c
===================================================================
--- local-alloc.c (revision 127514)
+++ local-alloc.c (working copy)
@@ -1208,13 +1213,9 @@ update_equiv_regs (void)
if (!bitmap_empty_p (cleared_regs))
FOR_EACH_BB (bb)
{
- bitmap_and_compl_into (DF_RA_LIVE_IN (bb), cleared_regs);
- if (DF_RA_LIVE_TOP (bb))
- bitmap_and_compl_into (DF_RA_LIVE_TOP (bb), cleared_regs);
- bitmap_and_compl_into (DF_RA_LIVE_OUT (bb), cleared_regs);
+ bitmap_and_compl_into (DF_LIVE_IN (bb), cleared_regs);
+ bitmap_and_compl_into (DF_LIVE_OUT (bb), cleared_regs);
bitmap_and_compl_into (DF_LR_IN (bb), cleared_regs);
- if (DF_LR_TOP (bb))
- bitmap_and_compl_into (DF_LR_TOP (bb), cleared_regs);
bitmap_and_compl_into (DF_LR_OUT (bb), cleared_regs);
}
@@ -2509,12 +2510,11 @@ rest_of_handle_local_alloc (void)
df_note_add_problem ();
- if (optimize > 1)
- df_remove_problem (df_live);
- /* Create a new version of df that has the special version of UR if
- we are doing optimization. */
- if (optimize)
- df_urec_add_problem ();
+ if (optimize == 1)
+ {
+ df_live_add_problem ();
+ df_live_set_all_dirty ();
+ }
#ifdef ENABLE_CHECKING
df->changeable_flags |= DF_VERIFY_SCHEDULED;
#endif
@@ -2522,13 +2522,6 @@ rest_of_handle_local_alloc (void)
regstat_init_n_sets_and_refs ();
regstat_compute_ri ();
- /* There is just too much going on in the register allocators to
- keep things up to date. At the end we have to rescan anyway
- because things change when the reload_completed flag is set.
- So we just turn off scanning and we will rescan by hand. */
- df_set_flags (DF_NO_INSN_RESCAN);
-
-
/* If we are not optimizing, then this is the only place before
register allocation where dataflow is done. And that is needed
to generate these warnings. */
Index: df.h
===================================================================
--- df.h (revision 127514)
+++ df.h (working copy)
@@ -43,11 +43,9 @@ struct df_link;
#define DF_SCAN 0
#define DF_LR 1 /* Live Registers backward. */
#define DF_LIVE 2 /* Live Registers & Uninitialized Registers */
-
#define DF_RD 3 /* Reaching Defs. */
-#define DF_UREC 4 /* Uninitialized Registers with Early Clobber. */
-#define DF_CHAIN 5 /* Def-Use and/or Use-Def Chains. */
-#define DF_NOTE 6 /* REG_DEF and REG_UNUSED notes. */
+#define DF_CHAIN 4 /* Def-Use and/or Use-Def Chains. */
+#define DF_NOTE 5 /* REG_DEAD and REG_UNUSED notes. */
#define DF_LAST_PROBLEM_PLUS1 (DF_NOTE + 1)
@@ -544,20 +542,14 @@ struct df
#define DF_SCAN_BB_INFO(BB) (df_scan_get_bb_info((BB)->index))
#define DF_RD_BB_INFO(BB) (df_rd_get_bb_info((BB)->index))
#define DF_LR_BB_INFO(BB) (df_lr_get_bb_info((BB)->index))
-#define DF_UREC_BB_INFO(BB) (df_urec_get_bb_info((BB)->index))
#define DF_LIVE_BB_INFO(BB) (df_live_get_bb_info((BB)->index))
/* Most transformations that wish to use live register analysis will
use these macros. This info is the and of the lr and live sets. */
#define DF_LIVE_IN(BB) (DF_LIVE_BB_INFO(BB)->in)
+#define DF_LIVE_TOP(BB) (DF_LIVE_BB_INFO(BB)->top)
#define DF_LIVE_OUT(BB) (DF_LIVE_BB_INFO(BB)->out)
-
-/* Live in for register allocation also takes into account several other factors. */
-#define DF_RA_LIVE_IN(BB) (DF_UREC_BB_INFO(BB)->in)
-#define DF_RA_LIVE_TOP(BB) (DF_UREC_BB_INFO(BB)->top)
-#define DF_RA_LIVE_OUT(BB) (DF_UREC_BB_INFO(BB)->out)
-
/* These macros are currently used by only reg-stack since it is not
tolerant of uninitialized variables. This intolerance should be
fixed because it causes other problems. */
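[ A rough illustration of how a consumer would use the new DF_LIVE_TOP macro;
  this is a sketch only, not part of the patch, and "bb" and "live" are
  assumed locals of the pass doing the walk. ]

    /* Seed a forward walk over BB with the registers live just before its
       first real insn.  Unlike DF_LIVE_IN, this includes the artificial
       defs that sit at the top of the block.  */
    bitmap live = BITMAP_ALLOC (NULL);
    bitmap_copy (live, DF_LIVE_TOP (bb));

    /* ... scan the insns of BB forward, updating LIVE ... */

    BITMAP_FREE (live);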
@@ -685,11 +677,21 @@ extern bitmap df_invalidated_by_call;
/* One of these structures is allocated for every basic block. */
struct df_scan_bb_info
{
- /* Defs at the start of a basic block that is the target of an
- exception edge. */
+ /* The entry block has many artificial defs and these are at the
+ bottom of the block.
+
+ Blocks that are targets of exception edges may have some
+ artificial defs. These are logically located at the top of the
+ block.
+
+   Blocks that are the targets of non-local gotos have the hard
+ frame pointer defined at the top of the block. */
struct df_ref **artificial_defs;
- /* Uses of hard registers that are live at every block. */
+ /* Blocks that are targets of exception edges may have some
+ artificial uses. These are logically at the top of the block.
+
+ Most blocks have artificial uses at the bottom of the block. */
struct df_ref **artificial_uses;
};
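[ The top/bottom split described in the rewritten comment is the same one
  used by the loops in df_lr_bb_local_compute later in this patch.  As a
  standalone sketch (illustrative only; bb_index is assumed to be a valid
  block index and n_top/n_bottom are just counters for the example), the two
  kinds of artificial defs are separated like this: ]

    struct df_ref **def_rec;
    unsigned int n_top = 0, n_bottom = 0;

    for (def_rec = df_get_artificial_defs (bb_index); *def_rec; def_rec++)
      {
	struct df_ref *def = *def_rec;
	if (DF_REF_FLAGS (def) & DF_REF_AT_TOP)
	  /* Logically before the first insn: EH dispatch registers, or the
	     hard frame pointer at a non-local goto target.  */
	  n_top++;
	else
	  /* Logically after the last insn, e.g. the entry block's
	     artificial defs.  */
	  n_bottom++;
      }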
@@ -714,11 +716,13 @@ struct df_rd_bb_info
df_lr_bb_info:IN is the "in" set of the traditional dataflow sense
which is the confluence of out sets of all predecessor blocks.
- The difference between IN and TOP is
- due to the artificial defs and uses at the top (DF_REF_TOP)
- (e.g. exception handling dispatch block, which can have
- a few registers defined by the runtime) - which is NOT included
- in the "in" set before this function but is included after.
+
+ The difference between IN and TOP is due to the artificial defs and
+ uses at the top (DF_REF_TOP) (e.g. exception handling dispatch
+ block and the targets of non-local gotos, which can have a few
+ registers defined by the runtime) - which is NOT included in the
+ "in" set before this function but is included after.
+
For the initial live set of forward scanning, TOP should be used
instead of IN - otherwise, artificial defs won't be in IN set
causing the bad transformation. TOP set can not simply be
@@ -726,7 +730,14 @@ struct df_rd_bb_info
because artificial defs might not be used at all,
in which case those defs are not live at any point
(except as a dangling def) - hence TOP has to be calculated
- during the LR problem computation and stored in df_lr_bb_info. */
+ during the LR problem computation and stored in df_lr_bb_info.
+
+ The adef and ause sets are defined only if the block has adefs or
+ auses. These are rare since they occur only in blocks that are the
+   targets of exception edges or non-local gotos.
+
+ If there are no adefs or auses, the top set is not allocated and
+ points to the in set. */
struct df_lr_bb_info
{
@@ -757,23 +768,11 @@ struct df_live_bb_info
/* The results of the dataflow problem. */
bitmap in; /* At the top of the block. */
- bitmap out; /* At the bottom of the block. */
-};
-
-/* Uninitialized registers. All bitmaps are referenced by the register number. */
-struct df_urec_bb_info
-{
- /* Local sets to describe the basic blocks. */
- bitmap earlyclobber; /* The set of registers that are referenced
- with an early clobber mode. */
- /* Kill and gen are defined as in the UR problem. */
- bitmap kill;
- bitmap gen;
-
- /* The results of the dataflow problem. */
- bitmap in; /* Just before the block. */
- bitmap top; /* Just before the first insn in the block. */
+ /* After the artificial defs and uses at the top of the block. This
+ is only defined if it would be different from the in set. See
+ the comment at the top of df_lr_bb_info. */
+ bitmap top;
bitmap out; /* At the bottom of the block. */
};
@@ -786,7 +785,6 @@ extern struct df *df;
#define df_rd (df->problems_by_index[DF_RD])
#define df_lr (df->problems_by_index[DF_LR])
#define df_live (df->problems_by_index[DF_LIVE])
-#define df_urec (df->problems_by_index[DF_UREC])
#define df_chain (df->problems_by_index[DF_CHAIN])
#define df_note (df->problems_by_index[DF_NOTE])
@@ -870,7 +868,6 @@ extern void df_lr_verify_transfer_functi
extern void df_live_verify_transfer_functions (void);
extern void df_live_add_problem (void);
extern void df_live_set_all_dirty (void);
-extern void df_urec_add_problem (void);
extern void df_chain_add_problem (enum df_chain_flags);
extern void df_note_add_problem (void);
extern void df_simulate_find_defs (rtx, bitmap);
@@ -955,16 +952,6 @@ df_live_get_bb_info (unsigned int index)
return NULL;
}
-static inline struct df_urec_bb_info *
-df_urec_get_bb_info (unsigned int index)
-{
- if (index < df_urec->block_info_size)
- return (struct df_urec_bb_info *) df_urec->block_info[index];
- else
- return NULL;
-}
-
-
/* Get the artificial defs for a basic block. */
static inline struct df_ref **
Index: init-regs.c
===================================================================
--- init-regs.c (revision 127514)
+++ init-regs.c (working copy)
@@ -117,7 +117,11 @@ initialize_uninitialized_regs (void)
}
if (optimize == 1)
- df_remove_problem (df_live);
+ {
+ if (dump_file)
+ df_dump (dump_file);
+ df_remove_problem (df_live);
+ }
BITMAP_FREE (already_genned);
}
Index: rtl.h
===================================================================
--- rtl.h (revision 127514)
+++ rtl.h (working copy)
@@ -1045,6 +1045,7 @@ extern bool subreg_offset_representable_
unsigned int, enum machine_mode);
extern unsigned int subreg_regno (const_rtx);
extern unsigned int subreg_nregs (const_rtx);
+extern unsigned int subreg_nregs_with_regno (unsigned int, const_rtx);
extern unsigned HOST_WIDE_INT nonzero_bits (const_rtx, enum machine_mode);
extern unsigned int num_sign_bit_copies (const_rtx, enum machine_mode);
extern bool constant_pool_constant_p (rtx);
Index: df-problems.c
===================================================================
--- df-problems.c (revision 127514)
+++ df-problems.c (working copy)
@@ -71,9 +71,7 @@ df_get_live_out (basic_block bb)
{
gcc_assert (df_lr);
- if (df_urec)
- return DF_RA_LIVE_OUT (bb);
- else if (df_live)
+ if (df_live)
return DF_LIVE_OUT (bb);
else
return DF_LR_OUT (bb);
@@ -89,9 +87,7 @@ df_get_live_in (basic_block bb)
{
gcc_assert (df_lr);
- if (df_urec)
- return DF_RA_LIVE_IN (bb);
- else if (df_live)
+ if (df_live)
return DF_LIVE_IN (bb);
else
return DF_LR_IN (bb);
@@ -107,8 +103,8 @@ df_get_live_top (basic_block bb)
{
gcc_assert (df_lr);
- if (df_urec)
- return DF_RA_LIVE_TOP (bb);
+ if (df_live)
+ return DF_LIVE_TOP (bb);
else
return DF_LR_TOP (bb);
}
@@ -814,8 +810,9 @@ df_lr_reset (bitmap all_blocks)
struct df_lr_bb_info *bb_info = df_lr_get_bb_info (bb_index);
gcc_assert (bb_info);
bitmap_clear (bb_info->in);
+ if (bb_info->in != bb_info->top)
+ bitmap_clear (bb_info->top);
bitmap_clear (bb_info->out);
- bitmap_clear (bb_info->top);
}
}
@@ -879,12 +876,14 @@ df_lr_bb_local_compute (unsigned int bb_
bitmap_set_bit (bb_info->use, DF_REF_REGNO (use));
}
}
- /* Process the registers set in an exception handler. */
+
+ /* Process the registers set in an exception handler or the hard
+      frame pointer if this block is the target of a non-local
+ goto. */
for (def_rec = df_get_artificial_defs (bb_index); *def_rec; def_rec++)
{
struct df_ref *def = *def_rec;
- if ((DF_REF_FLAGS (def) & DF_REF_AT_TOP)
- && (!(DF_REF_FLAGS (def) & (DF_REF_PARTIAL | DF_REF_CONDITIONAL))))
+ if (DF_REF_FLAGS (def) & DF_REF_AT_TOP)
{
unsigned int dregno = DF_REF_REGNO (def);
if (bb_info->adef == NULL)
@@ -1042,13 +1041,13 @@ df_lr_transfer_function (int bb_index)
bitmap use = bb_info->use;
bitmap def = bb_info->def;
bitmap top = bb_info->top;
- bitmap ause = bb_info->ause;
- bitmap adef = bb_info->adef;
bool changed;
changed = bitmap_ior_and_compl (top, use, out, def);
if (in != top)
{
+ bitmap ause = bb_info->ause;
+ bitmap adef = bb_info->adef;
gcc_assert (ause && adef);
changed |= bitmap_ior_and_compl (in, ause, top, adef);
}
@@ -1138,6 +1137,11 @@ df_lr_top_dump (basic_block bb, FILE *fi
fprintf (file, ";; old in \t");
df_print_regset (file, problem_data->in[bb->index]);
}
+ if (bb_info->top != bb_info->in)
+ {
+ fprintf (file, ";; lr top \t");
+ df_print_regset (file, bb_info->top);
+ }
fprintf (file, ";; lr use \t");
df_print_regset (file, bb_info->use);
fprintf (file, ";; lr def \t");
@@ -1425,6 +1429,10 @@ df_live_free_bb_info (basic_block bb ATT
{
BITMAP_FREE (bb_info->gen);
BITMAP_FREE (bb_info->kill);
+ if (bb_info->in == bb_info->top)
+ bb_info->top = NULL;
+ else
+ BITMAP_FREE (bb_info->top);
BITMAP_FREE (bb_info->in);
BITMAP_FREE (bb_info->out);
pool_free (df_live->block_pool, bb_info);
@@ -1463,6 +1471,7 @@ df_live_alloc (bitmap all_blocks ATTRIBU
bb_info->gen = BITMAP_ALLOC (NULL);
bb_info->in = BITMAP_ALLOC (NULL);
bb_info->out = BITMAP_ALLOC (NULL);
+ bb_info->top = bb_info->in;
}
}
df_live->optional_p = (optimize <= 1);
@@ -1482,6 +1491,8 @@ df_live_reset (bitmap all_blocks)
struct df_lr_bb_info *bb_info = df_lr_get_bb_info (bb_index);
gcc_assert (bb_info);
bitmap_clear (bb_info->in);
+ if (bb_info->in != bb_info->top)
+ bitmap_clear (bb_info->top);
bitmap_clear (bb_info->out);
}
}
@@ -1633,11 +1644,19 @@ df_live_local_finalize (bitmap all_block
{
struct df_lr_bb_info *bb_lr_info = df_lr_get_bb_info (bb_index);
struct df_live_bb_info *bb_live_info = df_live_get_bb_info (bb_index);
-
+
/* No register may reach a location where it is not used. Thus
we trim the rr result to the places where it is used. */
bitmap_and_into (bb_live_info->in, bb_lr_info->in);
bitmap_and_into (bb_live_info->out, bb_lr_info->out);
+ if (bb_lr_info->in != bb_lr_info->top)
+ {
+	  if (bb_live_info->top == bb_live_info->in)
+ bb_live_info->top = BITMAP_ALLOC (NULL);
+ bitmap_copy (bb_live_info->top, bb_live_info->in);
+ bitmap_ior_into (bb_live_info->top, bb_lr_info->adef);
+ bitmap_and_into (bb_live_info->top, bb_lr_info->top);
+ }
}
df_live->solutions_dirty = false;
@@ -1659,6 +1678,10 @@ df_live_free (void)
struct df_live_bb_info *bb_info = df_live_get_bb_info (i);
if (bb_info)
{
+ if (bb_info->top != bb_info->in)
+ BITMAP_FREE (bb_info->top);
+ else
+ bb_info->top = NULL;
BITMAP_FREE (bb_info->gen);
BITMAP_FREE (bb_info->kill);
BITMAP_FREE (bb_info->in);
@@ -1694,6 +1717,11 @@ df_live_top_dump (basic_block bb, FILE *
fprintf (file, ";; old in \t");
df_print_regset (file, problem_data->in[bb->index]);
}
+ if (bb_info->top != bb_info->in)
+ {
+ fprintf (file, ";; live top \t");
+ df_print_regset (file, bb_info->top);
+ }
fprintf (file, ";; live gen \t");
df_print_regset (file, bb_info->gen);
fprintf (file, ";; live kill\t");
@@ -1913,615 +1941,6 @@ df_live_verify_transfer_functions (void)
BITMAP_FREE (saved_kill);
BITMAP_FREE (all_blocks);
}
-
-
-
-/*----------------------------------------------------------------------------
- UNINITIALIZED REGISTERS WITH EARLYCLOBBER
-
- Find the set of uses for registers that are reachable from the entry
- block without passing thru a definition. In and out bitvectors are built
- for each basic block. The regnum is used to index into these sets.
- See df.h for details.
-
- This is a variant of the UR problem above that has a lot of special
- features just for the register allocation phase. This problem
- should go away if someone would fix the interference graph.
-
- ----------------------------------------------------------------------------*/
-
-/* Private data used to compute the solution for this problem. These
- data structures are not accessible outside of this module. */
-struct df_urec_problem_data
-{
- bool earlyclobbers_found; /* True if any instruction contains an
- earlyclobber. */
-#ifdef STACK_REGS
- bitmap stack_regs; /* Registers that may be allocated to a STACK_REGS. */
-#endif
-};
-
-
-/* Set basic block info. */
-
-static void
-df_urec_set_bb_info (unsigned int index,
- struct df_urec_bb_info *bb_info)
-{
- gcc_assert (df_urec);
- gcc_assert (index < df_urec->block_info_size);
- df_urec->block_info[index] = bb_info;
-}
-
-
-/* Free basic block info. */
-
-static void
-df_urec_free_bb_info (basic_block bb ATTRIBUTE_UNUSED,
- void *vbb_info)
-{
- struct df_urec_bb_info *bb_info = (struct df_urec_bb_info *) vbb_info;
- if (bb_info)
- {
- BITMAP_FREE (bb_info->gen);
- BITMAP_FREE (bb_info->kill);
- BITMAP_FREE (bb_info->in);
- BITMAP_FREE (bb_info->out);
- BITMAP_FREE (bb_info->earlyclobber);
- pool_free (df_urec->block_pool, bb_info);
- }
-}
-
-
-/* Allocate or reset bitmaps for DF_UREC blocks. The solution bits are
- not touched unless the block is new. */
-
-static void
-df_urec_alloc (bitmap all_blocks)
-
-{
- unsigned int bb_index;
- bitmap_iterator bi;
- struct df_urec_problem_data *problem_data
- = (struct df_urec_problem_data *) df_urec->problem_data;
-
- if (!df_urec->block_pool)
- df_urec->block_pool = create_alloc_pool ("df_urec_block pool",
- sizeof (struct df_urec_bb_info), 50);
-
- if (!df_urec->problem_data)
- {
- problem_data = XNEW (struct df_urec_problem_data);
- df_urec->problem_data = problem_data;
- }
- problem_data->earlyclobbers_found = false;
-
- df_grow_bb_info (df_urec);
-
- EXECUTE_IF_SET_IN_BITMAP (all_blocks, 0, bb_index, bi)
- {
- struct df_urec_bb_info *bb_info = df_urec_get_bb_info (bb_index);
- if (bb_info)
- {
- bitmap_clear (bb_info->kill);
- bitmap_clear (bb_info->gen);
- bitmap_clear (bb_info->earlyclobber);
- }
- else
- {
- bb_info = (struct df_urec_bb_info *) pool_alloc (df_urec->block_pool);
- df_urec_set_bb_info (bb_index, bb_info);
- bb_info->kill = BITMAP_ALLOC (NULL);
- bb_info->gen = BITMAP_ALLOC (NULL);
- bb_info->in = BITMAP_ALLOC (NULL);
- bb_info->out = BITMAP_ALLOC (NULL);
- bb_info->top = BITMAP_ALLOC (NULL);
- bb_info->earlyclobber = BITMAP_ALLOC (NULL);
- }
- }
- df_urec->optional_p = true;
-}
-
-
-/* The function modifies local info for register REG being changed in
- SETTER. DATA is used to pass the current basic block info. */
-
-static void
-df_urec_mark_reg_change (rtx reg, const_rtx setter, void *data)
-{
- int regno;
- int endregno;
- int i;
- struct df_urec_bb_info *bb_info = (struct df_urec_bb_info*) data;
-
- if (GET_CODE (reg) == SUBREG)
- reg = SUBREG_REG (reg);
-
- if (!REG_P (reg))
- return;
-
- regno = REGNO (reg);
- if (regno < FIRST_PSEUDO_REGISTER)
- {
- endregno = END_HARD_REGNO (reg);
- for (i = regno; i < endregno; i++)
- {
- bitmap_set_bit (bb_info->kill, i);
-
- if (GET_CODE (setter) != CLOBBER)
- bitmap_set_bit (bb_info->gen, i);
- else
- bitmap_clear_bit (bb_info->gen, i);
- }
- }
- else
- {
- bitmap_set_bit (bb_info->kill, regno);
-
- if (GET_CODE (setter) != CLOBBER)
- bitmap_set_bit (bb_info->gen, regno);
- else
- bitmap_clear_bit (bb_info->gen, regno);
- }
-}
-/* Classes of registers which could be early clobbered in the current
- insn. */
-
-static VEC(int,heap) *earlyclobber_regclass;
-
-/* This function finds and stores register classes that could be early
- clobbered in INSN. If any earlyclobber classes are found, the function
- returns TRUE, in all other cases it returns FALSE. */
-
-static bool
-df_urec_check_earlyclobber (rtx insn)
-{
- int opno;
- bool found = false;
-
- extract_insn (insn);
-
- VEC_truncate (int, earlyclobber_regclass, 0);
- for (opno = 0; opno < recog_data.n_operands; opno++)
- {
- char c;
- bool amp_p;
- int i;
- enum reg_class class;
- const char *p = recog_data.constraints[opno];
-
- class = NO_REGS;
- amp_p = false;
- for (;;)
- {
- c = *p;
- switch (c)
- {
- case '=': case '+': case '?':
- case '#': case '!':
- case '*': case '%':
- case 'm': case '<': case '>': case 'V': case 'o':
- case 'E': case 'F': case 'G': case 'H':
- case 's': case 'i': case 'n':
- case 'I': case 'J': case 'K': case 'L':
- case 'M': case 'N': case 'O': case 'P':
- case 'X':
- case '0': case '1': case '2': case '3': case '4':
- case '5': case '6': case '7': case '8': case '9':
- /* These don't say anything we care about. */
- break;
-
- case '&':
- amp_p = true;
- break;
- case '\0':
- case ',':
- if (amp_p && class != NO_REGS)
- {
- int rc;
-
- found = true;
- for (i = 0;
- VEC_iterate (int, earlyclobber_regclass, i, rc);
- i++)
- {
- if (rc == (int) class)
- goto found_rc;
- }
-
- /* We use VEC_quick_push here because
- earlyclobber_regclass holds no more than
- N_REG_CLASSES elements. */
- VEC_quick_push (int, earlyclobber_regclass, (int) class);
- found_rc:
- ;
- }
-
- amp_p = false;
- class = NO_REGS;
- break;
-
- case 'r':
- class = GENERAL_REGS;
- break;
-
- default:
- class = REG_CLASS_FROM_CONSTRAINT (c, p);
- break;
- }
- if (c == '\0')
- break;
- p += CONSTRAINT_LEN (c, p);
- }
- }
-
- return found;
-}
-
-/* The function checks that pseudo-register *X has a class
- intersecting with the class of pseudo-register could be early
- clobbered in the same insn.
-
- This function is a no-op if earlyclobber_regclass is empty.
-
- Reload can assign the same hard register to uninitialized
- pseudo-register and early clobbered pseudo-register in an insn if
- the pseudo-register is used first time in given BB and not lived at
- the BB start. To prevent this we don't change life information for
- such pseudo-registers. */
-
-static int
-df_urec_mark_reg_use_for_earlyclobber (rtx *x, void *data)
-{
- enum reg_class pref_class, alt_class;
- int i, regno;
- struct df_urec_bb_info *bb_info = (struct df_urec_bb_info*) data;
-
- if (REG_P (*x) && REGNO (*x) >= FIRST_PSEUDO_REGISTER)
- {
- int rc;
-
- regno = REGNO (*x);
- if (bitmap_bit_p (bb_info->kill, regno)
- || bitmap_bit_p (bb_info->gen, regno))
- return 0;
- pref_class = reg_preferred_class (regno);
- alt_class = reg_alternate_class (regno);
- for (i = 0; VEC_iterate (int, earlyclobber_regclass, i, rc); i++)
- {
- if (reg_classes_intersect_p (rc, pref_class)
- || (rc != NO_REGS
- && reg_classes_intersect_p (rc, alt_class)))
- {
- bitmap_set_bit (bb_info->earlyclobber, regno);
- break;
- }
- }
- }
- return 0;
-}
-
-/* The function processes all pseudo-registers in *X with the aid of
- previous function. */
-
-static void
-df_urec_mark_reg_use_for_earlyclobber_1 (rtx *x, void *data)
-{
- for_each_rtx (x, df_urec_mark_reg_use_for_earlyclobber, data);
-}
-
-
-/* Compute local uninitialized register info for basic block BB. */
-
-static void
-df_urec_bb_local_compute (unsigned int bb_index)
-{
- basic_block bb = BASIC_BLOCK (bb_index);
- struct df_urec_bb_info *bb_info = df_urec_get_bb_info (bb_index);
- rtx insn;
- struct df_ref **def_rec;
-
- for (def_rec = df_get_artificial_defs (bb_index); *def_rec; def_rec++)
- {
- struct df_ref *def = *def_rec;
- if (DF_REF_FLAGS (def) & DF_REF_AT_TOP)
- {
- unsigned int regno = DF_REF_REGNO (def);
- bitmap_set_bit (bb_info->gen, regno);
- }
- }
-
- FOR_BB_INSNS (bb, insn)
- {
- if (INSN_P (insn))
- {
- note_stores (PATTERN (insn), df_urec_mark_reg_change, bb_info);
- if (df_urec_check_earlyclobber (insn))
- {
- struct df_urec_problem_data *problem_data
- = (struct df_urec_problem_data *) df_urec->problem_data;
- problem_data->earlyclobbers_found = true;
- note_uses (&PATTERN (insn),
- df_urec_mark_reg_use_for_earlyclobber_1, bb_info);
- }
- }
- }
-
- for (def_rec = df_get_artificial_defs (bb_index); *def_rec; def_rec++)
- {
- struct df_ref *def = *def_rec;
- if ((DF_REF_FLAGS (def) & DF_REF_AT_TOP) == 0)
- {
- unsigned int regno = DF_REF_REGNO (def);
- bitmap_set_bit (bb_info->gen, regno);
- }
- }
-}
-
-
-/* Compute local uninitialized register info. */
-
-static void
-df_urec_local_compute (bitmap all_blocks)
-{
- unsigned int bb_index;
- bitmap_iterator bi;
-#ifdef STACK_REGS
- int i;
- HARD_REG_SET stack_hard_regs, used;
- struct df_urec_problem_data *problem_data
- = (struct df_urec_problem_data *) df_urec->problem_data;
-
- /* Any register that MAY be allocated to a register stack (like the
- 387) is treated poorly. Each such register is marked as being
- live everywhere. This keeps the register allocator and the
- subsequent passes from doing anything useful with these values.
-
- FIXME: This seems like an incredibly poor idea. */
-
- CLEAR_HARD_REG_SET (stack_hard_regs);
- for (i = FIRST_STACK_REG; i <= LAST_STACK_REG; i++)
- SET_HARD_REG_BIT (stack_hard_regs, i);
- problem_data->stack_regs = BITMAP_ALLOC (NULL);
- for (i = FIRST_PSEUDO_REGISTER; i < max_regno; i++)
- {
- COPY_HARD_REG_SET (used, reg_class_contents[reg_preferred_class (i)]);
- IOR_HARD_REG_SET (used, reg_class_contents[reg_alternate_class (i)]);
- AND_HARD_REG_SET (used, stack_hard_regs);
- if (!hard_reg_set_empty_p (used))
- bitmap_set_bit (problem_data->stack_regs, i);
- }
-#endif
-
- /* We know that earlyclobber_regclass holds no more than
- N_REG_CLASSES elements. See df_urec_check_earlyclobber. */
- earlyclobber_regclass = VEC_alloc (int, heap, N_REG_CLASSES);
-
- EXECUTE_IF_SET_IN_BITMAP (all_blocks, 0, bb_index, bi)
- {
- df_urec_bb_local_compute (bb_index);
- }
-
- VEC_free (int, heap, earlyclobber_regclass);
-}
-
-
-/* Initialize the solution vectors. */
-
-static void
-df_urec_init (bitmap all_blocks)
-{
- unsigned int bb_index;
- bitmap_iterator bi;
-
- EXECUTE_IF_SET_IN_BITMAP (all_blocks, 0, bb_index, bi)
- {
- struct df_urec_bb_info *bb_info = df_urec_get_bb_info (bb_index);
-
- bitmap_copy (bb_info->out, bb_info->gen);
- bitmap_clear (bb_info->in);
- }
-}
-
-
-/* Or in the stack regs, hard regs and early clobber regs into the
- urec_in sets of all of the blocks. */
-
-
-static void
-df_urec_local_finalize (bitmap all_blocks)
-{
- bitmap tmp = BITMAP_ALLOC (NULL);
- bitmap_iterator bi;
- unsigned int bb_index;
- struct df_urec_problem_data *problem_data
- = (struct df_urec_problem_data *) df_urec->problem_data;
-
- EXECUTE_IF_SET_IN_BITMAP (all_blocks, 0, bb_index, bi)
- {
- struct df_urec_bb_info *bb_info = df_urec_get_bb_info (bb_index);
- struct df_lr_bb_info *bb_lr_info = df_lr_get_bb_info (bb_index);
-
- if (bb_index != ENTRY_BLOCK && bb_index != EXIT_BLOCK)
- {
- if (problem_data->earlyclobbers_found)
- bitmap_ior_into (bb_info->in, bb_info->earlyclobber);
-
-#ifdef STACK_REGS
- /* We can not use the same stack register for uninitialized
- pseudo-register and another living pseudo-register
- because if the uninitialized pseudo-register dies,
- subsequent pass reg-stack will be confused (it will
- believe that the other register dies). */
- bitmap_ior_into (bb_info->in, problem_data->stack_regs);
- bitmap_ior_into (bb_info->out, problem_data->stack_regs);
-#endif
- }
-
- /* No register may reach a location where it is not used. Thus
- we trim the rr result to the places where it is used. */
- bitmap_and_into (bb_info->in, bb_lr_info->in);
- bitmap_and_into (bb_info->out, bb_lr_info->out);
- bitmap_copy (bb_info->top, bb_info->in);
- if (bb_lr_info->adef)
- bitmap_ior_into (bb_info->top, bb_lr_info->adef);
- bitmap_and_into (bb_info->top, bb_lr_info->top);
-#if 0
- /* Hard registers may still stick in the ur_out set, but not
- be in the ur_in set, if their only mention was in a call
- in this block. This is because a call kills in the lr
- problem but does not kill in the rr problem. To clean
- this up, we execute the transfer function on the lr_in
- set and then use that to knock bits out of ur_out. */
- bitmap_ior_and_compl (tmp, bb_info->gen, bb_lr_info->in,
- bb_info->kill);
- bitmap_and_into (bb_info->out, tmp);
-#endif
- }
-
-#ifdef STACK_REGS
- BITMAP_FREE (problem_data->stack_regs);
-#endif
- BITMAP_FREE (tmp);
-}
-
-
-/* Confluence function that ignores fake edges. */
-
-static void
-df_urec_confluence_n (edge e)
-{
- bitmap op1 = df_urec_get_bb_info (e->dest->index)->in;
- bitmap op2 = df_urec_get_bb_info (e->src->index)->out;
-
- if (e->flags & EDGE_FAKE)
- return;
-
- bitmap_ior_into (op1, op2);
-}
-
-
-/* Transfer function. */
-
-static bool
-df_urec_transfer_function (int bb_index)
-{
- struct df_urec_bb_info *bb_info = df_urec_get_bb_info (bb_index);
- bitmap in = bb_info->in;
- bitmap out = bb_info->out;
- bitmap gen = bb_info->gen;
- bitmap kill = bb_info->kill;
-
- return bitmap_ior_and_compl (out, gen, in, kill);
-}
-
-
-/* Free all storage associated with the problem. */
-
-static void
-df_urec_free (void)
-{
- if (df_urec->block_info)
- {
- unsigned int i;
-
- for (i = 0; i < df_urec->block_info_size; i++)
- {
- struct df_urec_bb_info *bb_info = df_urec_get_bb_info (i);
- if (bb_info)
- {
- BITMAP_FREE (bb_info->gen);
- BITMAP_FREE (bb_info->kill);
- BITMAP_FREE (bb_info->in);
- BITMAP_FREE (bb_info->out);
- BITMAP_FREE (bb_info->earlyclobber);
- BITMAP_FREE (bb_info->top);
- }
- }
-
- free_alloc_pool (df_urec->block_pool);
-
- df_urec->block_info_size = 0;
- free (df_urec->block_info);
- free (df_urec->problem_data);
- }
- free (df_urec);
-}
-
-
-/* Debugging info at top of bb. */
-
-static void
-df_urec_top_dump (basic_block bb, FILE *file)
-{
- struct df_urec_bb_info *bb_info = df_urec_get_bb_info (bb->index);
- if (!bb_info || !bb_info->in)
- return;
-
- fprintf (file, ";; urec in \t");
- df_print_regset (file, bb_info->in);
- fprintf (file, ";; urec gen \t");
- df_print_regset (file, bb_info->gen);
- fprintf (file, ";; urec kill\t");
- df_print_regset (file, bb_info->kill);
- fprintf (file, ";; urec ec\t");
- df_print_regset (file, bb_info->earlyclobber);
-}
-
-
-/* Debugging info at bottom of bb. */
-
-static void
-df_urec_bottom_dump (basic_block bb, FILE *file)
-{
- struct df_urec_bb_info *bb_info = df_urec_get_bb_info (bb->index);
- if (!bb_info || !bb_info->out)
- return;
- fprintf (file, ";; urec out \t");
- df_print_regset (file, bb_info->out);
-}
-
-
-/* All of the information associated with every instance of the problem. */
-
-static struct df_problem problem_UREC =
-{
- DF_UREC, /* Problem id. */
- DF_FORWARD, /* Direction. */
- df_urec_alloc, /* Allocate the problem specific data. */
- NULL, /* Reset global information. */
- df_urec_free_bb_info, /* Free basic block info. */
- df_urec_local_compute, /* Local compute function. */
- df_urec_init, /* Init the solution specific data. */
- df_worklist_dataflow, /* Worklist solver. */
- NULL, /* Confluence operator 0. */
- df_urec_confluence_n, /* Confluence operator n. */
- df_urec_transfer_function, /* Transfer function. */
- df_urec_local_finalize, /* Finalize function. */
- df_urec_free, /* Free all of the problem information. */
- df_urec_free, /* Remove this problem from the stack of dataflow problems. */
- NULL, /* Debugging. */
- df_urec_top_dump, /* Debugging start block. */
- df_urec_bottom_dump, /* Debugging end block. */
- NULL, /* Incremental solution verify start. */
- NULL, /* Incremental solution verify end. */
- &problem_LR, /* Dependent problem. */
- TV_DF_UREC, /* Timing variable. */
- false /* Reset blocks on dropping out of blocks_to_analyze. */
-};
-
-
-/* Create a new DATAFLOW instance and add it to an existing instance
- of DF. The returned structure is what is used to get at the
- solution. */
-
-void
-df_urec_add_problem (void)
-{
- df_add_problem (&problem_UREC);
-}
-
-
/*----------------------------------------------------------------------------
CREATE DEF_USE (DU) and / or USE_DEF (UD) CHAINS
Index: Makefile.in
===================================================================
--- Makefile.in (revision 127514)
+++ Makefile.in (working copy)
@@ -1080,6 +1080,7 @@ OBJS-common = \
print-rtl.o \
print-tree.o \
profile.o \
+ ra-conflict.o \
real.o \
recog.o \
reg-stack.o \
@@ -2674,6 +2675,10 @@ bitmap.o : bitmap.c $(CONFIG_H) $(SYSTEM
global.o : global.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(RTL_H) \
$(FLAGS_H) reload.h $(FUNCTION_H) $(RECOG_H) $(REGS_H) hard-reg-set.h \
insn-config.h output.h toplev.h $(TM_P_H) $(MACHMODE_H) tree-pass.h \
+ $(TIMEVAR_H) vecprim.h $(DF_H) $(DBGCNT_H)
+ra-conflict.o : ra-conflict.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(RTL_H) \
+ $(FLAGS_H) reload.h $(FUNCTION_H) $(RECOG_H) $(REGS_H) hard-reg-set.h \
+ insn-config.h output.h toplev.h $(TM_P_H) $(MACHMODE_H) tree-pass.h \
$(TIMEVAR_H) vecprim.h $(DF_H)
varray.o : varray.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(GGC_H) \
$(HASHTAB_H) $(BCONFIG_H) $(VARRAY_H) toplev.h
Index: reload1.c
===================================================================
--- reload1.c (revision 127514)
+++ reload1.c (working copy)
@@ -547,7 +547,7 @@ compute_use_by_pseudos (HARD_REG_SET *to
if (r < 0)
{
/* reload_combine uses the information from
- DF_RA_LIVE_IN (BASIC_BLOCK), which might still
+ DF_LIVE_IN (BASIC_BLOCK), which might still
contain registers that have not actually been allocated
since they have an equivalence. */
gcc_assert (reload_completed);