This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.



Re: [trunk]: patch to properly build the conflict graph


Seongbae Park wrote:
> On 9/30/07, Kenneth Zadeck <zadeck@naturalbridge.com> wrote:
>   
>> This is an updated patch based on several comments that were made by Richard Sandiford.
>>
>> There are two significant changes as well as several comments that were added or fixed.
>>
>> 1) Several bugs were found in ra-conflict.c:set_renumbers_live that had to do with the way that offsets were computed.  These are now fixed.
>>
>> 2) At Richard's request, the code in check_insn_for_earlyclobber and set_conflicts_for_earlyclobber has been replaced with a new function, set_conflicts_for_earlyclobber, that works completely differently.
>>
>> The old code originally came from global.c:make_accurate_live_analysis and was modified once to go into df-problems.c for the UREC problem.  Richard thought this code had exceeded the crufty limit and was overly conservative.  The new code considers each alternative of the insn separately rather than merging all of the alternatives and analyzing that.  In this way, fewer interferences are added when the earlyclobbers appear only in some alternatives and the sets of registers that the inputs can be allocated to vary between alternatives.
>>
>> I do not know how much this makes a difference in practice.  I assume that it is highly platform dependent.
>>
>> I have tested this on x86-{64,32} as well as ia-64 and ppc-32.  Peter is checking this on ppc-64 and I have asked Richard to check it on mips.
>>
>> Kenny
>>     
>
> ra-conflict.c:273-287 is puzzling to me.
> Can you explain (and add comments on) why the inner "if" is necessary
> and what it does (especially in regard to recog_data.operand[use+1])?
>
> Seongbae
>   
Comment added and committed as revision 128957.

This patch was tested on mips-64, ppc-{64,32}, x86-{64,32}, ia-64, spu and
sparc.
I hope this coverage is good enough that there are no problems.

Kenny
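
For illustration, here is a minimal standalone sketch (not GCC code; the insn, its constraints and the class-overlap table are hypothetical) of the per-alternative earlyclobber scan described in the quoted message above.  A conflict is recorded only for the alternatives in which an earlyclobbered output's class actually overlaps an input's class, rather than after merging all alternatives:

#include <stdbool.h>
#include <stdio.h>

#define N_ALTS 2   /* hypothetical insn with two alternatives */
#define N_OPS  2   /* operand 0 is the output, operand 1 an input */

/* Hypothetical constraints: operand 0 is "=&r,m" (earlyclobber only in
   alternative 0), operand 1 is "r,r".  */
static const bool is_output[N_OPS] = { true, false };
static const bool earlyclobber[N_OPS][N_ALTS] = { { true, false },
                                                  { false, false } };
/* classes_overlap[def][use][alt]: do the operand classes overlap in
   this alternative?  Only alternative 0 keeps both operands in regs.  */
static const bool classes_overlap[N_OPS][N_OPS][N_ALTS] =
  { { { false, false }, { true,  false } },
    { { true,  false }, { false, false } } };

int
main (void)
{
  for (int alt = 0; alt < N_ALTS; alt++)
    for (int def = 0; def < N_OPS; def++)
      if (is_output[def] && earlyclobber[def][alt])
        for (int use = 0; use < N_OPS; use++)
          if (use != def
              && !is_output[use]
              && classes_overlap[def][use][alt])
            printf ("conflict: operand %d <=> operand %d (alternative %d)\n",
                    def, use, alt);
  return 0;
}

Compiled and run, this prints a single conflict for alternative 0 only; the old merged-alternative analysis would have recorded that conflict regardless of which alternative could actually be used.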

2007-09-02 Kenneth Zadeck <zadeck@naturalbridge.com>

* ra-conflict.c: New file.
* ra.h: New file.
* reload.c (push_reload, find_dummy_reload): Change DF_RA_LIVE
usage to DF_LIVE usage.
* rtlanal.c (subreg_nregs_with_regno): New function.
* df-scan.c (df_def_record_1, df_uses_record): Add code to set
DF_REF_EXTRACT, DF_REF_STRICT_LOWER_PART, and DF_REF_SUBREG flags.
(df_has_eh_preds): Removed.
(df_bb_refs_collect, df_exit_block_uses_collect): Changed calls from
df_has_eh_preds to bb_has_eh_pred.
* global.c (allocno, max_allocno, conflicts, allocno_row_words,
reg_allocno, EXECUTE_IF_SET_IN_ALLOCNO_SET): Moved to ra.h
(SET_ALLOCNO_LIVE, CLEAR_ALLOCNO_LIVE): Moved to ra-conflict.c.
(regs_set, record_one_conflict, record_conflicts, mark_reg_store,
mark_reg_clobber, mark_reg_conflicts, mark_reg_death): Deleted.
(global_alloc): Turn off rescanning insns after call to
global_conflicts and add call to set_preferences.
(global_conflicts): Moved to ra-conflict.c.
(set_preferences_1, set_preferences): New functions.
(mirror_conflicts): Changed types for various variables.
(mark_elimination): Change DF_RA_LIVE
usage to DF_LIVE usage.
(build_insn_chain): Rewritten from scratch and made local.
(print_insn_chain, print_insn_chains): New functions.
(dump_conflicts): Do not print conflicts for fixed_regs.
(rest_of_handle_global_alloc): Turn off insn rescanning.
* hard-reg-set.h: Fixed comment.
* local-alloc.c (update_equiv_regs): Change DF_RA_LIVE
usage to DF_LIVE usage and delete refs to TOP sets.
(block_alloc): Mark regs as live if they are in the artificial
defs at top of block.
(find_stack_regs): New function.
(rest_of_handle_local_alloc): Changed urec problem to live
problem and do not turn off df rescanning.
* df.h (DF_UREC, DF_UREC_BB_INFO, DF_LIVE_TOP, DF_RA_LIVE_IN,
DF_RA_LIVE_TOP, DF_RA_LIVE_OUT, df_urec_bb_info, df_urec,
df_urec_add_problem, df_urec_get_bb_info, df_has_eh_preds): Removed.
(DF_CHAIN, DF_NOTE): Renumbered.
(DF_REF_EXTRACT, DF_REF_STRICT_LOWER_PART, DF_REF_SUBREG): New
fields in df_ref_flags. The rest have been renumbered.
* init-regs.c (initialize_uninitialized_regs): Enhanced debugging
at -O1.
* rtl.h (subreg_nregs_with_regno): New function.
* df-problems.c (df_get_live_out, df_get_live_in,
df_get_live_top): Removed reference to DF_RA_LIVE.
(df_lr_reset, df_lr_transfer_function, df_live_free_bb_info,
df_live_alloc, df_live_reset, df_live_local_finalize,
df_live_free): Make top set only if different from in set.
(df_lr_top_dump, df_live_top_dump): Only print top set if
different from in set.
(df_lr_bb_local_compute): Removed unnecessary check.
(df_urec_problem_data, df_urec_set_bb_info, df_urec_free_bb_info,
df_urec_alloc, df_urec_mark_reg_change, earlyclobber_regclass,
df_urec_check_earlyclobber, df_urec_mark_reg_use_for_earlyclobber,
df_urec_mark_reg_use_for_earlyclobber_1, df_urec_bb_local_compute,
df_urec_local_compute, df_urec_init, df_urec_local_finalize,
df_urec_confluence_n, df_urec_transfer_function, df_urec_free,
df_urec_top_dump, df_urec_bottom_dump, problem_UREC,
df_urec_add_problem): Removed.
(df_simulate_fixup_sets): Changed call from df_has_eh_preds to
bb_has_eh_pred.
* Makefile.in (ra-conflict.o, ra.h): New dependencies.
* basic-block.h (bb_has_abnormal_pred): New function.
* reload1.c (compute_use_by_pseudos): Change DF_RA_LIVE
usage to DF_LIVE usage.
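
For reference, a minimal standalone sketch (not GCC code; names and sizes are simplified) of the row-per-allocno conflict bit matrix that the conflicts array and the SET_ALLOCNO_LIVE/GET_ALLOCNO_LIVE macros in ra-conflict.c below implement:

#include <stdio.h>
#include <stdlib.h>

#define BITS_PER_WORD (8 * sizeof (unsigned long long))

static int max_allocno;              /* number of allocnos */
static int row_words;                /* words per row of the matrix */
static unsigned long long *conflicts;

/* Record that allocno I conflicts with allocno J (one direction only).  */
static void
set_conflict (int i, int j)
{
  conflicts[i * row_words + j / BITS_PER_WORD]
    |= 1ULL << (j % BITS_PER_WORD);
}

static int
test_conflict (int i, int j)
{
  return (conflicts[i * row_words + j / BITS_PER_WORD]
          >> (j % BITS_PER_WORD)) & 1;
}

int
main (void)
{
  max_allocno = 100;
  row_words = (max_allocno + BITS_PER_WORD - 1) / BITS_PER_WORD;
  conflicts = calloc ((size_t) max_allocno * row_words, sizeof *conflicts);

  set_conflict (3, 97);
  printf ("3<=>97: %d  97<=>3: %d\n",
          test_conflict (3, 97), test_conflict (97, 3));

  free (conflicts);
  return 0;
}

Note that set_conflict only fills one direction of the matrix; in global.c the matrix is made symmetric later by mirror_conflicts.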




Index: ra-conflict.c
===================================================================
--- ra-conflict.c	(revision 0)
+++ ra-conflict.c	(revision 0)
@@ -0,0 +1,1120 @@
+/* Allocate registers for pseudo-registers that span basic blocks.
+   Copyright (C) 2007 Free Software Foundation, Inc.
+   Contributed by Kenneth Zadeck <zadeck@naturalbridge.com>
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify it under
+the terms of the GNU General Public License as published by the Free
+Software Foundation; either version 3, or (at your option) any later
+version.
+
+GCC is distributed in the hope that it will be useful, but WITHOUT ANY
+WARRANTY; without even the implied warranty of MERCHANTABILITY or
+FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3.  If not see
+<http://www.gnu.org/licenses/>.  */
+
+
+#include "config.h"
+#include "system.h"
+#include "coretypes.h"
+#include "tm.h"
+#include "machmode.h"
+#include "hard-reg-set.h"
+#include "rtl.h"
+#include "tm_p.h"
+#include "flags.h"
+#include "regs.h"
+#include "function.h"
+#include "insn-config.h"
+#include "recog.h"
+#include "reload.h"
+#include "output.h"
+#include "toplev.h"
+#include "tree-pass.h"
+#include "timevar.h"
+#include "df.h"
+#include "vecprim.h"
+#include "ra.h"
+#include "sbitmap.h"
+
+/* Test, set or clear bit number I in the allocno bit vector A,
+   which is indexed by allocno.  */
+
+#define SET_ALLOCNO_LIVE(A, I)			\
+  ((A)[(unsigned) (I) / HOST_BITS_PER_WIDE_INT]				\
+     |= ((HOST_WIDE_INT) 1 << ((unsigned) (I) % HOST_BITS_PER_WIDE_INT)))
+
+#define CLEAR_ALLOCNO_LIVE(A, I)		\
+  ((A)[(unsigned) (I) / HOST_BITS_PER_WIDE_INT]			\
+     &= ~((HOST_WIDE_INT) 1 << ((unsigned) (I) % HOST_BITS_PER_WIDE_INT)))
+
+#define GET_ALLOCNO_LIVE(A, I)		\
+  ((A)[(unsigned) (I) / HOST_BITS_PER_WIDE_INT]			\
+     & ((HOST_WIDE_INT) 1 << ((unsigned) (I) % HOST_BITS_PER_WIDE_INT)))
+
+/* Externs declared in ra.h.  */
+
+int max_allocno;
+struct allocno *allocno;
+HOST_WIDE_INT *conflicts;
+int allocno_row_words;
+int *reg_allocno;
+
+typedef struct df_ref * df_ref_t;
+DEF_VEC_P(df_ref_t);
+DEF_VEC_ALLOC_P(df_ref_t,heap);
+
+/* Add a conflict between R1 and R2.  */
+
+static void
+record_one_conflict_between_regnos (enum machine_mode mode1, int r1, 
+				    enum machine_mode mode2, int r2)
+{
+  if (dump_file)
+    fprintf (dump_file, "  rocbr adding %d<=>%d\n", r1, r2);
+  if (reg_allocno[r1] >= 0 && reg_allocno[r2] >= 0)
+    {
+      int tr1 = reg_allocno[r1];
+      int tr2 = reg_allocno[r2];
+      int ialloc_prod = tr1 * allocno_row_words;
+
+      SET_ALLOCNO_LIVE ((&conflicts[ialloc_prod]), tr2);
+    }
+  else if (reg_allocno[r1] >= 0)
+    {
+      int tr1 = reg_allocno[r1];
+
+      if (r2 < FIRST_PSEUDO_REGISTER)
+	add_to_hard_reg_set (&allocno[tr1].hard_reg_conflicts, mode2, r2);
+    }
+  else if (reg_allocno[r2] >= 0)
+    {
+      int tr2 = reg_allocno[r2];
+
+      if (r1 < FIRST_PSEUDO_REGISTER)
+        add_to_hard_reg_set (&allocno[tr2].hard_reg_conflicts, mode1, r1);
+    }
+
+  /* Now, recursively handle the reg_renumber cases.  */
+  if (reg_renumber[r1] >= 0)
+    record_one_conflict_between_regnos (mode1, reg_renumber[r1], mode2, r2);
+
+  if (reg_renumber[r2] >= 0)
+    record_one_conflict_between_regnos (mode1, r1, mode2, reg_renumber[r2]);
+}
+
+
+/* Record a conflict between register REGNO and everything currently
+   live.  REGNO must not be a pseudo reg that was allocated by
+   local_alloc; such numbers must be translated through reg_renumber
+   before calling here.  */
+
+static void
+record_one_conflict (HOST_WIDE_INT *allocnos_live, 
+		     HARD_REG_SET *hard_regs_live, int regno)
+{
+  int i;
+
+  if (regno < FIRST_PSEUDO_REGISTER)
+    /* When a hard register becomes live, record conflicts with live
+       pseudo regs.  */
+    EXECUTE_IF_SET_IN_ALLOCNO_SET (allocnos_live, i,
+      {
+	SET_HARD_REG_BIT (allocno[i].hard_reg_conflicts, regno);
+	if (dump_file)
+	  fprintf (dump_file, "  roc adding %d<=>%d\n", allocno[i].reg, regno);
+      });
+  else
+    /* When a pseudo-register becomes live, record conflicts first
+       with hard regs, then with other pseudo regs.  */
+    {
+      int ialloc = reg_allocno[regno];
+      int ialloc_prod = ialloc * allocno_row_words;
+
+      if (dump_file)
+	{
+	  fprintf (dump_file, "  roc adding %d<=>(", regno);
+	  for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
+	    if (TEST_HARD_REG_BIT (*hard_regs_live, i) 
+		&& !TEST_HARD_REG_BIT (allocno[ialloc].hard_reg_conflicts, i))
+	      fprintf (dump_file, "%d ", i);
+	  
+	  EXECUTE_IF_SET_IN_ALLOCNO_SET (allocnos_live, i,
+	    {
+	      if (!GET_ALLOCNO_LIVE (&conflicts[ialloc_prod], i))
+		fprintf (dump_file, "%d ", allocno[i].reg);
+	    });
+	  fprintf (dump_file, ")\n");
+	}
+
+      IOR_HARD_REG_SET (allocno[ialloc].hard_reg_conflicts, *hard_regs_live);
+
+      for (i = allocno_row_words - 1; i >= 0; i--)
+	conflicts[ialloc_prod + i] |= allocnos_live[i];
+    }
+}
+
+
+/* Handle the case where REG is set by the insn being scanned, during
+   the backward scan to accumulate conflicts.  Record a conflict with
+   all other registers already live.
+
+   REG might actually be something other than a register; if so, we do
+   nothing.  */
+
+static void
+mark_reg_store (HOST_WIDE_INT *allocnos_live, 
+		HARD_REG_SET *hard_regs_live, 
+		struct df_ref *ref)
+{
+  rtx reg = DF_REF_REG (ref);
+  unsigned int regno = DF_REF_REGNO (ref);
+  enum machine_mode mode = GET_MODE (reg);
+
+  /* Either this is one of the max_allocno pseudo regs not allocated,
+     or it is or has a hardware reg.  First handle the pseudo-regs.  */
+  if (regno >= FIRST_PSEUDO_REGISTER && reg_allocno[regno] >= 0)
+    record_one_conflict (allocnos_live, hard_regs_live, regno);
+
+  if (reg_renumber[regno] >= 0)
+    regno = reg_renumber[regno];
+
+  /* Handle hardware regs (and pseudos allocated to hard regs).  */
+  if (regno < FIRST_PSEUDO_REGISTER && ! fixed_regs[regno])
+    {
+      unsigned int start = regno;
+      unsigned int last = end_hard_regno (mode, regno);
+      if ((GET_CODE (reg) == SUBREG) && !DF_REF_FLAGS_IS_SET (ref, DF_REF_EXTRACT))
+	{
+	  start += subreg_regno_offset (regno, GET_MODE (SUBREG_REG (reg)),
+					SUBREG_BYTE (reg), GET_MODE (reg));
+	  last = start + subreg_nregs_with_regno (regno, reg);
+	}
+
+      regno = start;
+      while (regno < last)
+	record_one_conflict (allocnos_live, hard_regs_live, regno++);
+    }
+}
+
+
+/* Return true if REGNO with MODE can be assigned to a register in
+   RC.  */
+
+static bool
+may_overlap_class_p (enum machine_mode mode, unsigned int regno, 
+		     enum reg_class rc)
+{
+  if (regno >= FIRST_PSEUDO_REGISTER)
+    {
+      enum reg_class pref_class = reg_preferred_class (regno);
+      enum reg_class alt_class = reg_alternate_class (regno);
+      return (reg_classes_intersect_p (rc, pref_class)
+	      || reg_classes_intersect_p (rc, alt_class));
+    }
+  else
+    return in_hard_reg_set_p (reg_class_contents[rc], mode, regno);
+}
+
+
+/* SRC is an input operand to an instruction in which register DEST is
+   an output operand.  SRC may be bound to a member of class SRC_CLASS
+   and DEST may be bound to an earlyclobbered register that overlaps
+   SRC_CLASS.  If SRC is a register that might be allocated a member
+   of SRC_CLASS, add a conflict between it and DEST.  */
+
+static void
+add_conflicts_for_earlyclobber (rtx dest, enum reg_class src_class, rtx src)
+{
+  if (GET_CODE (src) == SUBREG)
+    src = SUBREG_REG (src);
+  if (REG_P (src)
+      && may_overlap_class_p (GET_MODE (src), REGNO (src), src_class))
+    record_one_conflict_between_regnos (GET_MODE (src), REGNO (src),
+                                        GET_MODE (dest), REGNO (dest));
+}
+
+
+/* Look at the defs in INSN and determine if any of them are marked
+   as early clobber.  If they are, add a conflict between each such
+   def and any input operand that could be allocated to the same
+   register.  */
+
+static void
+set_conflicts_for_earlyclobber (rtx insn)
+{
+  int alt;
+  int def;
+  int use;
+
+  extract_insn (insn);
+  preprocess_constraints ();
+
+  if (dump_file) 
+    fprintf (dump_file, "  starting early clobber conflicts.\n");
+
+  for (alt = 0; alt < recog_data.n_alternatives; alt++)
+    for (def = 0; def < recog_data.n_operands; def++)
+      if ((recog_op_alt[def][alt].earlyclobber)
+	  && (recog_op_alt[def][alt].cl != NO_REGS))
+	{
+	  rtx dreg = recog_data.operand[def];
+	  enum machine_mode dmode = recog_data.operand_mode[def];
+	  if (GET_CODE (dreg) == SUBREG)
+	    dreg = SUBREG_REG (dreg);
+	  if (REG_P (dreg)
+	      &&  may_overlap_class_p (dmode, REGNO (dreg), recog_op_alt[def][alt].cl))
+
+	    for (use = 0; use < recog_data.n_operands; use++)
+	      if (use != def
+		  && recog_data.operand_type[use] != OP_OUT
+		  && reg_classes_intersect_p (recog_op_alt[def][alt].cl,
+					      recog_op_alt[use][alt].cl))
+		{
+		  add_conflicts_for_earlyclobber (dreg,
+						  recog_op_alt[use][alt].cl,
+						  recog_data.operand[use]);
+		  /*  Reload may end up swapping commutative operands,
+		      so you have to take both orderings into account.
+		      The constraints for the two operands can be
+		      completely different.  (Indeed, if the
+		      constraints for the two operands are the same
+		      for all alternatives, there's no point marking
+		      them as commutative.)  */
+		  if (use < recog_data.n_operands + 1
+		      && recog_data.constraints[use][0] == '%')
+		    add_conflicts_for_earlyclobber (dreg,
+						    recog_op_alt[use][alt].cl,
+						    recog_data.operand[use + 1]);
+		}
+	}
+
+  if (dump_file) 
+    fprintf (dump_file, "  finished early clobber conflicts.\n");
+}
+
+
+/* Init LIVE_SUBREGS[ALLOCNUM] and LIVE_SUBREGS_USED[ALLOCNUM], using
+   REG to get the number of bytes and INIT_VALUE to get the
+   initialization.  ALLOCNUM need not be the regno of REG.  */
+
+void
+ra_init_live_subregs (bool init_value, 
+		      sbitmap *live_subregs, 
+		      int *live_subregs_used,
+		      int allocnum, 
+		      rtx reg)
+{
+  unsigned int regno = REGNO (SUBREG_REG (reg));
+  int size = GET_MODE_SIZE (GET_MODE (regno_reg_rtx[regno]));
+
+  gcc_assert (size > 0);
+
+  /* Been there, done that.  */
+  if (live_subregs_used[allocnum])
+    return;
+
+  /* Create a new one with zeros.  */
+  if (live_subregs[allocnum] == NULL)
+    live_subregs[allocnum] = sbitmap_alloc (size);
+
+  /* If the entire reg was live before blasting into subregs, we need
+     to init all of the subregs to ones; otherwise init them to zero.  */
+  if (init_value)
+    sbitmap_ones (live_subregs[allocnum]);
+  else 
+    sbitmap_zero (live_subregs[allocnum]);
+
+  /* Set the number of bits that we really want.  */
+  live_subregs_used[allocnum] = size;
+}
+
+
+/* Set REG to be not live in the sets ALLOCNOS_LIVE, LIVE_SUBREGS and
+   HARD_REGS_LIVE.  If EXTRACT is true, the entire reg is cleared even
+   if REG is a subreg.  */
+
+inline static void
+clear_reg_in_live (HOST_WIDE_INT *allocnos_live,
+		   sbitmap *live_subregs, 
+		   int *live_subregs_used,
+		   HARD_REG_SET *hard_regs_live, 
+		   rtx reg,
+		   bool extract)
+{
+  unsigned int regno = (GET_CODE (reg) == SUBREG) 
+    ? REGNO (SUBREG_REG (reg)): REGNO (reg);
+  int allocnum = reg_allocno[regno];
+
+  if (allocnum >= 0)
+    {
+      if ((GET_CODE (reg) == SUBREG) && !extract)
+
+	{
+	  unsigned int start = SUBREG_BYTE (reg);
+	  unsigned int last = start + GET_MODE_SIZE (GET_MODE (reg));
+
+	  ra_init_live_subregs (GET_ALLOCNO_LIVE (allocnos_live, allocnum) != 0, 
+				live_subregs, live_subregs_used, allocnum, reg);
+
+	  /* Ignore the paradoxical bits.  */
+	  if ((int)last > live_subregs_used[allocnum])
+	    last = live_subregs_used[allocnum];
+
+	  while (start < last)
+	    {
+	      RESET_BIT (live_subregs[allocnum], start);
+	      start++;
+	    }
+
+	  if (sbitmap_empty_p (live_subregs[allocnum]))
+	    {
+	      live_subregs_used[allocnum] = 0;
+	      CLEAR_ALLOCNO_LIVE (allocnos_live, allocnum);
+	    }
+	  else
+	    /* Set the allocnos live here because that bit has to be
+	       true to get us to look at the live_subregs fields.  */
+	    SET_ALLOCNO_LIVE (allocnos_live, allocnum);
+	}
+      else
+	{
+	  /* Resetting the live_subregs_used is effectively saying do not use the 
+	     subregs because we are writing the whole pseudo.  */
+	  live_subregs_used[allocnum] = 0;
+	  CLEAR_ALLOCNO_LIVE (allocnos_live, allocnum);
+	}
+    }
+
+  if (regno >= FIRST_PSEUDO_REGISTER)
+    return;
+
+  /* Handle hardware regs (and pseudos allocated to hard regs).  */
+  if (! fixed_regs[regno])
+    {
+      unsigned int start = regno;
+      if ((GET_CODE (reg) == SUBREG) && !extract)
+	{
+	  unsigned int last;
+	  start += SUBREG_BYTE (reg);
+	  last = start + subreg_nregs_with_regno (regno, reg);
+	  regno = start;
+
+	  while (regno < last)
+	    {
+	      CLEAR_HARD_REG_BIT (*hard_regs_live, regno);
+	      regno++;
+	    }
+	}
+      else
+	remove_from_hard_reg_set (hard_regs_live, GET_MODE (reg), regno);
+    }
+}
+
+
+
+/* Set REG to be live in the sets ALLOCNOS_LIVE, LIVE_SUBREGS and
+   HARD_REGS_LIVE.  If EXTRACT is true, the entire reg is set live even
+   if REG is a subreg.  */
+
+inline static void
+set_reg_in_live (HOST_WIDE_INT *allocnos_live, 
+		 sbitmap *live_subregs, 
+		 int *live_subregs_used,
+		 HARD_REG_SET *hard_regs_live, 
+		 rtx reg,
+		 bool extract)
+{
+  unsigned int regno = (GET_CODE (reg) == SUBREG) 
+    ? REGNO (SUBREG_REG (reg)): REGNO (reg);
+  int allocnum = reg_allocno[regno];
+
+  if (allocnum >= 0)
+    {
+      if ((GET_CODE (reg) == SUBREG) && !extract)
+	{
+	  unsigned int start = SUBREG_BYTE (reg);
+	  unsigned int last = start + GET_MODE_SIZE (GET_MODE (reg));
+
+	  ra_init_live_subregs (GET_ALLOCNO_LIVE (allocnos_live, allocnum) != 0, 
+				live_subregs, live_subregs_used, allocnum, reg);
+	  
+	  /* Ignore the paradoxical bits.  */
+	  if ((int)last > live_subregs_used[allocnum])
+	    last = live_subregs_used[allocnum];
+
+	  while (start < last)
+	    {
+	      SET_BIT (live_subregs[allocnum], start);
+	      start++;
+	    }
+	}
+      else
+	/* Resetting the live_subregs_used is effectively saying do not use the 
+	   subregs because we are writing the whole pseudo.  */
+	  live_subregs_used[allocnum] = 0;
+     
+      SET_ALLOCNO_LIVE (allocnos_live, allocnum);
+    }
+      
+  if (regno >= FIRST_PSEUDO_REGISTER)
+    return;
+
+  /* Handle hardware regs (and pseudos allocated to hard regs).  */
+  if (! fixed_regs[regno])
+    {
+      if ((GET_CODE (reg) == SUBREG) && !extract)
+	{
+	  unsigned int start = regno;
+	  unsigned int last;
+
+	  start += subreg_regno_offset (regno, GET_MODE (SUBREG_REG (reg)),
+					SUBREG_BYTE (reg), GET_MODE (reg));
+	  last = start + subreg_nregs_with_regno (regno, reg);
+	  regno = start;
+
+	  while (regno < last)
+	    {
+	      SET_HARD_REG_BIT (*hard_regs_live, regno);
+	      regno++;
+	    }
+	}
+      else
+	add_to_hard_reg_set (hard_regs_live, GET_MODE (reg), regno);
+    }
+}
+
+
+/* Add hard regs to RENUMBERS_LIVE, assuming that the pseudo in
+   allocno[ALLOCNUM] is allocated to a set of hard regs starting at
+   RENUMBER.
+
+   We are smart about the case where only subregs of REG have been
+   set, as indicated by LIVE_SUBREGS[ALLOCNUM] and
+   LIVE_SUBREGS_USED[ALLOCNUM].  See global_conflicts for description
+   of LIVE_SUBREGS and LIVE_SUBREGS_USED.  */
+
+inline static void
+set_renumbers_live (HARD_REG_SET *renumbers_live,   
+		    sbitmap *live_subregs, 
+		    int *live_subregs_used,
+		    int allocnum, int renumber)
+{
+  /* The width of the pseudo.  */
+  int nbytes = live_subregs_used[allocnum];
+  int regno = allocno[allocnum].reg;
+  enum machine_mode mode = GET_MODE (regno_reg_rtx[regno]);
+
+  if (dump_file)
+    fprintf (dump_file, "  set_renumbers_live %d->%d ", 
+	     regno, renumber);
+
+  if (nbytes > 0)
+    {
+      int i;
+      sbitmap live_subs = live_subregs[allocnum];
+
+      /* First figure out how many hard regs we are considering using.  */
+      int target_nregs = hard_regno_nregs[renumber][mode];
+
+      /* Now figure out the number of bytes per hard reg.  Note that
+	 this may be different from what would be obtained by looking
+	 at the mode in the pseudo.  For instance, a complex number
+	 made up of 2 32-bit parts gets mapped to 2 hard regs, even if
+	 the hardregs are 64-bit floating point values.  */
+      int target_width = nbytes / target_nregs;
+      
+      if (dump_file)
+	fprintf (dump_file, "target_nregs=%d target_width=%d nbytes=%d", 
+		 target_nregs, target_width, nbytes);
+
+      for (i = 0; i < target_nregs; i++)
+	{
+	  int j;
+	  bool set = false;
+	  for (j = 0; j < target_width; j++)
+	    {
+	      int reg_start = i * target_width;
+	      if (reg_start + j >= nbytes)
+		break;
+	      set |= TEST_BIT (live_subs, reg_start + j);
+	    }
+
+	  if (set)
+	    SET_HARD_REG_BIT (*renumbers_live, renumber + i);
+	}
+    }
+  else
+    add_to_hard_reg_set (renumbers_live, mode, renumber);
+
+  if (dump_file)
+    fprintf (dump_file, "\n");
+}
+
+/* Dump out a REF with its reg_renumber range to FILE using
+   PREFIX.  */
+
+static void
+dump_ref (FILE *file, 
+	  const char * prefix, 
+	  const char * suffix, 
+	  rtx reg,
+	  unsigned int regno,
+	  sbitmap *live_subregs, 
+	  int *live_subregs_used
+)
+{
+  int allocnum = reg_allocno[regno];
+
+  fprintf (file, "%s %d", prefix, regno);
+  if (allocnum >= 0 
+      && live_subregs_used[allocnum] > 0)
+    {
+      int j;
+      char s = '[';
+      
+      for (j = 0; j < live_subregs_used[allocnum]; j++)
+	if (TEST_BIT (live_subregs[allocnum], j))
+	  {
+	    fprintf (dump_file, "%c%d", s, j);
+	    s = ',';
+	  }
+      fprintf (dump_file, "]");
+    }
+
+  if (reg_renumber[regno] >= 0)
+    {
+      enum machine_mode mode = GET_MODE (reg);
+      unsigned int start;
+      unsigned int last;
+
+      regno = reg_renumber[regno];
+
+      start = regno;
+      last = end_hard_regno (mode, regno);
+      if (GET_CODE (reg) == SUBREG)
+	{
+	  start += subreg_regno_offset (regno, GET_MODE (SUBREG_REG (reg)),
+					SUBREG_BYTE (reg), GET_MODE (reg));
+	  last = start + subreg_nregs_with_regno (regno, reg);
+	}
+
+      if (start == last - 1)
+	fprintf (file, "(%d)", start);
+      else 
+	fprintf (file, "(%d:%d..%d)", regno, start, last-1);
+    }
+  fprintf (file, suffix);
+}
+
+
+/* Scan the rtl code and record all conflicts and register preferences in the
+   conflict matrices and preference tables.  */
+
+void
+global_conflicts (void)
+{
+  unsigned int i;
+  basic_block bb;
+  rtx insn;
+
+  /* Regs that have allocnos can be in either 
+     hard_regs_live (if regno < FIRST_PSEUDO_REGISTER) or 
+     allocnos_live (if regno >= FIRST_PSEUDO_REGISTER) or 
+     both if local_alloc has preallocated it and reg_renumber >= 0.  */
+
+  HARD_REG_SET hard_regs_live;
+  HARD_REG_SET renumbers_live;
+  HOST_WIDE_INT *allocnos_live;
+  bitmap live = BITMAP_ALLOC (NULL);
+  VEC (df_ref_t, heap) *clobbers = NULL;
+  VEC (df_ref_t, heap) *dying_regs = NULL;
+
+  /* live_subregs is a vector used to keep accurate information about
+     which parts of multiword pseudos are live.  live_subregs and
+     live_subregs_used are indexed by reg_allocno.  The live_subregs
+     entry for a particular pseudo is a bitmap with one bit per byte
+     of the register.  It is only used if the corresponding element is
+     nonzero in live_subregs_used.  The value in live_subregs_used is
+     the number of bytes that the pseudo can occupy.  */
+  sbitmap *live_subregs = XCNEWVEC (sbitmap, max_allocno);
+  int *live_subregs_used = XNEWVEC (int, max_allocno);
+
+  if (dump_file)
+    {
+      fprintf (dump_file, "fixed registers : "); 
+      for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
+	if (fixed_regs[i])
+	  fprintf (dump_file, "%d ", i);
+      fprintf (dump_file, "\n");
+    }
+
+  allocnos_live = XNEWVEC (HOST_WIDE_INT, allocno_row_words);
+
+  FOR_EACH_BB (bb)
+    {
+      bitmap_iterator bi;
+
+      bitmap_copy (live, DF_LIVE_OUT (bb));
+      df_simulate_artificial_refs_at_end (bb, live);
+
+      memset (allocnos_live, 0, allocno_row_words * sizeof (HOST_WIDE_INT));
+      memset (live_subregs_used, 0, max_allocno * sizeof (int));
+      CLEAR_HARD_REG_SET (hard_regs_live);
+      CLEAR_HARD_REG_SET (renumbers_live);
+
+      /* Initialize allocnos_live and hard_regs_live for bottom of block.  */
+      EXECUTE_IF_SET_IN_BITMAP (live, 0, i, bi)
+	{
+	  if (i >= FIRST_PSEUDO_REGISTER)
+	    break;
+	  if (! fixed_regs[i])
+	    SET_HARD_REG_BIT (hard_regs_live, i);
+	}
+    
+      EXECUTE_IF_SET_IN_BITMAP (live, FIRST_PSEUDO_REGISTER, i, bi)
+	{
+	  int allocnum = reg_allocno[i];
+
+	  if (allocnum >= 0)
+	    {
+	      int renumber = reg_renumber[i];
+	      rtx reg = regno_reg_rtx[i];
+
+	      set_reg_in_live (allocnos_live, live_subregs, live_subregs_used, 
+			       &hard_regs_live, reg, false);
+	      if (renumber >= 0 && renumber < FIRST_PSEUDO_REGISTER)
+		set_renumbers_live (&renumbers_live, live_subregs, live_subregs_used, 
+				    allocnum, renumber);
+	    }
+	} 
+
+      if (dump_file)
+	fprintf (dump_file, "\nstarting basic block %d\n\n", bb->index);
+
+      FOR_BB_INSNS_REVERSE (bb, insn)
+	{
+	  unsigned int uid = INSN_UID (insn);
+	  struct df_ref **def_rec;
+	  struct df_ref **use_rec;
+
+	  if (!INSN_P (insn))
+	    continue;	
+
+	  if (dump_file)
+	    {
+	      fprintf (dump_file, "insn = %d live = hardregs [", uid);
+	      
+	      for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
+		if (TEST_HARD_REG_BIT (hard_regs_live, i))
+		  fprintf (dump_file, "%d ", i);
+
+	      fprintf (dump_file, "] renumbered [");
+	      for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
+		if (TEST_HARD_REG_BIT (renumbers_live, i))
+		  fprintf (dump_file, "%d ", i);
+
+	      fprintf (dump_file, "] pseudos [");
+	      EXECUTE_IF_SET_IN_ALLOCNO_SET (allocnos_live, i,
+		{
+		  dump_ref (dump_file, " ", "", regno_reg_rtx[allocno[i].reg],
+			    allocno[i].reg, live_subregs, live_subregs_used);
+		});
+	      fprintf (dump_file, "]\n");
+	    }
+
+	  /* Add the defs into live.  Most of them will already be
+	     there, the ones that are missing are the unused ones and
+	     the clobbers.  We do this in order to make sure that
+	     interferences are added between every def and everything
+	     that is live across the insn.  These defs will be removed
+	     later.  */
+	  for (def_rec = DF_INSN_UID_DEFS (uid); *def_rec; def_rec++)
+	    {
+	      struct df_ref *def = *def_rec;
+
+	      /* FIXME: Ignoring may clobbers is technically the wrong
+		 thing to do.  However, the old version of this code
+		 ignores may clobbers (and instead has many places in
+		 the register allocator to handle these constraints).
+		 It is quite likely that with a new allocator, the
+		 correct thing to do is to not ignore the constraints
+		 and then not put in the large number of special
+		 checks.  */
+	      if (!DF_REF_FLAGS_IS_SET (def, DF_REF_MAY_CLOBBER))
+		{
+		  rtx reg = DF_REF_REG (def);
+		  set_reg_in_live (allocnos_live, live_subregs, live_subregs_used, 
+				   &hard_regs_live, reg, 
+				   DF_REF_FLAGS_IS_SET (def, DF_REF_EXTRACT));
+		  if (dump_file)
+		    dump_ref (dump_file, "  adding def", "\n",
+			      reg, DF_REF_REGNO (def), live_subregs, live_subregs_used);
+		}
+	    }
+	  
+	  /* Add the hardregs into renumbers_live to build the
+	     interferences.  Renumbers_live will be rebuilt in the
+	     next step from scratch, so corrupting it here is no
+	     problem.  */
+	  IOR_HARD_REG_SET (renumbers_live, hard_regs_live);
+
+	  /* Add the interferences for the defs.  */
+	  for (def_rec = DF_INSN_UID_DEFS (uid); *def_rec; def_rec++)
+	    {
+	      struct df_ref *def = *def_rec;
+	      if (!DF_REF_FLAGS_IS_SET (def, DF_REF_MAY_CLOBBER))
+		mark_reg_store (allocnos_live, &renumbers_live, def);
+	    }
+	  
+	  /* Remove the defs from the live sets.  Leave the partial
+	     and conditional defs in the set because they do not
+	     kill.  */
+	  VEC_truncate (df_ref_t, clobbers, 0);
+	  for (def_rec = DF_INSN_UID_DEFS (uid); *def_rec; def_rec++)
+	    {
+	      struct df_ref *def = *def_rec;
+
+	      if (!DF_REF_FLAGS_IS_SET (def, DF_REF_CONDITIONAL))
+		{
+		  rtx reg = DF_REF_REG (def);
+
+		  clear_reg_in_live (allocnos_live, live_subregs, live_subregs_used,
+				     &hard_regs_live, reg,
+				     DF_REF_FLAGS_IS_SET (def, DF_REF_EXTRACT));
+		  if (dump_file)
+		    dump_ref (dump_file, "  clearing def", "\n", 
+			      reg, DF_REF_REGNO (def), live_subregs, live_subregs_used);
+		}
+	      
+	      if (DF_REF_FLAGS_IS_SET (def, DF_REF_MUST_CLOBBER))
+		VEC_safe_push (df_ref_t, heap, clobbers, def);
+	    }
+	  
+	  /* Go through all of the live pseudos and reset renumbers_live.
+	     We must start from scratch here because there could have
+	     been several pseudos alive that have the same
+	     reg_renumber, and if we see a clobber for one of them, we
+	     do not want to kill the renumbers from the other
+	     pseudos.  */
+	  CLEAR_HARD_REG_SET (renumbers_live);
+	  EXECUTE_IF_SET_IN_ALLOCNO_SET (allocnos_live, i,
+	    {
+	      unsigned int regno = allocno[i].reg;
+	      int renumber = reg_renumber[regno];
+
+	      if (renumber >= 0 && renumber < FIRST_PSEUDO_REGISTER)
+		set_renumbers_live (&renumbers_live, live_subregs, live_subregs_used, 
+				    i, renumber);
+	    }); 
+					 
+	  /* Add the uses to the live sets.  Keep track of the regs
+	     that are dying inside the insn; this set will be useful
+	     later.  */
+	  VEC_truncate (df_ref_t, dying_regs, 0);
+	  for (use_rec = DF_INSN_UID_USES (uid); *use_rec; use_rec++)
+	    {
+	      struct df_ref *use = *use_rec;
+	      unsigned int regno = DF_REF_REGNO (use);
+	      bool added = false;
+	      int renumber = reg_renumber[regno];
+	      int allocnum = reg_allocno[regno];
+	      bool renumbering = false;
+	      rtx reg = DF_REF_REG (use);
+
+	      /* DF_REF_READ_WRITE on a use means that this use is
+		 fabricated from a def that is a partial set to a
+		 multiword reg.  Here, we only model the subreg case
+		 precisely, so we do not need to look at the fabricated
+		 use unless that set also happens to be wrapped in a
+		 ZERO_EXTRACT.  */
+	      if (DF_REF_FLAGS_IS_SET (use, DF_REF_READ_WRITE) 
+		  && (!DF_REF_FLAGS_IS_SET (use, DF_REF_EXTRACT)) 
+		  && DF_REF_FLAGS_IS_SET (use, DF_REF_SUBREG))
+		continue;
+	      
+	      if (dump_file)
+		dump_ref (dump_file, "  seeing use", "\n",
+			  reg, regno, live_subregs, live_subregs_used);
+
+	      if (allocnum >= 0)
+		{
+		  if (GET_CODE (reg) == SUBREG
+		      && !DF_REF_FLAGS_IS_SET (use, DF_REF_EXTRACT)) 
+		    {
+		      unsigned int start = SUBREG_BYTE (reg);
+		      unsigned int last = start + GET_MODE_SIZE (GET_MODE (reg));
+
+		      ra_init_live_subregs (GET_ALLOCNO_LIVE (allocnos_live, allocnum) != 0, 
+					    live_subregs, live_subregs_used, allocnum, reg);
+		      
+		      /* Ignore the paradoxical bits.  */
+		      if ((int)last > live_subregs_used[allocnum])
+			last = live_subregs_used[allocnum];
+		      
+		      while (start < last)
+			{
+			  if (!TEST_BIT (live_subregs[allocnum], start)) 
+			    {
+			      if (dump_file)
+				fprintf (dump_file, "    dying pseudo subreg %d[%d]\n", regno, start);
+			      SET_BIT (live_subregs[allocnum], start);
+			      
+			      added = true;
+			    }
+			  start++;
+			}
+		      
+		      SET_ALLOCNO_LIVE (allocnos_live, allocnum);
+		      if (renumber >= 0 && renumber < FIRST_PSEUDO_REGISTER)
+			set_renumbers_live (&renumbers_live, live_subregs, live_subregs_used, 
+					    allocnum, renumber);
+		    }
+		  
+		  else if (GET_ALLOCNO_LIVE (allocnos_live, allocnum) == 0)
+		    {
+		      if (dump_file)
+			fprintf (dump_file, "    dying pseudo\n");
+		      
+		      /* Resetting the live_subregs_used is
+			 effectively saying do not use the subregs
+			 because we are reading the whole pseudo.  */
+		      live_subregs_used[allocnum] = 0;
+		      SET_ALLOCNO_LIVE (allocnos_live, allocnum);
+		      if (renumber >= 0 && renumber < FIRST_PSEUDO_REGISTER)
+			set_renumbers_live (&renumbers_live, live_subregs, live_subregs_used, 
+					    allocnum, renumber);
+		      added = true;
+		    }
+		}
+
+	      if (renumber >= 0 && renumber < FIRST_PSEUDO_REGISTER)
+		{
+		  regno = renumber;
+		  renumbering = true;
+		}
+	      
+	      if (regno < FIRST_PSEUDO_REGISTER)
+		{
+		  unsigned int start = regno;
+		  unsigned int last;
+		  if (GET_CODE (reg) == SUBREG)
+		    {
+		      start += subreg_regno_offset (regno, GET_MODE (SUBREG_REG (reg)),
+						    SUBREG_BYTE (reg), GET_MODE (reg));
+		      last = start + subreg_nregs_with_regno (regno, reg);
+		    }
+		  else
+		    last = end_hard_regno (GET_MODE (reg), regno);
+		  
+		  regno = start;
+		  while (regno < last)
+		    {
+		      if ((!TEST_HARD_REG_BIT (hard_regs_live, regno)) 
+			  && (!TEST_HARD_REG_BIT (renumbers_live, regno)) 
+			  && ! fixed_regs[regno])
+			{
+			  if (dump_file)
+			    fprintf (dump_file, "    dying hard reg %d\n", regno);
+			  if (renumbering)
+			    SET_HARD_REG_BIT (renumbers_live, regno);
+			  else
+			    SET_HARD_REG_BIT (hard_regs_live, regno);
+			  
+			  added = true;
+			}
+		      regno++;
+		    }
+		}
+	      if (added)
+		VEC_safe_push (df_ref_t, heap, dying_regs, use);
+	    }
+	  
+	  /* These three cases are all closely related: they all deal
+             with cases where some set of outputs of the insn needs to
+             conflict with some of the registers that are used by the
+             insn but die within it.  If no registers die within the
+             insn, the tests can be skipped.  */
+
+	  if (VEC_length (df_ref_t, dying_regs) > 0)
+	    {
+	      int k;
+	      /* There appears to be an ambiguity as to what a clobber
+		 means in an insn.  In some cases, the clobber happens
+		 within the processing of the insn and in some cases
+		 it happens at the end of processing the insn.  There
+		 is currently no way to distinguish these two cases, so
+		 this code causes real clobbers to interfere with
+		 registers that die within an insn.
+
+		 This is consistent with the prior version of the
+		 interference graph builder, but it was discovered
+		 while developing this version of the code that on
+		 some architectures, such as x86-64, the clobbers
+		 only appear to happen at the end of the insn.
+		 However, ppc-32 contains clobbers for which these
+		 interferences are necessary.
+
+		 FIXME: We should consider either adding a new kind of
+		 clobber, or adding a flag to the clobber to distinguish
+		 these two cases.  */
+	      for (k = VEC_length (df_ref_t, clobbers) - 1; k >= 0; k--)
+		{
+		  struct df_ref *def = VEC_index (df_ref_t, clobbers, k);
+		  int j;
+
+		  for (j = VEC_length (df_ref_t, dying_regs) - 1; j >= 0; j--)
+		    {
+		      struct df_ref *use = VEC_index (df_ref_t, dying_regs, j);
+		      record_one_conflict_between_regnos (GET_MODE (DF_REF_REG (def)),
+							  DF_REF_REGNO (def),
+							  GET_MODE (DF_REF_REG (use)),
+							  DF_REF_REGNO (use));
+		    }
+		}
+
+	      /* Early clobbers, by definition, need to clobber not only
+		 the registers that are live across the insn but also
+		 the registers that die within the insn.  The clobbering
+		 for registers live across the insn is handled above.  */
+	      set_conflicts_for_earlyclobber (insn);
+
+	      /* If INSN is a store with multiple outputs, then any
+		 reg that dies here and is used inside of the address
+		 of the output must conflict with the other outputs.
+
+		 FIXME: There has been some discussion as to whether
+		 this is the right place to handle this issue.  This is a
+		 holdover from an early version of global conflicts.
+
+		 1) There is some evidence that this code only deals with a
+		 bug that occurs only on the m68k.  The conditions of this
+		 test are such that this case only triggers for a very
+		 peculiar insn, one that is a parallel where one of
+		 the sets is a store and the other sets a reg that is
+		 used in the address of the store.  See
+		 http://gcc.gnu.org/ml/gcc-patches/1998-12/msg00259.html
+
+		 2) The situation that this is addressing is a bug in
+		 the part of reload that handles stores; adding this
+		 conflict only hides the problem.  (Of course no one
+		 really wants to fix reload so it is understandable
+		 why a bandaid was just added here.)
+
+		 Just because an output is unused does not mean the
+		 compiler can assume the side effect will not occur.
+		 Consider if REG appears in the address of an output
+		 and we reload the output.  If we allocate REG to the
+		 same hard register as an unused output we could set
+		 the hard register before the output reload insn.
+
+		 3) This could actually be handled by making the other
+		 (non-store) operand of the insn be an early clobber.
+		 This would insert the same conflict, even if it is
+		 not technically an early clobber.  */
+
+	      /* It is unsafe to use !single_set here since it will ignore an
+		 unused output.  */
+	      if (GET_CODE (PATTERN (insn)) == PARALLEL && multiple_sets (insn))
+		{ 
+		  int j;
+		  for (j = VEC_length (df_ref_t, dying_regs) - 1; j >= 0; j--)
+		    {
+		      int used_in_output = 0;
+		      struct df_ref *use = VEC_index (df_ref_t, dying_regs, j);
+		      rtx reg = DF_REF_REG (use);
+		      int uregno = DF_REF_REGNO (use);
+		      enum machine_mode umode = GET_MODE (DF_REF_REG (use));
+		      int k;
+
+		      for (k = XVECLEN (PATTERN (insn), 0) - 1; k >= 0; k--)
+			{
+			  rtx set = XVECEXP (PATTERN (insn), 0, k);
+			  if (GET_CODE (set) == SET
+			      && !REG_P (SET_DEST (set))
+			      && !rtx_equal_p (reg, SET_DEST (set))
+			      && reg_overlap_mentioned_p (reg, SET_DEST (set)))
+			    used_in_output = 1;
+			}
+		      if (used_in_output)
+			for (k = XVECLEN (PATTERN (insn), 0) - 1; k >= 0; k--)
+			  {
+			    rtx set = XVECEXP (PATTERN (insn), 0, k);
+			    if (GET_CODE (set) == SET
+				&& REG_P (SET_DEST (set))
+				&& !rtx_equal_p (reg, SET_DEST (set)))
+			      record_one_conflict_between_regnos (GET_MODE (SET_DEST (set)),
+								  REGNO (SET_DEST (set)), 
+								  umode, uregno);
+			  }
+		    }
+		}
+	    }
+	}
+
+      /* Add renumbers_live into hard_regs_live for the next few
+	 calls.  All of this gets recomputed at the top of the loop so
+	 there is no harm.  */
+      IOR_HARD_REG_SET (hard_regs_live, renumbers_live);
+	  
+#ifdef EH_RETURN_DATA_REGNO
+      if (bb_has_eh_pred (bb))
+	{
+	  unsigned int i;
+    
+	  for (i = 0; ; ++i)
+	    {
+	      unsigned int regno = EH_RETURN_DATA_REGNO (i);
+	      if (regno == INVALID_REGNUM)
+		break;
+	      record_one_conflict (allocnos_live, &hard_regs_live, regno);
+	    }
+	}
+#endif
+
+      if (bb_has_abnormal_pred (bb))
+	{
+	  unsigned int i;
+#ifdef STACK_REGS
+	  /* Pseudos can't go in stack regs at the start of a basic block that
+	     is reached by an abnormal edge. Likewise for call clobbered regs,
+	     because caller-save, fixup_abnormal_edges and possibly the table
+	     driven EH machinery are not quite ready to handle such regs live
+	     across such edges.  */
+	  EXECUTE_IF_SET_IN_ALLOCNO_SET (allocnos_live, i,
+					 {
+					   allocno[i].no_stack_reg = 1;
+					 });
+
+	  for (i = FIRST_STACK_REG; i <= LAST_STACK_REG; i++)
+	    record_one_conflict (allocnos_live, &hard_regs_live, i);
+#endif
+	  
+	  /* No need to record conflicts for call clobbered regs if we have
+	     nonlocal labels around, as we don't ever try to allocate such
+	     regs in this case.  */
+	  if (! current_function_has_nonlocal_label)
+	    for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
+	      if (call_used_regs [i])
+		record_one_conflict (allocnos_live, &hard_regs_live, i);
+	}
+    }
+  
+  for (i = 0; i < (unsigned int)max_allocno; i++)
+    if (live_subregs[i])
+      free (live_subregs[i]);
+
+  /* Clean up.  */
+  free (allocnos_live);
+  free (live_subregs);
+  free (live_subregs_used);
+  VEC_free (df_ref_t, heap, dying_regs);
+  VEC_free (df_ref_t, heap, clobbers);
+  BITMAP_FREE (live);
+}
Index: reload.c
===================================================================
--- reload.c	(revision 128885)
+++ reload.c	(working copy)
@@ -1521,7 +1521,7 @@ push_reload (rtx in, rtx out, rtx *inloc
 	    /* Check that we don't use a hardreg for an uninitialized
 	       pseudo.  See also find_dummy_reload().  */
 	    && (ORIGINAL_REGNO (XEXP (note, 0)) < FIRST_PSEUDO_REGISTER
-		|| ! bitmap_bit_p (DF_RA_LIVE_OUT (ENTRY_BLOCK_PTR),
+		|| ! bitmap_bit_p (DF_LIVE_OUT (ENTRY_BLOCK_PTR),
 				   ORIGINAL_REGNO (XEXP (note, 0))))
 	    && ! refers_to_regno_for_reload_p (regno,
 					       end_hard_regno (rel_mode,
@@ -2000,7 +2000,7 @@ find_dummy_reload (rtx real_in, rtx real
 	   as they would clobber the other live pseudo using the same.
 	   See also PR20973.  */
       && (ORIGINAL_REGNO (in) < FIRST_PSEUDO_REGISTER
-          || ! bitmap_bit_p (DF_RA_LIVE_OUT (ENTRY_BLOCK_PTR),
+          || ! bitmap_bit_p (DF_LIVE_OUT (ENTRY_BLOCK_PTR),
 			     ORIGINAL_REGNO (in))))
     {
       unsigned int regno = REGNO (in) + in_offset;
Index: rtlanal.c
===================================================================
--- rtlanal.c	(revision 128885)
+++ rtlanal.c	(working copy)
@@ -3263,15 +3263,25 @@ subreg_regno (const_rtx x)
 unsigned int
 subreg_nregs (const_rtx x)
 {
+  return subreg_nregs_with_regno (REGNO (SUBREG_REG (x)), x);
+}
+
+/* Return the number of registers that the subreg expression X, whose
+   inner register has regno REGNO, refers to.  This is a copy of
+   rtlanal.c:subreg_nregs changed so that the regno can be passed in.  */
+
+unsigned int
+subreg_nregs_with_regno (unsigned int regno, const_rtx x)
+{
   struct subreg_info info;
   rtx subreg = SUBREG_REG (x);
-  int regno = REGNO (subreg);
 
   subreg_get_info (regno, GET_MODE (subreg), SUBREG_BYTE (x), GET_MODE (x),
 		   &info);
   return info.nregs;
 }
 
+
 struct parms_set_data
 {
   int nregs;
Index: df-scan.c
===================================================================
--- df-scan.c	(revision 128885)
+++ df-scan.c	(working copy)
@@ -2752,23 +2752,37 @@ df_def_record_1 (struct df_collection_re
 	 || GET_CODE (dst) == ZERO_EXTRACT)
     {
       flags |= DF_REF_READ_WRITE | DF_REF_PARTIAL;
+      if (GET_CODE (dst) == ZERO_EXTRACT)
+	flags |= DF_REF_EXTRACT;
+      else
+	flags |= DF_REF_STRICT_LOWER_PART;
+
       loc = &XEXP (dst, 0);
       dst = *loc;
     }
 
-  if (df_read_modify_subreg_p (dst))
-    flags |= DF_REF_READ_WRITE | DF_REF_PARTIAL;
+  /* At this point if we do not have a reg or a subreg, just return.  */
+  if (REG_P (dst))
+    {
+      df_ref_record (collection_rec, 
+		     dst, loc, bb, insn, DF_REF_REG_DEF, flags);
 
-  if (REG_P (dst)
-      || (GET_CODE (dst) == SUBREG && REG_P (SUBREG_REG (dst))))
-    df_ref_record (collection_rec, 
-                   dst, loc, bb, insn, DF_REF_REG_DEF, flags);
-
-  /* We want to keep sp alive everywhere - by making all
-     writes to sp also use of sp. */
-  if (REG_P (dst) && REGNO (dst) == STACK_POINTER_REGNUM)
-    df_ref_record (collection_rec,
-               dst, NULL, bb, insn, DF_REF_REG_USE, flags);
+      /* We want to keep sp alive everywhere - by making all
+	 writes to sp also use of sp. */
+      if (REGNO (dst) == STACK_POINTER_REGNUM)
+	df_ref_record (collection_rec,
+		       dst, NULL, bb, insn, DF_REF_REG_USE, flags);
+    }
+  else if (GET_CODE (dst) == SUBREG && REG_P (SUBREG_REG (dst)))
+    {
+      if (df_read_modify_subreg_p (dst))
+	flags |= DF_REF_READ_WRITE | DF_REF_PARTIAL;
+
+      flags |= DF_REF_SUBREG;
+
+      df_ref_record (collection_rec, 
+		     dst, loc, bb, insn, DF_REF_REG_DEF, flags);
+    }
 }
 
 
@@ -2880,7 +2894,8 @@ df_uses_record (struct df_collection_rec
 	      if (df_read_modify_subreg_p (dst))
 		{
 		  df_uses_record (collection_rec, &SUBREG_REG (dst), 
-				  DF_REF_REG_USE, bb, insn, flags | DF_REF_READ_WRITE);
+				  DF_REF_REG_USE, bb, insn, 
+				  flags | DF_REF_READ_WRITE | DF_REF_SUBREG);
 		  break;
 		}
 	      /* Fall through.  */
@@ -2902,13 +2917,15 @@ df_uses_record (struct df_collection_rec
 		dst = XEXP (dst, 0);
 		df_uses_record (collection_rec, 
 				(GET_CODE (dst) == SUBREG) ? &SUBREG_REG (dst) : temp, 
-				DF_REF_REG_USE, bb, insn, DF_REF_READ_WRITE);
+				DF_REF_REG_USE, bb, insn, 
+				DF_REF_READ_WRITE | DF_REF_STRICT_LOWER_PART);
 	      }
 	      break;
 	    case ZERO_EXTRACT:
 	    case SIGN_EXTRACT:
 	      df_uses_record (collection_rec, &XEXP (dst, 0), 
-			      DF_REF_REG_USE, bb, insn, DF_REF_READ_WRITE);
+			      DF_REF_REG_USE, bb, insn, 
+			      DF_REF_READ_WRITE | DF_REF_EXTRACT);
 	      df_uses_record (collection_rec, &XEXP (dst, 1), 
 			      DF_REF_REG_USE, bb, insn, flags);
 	      df_uses_record (collection_rec, &XEXP (dst, 2), 
@@ -3180,23 +3197,6 @@ df_insn_refs_collect (struct df_collecti
   df_canonize_collection_rec (collection_rec);
 }
 
-/* Return true if any pred of BB is an eh.  */
-
-bool
-df_has_eh_preds (basic_block bb)
-{
-  edge e;
-  edge_iterator ei;
-
-  FOR_EACH_EDGE (e, ei, bb->preds)
-    {
-      if (e->flags & EDGE_EH)
-	return true;
-    }
-  return false;
-}
-
-
 /* Recompute the luids for the insns in BB.  */
 
 void
@@ -3261,7 +3261,7 @@ df_bb_refs_collect (struct df_collection
     }
 
 #ifdef EH_RETURN_DATA_REGNO
-  if (df_has_eh_preds (bb))
+  if (bb_has_eh_pred (bb))
     {
       unsigned int i;
       /* Mark the registers that will contain data for the handler.  */
@@ -3278,7 +3278,7 @@ df_bb_refs_collect (struct df_collection
 
 
 #ifdef EH_USES
-  if (df_has_eh_preds (bb))
+  if (bb_has_eh_pred (bb))
     {
       unsigned int i;
       /* This code is putting in an artificial ref for the use at the
@@ -3310,7 +3310,7 @@ df_bb_refs_collect (struct df_collection
     {
       bitmap_iterator bi;
       unsigned int regno;
-      bitmap au = df_has_eh_preds (bb) 
+      bitmap au = bb_has_eh_pred (bb) 
 	? df->eh_block_artificial_uses 
 	: df->regular_block_artificial_uses;
 
@@ -3481,8 +3481,6 @@ df_mark_reg (rtx reg, void *vset)
 }
 
 
-
-
 /* Set the bit for regs that are considered being defined at the entry. */
 
 static void
@@ -3780,7 +3778,7 @@ df_exit_block_uses_collect (struct df_co
      I do not know why.  */
   if (reload_completed 
       && !bitmap_bit_p (exit_block_uses, ARG_POINTER_REGNUM)
-      && df_has_eh_preds (EXIT_BLOCK_PTR)
+      && bb_has_eh_pred (EXIT_BLOCK_PTR)
       && fixed_regs[ARG_POINTER_REGNUM])
     df_ref_record (collection_rec, regno_reg_rtx[ARG_POINTER_REGNUM], NULL,
 		   EXIT_BLOCK_PTR, NULL, DF_REF_REG_USE, 0);
Index: global.c
===================================================================
--- global.c	(revision 128885)
+++ global.c	(working copy)
@@ -41,6 +41,7 @@ along with GCC; see the file COPYING3.  
 #include "df.h"
 #include "vecprim.h"
 #include "dbgcnt.h"
+#include "ra.h"
 
 /* This pass of the compiler performs global register allocation.
    It assigns hard register numbers to all the pseudo registers
@@ -62,10 +63,13 @@ along with GCC; see the file COPYING3.  
    reg numbers to allocnos and vice versa.
    max_allocno gets the number of allocnos in use.
 
-   2. Allocate a max_allocno by max_allocno conflict bit matrix and clear it.
-   Allocate a max_allocno by FIRST_PSEUDO_REGISTER conflict matrix
-   for conflicts between allocnos and explicit hard register use
-   (which includes use of pseudo-registers allocated by local_alloc).
+   2. Allocate a max_allocno by max_allocno conflict bit matrix and
+   clear it.  This is called "conflict".
+
+   Allocate a max_allocno by FIRST_PSEUDO_REGISTER conflict matrix for
+   conflicts between allocnos and explicit hard register use (which
+   includes use of pseudo-registers allocated by local_alloc).  This
+   is the hard_reg_conflicts inside each allocno.
 
    3. For each basic block
     walk forward through the block, recording which
@@ -81,128 +85,16 @@ along with GCC; see the file COPYING3.  
    5. Allocate the variables in that order; each if possible into
    a preferred register, else into another register.  */
 
-/* Number of pseudo-registers which are candidates for allocation.  */
-
-static int max_allocno;
-
-/* Indexed by (pseudo) reg number, gives the allocno, or -1
-   for pseudo registers which are not to be allocated.  */
-
-static int *reg_allocno;
-
-struct allocno
-{
-  int reg;
-  /* Gives the number of consecutive hard registers needed by that
-     pseudo reg.  */
-  int size;
-
-  /* Number of calls crossed by each allocno.  */
-  int calls_crossed;
-
-  /* Number of calls that might throw crossed by each allocno.  */
-  int throwing_calls_crossed;
-
-  /* Number of refs to each allocno.  */
-  int n_refs;
-
-  /* Frequency of uses of each allocno.  */
-  int freq;
-
-  /* Guess at live length of each allocno.
-     This is actually the max of the live lengths of the regs.  */
-  int live_length;
-
-  /* Set of hard regs conflicting with allocno N.  */
-
-  HARD_REG_SET hard_reg_conflicts;
-
-  /* Set of hard regs preferred by allocno N.
-     This is used to make allocnos go into regs that are copied to or from them,
-     when possible, to reduce register shuffling.  */
-
-  HARD_REG_SET hard_reg_preferences;
-
-  /* Similar, but just counts register preferences made in simple copy
-     operations, rather than arithmetic.  These are given priority because
-     we can always eliminate an insn by using these, but using a register
-     in the above list won't always eliminate an insn.  */
-
-  HARD_REG_SET hard_reg_copy_preferences;
-
-  /* Similar to hard_reg_preferences, but includes bits for subsequent
-     registers when an allocno is multi-word.  The above variable is used for
-     allocation while this is used to build reg_someone_prefers, below.  */
-
-  HARD_REG_SET hard_reg_full_preferences;
-
-  /* Set of hard registers that some later allocno has a preference for.  */
-
-  HARD_REG_SET regs_someone_prefers;
-
-#ifdef STACK_REGS
-  /* Set to true if allocno can't be allocated in the stack register.  */
-  bool no_stack_reg;
-#endif
-};
-
-static struct allocno *allocno;
-
 /* A vector of the integers from 0 to max_allocno-1,
    sorted in the order of first-to-be-allocated first.  */
 
 static int *allocno_order;
 
-/* Define the number of bits in each element of `conflicts' and what
-   type that element has.  We use the largest integer format on the
-   host machine.  */
-
-#define INT_BITS HOST_BITS_PER_WIDE_INT
-#define INT_TYPE HOST_WIDE_INT
-
-/* max_allocno by max_allocno array of bits,
-   recording whether two allocno's conflict (can't go in the same
-   hardware register).
-
-   `conflicts' is symmetric after the call to mirror_conflicts.  */
-
-static INT_TYPE *conflicts;
-
-/* Number of ints required to hold max_allocno bits.
-   This is the length of a row in `conflicts'.  */
-
-static int allocno_row_words;
-
 /* Two macros to test or store 1 in an element of `conflicts'.  */
 
 #define CONFLICTP(I, J) \
- (conflicts[(I) * allocno_row_words + (unsigned) (J) / INT_BITS]	\
-  & ((INT_TYPE) 1 << ((unsigned) (J) % INT_BITS)))
-
-/* For any allocno set in ALLOCNO_SET, set ALLOCNO to that allocno,
-   and execute CODE.  */
-#define EXECUTE_IF_SET_IN_ALLOCNO_SET(ALLOCNO_SET, ALLOCNO, CODE)	\
-do {									\
-  int i_;								\
-  int allocno_;								\
-  INT_TYPE *p_ = (ALLOCNO_SET);						\
-									\
-  for (i_ = allocno_row_words - 1, allocno_ = 0; i_ >= 0;		\
-       i_--, allocno_ += INT_BITS)					\
-    {									\
-      unsigned INT_TYPE word_ = (unsigned INT_TYPE) *p_++;		\
-									\
-      for ((ALLOCNO) = allocno_; word_; word_ >>= 1, (ALLOCNO)++)	\
-	{								\
-	  if (word_ & 1)						\
-	    {CODE;}							\
-	}								\
-    }									\
-} while (0)
-
-/* Set of hard regs currently live (during scan of all insns).  */
-
-static HARD_REG_SET hard_regs_live;
+ (conflicts[(I) * allocno_row_words + (unsigned) (J) / HOST_BITS_PER_WIDE_INT]	\
+  & ((HOST_WIDE_INT) 1 << ((unsigned) (J) % HOST_BITS_PER_WIDE_INT)))
 
 /* Set of registers that global-alloc isn't supposed to use.  */
 
@@ -230,21 +122,6 @@ static int local_reg_live_length[FIRST_P
 
 #define SET_REGBIT(TABLE, I, J)  SET_HARD_REG_BIT (allocno[I].TABLE, J)
 
-/* Bit mask for allocnos live at current point in the scan.  */
-
-static INT_TYPE *allocnos_live;
-
-/* Test, set or clear bit number I in allocnos_live,
-   a bit vector indexed by allocno.  */
-
-#define SET_ALLOCNO_LIVE(I)				\
-  (allocnos_live[(unsigned) (I) / INT_BITS]		\
-     |= ((INT_TYPE) 1 << ((unsigned) (I) % INT_BITS)))
-
-#define CLEAR_ALLOCNO_LIVE(I)				\
-  (allocnos_live[(unsigned) (I) / INT_BITS]		\
-     &= ~((INT_TYPE) 1 << ((unsigned) (I) % INT_BITS)))
-
 /* This is turned off because it doesn't work right for DImode.
    (And it is only used for DImode, so the other cases are worthless.)
    The problem is that it isn't true that there is NO possibility of conflict;
@@ -270,12 +147,6 @@ static struct { int allocno1, allocno2;}
   no_conflict_pairs[NUM_NO_CONFLICT_PAIRS];
 #endif /* 0 */
 
-/* Record all regs that are set in any one insn.
-   Communication from mark_reg_{store,clobber} and global_conflicts.  */
-
-static VEC(rtx, heap) *regs_set;
-
-
 /* Return true if *LOC contains an asm.  */
 
 static int
@@ -336,23 +207,13 @@ compute_regs_asm_clobbered (char *regs_a
 static HARD_REG_SET eliminable_regset;
 
 static int allocno_compare (const void *, const void *);
-static void global_conflicts (void);
 static void mirror_conflicts (void);
 static void expand_preferences (void);
 static void prune_preferences (void);
+static void set_preferences (void);
 static void find_reg (int, HARD_REG_SET, int, int, int);
-static void record_one_conflict (int);
-static void record_conflicts (int *, int);
-static void mark_reg_store (rtx, const_rtx, void *);
-static void mark_reg_clobber (rtx, const_rtx, void *);
-static void mark_reg_conflicts (rtx);
-static void mark_reg_death (rtx);
-static void set_preference (rtx, rtx);
 static void dump_conflicts (FILE *);
-static void reg_becomes_live (rtx, const_rtx, void *);
-static void reg_dies (int, enum machine_mode, struct insn_chain *);
-
-
+static void build_insn_chain (void);
 
 
 /* Look through the list of eliminable registers.  Set ELIM_SET to the
@@ -572,29 +433,33 @@ global_alloc (void)
 	  fprintf (dump_file, " %d", (int)i);
       fprintf (dump_file, "\n");
     }
-  allocno_row_words = (max_allocno + INT_BITS - 1) / INT_BITS;
+  allocno_row_words = (max_allocno + HOST_BITS_PER_WIDE_INT - 1) / HOST_BITS_PER_WIDE_INT;
 
   /* We used to use alloca here, but the size of what it would try to
      allocate would occasionally cause it to exceed the stack limit and
      cause unpredictable core dumps.  Some examples were > 2Mb in size.  */
-  conflicts = XCNEWVEC (INT_TYPE, max_allocno * allocno_row_words);
-
-  allocnos_live = XNEWVEC (INT_TYPE, allocno_row_words);
+  conflicts = XCNEWVEC (HOST_WIDE_INT, max_allocno * allocno_row_words);
 
   /* If there is work to be done (at least one reg to allocate),
      perform global conflict analysis and allocate the regs.  */
 
   if (max_allocno > 0)
     {
-      /* Make a vector that mark_reg_{store,clobber} will store in.  */
-      if (!regs_set)
-	regs_set = VEC_alloc (rtx, heap, 10);
-
       /* Scan all the insns and compute the conflicts among allocnos
 	 and between allocnos and hard regs.  */
 
       global_conflicts ();
 
+      /* There is just too much going on in the register allocators to
+	 keep things up to date.  At the end we have to rescan anyway
+	 because things change when the reload_completed flag is set.  
+	 So we just turn off scanning and we will rescan by hand.  
+
+	 However, we needed to do the rescanning before this point to
+	 get the new insns inserted by local_alloc scanned for
+	 global_conflicts.  */
+      df_set_flags (DF_NO_INSN_RESCAN);
+
       mirror_conflicts ();
 
       /* Eliminate conflicts between pseudos and eliminable registers.  If
@@ -604,6 +469,8 @@ global_alloc (void)
 	 So in either case, we can ignore the conflict.  Likewise for
 	 preferences.  */
 
+      set_preferences ();
+
       for (i = 0; i < (size_t) max_allocno; i++)
 	{
 	  AND_COMPL_HARD_REG_SET (allocno[i].hard_reg_conflicts,
@@ -679,7 +546,7 @@ global_alloc (void)
   if (n_basic_blocks > NUM_FIXED_BLOCKS)
 #endif
     {
-      build_insn_chain (get_insns ());
+      build_insn_chain ();
       retval = reload (get_insns (), 1);
     }
 
@@ -687,7 +554,6 @@ global_alloc (void)
   free (reg_allocno);
   free (allocno);
   free (conflicts);
-  free (allocnos_live);
 
   return retval;
 }
@@ -720,238 +586,6 @@ allocno_compare (const void *v1p, const 
   return v1 - v2;
 }
 
-/* Scan the rtl code and record all conflicts and register preferences in the
-   conflict matrices and preference tables.  */
-
-static void
-global_conflicts (void)
-{
-  unsigned i;
-  basic_block b;
-  rtx insn;
-  int *block_start_allocnos;
-
-  block_start_allocnos = XNEWVEC (int, max_allocno);
-
-  FOR_EACH_BB (b)
-    {
-      memset (allocnos_live, 0, allocno_row_words * sizeof (INT_TYPE));
-
-      /* Initialize table of registers currently live
-	 to the state at the beginning of this basic block.
-	 This also marks the conflicts among hard registers
-	 and any allocnos that are live.
-
-	 For pseudo-regs, there is only one bit for each one
-	 no matter how many hard regs it occupies.
-	 This is ok; we know the size from PSEUDO_REGNO_SIZE.
-	 For explicit hard regs, we cannot know the size that way
-	 since one hard reg can be used with various sizes.
-	 Therefore, we must require that all the hard regs
-	 implicitly live as part of a multi-word hard reg
-	 be explicitly marked in basic_block_live_at_start.  */
-
-      {
-	int ax = 0;
-	reg_set_iterator rsi;
-
-	REG_SET_TO_HARD_REG_SET (hard_regs_live, DF_RA_LIVE_TOP (b));
-	EXECUTE_IF_SET_IN_REG_SET (DF_RA_LIVE_TOP (b), FIRST_PSEUDO_REGISTER, i, rsi)
-	  {
-	    int a = reg_allocno[i];
-	    if (a >= 0)
-	      {
-		SET_ALLOCNO_LIVE (a);
-		block_start_allocnos[ax++] = a;
-	      }
-	    else if ((a = reg_renumber[i]) >= 0)
-	      add_to_hard_reg_set (&hard_regs_live, PSEUDO_REGNO_MODE (i), a);
-	  }
-
-	/* Record that each allocno now live conflicts with each hard reg
-	   now live.
-
-	   It is not necessary to mark any conflicts between pseudos at
-	   this point, even for pseudos which are live at the start of
-	   the basic block.
-
-	     Given two pseudos X and Y and any point in the CFG P.
-
-	     On any path to point P where X and Y are live one of the
-	     following conditions must be true:
-
-		1. X is live at some instruction on the path that
-		   evaluates Y.
-
-		2. Y is live at some instruction on the path that
-		   evaluates X.
-
-		3. Either X or Y is not evaluated on the path to P
-		   (i.e. it is used uninitialized) and thus the
-		   conflict can be ignored.
-
-	    In cases #1 and #2 the conflict will be recorded when we
-	    scan the instruction that makes either X or Y become live.  */
-	record_conflicts (block_start_allocnos, ax);
-
-#ifdef EH_RETURN_DATA_REGNO
-	if (bb_has_eh_pred (b))
-	  {
-	    unsigned int i;
-	    
-	    for (i = 0; ; ++i)
-	      {
-		unsigned int regno = EH_RETURN_DATA_REGNO (i);
-		if (regno == INVALID_REGNUM)
-		  break;
-		record_one_conflict (regno);
-	      }
-	  }
-#endif
-
-	/* Pseudos can't go in stack regs at the start of a basic block that
-	   is reached by an abnormal edge. Likewise for call clobbered regs,
-	   because caller-save, fixup_abnormal_edges and possibly the table
-	   driven EH machinery are not quite ready to handle such regs live
-	   across such edges.  */
-	{
-	  edge e;
-	  edge_iterator ei;
-
-	  FOR_EACH_EDGE (e, ei, b->preds)
-	    if (e->flags & EDGE_ABNORMAL)
-	      break;
-
-	  if (e != NULL)
-	    {
-#ifdef STACK_REGS
-	      EXECUTE_IF_SET_IN_ALLOCNO_SET (allocnos_live, ax,
-					     {
-					       allocno[ax].no_stack_reg = 1;
-					     });
-	      for (ax = FIRST_STACK_REG; ax <= LAST_STACK_REG; ax++)
-		record_one_conflict (ax);
-#endif
-
-	      /* No need to record conflicts for call clobbered regs if we have
-		 nonlocal labels around, as we don't ever try to allocate such
-		 regs in this case.  */
-	      if (! current_function_has_nonlocal_label)
-		for (ax = 0; ax < FIRST_PSEUDO_REGISTER; ax++)
-		  if (call_used_regs [ax])
-		    record_one_conflict (ax);
-	    }
-	}
-      }
-
-      insn = BB_HEAD (b);
-
-      /* Scan the code of this basic block, noting which allocnos
-	 and hard regs are born or die.  When one is born,
-	 record a conflict with all others currently live.  */
-
-      while (1)
-	{
-	  RTX_CODE code = GET_CODE (insn);
-	  rtx link;
-
-	  gcc_assert (VEC_empty (rtx, regs_set));
-	  if (code == INSN || code == CALL_INSN || code == JUMP_INSN)
-	    {
-#if 0
-	      int i = 0;
-	      for (link = REG_NOTES (insn);
-		   link && i < NUM_NO_CONFLICT_PAIRS;
-		   link = XEXP (link, 1))
-		if (REG_NOTE_KIND (link) == REG_NO_CONFLICT)
-		  {
-		    no_conflict_pairs[i].allocno1
-		      = reg_allocno[REGNO (SET_DEST (PATTERN (insn)))];
-		    no_conflict_pairs[i].allocno2
-		      = reg_allocno[REGNO (XEXP (link, 0))];
-		    i++;
-		  }
-#endif /* 0 */
-
-	      /* Mark any registers clobbered by INSN as live,
-		 so they conflict with the inputs.  */
-
-	      note_stores (PATTERN (insn), mark_reg_clobber, NULL);
-
-#ifdef AUTO_INC_DEC
-	      /* Auto-increment instructions clobber the base
-		 register.  */
-	      for (link = REG_NOTES (insn); link; link = XEXP (link, 1))
-		if (REG_NOTE_KIND (link) == REG_INC)
-		  mark_reg_store (XEXP (link, 0), NULL_RTX, NULL);
-#endif
-	      /* Mark any registers dead after INSN as dead now.  */
-
-	      for (link = REG_NOTES (insn); link; link = XEXP (link, 1))
-		if (REG_NOTE_KIND (link) == REG_DEAD)
-		  mark_reg_death (XEXP (link, 0));
-
-	      /* Mark any registers set in INSN as live,
-		 and mark them as conflicting with all other live regs.
-		 Clobbers are processed again, so they conflict with
-		 the registers that are set.  */
-
-	      note_stores (PATTERN (insn), mark_reg_store, NULL);
-
-	      /* If INSN has multiple outputs, then any reg that dies here
-		 and is used inside of an output
-		 must conflict with the other outputs.
-
-		 It is unsafe to use !single_set here since it will ignore an
-		 unused output.  Just because an output is unused does not mean
-		 the compiler can assume the side effect will not occur.
-		 Consider if REG appears in the address of an output and we
-		 reload the output.  If we allocate REG to the same hard
-		 register as an unused output we could set the hard register
-		 before the output reload insn.  */
-	      if (GET_CODE (PATTERN (insn)) == PARALLEL && multiple_sets (insn))
-		for (link = REG_NOTES (insn); link; link = XEXP (link, 1))
-		  if (REG_NOTE_KIND (link) == REG_DEAD)
-		    {
-		      int used_in_output = 0;
-		      int i;
-		      rtx reg = XEXP (link, 0);
-
-		      for (i = XVECLEN (PATTERN (insn), 0) - 1; i >= 0; i--)
-			{
-			  rtx set = XVECEXP (PATTERN (insn), 0, i);
-			  if (GET_CODE (set) == SET
-			      && !REG_P (SET_DEST (set))
-			      && !rtx_equal_p (reg, SET_DEST (set))
-			      && reg_overlap_mentioned_p (reg, SET_DEST (set)))
-			    used_in_output = 1;
-			}
-		      if (used_in_output)
-			mark_reg_conflicts (reg);
-		    }
-
-	      /* Mark any registers set in INSN and then never used.  */
-
-	      while (!VEC_empty (rtx, regs_set))
-		{
-		  rtx reg = VEC_pop (rtx, regs_set);
-		  rtx note = find_regno_note (insn, REG_UNUSED,
-					      REGNO (reg));
-		  if (note)
-		    mark_reg_death (XEXP (note, 0));
-		}
-	    }
-
-	  if (insn == BB_END (b))
-	    break;
-	  insn = NEXT_INSN (insn);
-	}
-    }
-
-  /* Clean up.  */
-  free (block_start_allocnos);
-}
-
 /* Expand the preference information by looking for cases where one allocno
    dies in an insn that sets an allocno.  If those two allocnos don't conflict,
    merge any preferences between those allocnos.  */
@@ -999,6 +633,150 @@ expand_preferences (void)
 			      allocno[a1].hard_reg_full_preferences);
 	  }
 }
+
+
+/* Try to set a preference for an allocno to a hard register.
+   We are passed DEST and SRC which are the operands of a SET.  It is known
+   that DEST is a register.  If SRC or the first operand of SRC is a register,
+   try to set a preference.  If one of the two is a hard register and the other
+   is a pseudo-register, mark the preference.
+
+   Note that we are not as aggressive as local-alloc in trying to tie a
+   pseudo-register to a hard register.  */
+
+static void
+set_preference (rtx dest, rtx src)
+{
+  unsigned int src_regno, dest_regno, end_regno;
+  /* Amount to add to the hard regno for SRC, or subtract from that for DEST,
+     to compensate for subregs in SRC or DEST.  */
+  int offset = 0;
+  unsigned int i;
+  int copy = 1;
+
+  if (GET_RTX_FORMAT (GET_CODE (src))[0] == 'e')
+    src = XEXP (src, 0), copy = 0;
+
+  /* Get the reg number for both SRC and DEST.
+     If neither is a reg, give up.  */
+
+  if (REG_P (src))
+    src_regno = REGNO (src);
+  else if (GET_CODE (src) == SUBREG && REG_P (SUBREG_REG (src)))
+    {
+      src_regno = REGNO (SUBREG_REG (src));
+
+      if (REGNO (SUBREG_REG (src)) < FIRST_PSEUDO_REGISTER)
+	offset += subreg_regno_offset (REGNO (SUBREG_REG (src)),
+				       GET_MODE (SUBREG_REG (src)),
+				       SUBREG_BYTE (src),
+				       GET_MODE (src));
+      else
+	offset += (SUBREG_BYTE (src)
+		   / REGMODE_NATURAL_SIZE (GET_MODE (src)));
+    }
+  else
+    return;
+
+  if (REG_P (dest))
+    dest_regno = REGNO (dest);
+  else if (GET_CODE (dest) == SUBREG && REG_P (SUBREG_REG (dest)))
+    {
+      dest_regno = REGNO (SUBREG_REG (dest));
+
+      if (REGNO (SUBREG_REG (dest)) < FIRST_PSEUDO_REGISTER)
+	offset -= subreg_regno_offset (REGNO (SUBREG_REG (dest)),
+				       GET_MODE (SUBREG_REG (dest)),
+				       SUBREG_BYTE (dest),
+				       GET_MODE (dest));
+      else
+	offset -= (SUBREG_BYTE (dest)
+		   / REGMODE_NATURAL_SIZE (GET_MODE (dest)));
+    }
+  else
+    return;
+
+  /* Convert either or both to hard reg numbers.  */
+
+  if (reg_renumber[src_regno] >= 0)
+    src_regno = reg_renumber[src_regno];
+
+  if (reg_renumber[dest_regno] >= 0)
+    dest_regno = reg_renumber[dest_regno];
+
+  /* Now if one is a hard reg and the other is a global pseudo
+     then give the other a preference.  */
+
+  if (dest_regno < FIRST_PSEUDO_REGISTER && src_regno >= FIRST_PSEUDO_REGISTER
+      && reg_allocno[src_regno] >= 0)
+    {
+      dest_regno -= offset;
+      if (dest_regno < FIRST_PSEUDO_REGISTER)
+	{
+	  if (copy)
+	    SET_REGBIT (hard_reg_copy_preferences,
+			reg_allocno[src_regno], dest_regno);
+
+	  SET_REGBIT (hard_reg_preferences,
+		      reg_allocno[src_regno], dest_regno);
+	  end_regno = end_hard_regno (GET_MODE (dest), dest_regno);
+	  for (i = dest_regno; i < end_regno; i++)
+	    SET_REGBIT (hard_reg_full_preferences, reg_allocno[src_regno], i);
+	}
+    }
+
+  if (src_regno < FIRST_PSEUDO_REGISTER && dest_regno >= FIRST_PSEUDO_REGISTER
+      && reg_allocno[dest_regno] >= 0)
+    {
+      src_regno += offset;
+      if (src_regno < FIRST_PSEUDO_REGISTER)
+	{
+	  if (copy)
+	    SET_REGBIT (hard_reg_copy_preferences,
+			reg_allocno[dest_regno], src_regno);
+
+	  SET_REGBIT (hard_reg_preferences,
+		      reg_allocno[dest_regno], src_regno);
+	  end_regno = end_hard_regno (GET_MODE (src), src_regno);
+	  for (i = src_regno; i < end_regno; i++)
+	    SET_REGBIT (hard_reg_full_preferences, reg_allocno[dest_regno], i);
+	}
+    }
+}
+
+/* Helper function for set_preferences.  */
+static void
+set_preferences_1 (rtx reg, const_rtx setter, void *data ATTRIBUTE_UNUSED)
+{
+  if (GET_CODE (reg) == SUBREG)
+    reg = SUBREG_REG (reg);
+
+  if (!REG_P (reg))
+    return;
+
+  gcc_assert (setter);
+  if (GET_CODE (setter) != CLOBBER)
+    set_preference (reg, SET_SRC (setter));
+}
+
+/* Scan all of the insns and initialize the preferences.  */
+
+static void 
+set_preferences (void)
+{
+  basic_block bb;
+  rtx insn;
+  FOR_EACH_BB (bb)
+    FOR_BB_INSNS_REVERSE (bb, insn)
+      {
+	if (!INSN_P (insn))
+	  continue;	
+
+	note_stores (PATTERN (insn), set_preferences_1, NULL);
+      }
+}
+
+
 
 /* Prune the preferences for global registers to exclude registers that cannot
    be used.
@@ -1446,62 +1224,16 @@ retry_global_alloc (int regno, HARD_REG_
     }
 }
 
-/* Record a conflict between register REGNO
-   and everything currently live.
-   REGNO must not be a pseudo reg that was allocated
-   by local_alloc; such numbers must be translated through
-   reg_renumber before calling here.  */
-
-static void
-record_one_conflict (int regno)
-{
-  int j;
-
-  if (regno < FIRST_PSEUDO_REGISTER)
-    /* When a hard register becomes live,
-       record conflicts with live pseudo regs.  */
-    EXECUTE_IF_SET_IN_ALLOCNO_SET (allocnos_live, j,
-      {
-	SET_HARD_REG_BIT (allocno[j].hard_reg_conflicts, regno);
-      });
-  else
-    /* When a pseudo-register becomes live,
-       record conflicts first with hard regs,
-       then with other pseudo regs.  */
-    {
-      int ialloc = reg_allocno[regno];
-      int ialloc_prod = ialloc * allocno_row_words;
-
-      IOR_HARD_REG_SET (allocno[ialloc].hard_reg_conflicts, hard_regs_live);
-      for (j = allocno_row_words - 1; j >= 0; j--)
-	conflicts[ialloc_prod + j] |= allocnos_live[j];
-    }
-}
-
-/* Record all allocnos currently live as conflicting
-   with all hard regs currently live.
-
-   ALLOCNO_VEC is a vector of LEN allocnos, all allocnos that
-   are currently live.  Their bits are also flagged in allocnos_live.  */
-
-static void
-record_conflicts (int *allocno_vec, int len)
-{
-  while (--len >= 0)
-    IOR_HARD_REG_SET (allocno[allocno_vec[len]].hard_reg_conflicts,
-                      hard_regs_live);
-}
-
 /* If CONFLICTP (i, j) is true, make sure CONFLICTP (j, i) is also true.  */
 static void
 mirror_conflicts (void)
 {
   int i, j;
   int rw = allocno_row_words;
-  int rwb = rw * INT_BITS;
-  INT_TYPE *p = conflicts;
-  INT_TYPE *q0 = conflicts, *q1, *q2;
-  unsigned INT_TYPE mask;
+  int rwb = rw * HOST_BITS_PER_WIDE_INT;
+  HOST_WIDE_INT *p = conflicts;
+  HOST_WIDE_INT *q0 = conflicts, *q1, *q2;
+  unsigned HOST_WIDE_INT mask;
 
   for (i = max_allocno - 1, mask = 1; i >= 0; i--, mask <<= 1)
     {
@@ -1512,9 +1244,9 @@ mirror_conflicts (void)
 	}
       for (j = allocno_row_words - 1, q1 = q0; j >= 0; j--, q1 += rwb)
 	{
-	  unsigned INT_TYPE word;
+	  unsigned HOST_WIDE_INT word;
 
-	  for (word = (unsigned INT_TYPE) *p++, q2 = q1; word;
+	  for (word = (unsigned HOST_WIDE_INT) *p++, q2 = q1; word;
 	       word >>= 1, q2 += rw)
 	    {
 	      if (word & 1)
@@ -1524,252 +1256,6 @@ mirror_conflicts (void)
     }
 }
 
-/* Handle the case where REG is set by the insn being scanned,
-   during the forward scan to accumulate conflicts.
-   Store a 1 in regs_live or allocnos_live for this register, record how many
-   consecutive hardware registers it actually needs,
-   and record a conflict with all other registers already live.
-
-   Note that even if REG does not remain alive after this insn,
-   we must mark it here as live, to ensure a conflict between
-   REG and any other regs set in this insn that really do live.
-   This is because those other regs could be considered after this.
-
-   REG might actually be something other than a register;
-   if so, we do nothing.
-
-   SETTER is 0 if this register was modified by an auto-increment (i.e.,
-   a REG_INC note was found for it).  */
-
-static void
-mark_reg_store (rtx reg, const_rtx setter, void *data ATTRIBUTE_UNUSED)
-{
-  int regno;
-
-  if (GET_CODE (reg) == SUBREG)
-    reg = SUBREG_REG (reg);
-
-  if (!REG_P (reg))
-    return;
-
-  VEC_safe_push (rtx, heap, regs_set, reg);
-
-  if (setter && GET_CODE (setter) != CLOBBER)
-    set_preference (reg, SET_SRC (setter));
-
-  regno = REGNO (reg);
-
-  /* Either this is one of the max_allocno pseudo regs not allocated,
-     or it is or has a hardware reg.  First handle the pseudo-regs.  */
-  if (regno >= FIRST_PSEUDO_REGISTER)
-    {
-      if (reg_allocno[regno] >= 0)
-	{
-	  SET_ALLOCNO_LIVE (reg_allocno[regno]);
-	  record_one_conflict (regno);
-	}
-    }
-
-  if (reg_renumber[regno] >= 0)
-    regno = reg_renumber[regno];
-
-  /* Handle hardware regs (and pseudos allocated to hard regs).  */
-  if (regno < FIRST_PSEUDO_REGISTER && ! fixed_regs[regno])
-    {
-      int last = end_hard_regno (GET_MODE (reg), regno);
-      while (regno < last)
-	{
-	  record_one_conflict (regno);
-	  SET_HARD_REG_BIT (hard_regs_live, regno);
-	  regno++;
-	}
-    }
-}
-
-/* Like mark_reg_store except notice just CLOBBERs; ignore SETs.  */
-
-static void
-mark_reg_clobber (rtx reg, const_rtx setter, void *data)
-{
-  if (GET_CODE (setter) == CLOBBER)
-    mark_reg_store (reg, setter, data);
-}
-
-/* Record that REG has conflicts with all the regs currently live.
-   Do not mark REG itself as live.  */
-
-static void
-mark_reg_conflicts (rtx reg)
-{
-  int regno;
-
-  if (GET_CODE (reg) == SUBREG)
-    reg = SUBREG_REG (reg);
-
-  if (!REG_P (reg))
-    return;
-
-  regno = REGNO (reg);
-
-  /* Either this is one of the max_allocno pseudo regs not allocated,
-     or it is or has a hardware reg.  First handle the pseudo-regs.  */
-  if (regno >= FIRST_PSEUDO_REGISTER)
-    {
-      if (reg_allocno[regno] >= 0)
-	record_one_conflict (regno);
-    }
-
-  if (reg_renumber[regno] >= 0)
-    regno = reg_renumber[regno];
-
-  /* Handle hardware regs (and pseudos allocated to hard regs).  */
-  if (regno < FIRST_PSEUDO_REGISTER && ! fixed_regs[regno])
-    {
-      int last = end_hard_regno (GET_MODE (reg), regno);
-      while (regno < last)
-	{
-	  record_one_conflict (regno);
-	  regno++;
-	}
-    }
-}
-
-/* Mark REG as being dead (following the insn being scanned now).
-   Store a 0 in regs_live or allocnos_live for this register.  */
-
-static void
-mark_reg_death (rtx reg)
-{
-  int regno = REGNO (reg);
-
-  /* Either this is one of the max_allocno pseudo regs not allocated,
-     or it is a hardware reg.  First handle the pseudo-regs.  */
-  if (regno >= FIRST_PSEUDO_REGISTER)
-    {
-      if (reg_allocno[regno] >= 0)
-	CLEAR_ALLOCNO_LIVE (reg_allocno[regno]);
-    }
-
-  /* For pseudo reg, see if it has been assigned a hardware reg.  */
-  if (reg_renumber[regno] >= 0)
-    regno = reg_renumber[regno];
-
-  /* Handle hardware regs (and pseudos allocated to hard regs).  */
-  if (regno < FIRST_PSEUDO_REGISTER && ! fixed_regs[regno])
-    /* Pseudo regs already assigned hardware regs are treated
-       almost the same as explicit hardware regs.  */
-    remove_from_hard_reg_set (&hard_regs_live, GET_MODE (reg), regno);
-}
-
-/* Try to set a preference for an allocno to a hard register.
-   We are passed DEST and SRC which are the operands of a SET.  It is known
-   that SRC is a register.  If SRC or the first operand of SRC is a register,
-   try to set a preference.  If one of the two is a hard register and the other
-   is a pseudo-register, mark the preference.
-
-   Note that we are not as aggressive as local-alloc in trying to tie a
-   pseudo-register to a hard register.  */
-
-static void
-set_preference (rtx dest, rtx src)
-{
-  unsigned int src_regno, dest_regno, end_regno;
-  /* Amount to add to the hard regno for SRC, or subtract from that for DEST,
-     to compensate for subregs in SRC or DEST.  */
-  int offset = 0;
-  unsigned int i;
-  int copy = 1;
-
-  if (GET_RTX_FORMAT (GET_CODE (src))[0] == 'e')
-    src = XEXP (src, 0), copy = 0;
-
-  /* Get the reg number for both SRC and DEST.
-     If neither is a reg, give up.  */
-
-  if (REG_P (src))
-    src_regno = REGNO (src);
-  else if (GET_CODE (src) == SUBREG && REG_P (SUBREG_REG (src)))
-    {
-      src_regno = REGNO (SUBREG_REG (src));
-
-      if (REGNO (SUBREG_REG (src)) < FIRST_PSEUDO_REGISTER)
-	offset += subreg_regno_offset (REGNO (SUBREG_REG (src)),
-				       GET_MODE (SUBREG_REG (src)),
-				       SUBREG_BYTE (src),
-				       GET_MODE (src));
-      else
-	offset += (SUBREG_BYTE (src)
-		   / REGMODE_NATURAL_SIZE (GET_MODE (src)));
-    }
-  else
-    return;
-
-  if (REG_P (dest))
-    dest_regno = REGNO (dest);
-  else if (GET_CODE (dest) == SUBREG && REG_P (SUBREG_REG (dest)))
-    {
-      dest_regno = REGNO (SUBREG_REG (dest));
-
-      if (REGNO (SUBREG_REG (dest)) < FIRST_PSEUDO_REGISTER)
-	offset -= subreg_regno_offset (REGNO (SUBREG_REG (dest)),
-				       GET_MODE (SUBREG_REG (dest)),
-				       SUBREG_BYTE (dest),
-				       GET_MODE (dest));
-      else
-	offset -= (SUBREG_BYTE (dest)
-		   / REGMODE_NATURAL_SIZE (GET_MODE (dest)));
-    }
-  else
-    return;
-
-  /* Convert either or both to hard reg numbers.  */
-
-  if (reg_renumber[src_regno] >= 0)
-    src_regno = reg_renumber[src_regno];
-
-  if (reg_renumber[dest_regno] >= 0)
-    dest_regno = reg_renumber[dest_regno];
-
-  /* Now if one is a hard reg and the other is a global pseudo
-     then give the other a preference.  */
-
-  if (dest_regno < FIRST_PSEUDO_REGISTER && src_regno >= FIRST_PSEUDO_REGISTER
-      && reg_allocno[src_regno] >= 0)
-    {
-      dest_regno -= offset;
-      if (dest_regno < FIRST_PSEUDO_REGISTER)
-	{
-	  if (copy)
-	    SET_REGBIT (hard_reg_copy_preferences,
-			reg_allocno[src_regno], dest_regno);
-
-	  SET_REGBIT (hard_reg_preferences,
-		      reg_allocno[src_regno], dest_regno);
-	  end_regno = end_hard_regno (GET_MODE (dest), dest_regno);
-	  for (i = dest_regno; i < end_regno; i++)
-	    SET_REGBIT (hard_reg_full_preferences, reg_allocno[src_regno], i);
-	}
-    }
-
-  if (src_regno < FIRST_PSEUDO_REGISTER && dest_regno >= FIRST_PSEUDO_REGISTER
-      && reg_allocno[dest_regno] >= 0)
-    {
-      src_regno += offset;
-      if (src_regno < FIRST_PSEUDO_REGISTER)
-	{
-	  if (copy)
-	    SET_REGBIT (hard_reg_copy_preferences,
-			reg_allocno[dest_regno], src_regno);
-
-	  SET_REGBIT (hard_reg_preferences,
-		      reg_allocno[dest_regno], src_regno);
-	  end_regno = end_hard_regno (GET_MODE (src), src_regno);
-	  for (i = src_regno; i < end_regno; i++)
-	    SET_REGBIT (hard_reg_full_preferences, reg_allocno[dest_regno], i);
-	}
-    }
-}
-
 /* Indicate that hard register number FROM was eliminated and replaced with
    an offset from hard register number TO.  The status of hard registers live
    at the start of a basic block is updated by replacing a use of FROM with
@@ -1782,7 +1268,7 @@ mark_elimination (int from, int to)
 
   FOR_EACH_BB (bb)
     {
-      regset r = DF_RA_LIVE_IN (bb);
+      regset r = DF_LIVE_IN (bb);
       if (REGNO_REG_SET_P (r, from))
 	{
 	  CLEAR_REGNO_REG_SET (r, from);
@@ -1791,174 +1277,239 @@ mark_elimination (int from, int to)
     }
 }
 
-/* Used for communication between the following functions.  Holds the
-   current life information.  */
-static regset live_relevant_regs;
-
-/* Record in live_relevant_regs and REGS_SET that register REG became live.
-   This is called via note_stores.  */
 static void
-reg_becomes_live (rtx reg, const_rtx setter ATTRIBUTE_UNUSED, void *regs_set)
+print_insn_chain (FILE *file, struct insn_chain *c)
 {
-  int regno;
-
-  if (GET_CODE (reg) == SUBREG)
-    reg = SUBREG_REG (reg);
-
-  if (!REG_P (reg))
-    return;
-
-  regno = REGNO (reg);
-  if (regno < FIRST_PSEUDO_REGISTER)
-    {
-      int nregs = hard_regno_nregs[regno][GET_MODE (reg)];
-      while (nregs-- > 0)
-	{
-	  if (GET_CODE (setter) == CLOBBER)
-	    CLEAR_REGNO_REG_SET (live_relevant_regs, regno);
-	  else
-	    SET_REGNO_REG_SET (live_relevant_regs, regno);
-
-	  if (!fixed_regs[regno])
-	    SET_REGNO_REG_SET ((regset) regs_set, regno);
-	  regno++;
-	}
-    }
-  else if (reg_renumber[regno] >= 0)
-    {
-      if (GET_CODE (setter) == CLOBBER)
-	CLEAR_REGNO_REG_SET (live_relevant_regs, regno);
-      else
-	SET_REGNO_REG_SET (live_relevant_regs, regno);
-      SET_REGNO_REG_SET ((regset) regs_set, regno);
-    }
+  fprintf (file, "insn=%d, ", INSN_UID(c->insn));
+  bitmap_print (file, &c->live_throughout, "live_throughout: ", ", ");
+  bitmap_print (file, &c->dead_or_set, "dead_or_set: ", "\n");
 }
 
-/* Record in live_relevant_regs that register REGNO died.  */
 static void
-reg_dies (int regno, enum machine_mode mode, struct insn_chain *chain)
+print_insn_chains (FILE *file)
 {
-  if (regno < FIRST_PSEUDO_REGISTER)
-    {
-      int nregs = hard_regno_nregs[regno][mode];
-      while (nregs-- > 0)
-	{
-	  CLEAR_REGNO_REG_SET (live_relevant_regs, regno);
-	  if (! fixed_regs[regno])
-	    SET_REGNO_REG_SET (&chain->dead_or_set, regno);
-	  regno++;
-	}
-    }
-  else
-    {
-      CLEAR_REGNO_REG_SET (live_relevant_regs, regno);
-      if (reg_renumber[regno] >= 0)
-	SET_REGNO_REG_SET (&chain->dead_or_set, regno);
-    }
+  struct insn_chain *c;
+  for (c = reload_insn_chain; c ; c = c->next)
+    print_insn_chain (file, c);
 }
-
 /* Walk the insns of the current function and build reload_insn_chain,
    and record register life information.  */
-void
-build_insn_chain (rtx first)
+static void
+build_insn_chain (void)
 {
+  unsigned int i;
   struct insn_chain **p = &reload_insn_chain;
-  struct insn_chain *prev = 0;
-  basic_block b = ENTRY_BLOCK_PTR->next_bb;
+  basic_block bb;
+  struct insn_chain *c = NULL;
+  struct insn_chain *next = NULL;
+  bitmap live_relevant_regs = BITMAP_ALLOC (NULL);
+  bitmap elim_regset = BITMAP_ALLOC (NULL);
+  /* live_subregs is a vector used to keep accurate information about
+     which hardregs are live in multiword pseudos.  live_subregs and
+     live_subregs_used are indexed by pseudo number.  The live_subregs
+     entry for a particular pseudo is only used if the corresponding
+     element in live_subregs_used is nonzero.  The value in
+     live_subregs_used is the number of bytes that the pseudo can
+     occupy.  */
+  sbitmap *live_subregs = XCNEWVEC (sbitmap, max_regno);
+  int *live_subregs_used = XNEWVEC (int, max_regno);
 
-  live_relevant_regs = ALLOC_REG_SET (&reg_obstack);
+  for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
+    if (TEST_HARD_REG_BIT (eliminable_regset, i))
+      bitmap_set_bit (elim_regset, i);
 
-  for (; first; first = NEXT_INSN (first))
+  FOR_EACH_BB_REVERSE (bb)
     {
-      struct insn_chain *c;
-
-      if (first == BB_HEAD (b))
+      bitmap_iterator bi;
+      rtx insn;
+      
+      CLEAR_REG_SET (live_relevant_regs);
+      memset (live_subregs_used, 0, max_regno * sizeof (int));
+      
+      EXECUTE_IF_SET_IN_BITMAP (df_get_live_out (bb), 0, i, bi)
 	{
-	  unsigned i;
-	  bitmap_iterator bi;
-
-	  CLEAR_REG_SET (live_relevant_regs);
-
-	  EXECUTE_IF_SET_IN_BITMAP (df_get_live_top (b), 0, i, bi)
-	    {
-	      if (i < FIRST_PSEUDO_REGISTER
-		  ? ! TEST_HARD_REG_BIT (eliminable_regset, i)
-		  : reg_renumber[i] >= 0)
-		SET_REGNO_REG_SET (live_relevant_regs, i);
-	    }
+	  if (i >= FIRST_PSEUDO_REGISTER)
+	    break;
+	  bitmap_set_bit (live_relevant_regs, i);
 	}
 
-      if (!NOTE_P (first) && !BARRIER_P (first))
+      EXECUTE_IF_SET_IN_BITMAP (df_get_live_out (bb), FIRST_PSEUDO_REGISTER, i, bi)
 	{
-	  c = new_insn_chain ();
-	  c->prev = prev;
-	  prev = c;
-	  *p = c;
-	  p = &c->next;
-	  c->insn = first;
-	  c->block = b->index;
+	  if (reg_renumber[i] >= 0)
+	    bitmap_set_bit (live_relevant_regs, i);
+	}
 
-	  if (INSN_P (first))
+      FOR_BB_INSNS_REVERSE (bb, insn)
+	{
+	  if (!NOTE_P (insn) && !BARRIER_P (insn))
 	    {
-	      rtx link;
-
-	      /* Mark the death of everything that dies in this instruction.  */
-
-	      for (link = REG_NOTES (first); link; link = XEXP (link, 1))
-		if (REG_NOTE_KIND (link) == REG_DEAD
-		    && REG_P (XEXP (link, 0)))
-		  reg_dies (REGNO (XEXP (link, 0)), GET_MODE (XEXP (link, 0)),
-			    c);
-
-	      COPY_REG_SET (&c->live_throughout, live_relevant_regs);
-
-	      /* Mark everything born in this instruction as live.  */
+	      unsigned int uid = INSN_UID (insn);
+	      struct df_ref **def_rec;
+	      struct df_ref **use_rec;
+
+	      c = new_insn_chain ();
+	      c->next = next;
+	      next = c;
+	      *p = c;
+	      p = &c->prev;
+	      
+	      c->insn = insn;
+	      c->block = bb->index;
 
-	      note_stores (PATTERN (first), reg_becomes_live,
-			   &c->dead_or_set);
-	    }
-	  else
-	    COPY_REG_SET (&c->live_throughout, live_relevant_regs);
+	      if (INSN_P (insn))
+		for (def_rec = DF_INSN_UID_DEFS (uid); *def_rec; def_rec++)
+		  {
+		    struct df_ref *def = *def_rec;
+		    unsigned int regno = DF_REF_REGNO (def);
+		    
+		    /* Ignore may-clobbers because these are generated
+		       from calls.  However, every other kind of def is
+		       added to dead_or_set.  */
+		    if (!DF_REF_FLAGS_IS_SET (def, DF_REF_MAY_CLOBBER))
+		      {
+			if (regno < FIRST_PSEUDO_REGISTER)
+			  {
+			    if (! fixed_regs[regno])
+			      bitmap_set_bit (&c->dead_or_set, regno);
+			  }
+			else if (reg_renumber[regno] >= 0)
+			  bitmap_set_bit (&c->dead_or_set, regno);
+		      }
 
-	  if (INSN_P (first))
-	    {
-	      rtx link;
+		    if ((regno < FIRST_PSEUDO_REGISTER || reg_renumber[regno] >= 0)
+			&& (!DF_REF_FLAGS_IS_SET (def, DF_REF_CONDITIONAL)))
+		      {
+			rtx reg = DF_REF_REG (def);
+			/* We can model subregs, but not if they are
+			   wrapped in ZERO_EXTRACTS.  */
+			if (GET_CODE (reg) == SUBREG
+			    && !DF_REF_FLAGS_IS_SET (def, DF_REF_EXTRACT))
+			  {
+			    unsigned int start = SUBREG_BYTE (reg);
+			    unsigned int last = start + GET_MODE_SIZE (GET_MODE (reg));
 
-	      /* Mark anything that is set in this insn and then unused as dying.  */
+			    ra_init_live_subregs (bitmap_bit_p (live_relevant_regs, regno), 
+						  live_subregs, live_subregs_used,
+						  regno, reg);
+			    /* Ignore the paradoxical bits.  */
+			    if ((int)last > live_subregs_used[regno])
+			      last = live_subregs_used[regno];
+
+			    while (start < last)
+			      {
+				RESET_BIT (live_subregs[regno], start);
+				start++;
+			      }
+			    
+			    if (sbitmap_empty_p (live_subregs[regno]))
+			      {
+				live_subregs_used[regno] = 0;
+				bitmap_clear_bit (live_relevant_regs, regno);
+			      }
+			    else
+			      /* Set live_relevant_regs here because
+				 that bit has to be true to get us to
+				 look at the live_subregs fields.  */
+			      bitmap_set_bit (live_relevant_regs, regno);
+			  }
+			else
+			  {
+			    /* DF_REF_PARTIAL is generated for
+			       subregs, STRICT_LOW_PART, and
+			       ZERO_EXTRACT.  We handle the subreg
+			       case above so here we have to keep from
+			       modeling the def as a killing def.  */
+			    if (!DF_REF_FLAGS_IS_SET (def, DF_REF_PARTIAL))
+			      {
+				bitmap_clear_bit (live_relevant_regs, regno);
+				live_subregs_used[regno] = 0;
+			      }
+			  }
+		      }
+		  }
+	  
+	      bitmap_and_compl_into (live_relevant_regs, elim_regset);
+	      bitmap_copy (&c->live_throughout, live_relevant_regs);
 
-	      for (link = REG_NOTES (first); link; link = XEXP (link, 1))
-		if (REG_NOTE_KIND (link) == REG_UNUSED
-		    && REG_P (XEXP (link, 0)))
-		  reg_dies (REGNO (XEXP (link, 0)), GET_MODE (XEXP (link, 0)),
-			    c);
+	      if (INSN_P (insn))
+		for (use_rec = DF_INSN_UID_USES (uid); *use_rec; use_rec++)
+		  {
+		    struct df_ref *use = *use_rec;
+		    unsigned int regno = DF_REF_REGNO (use);
+		    rtx reg = DF_REF_REG (use);
+		    
+		    /* DF_REF_READ_WRITE on a use means that this use
+		       is fabricated from a def that is a partial set
+		       to a multiword reg.  Here, we only model the
+		       subreg case that is not wrapped in ZERO_EXTRACT
+		       precisely so we do not need to look at the
+		       fabricated use. */
+		    if (DF_REF_FLAGS_IS_SET (use, DF_REF_READ_WRITE) 
+			&& !DF_REF_FLAGS_IS_SET (use, DF_REF_EXTRACT) 
+			&& DF_REF_FLAGS_IS_SET (use, DF_REF_SUBREG))
+		      continue;
+		    
+		    /* Add the last use of each var to dead_or_set.  */
+		    if (!bitmap_bit_p (live_relevant_regs, regno))
+		      {
+			if (regno < FIRST_PSEUDO_REGISTER)
+			  {
+			    if (! fixed_regs[regno])
+			      bitmap_set_bit (&c->dead_or_set, regno);
+			  }
+			else if (reg_renumber[regno] >= 0)
+			  bitmap_set_bit (&c->dead_or_set, regno);
+		      }
+		    
+		    if (regno < FIRST_PSEUDO_REGISTER || reg_renumber[regno] >= 0)
+		      {
+			if (GET_CODE (reg) == SUBREG
+			    && !DF_REF_FLAGS_IS_SET (use, DF_REF_EXTRACT)) 
+			  {
+			    unsigned int start = SUBREG_BYTE (reg);
+			    unsigned int last = start + GET_MODE_SIZE (GET_MODE (reg));
+			    
+			    ra_init_live_subregs (bitmap_bit_p (live_relevant_regs, regno), 
+						  live_subregs, live_subregs_used,
+						  regno, reg);
+			    
+			    /* Ignore the paradoxical bits.  */
+			    if ((int)last > live_subregs_used[regno])
+			      last = live_subregs_used[regno];
+
+			    while (start < last)
+			      {
+				SET_BIT (live_subregs[regno], start);
+				start++;
+			      }
+			  }
+			else
+			  /* Resetting the live_subregs_used is
+			     effectively saying do not use the subregs
+			     because we are reading the whole
+			     pseudo.  */
+			  live_subregs_used[regno] = 0;
+			bitmap_set_bit (live_relevant_regs, regno);
+		      }
+		  }
 	    }
 	}
+    }
 
-      if (first == BB_END (b))
-	b = b->next_bb;
-
-      /* Stop after we pass the end of the last basic block.  Verify that
-	 no real insns are after the end of the last basic block.
+  for (i = 0; i < (unsigned int)max_regno; i++)
+    if (live_subregs[i])
+      free (live_subregs[i]);
+
+  reload_insn_chain = c;
+  *p = NULL;
+
+  free (live_subregs);
+  free (live_subregs_used);
+  BITMAP_FREE (live_relevant_regs);
+  BITMAP_FREE (elim_regset);
 
-	 We may want to reorganize the loop somewhat since this test should
-	 always be the right exit test.  Allow an ADDR_VEC or ADDR_DIF_VEC if
-	 the previous real insn is a JUMP_INSN.  */
-      if (b == EXIT_BLOCK_PTR)
-	{
-#ifdef ENABLE_CHECKING
-	  for (first = NEXT_INSN (first); first; first = NEXT_INSN (first))
-	    gcc_assert (!INSN_P (first)
-			|| GET_CODE (PATTERN (first)) == USE
-			|| ((GET_CODE (PATTERN (first)) == ADDR_VEC
-			     || GET_CODE (PATTERN (first)) == ADDR_DIFF_VEC)
-			    && prev_real_insn (first) != 0
-			    && JUMP_P (prev_real_insn (first))));
-#endif
-	  break;
-	}
-    }
-  FREE_REG_SET (live_relevant_regs);
-  *p = 0;
+  if (dump_file)
+    print_insn_chains (dump_file);
 }
 
 /* Print debugging trace information if -dg switch is given,
@@ -2001,7 +1552,7 @@ dump_conflicts (FILE *file)
 	if (CONFLICTP (j, i))
 	  fprintf (file, " %d", allocno[j].reg);
       for (j = 0; j < FIRST_PSEUDO_REGISTER; j++)
-	if (TEST_HARD_REG_BIT (allocno[i].hard_reg_conflicts, j))
+	if (TEST_HARD_REG_BIT (allocno[i].hard_reg_conflicts, j) && ! fixed_regs[j])
 	  fprintf (file, " %d", j);
       fprintf (file, "\n");
 
@@ -2055,8 +1606,13 @@ rest_of_handle_global_alloc (void)
     failure = global_alloc ();
   else
     {
+      /* There is just too much going on in the register allocators to
+	 keep things up to date.  At the end we have to rescan anyway
+	 because things change when the reload_completed flag is set.  
+	 So we just turn off scanning and we will rescan by hand.  */
+      df_set_flags (DF_NO_INSN_RESCAN);
       compute_regsets (&eliminable_regset, &no_global_alloc_regs);
-      build_insn_chain (get_insns ());
+      build_insn_chain ();
       df_set_flags (DF_NO_INSN_RESCAN);
       failure = reload (get_insns (), 0);
     }
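
As a rough aid to reading the new build_insn_chain: the live_subregs
machinery tracks the liveness of a multiword pseudo at byte granularity,
so that a def of one subreg kills only the bytes it covers and the pseudo
stays live while any byte of it is still live.  The following standalone
sketch (plain C, made-up names and sizes, not code from the patch) shows
the idea for the backward scan; in the patch the per-byte flags are
sbitmaps indexed from SUBREG_BYTE and live_subregs_used[regno] records
how many bytes are being tracked.

#include <stdio.h>

#define NBYTES 8		/* e.g. a DImode pseudo on a 32-bit target */

static unsigned char live_byte[NBYTES];	/* one flag per byte of the pseudo */

/* Walking backward, a def of bytes [start, start+size) kills just those
   bytes; a use makes just those bytes live.  */

static void
def_subreg (int start, int size)
{
  int i;
  for (i = start; i < start + size && i < NBYTES; i++)
    live_byte[i] = 0;
}

static void
use_subreg (int start, int size)
{
  int i;
  for (i = start; i < start + size && i < NBYTES; i++)
    live_byte[i] = 1;
}

static int
pseudo_live_p (void)
{
  int i;
  for (i = 0; i < NBYTES; i++)
    if (live_byte[i])
      return 1;
  return 0;
}

int
main (void)
{
  use_subreg (4, 4);	/* use of the high word: bytes 4..7 become live */
  def_subreg (4, 4);	/* def of the high word kills only bytes 4..7 */
  printf ("after high-word def: %s\n", pseudo_live_p () ? "live" : "dead");
  return 0;
}
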
Index: hard-reg-set.h
===================================================================
--- hard-reg-set.h	(revision 128885)
+++ hard-reg-set.h	(working copy)
@@ -380,7 +380,7 @@ hard_reg_set_empty_p (const HARD_REG_SET
   return x[0] == 0 && x[1] == 0 && x[2] == 0 && x[3] == 0;
 }
 
-#else /* FIRST_PSEUDO_REGISTER > 3*HOST_BITS_PER_WIDEST_FAST_INT */
+#else /* FIRST_PSEUDO_REGISTER > 4*HOST_BITS_PER_WIDEST_FAST_INT */
 
 #define CLEAR_HARD_REG_SET(TO)  \
 do { HARD_REG_ELT_TYPE *scan_tp_ = (TO);			\
Index: local-alloc.c
===================================================================
--- local-alloc.c	(revision 128885)
+++ local-alloc.c	(working copy)
@@ -1211,13 +1211,9 @@ update_equiv_regs (void)
   if (!bitmap_empty_p (cleared_regs))
     FOR_EACH_BB (bb)
       {
-	bitmap_and_compl_into (DF_RA_LIVE_IN (bb), cleared_regs);
-	if (DF_RA_LIVE_TOP (bb))
-	  bitmap_and_compl_into (DF_RA_LIVE_TOP (bb), cleared_regs);
-	bitmap_and_compl_into (DF_RA_LIVE_OUT (bb), cleared_regs);
+	bitmap_and_compl_into (DF_LIVE_IN (bb), cleared_regs);
+	bitmap_and_compl_into (DF_LIVE_OUT (bb), cleared_regs);
 	bitmap_and_compl_into (DF_LR_IN (bb), cleared_regs);
-	if (DF_LR_TOP (bb))
-	  bitmap_and_compl_into (DF_LR_TOP (bb), cleared_regs);
 	bitmap_and_compl_into (DF_LR_OUT (bb), cleared_regs);
       }
 
@@ -1277,6 +1273,7 @@ block_alloc (int b)
   int max_uid = get_max_uid ();
   int *qty_order;
   int no_conflict_combined_regno = -1;
+  struct df_ref ** def_rec;
 
   /* Count the instructions in the basic block.  */
 
@@ -1299,7 +1296,19 @@ block_alloc (int b)
 
   /* Initialize table of hardware registers currently live.  */
 
-  REG_SET_TO_HARD_REG_SET (regs_live, DF_LR_TOP (BASIC_BLOCK (b)));
+  REG_SET_TO_HARD_REG_SET (regs_live, DF_LR_IN (BASIC_BLOCK (b)));
+
+  /* This is conservative, as this would include registers that are
+     artificial-def'ed-but-not-used.  However, artificial-defs are
+     rare, and such uninitialized use is rarer still, and the chance
+     of this having any performance impact is even less, while the
+     benefit is not having to compute and keep the TOP set around.  */
+  for (def_rec = df_get_artificial_defs (b); *def_rec; def_rec++)
+    {
+      int regno = DF_REF_REGNO (*def_rec);
+      if (regno < FIRST_PSEUDO_REGISTER)
+	SET_HARD_REG_BIT (regs_live, regno);
+    }
 
   /* This loop scans the instructions of the basic block
      and assigns quantities to registers.
@@ -2502,6 +2511,49 @@ dump_local_alloc (FILE *file)
       fprintf (file, ";; Register %d in %d.\n", i, reg_renumber[i]);
 }
 
+#ifdef STACK_REGS
+static void
+find_stack_regs (void)
+{
+  bitmap stack_regs = BITMAP_ALLOC (NULL);
+  int i;
+  HARD_REG_SET stack_hard_regs, used;
+  basic_block bb;
+  
+  /* Any register that MAY be allocated to a register stack (like the
+     387) is treated poorly.  Each such register is marked as being
+     live everywhere.  This keeps the register allocator and the
+     subsequent passes from doing anything useful with these values.
+
+     FIXME: This seems like an incredibly poor idea.  */
+
+  CLEAR_HARD_REG_SET (stack_hard_regs);
+  for (i = FIRST_STACK_REG; i <= LAST_STACK_REG; i++)
+    SET_HARD_REG_BIT (stack_hard_regs, i);
+
+  for (i = FIRST_PSEUDO_REGISTER; i < max_regno; i++)
+    {
+      COPY_HARD_REG_SET (used, reg_class_contents[reg_preferred_class (i)]);
+      IOR_HARD_REG_SET (used, reg_class_contents[reg_alternate_class (i)]);
+      AND_HARD_REG_SET (used, stack_hard_regs);
+      if (!hard_reg_set_empty_p (used))
+	bitmap_set_bit (stack_regs, i);
+    }
+
+  if (dump_file)
+    bitmap_print (dump_file, stack_regs, "stack regs:", "\n");
+
+  FOR_EACH_BB (bb)
+    {
+      bitmap_ior_into (DF_LIVE_IN (bb), stack_regs);
+      bitmap_and_into (DF_LIVE_IN (bb), DF_LR_IN (bb));
+      bitmap_ior_into (DF_LIVE_OUT (bb), stack_regs);
+      bitmap_and_into (DF_LIVE_OUT (bb), DF_LR_OUT (bb));
+    }
+  BITMAP_FREE (stack_regs);
+}
+#endif
+
 /* Run old register allocator.  Return TRUE if we must exit
    rest_of_compilation upon return.  */
 static unsigned int
@@ -2512,26 +2564,22 @@ rest_of_handle_local_alloc (void)
 
   df_note_add_problem ();
 
-  if (optimize > 1)
-    df_remove_problem (df_live);
-  /* Create a new version of df that has the special version of UR if
-     we are doing optimization.  */
-  if (optimize)
-    df_urec_add_problem ();
+  if (optimize == 1)
+    {
+      df_live_add_problem ();
+      df_live_set_all_dirty ();
+    }
 #ifdef ENABLE_CHECKING
   df->changeable_flags |= DF_VERIFY_SCHEDULED;
 #endif
   df_analyze ();
+#ifdef STACK_REGS
+  if (optimize)
+    find_stack_regs ();
+#endif
   regstat_init_n_sets_and_refs ();
   regstat_compute_ri ();
 
-  /* There is just too much going on in the register allocators to
-     keep things up to date.  At the end we have to rescan anyway
-     because things change when the reload_completed flag is set.  
-     So we just turn off scanning and we will rescan by hand.  */
-  df_set_flags (DF_NO_INSN_RESCAN);
-
-
   /* If we are not optimizing, then this is the only place before
      register allocation where dataflow is done.  And that is needed
      to generate these warnings.  */
Index: df.h
===================================================================
--- df.h	(revision 128885)
+++ df.h	(working copy)
@@ -43,11 +43,9 @@ struct df_link;
 #define DF_SCAN  0 
 #define DF_LR    1      /* Live Registers backward. */
 #define DF_LIVE  2      /* Live Registers & Uninitialized Registers */
-
 #define DF_RD    3      /* Reaching Defs. */
-#define DF_UREC  4      /* Uninitialized Registers with Early Clobber. */
-#define DF_CHAIN 5      /* Def-Use and/or Use-Def Chains. */
-#define DF_NOTE  6      /* REG_DEF and REG_UNUSED notes. */
+#define DF_CHAIN 4      /* Def-Use and/or Use-Def Chains. */
+#define DF_NOTE  5      /* REG_DEF and REG_UNUSED notes. */
 
 #define DF_LAST_PROBLEM_PLUS1 (DF_NOTE + 1)
 
@@ -70,10 +68,9 @@ enum df_ref_type {DF_REF_REG_DEF, DF_REF
 
 enum df_ref_flags
   {
-    /* Read-modify-write refs generate both a use and a def and
-       these are marked with this flag to show that they are not
-       independent.  */
-    DF_REF_READ_WRITE = 1 << 0,
+    /* This flag is set if this ref occurs inside of a conditional
+       execution instruction.  */
+    DF_REF_CONDITIONAL = 1 << 0,
 
     /* If this flag is set for an artificial use or def, that ref
        logically happens at the top of the block.  If it is not set
@@ -85,14 +82,26 @@ enum df_ref_flags
        note.  */
     DF_REF_IN_NOTE = 1 << 2,
 
+    /* This bit is true if this ref can make regs_ever_live true for
+       this regno.  */
+    DF_HARD_REG_LIVE = 1 << 3,
+
+
+    /* This flag is set if this ref is a partial use or def of the
+       associated register.  */
+    DF_REF_PARTIAL = 1 << 4,
+    
+    /* Read-modify-write refs generate both a use and a def and
+       these are marked with this flag to show that they are not
+       independent.  */
+    DF_REF_READ_WRITE = 1 << 5,
+
     /* This flag is set if this ref, generally a def, may clobber the
        referenced register.  This is generally only set for hard
        registers that cross a call site.  With better information
        about calls, some of these could be changed in the future to
        DF_REF_MUST_CLOBBER.  */
-    DF_REF_MAY_CLOBBER = 1 << 3,
-
-
+    DF_REF_MAY_CLOBBER = 1 << 6,
 
     /* This flag is set if this ref, generally a def, is a real
        clobber. This is not currently set for registers live across a
@@ -103,34 +112,31 @@ enum df_ref_flags
        clobber is to a subreg.  So in order to tell if the clobber
        wipes out the entire register, it is necessary to also check
        the DF_REF_PARTIAL flag.  */
-    DF_REF_MUST_CLOBBER = 1 << 4,
+    DF_REF_MUST_CLOBBER = 1 << 7,
 
-    /* This bit is true if this ref is part of a multiword hardreg.  */
-    DF_REF_MW_HARDREG = 1 << 5,
 
-    /* This flag is set if this ref is a partial use or def of the
-       associated register.  */
-    DF_REF_PARTIAL = 1 << 6,
-    
-    /* This flag is set if this ref occurs inside of a conditional
-       execution instruction.  */
-    DF_REF_CONDITIONAL = 1 << 7,
+    /* This flag is set if this ref is inside a pre/post modify.  */
+    DF_REF_PRE_POST_MODIFY = 1 << 8,
 
+    /* This flag is set if the ref contains a ZERO_EXTRACT or SIGN_EXTRACT.  */
+    DF_REF_EXTRACT = 1 << 9,
 
+    /* This flag is set if the ref contains a STRICT_LOW_PART.  */
+    DF_REF_STRICT_LOWER_PART = 1 << 10,
 
-    /* This flag is set if this ref is inside a pre/post modify.  */
-    DF_REF_PRE_POST_MODIFY = 1 << 8,
+    /* This flag is set if the ref contains a SUBREG.  */
+    DF_REF_SUBREG = 1 << 11,
+
+
+    /* This bit is true if this ref is part of a multiword hardreg.  */
+    DF_REF_MW_HARDREG = 1 << 12,
 
     /* This flag is set if this ref is a usage of the stack pointer by
        a function call.  */
-    DF_REF_CALL_STACK_USAGE = 1 << 9,
+    DF_REF_CALL_STACK_USAGE = 1 << 13,
 
     /* This flag is used for verification of existing refs. */
-    DF_REF_REG_MARKER = 1 << 10,
-
-    /* This bit is true if this ref can make regs_ever_live true for
-       this regno.  */
-    DF_HARD_REG_LIVE = 1 << 11
+    DF_REF_REG_MARKER = 1 << 14
   };
 
 /* The possible ordering of refs within the df_ref_info.  */
@@ -544,7 +550,6 @@ struct df
 #define DF_SCAN_BB_INFO(BB) (df_scan_get_bb_info((BB)->index))
 #define DF_RD_BB_INFO(BB) (df_rd_get_bb_info((BB)->index))
 #define DF_LR_BB_INFO(BB) (df_lr_get_bb_info((BB)->index))
-#define DF_UREC_BB_INFO(BB) (df_urec_get_bb_info((BB)->index))
 #define DF_LIVE_BB_INFO(BB) (df_live_get_bb_info((BB)->index))
 
 /* Most transformations that wish to use live register analysis will
@@ -552,17 +557,10 @@ struct df
 #define DF_LIVE_IN(BB) (DF_LIVE_BB_INFO(BB)->in) 
 #define DF_LIVE_OUT(BB) (DF_LIVE_BB_INFO(BB)->out) 
 
-
-/* Live in for register allocation also takes into account several other factors.  */
-#define DF_RA_LIVE_IN(BB) (DF_UREC_BB_INFO(BB)->in) 
-#define DF_RA_LIVE_TOP(BB) (DF_UREC_BB_INFO(BB)->top) 
-#define DF_RA_LIVE_OUT(BB) (DF_UREC_BB_INFO(BB)->out) 
-
 /* These macros are currently used by only reg-stack since it is not
    tolerant of uninitialized variables.  This intolerance should be
    fixed because it causes other problems.  */ 
 #define DF_LR_IN(BB) (DF_LR_BB_INFO(BB)->in) 
-#define DF_LR_TOP(BB) (DF_LR_BB_INFO(BB)->top) 
 #define DF_LR_OUT(BB) (DF_LR_BB_INFO(BB)->out) 
 
 /* Macros to access the elements within the ref structure.  */
@@ -685,11 +683,21 @@ extern bitmap df_invalidated_by_call;
 /* One of these structures is allocated for every basic block.  */
 struct df_scan_bb_info
 {
-  /* Defs at the start of a basic block that is the target of an
-     exception edge.  */
+  /* The entry block has many artificial defs and these are at the
+     bottom of the block.
+
+     Blocks that are targets of exception edges may have some
+     artificial defs.  These are logically located at the top of the
+     block.
+
+     Blocks that are the targets of non-local goto's have the hard
+     frame pointer defined at the top of the block.  */
   struct df_ref **artificial_defs;
 
-  /* Uses of hard registers that are live at every block.  */
+  /* Blocks that are targets of exception edges may have some
+     artificial uses.  These are logically at the top of the block.
+
+     Most blocks have artificial uses at the bottom of the block.  */
   struct df_ref **artificial_uses;
 };
 
@@ -710,23 +718,8 @@ struct df_rd_bb_info 
 };
 
 
-/* Live registers.  All bitmaps are referenced by the register number.  
-
-   df_lr_bb_info:IN is the "in" set of the traditional dataflow sense
-   which is the confluence of out sets of all predecessor blocks.
-   The difference between IN and TOP is 
-   due to the artificial defs and uses at the top (DF_REF_TOP)
-   (e.g. exception handling dispatch block, which can have
-   a few registers defined by the runtime) - which is NOT included
-   in the "in" set before this function but is included after.  
-   For the initial live set of forward scanning, TOP should be used
-   instead of IN - otherwise, artificial defs won't be in IN set
-   causing the bad transformation. TOP set can not simply be
-   the union of IN set and artificial defs at the top, 
-   because artificial defs might not be used at all,
-   in which case those defs are not live at any point
-   (except as a dangling def) - hence TOP has to be calculated
-   during the LR problem computation and stored in df_lr_bb_info.  */
+/* Live registers, a backwards dataflow problem.  All bitmaps are
+   referenced by the register number.  */
 
 struct df_lr_bb_info 
 {
@@ -734,12 +727,9 @@ struct df_lr_bb_info 
   bitmap def;   /* The set of registers set in this block 
                    - except artificial defs at the top.  */
   bitmap use;   /* The set of registers used in this block.  */
-  bitmap adef;  /* The artificial defs at top. */
-  bitmap ause;  /* The artificial uses at top. */
 
   /* The results of the dataflow problem.  */
   bitmap in;    /* Just before the block itself. */
-  bitmap top;   /* Just before the first insn in the block. */
   bitmap out;   /* At the bottom of the block.  */
 };
 
@@ -761,23 +751,6 @@ struct df_live_bb_info 
 };
 
 
-/* Uninitialized registers.  All bitmaps are referenced by the register number.  */
-struct df_urec_bb_info 
-{
-  /* Local sets to describe the basic blocks.  */
-  bitmap earlyclobber;  /* The set of registers that are referenced
-			   with an early clobber mode.  */
-  /* Kill and gen are defined as in the UR problem.  */
-  bitmap kill;
-  bitmap gen;
-
-  /* The results of the dataflow problem.  */
-  bitmap in;    /* Just before the block.  */
-  bitmap top;   /* Just before the first insn in the block. */
-  bitmap out;   /* At the bottom of the block.  */
-};
-
-
 /* This is used for debugging and for the dumpers to find the latest
    instance so that the df info can be added to the dumps.  This
    should not be used by regular code.  */ 
@@ -786,7 +759,6 @@ extern struct df *df;
 #define df_rd    (df->problems_by_index[DF_RD])
 #define df_lr    (df->problems_by_index[DF_LR])
 #define df_live  (df->problems_by_index[DF_LIVE])
-#define df_urec  (df->problems_by_index[DF_UREC])
 #define df_chain (df->problems_by_index[DF_CHAIN])
 #define df_note  (df->problems_by_index[DF_NOTE])
 
@@ -861,7 +833,6 @@ extern void df_chain_unlink (struct df_r
 extern void df_chain_copy (struct df_ref *, struct df_link *);
 extern bitmap df_get_live_in (basic_block);
 extern bitmap df_get_live_out (basic_block);
-extern bitmap df_get_live_top (basic_block);
 extern void df_grow_bb_info (struct dataflow *);
 extern void df_chain_dump (struct df_link *, FILE *);
 extern void df_print_bb_index (basic_block bb, FILE *file);
@@ -871,7 +842,6 @@ extern void df_lr_verify_transfer_functi
 extern void df_live_verify_transfer_functions (void);
 extern void df_live_add_problem (void);
 extern void df_live_set_all_dirty (void);
-extern void df_urec_add_problem (void);
 extern void df_chain_add_problem (enum df_chain_flags);
 extern void df_note_add_problem (void);
 extern void df_simulate_find_defs (rtx, bitmap);
@@ -898,7 +868,6 @@ extern void df_bb_refs_record (int, bool
 extern bool df_insn_rescan (rtx);
 extern void df_insn_rescan_all (void);
 extern void df_process_deferred_rescans (void);
-extern bool df_has_eh_preds (basic_block);
 extern void df_recompute_luids (basic_block);
 extern void df_insn_change_bb (rtx);
 extern void df_maybe_reorganize_use_refs (enum df_ref_order);
@@ -956,16 +925,6 @@ df_live_get_bb_info (unsigned int index)
     return NULL;
 }
 
-static inline struct df_urec_bb_info *
-df_urec_get_bb_info (unsigned int index)
-{
-  if (index < df_urec->block_info_size)
-    return (struct df_urec_bb_info *) df_urec->block_info[index];
-  else
-    return NULL;
-}
-
-
 /* Get the artificial defs for a basic block.  */
 
 static inline struct df_ref **
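
To make the reshuffled flags a little more concrete: a def that carries
DF_REF_PARTIAL does not kill its whole register, and a subreg def that is
not wrapped in an extract can still be modeled precisely, which is the
distinction the new build_insn_chain draws.  Below is a standalone sketch
of that decision; the flag values mirror the enum above, but the helper
names are made up for the example and are not part of the patch.

#include <stdio.h>

#define REF_PARTIAL	(1 << 4)	/* DF_REF_PARTIAL */
#define REF_EXTRACT	(1 << 9)	/* DF_REF_EXTRACT */
#define REF_SUBREG	(1 << 11)	/* DF_REF_SUBREG */

/* A def kills the entire register only when it is not partial.  */
static int
killing_def_p (unsigned int flags)
{
  return (flags & REF_PARTIAL) == 0;
}

/* A subreg def that is not wrapped in a ZERO_EXTRACT or SIGN_EXTRACT
   can be tracked byte by byte instead of killing the register.  */
static int
trackable_subreg_p (unsigned int flags)
{
  return (flags & REF_SUBREG) != 0 && (flags & REF_EXTRACT) == 0;
}

int
main (void)
{
  unsigned int plain = 0;
  unsigned int subreg = REF_PARTIAL | REF_SUBREG;
  unsigned int extract = REF_PARTIAL | REF_EXTRACT;

  printf ("plain:   kills=%d\n", killing_def_p (plain));
  printf ("subreg:  kills=%d trackable=%d\n",
	  killing_def_p (subreg), trackable_subreg_p (subreg));
  printf ("extract: kills=%d trackable=%d\n",
	  killing_def_p (extract), trackable_subreg_p (extract));
  return 0;
}
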
Index: ra.h
===================================================================
--- ra.h	(revision 0)
+++ ra.h	(revision 0)
@@ -0,0 +1,133 @@
+/* Define per-register tables for data flow info and register allocation.
+   Copyright (C) 2007 Free Software Foundation, Inc.
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify it under
+the terms of the GNU General Public License as published by the Free
+Software Foundation; either version 3, or (at your option) any later
+version.
+
+GCC is distributed in the hope that it will be useful, but WITHOUT ANY
+WARRANTY; without even the implied warranty of MERCHANTABILITY or
+FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3.  If not see
+<http://www.gnu.org/licenses/>.  */
+
+#ifndef GCC_RA_H
+#define GCC_RA_H
+
+#include "regs.h"
+
+struct allocno
+{
+  int reg;
+  /* Gives the number of consecutive hard registers needed by that
+     pseudo reg.  */
+  int size;
+
+  /* Number of calls crossed by each allocno.  */
+  int calls_crossed;
+
+  /* Number of calls that might throw crossed by each allocno.  */
+  int throwing_calls_crossed;
+
+  /* Number of refs to each allocno.  */
+  int n_refs;
+
+  /* Frequency of uses of each allocno.  */
+  int freq;
+
+  /* Guess at live length of each allocno.
+     This is actually the max of the live lengths of the regs.  */
+  int live_length;
+
+  /* Set of hard regs conflicting with allocno N.  */
+
+  HARD_REG_SET hard_reg_conflicts;
+
+  /* Set of hard regs preferred by allocno N.
+     This is used to make allocnos go into regs that are copied to or from them,
+     when possible, to reduce register shuffling.  */
+
+  HARD_REG_SET hard_reg_preferences;
+
+  /* Similar, but just counts register preferences made in simple copy
+     operations, rather than arithmetic.  These are given priority because
+     we can always eliminate an insn by using these, but using a register
+     in the above list won't always eliminate an insn.  */
+
+  HARD_REG_SET hard_reg_copy_preferences;
+
+  /* Similar to hard_reg_preferences, but includes bits for subsequent
+     registers when an allocno is multi-word.  The above variable is used for
+     allocation while this is used to build regs_someone_prefers, below.  */
+
+  HARD_REG_SET hard_reg_full_preferences;
+
+  /* Set of hard registers that some later allocno has a preference for.  */
+
+  HARD_REG_SET regs_someone_prefers;
+
+#ifdef STACK_REGS
+  /* Set to true if allocno can't be allocated in the stack register.  */
+  bool no_stack_reg;
+#endif
+};
+extern struct allocno *allocno;
+
+/* In ra-conflict.c  */
+
+/* Number of pseudo-registers which are candidates for allocation.  */
+
+extern int max_allocno;
+
+/* max_allocno by max_allocno array of bits, recording whether two
+   allocnos conflict (can't go in the same hardware register).
+
+   `conflicts' is symmetric after the call to mirror_conflicts.  */
+
+extern HOST_WIDE_INT *conflicts;
+
+/* Number of HOST_WIDE_INTs required to hold max_allocno bits.
+   This is the length of a row in `conflicts'.  */
+
+extern int allocno_row_words;
+
+/* Indexed by (pseudo) reg number, gives the allocno, or -1
+   for pseudo registers which are not to be allocated.  */
+
+extern int *reg_allocno;
+
+extern void global_conflicts (void);
+
+/* In global.c  */
+
+/* For any allocno set in ALLOCNO_SET, set ALLOCNO to that allocno,
+   and execute CODE.  */
+#define EXECUTE_IF_SET_IN_ALLOCNO_SET(ALLOCNO_SET, ALLOCNO, CODE)	\
+do {									\
+  int i_;								\
+  int allocno_;								\
+  HOST_WIDE_INT *p_ = (ALLOCNO_SET);					\
+									\
+  for (i_ = allocno_row_words - 1, allocno_ = 0; i_ >= 0;		\
+       i_--, allocno_ += HOST_BITS_PER_WIDE_INT)			\
+    {									\
+      unsigned HOST_WIDE_INT word_ = (unsigned HOST_WIDE_INT) *p_++;	\
+									\
+      for ((ALLOCNO) = allocno_; word_; word_ >>= 1, (ALLOCNO)++)	\
+	{								\
+	  if (word_ & 1)						\
+	    {CODE;}							\
+	}								\
+    }									\
+} while (0)
+
+extern void ra_init_live_subregs (bool, sbitmap *, int *, int, rtx reg);
+
+
+#endif /* GCC_RA_H */
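
For readers following the data structures above: row N of `conflicts' begins
at conflicts + N * allocno_row_words, and EXECUTE_IF_SET_IN_ALLOCNO_SET walks
the set bits of such a row.  As a minimal usage sketch (not taken from the
patch; i, j and the loop body are only placeholders), the usual idiom looks
roughly like this:

  /* Visit every allocno j recorded as conflicting with allocno i.  */
  int j;
  EXECUTE_IF_SET_IN_ALLOCNO_SET (conflicts + i * allocno_row_words, j,
    {
      /* ... handle the conflict between allocno i and allocno j ... */
    });
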
Index: init-regs.c
===================================================================
--- init-regs.c	(revision 128885)
+++ init-regs.c	(working copy)
@@ -117,7 +117,11 @@ initialize_uninitialized_regs (void)
     }
 
   if (optimize == 1)
-    df_remove_problem (df_live);
+    {
+      if (dump_file) 
+	df_dump (dump_file);
+      df_remove_problem (df_live);
+    }
 
   BITMAP_FREE (already_genned);
 }
Index: rtl.h
===================================================================
--- rtl.h	(revision 128885)
+++ rtl.h	(working copy)
@@ -1063,6 +1063,7 @@ extern bool subreg_offset_representable_
 					   unsigned int, enum machine_mode);
 extern unsigned int subreg_regno (const_rtx);
 extern unsigned int subreg_nregs (const_rtx);
+extern unsigned int subreg_nregs_with_regno (unsigned int, const_rtx);
 extern unsigned HOST_WIDE_INT nonzero_bits (const_rtx, enum machine_mode);
 extern unsigned int num_sign_bit_copies (const_rtx, enum machine_mode);
 extern bool constant_pool_constant_p (rtx);
@@ -2172,7 +2173,6 @@ extern void dump_global_regs (FILE *);
 /* Yes, this ifdef is silly, but HARD_REG_SET is not always defined.  */
 extern void retry_global_alloc (int, HARD_REG_SET);
 #endif
-extern void build_insn_chain (rtx);
 
 /* In regclass.c */
 extern int reg_classes_intersect_p (enum reg_class, enum reg_class);
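
The subreg_nregs_with_regno helper declared above can be sketched roughly as
follows, assuming the existing subreg_get_info interface; this is an
illustration of what the helper computes, not necessarily the committed body.
It simply asks how many hard registers the SUBREG occupies once its inner
register is given a specific hard regno:

  unsigned int
  subreg_nregs_with_regno (unsigned int regno, const_rtx x)
  {
    struct subreg_info info;

    /* Query the span of SUBREG X when SUBREG_REG (X) lives in REGNO.  */
    subreg_get_info (regno, GET_MODE (SUBREG_REG (x)), SUBREG_BYTE (x),
                     GET_MODE (x), &info);
    return info.nregs;
  }
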
Index: df-problems.c
===================================================================
--- df-problems.c	(revision 128885)
+++ df-problems.c	(working copy)
@@ -71,9 +71,7 @@ df_get_live_out (basic_block bb)
 {
   gcc_assert (df_lr);
 
-  if (df_urec)
-    return DF_RA_LIVE_OUT (bb);
-  else if (df_live)
+  if (df_live)
     return DF_LIVE_OUT (bb);
   else 
     return DF_LR_OUT (bb);
@@ -89,31 +87,12 @@ df_get_live_in (basic_block bb)
 {
   gcc_assert (df_lr);
 
-  if (df_urec)
-    return DF_RA_LIVE_IN (bb);
-  else if (df_live)
+  if (df_live)
     return DF_LIVE_IN (bb);
   else 
     return DF_LR_IN (bb);
 }
 
-/* Get the live at top set for BB no matter what problem happens to be
-   defined.  This function is used by the register allocators who
-   choose different dataflow problems depending on the optimization
-   level.  */
-
-bitmap
-df_get_live_top (basic_block bb)
-{
-  gcc_assert (df_lr);
-
-  if (df_urec)
-    return DF_RA_LIVE_TOP (bb);
-  else 
-    return DF_LR_TOP (bb);
-}
-
-
 /*----------------------------------------------------------------------------
    Utility functions.
 ----------------------------------------------------------------------------*/
@@ -210,9 +189,28 @@ df_unset_seen (void)
    See df.h for details.
    ----------------------------------------------------------------------------*/
 
-/* See the comment at the top of the Reaching Uses problem for how the
-   uses are represented in the kill sets. The same games are played
-   here for the defs.  */
+/* This problem plays a large number of games for the sake of
+   efficiency.  
+   
+   1) The order of the bits in the bitvectors.  After the scanning
+   phase, all of the defs are sorted.  All of the defs for reg 0
+   come first, followed by all defs for reg 1 and so on.
+   
+   2) There are two kill sets, one used if the number of defs is less
+   than or equal to DF_SPARSE_THRESHOLD and another used if the number
+   of defs is greater.
+
+   <= : Data is built directly in the kill set.
+
+   > : One level of indirection is used to keep from generating long
+   strings of 1 bits in the kill sets.  Bitvectors that are indexed
+   by the regnum are used to represent that there is a killing def
+   for the register.  The confluence and transfer functions use
+   these along with the bitmap_clear_range call to remove ranges of
+   bits without actually generating a knockout vector.
+
+   The kill and sparse_kill sets, as well as dense_invalidated_by_call
+   and sparse_invalidated_by_call, all play this game.  */
 
 /* Private data used to compute the solution for this problem.  These
    data structures are not accessible outside of this module.  */
@@ -740,14 +738,6 @@ df_lr_free_bb_info (basic_block bb ATTRI
     {
       BITMAP_FREE (bb_info->use);
       BITMAP_FREE (bb_info->def);
-      if (bb_info->in == bb_info->top)
-        bb_info->top = NULL;
-      else
-	{
-          BITMAP_FREE (bb_info->top);
-          BITMAP_FREE (bb_info->ause);
-          BITMAP_FREE (bb_info->adef);
-	}
       BITMAP_FREE (bb_info->in);
       BITMAP_FREE (bb_info->out);
       pool_free (df_lr->block_pool, bb_info);
@@ -777,11 +767,6 @@ df_lr_alloc (bitmap all_blocks ATTRIBUTE
 	{ 
 	  bitmap_clear (bb_info->def);
 	  bitmap_clear (bb_info->use);
-	  if (bb_info->adef)
-	    {
-	      bitmap_clear (bb_info->adef);
-	      bitmap_clear (bb_info->ause);
-	    }
 	}
       else
 	{ 
@@ -791,9 +776,6 @@ df_lr_alloc (bitmap all_blocks ATTRIBUTE
 	  bb_info->def = BITMAP_ALLOC (NULL);
 	  bb_info->in = BITMAP_ALLOC (NULL);
 	  bb_info->out = BITMAP_ALLOC (NULL);
-          bb_info->top = bb_info->in;
-          bb_info->adef = NULL;
-          bb_info->ause = NULL;
 	}
     }
 
@@ -815,7 +797,6 @@ df_lr_reset (bitmap all_blocks)
       gcc_assert (bb_info);
       bitmap_clear (bb_info->in);
       bitmap_clear (bb_info->out);
-      bitmap_clear (bb_info->top);
     }
 }
 
@@ -879,23 +860,18 @@ df_lr_bb_local_compute (unsigned int bb_
 	  bitmap_set_bit (bb_info->use, DF_REF_REGNO (use));
 	}
     }
-  /* Process the registers set in an exception handler.  */
+
+  /* Process the registers set in an exception handler or the hard
+     frame pointer if this block is the target of a non-local
+     goto.  */
   for (def_rec = df_get_artificial_defs (bb_index); *def_rec; def_rec++)
     {
       struct df_ref *def = *def_rec;
-      if ((DF_REF_FLAGS (def) & DF_REF_AT_TOP)
-	  && (!(DF_REF_FLAGS (def) & (DF_REF_PARTIAL | DF_REF_CONDITIONAL))))
+      if (DF_REF_FLAGS (def) & DF_REF_AT_TOP)
 	{
 	  unsigned int dregno = DF_REF_REGNO (def);
-	  if (bb_info->adef == NULL)
-	    {
-	      gcc_assert (bb_info->ause == NULL);
-	      gcc_assert (bb_info->top == bb_info->in);
-	      bb_info->adef = BITMAP_ALLOC (NULL);
-	      bb_info->ause = BITMAP_ALLOC (NULL);
-	      bb_info->top = BITMAP_ALLOC (NULL);
-	    }
-	  bitmap_set_bit (bb_info->adef, dregno);
+	  bitmap_set_bit (bb_info->def, dregno);
+	  bitmap_clear_bit (bb_info->use, dregno);
 	}
     }
   
@@ -906,17 +882,7 @@ df_lr_bb_local_compute (unsigned int bb_
       struct df_ref *use = *use_rec;
       /* Add use to set of uses in this BB.  */
       if (DF_REF_FLAGS (use) & DF_REF_AT_TOP)
-	{
-	  if (bb_info->adef == NULL)
-	    {
-	      gcc_assert (bb_info->ause == NULL);
-	      gcc_assert (bb_info->top == bb_info->in);
-	      bb_info->adef = BITMAP_ALLOC (NULL);
-	      bb_info->ause = BITMAP_ALLOC (NULL);
-	      bb_info->top = BITMAP_ALLOC (NULL);
-	    }
-	  bitmap_set_bit (bb_info->ause, DF_REF_REGNO (use));
-	}
+	bitmap_set_bit (bb_info->use, DF_REF_REGNO (use));
     }
 #endif
 
@@ -1041,19 +1007,8 @@ df_lr_transfer_function (int bb_index)
   bitmap out = bb_info->out;
   bitmap use = bb_info->use;
   bitmap def = bb_info->def;
-  bitmap top = bb_info->top;
-  bitmap ause = bb_info->ause;
-  bitmap adef = bb_info->adef;
-  bool changed;
-
-  changed = bitmap_ior_and_compl (top, use, out, def);
-  if (in != top)
-    {
-      gcc_assert (ause && adef);
-      changed |= bitmap_ior_and_compl (in, ause, top, adef);
-    }
 
-  return changed;
+  return bitmap_ior_and_compl (in, use, out, def);
 }
 
 
@@ -1097,14 +1052,6 @@ df_lr_free (void)
 	    {
 	      BITMAP_FREE (bb_info->use);
 	      BITMAP_FREE (bb_info->def);
-	      if (bb_info->in == bb_info->top)
-	        bb_info->top = NULL;
-	      else
-		{
-	          BITMAP_FREE (bb_info->top);
-                  BITMAP_FREE (bb_info->ause);
-                  BITMAP_FREE (bb_info->adef);
-		}
 	      BITMAP_FREE (bb_info->in);
 	      BITMAP_FREE (bb_info->out);
 	    }
@@ -1297,7 +1244,6 @@ df_lr_verify_transfer_functions (void)
   bitmap saved_adef;
   bitmap saved_ause;
   bitmap all_blocks;
-  bool need_as;
 
   if (!df)
     return;
@@ -1326,33 +1272,9 @@ df_lr_verify_transfer_functions (void)
 	      bitmap_clear (bb_info->def);
 	      bitmap_clear (bb_info->use);
 
-	      if (bb_info->adef)
-		{
-		  need_as = true;
-		  bitmap_copy (saved_adef, bb_info->adef);
-		  bitmap_copy (saved_ause, bb_info->ause);
-		  bitmap_clear (bb_info->adef);
-		  bitmap_clear (bb_info->ause);
-		}
-	      else
-		need_as = false;
-
 	      df_lr_bb_local_compute (bb->index);
 	      gcc_assert (bitmap_equal_p (saved_def, bb_info->def));
 	      gcc_assert (bitmap_equal_p (saved_use, bb_info->use));
-
-	      if (need_as)
-		{
-		  gcc_assert (bb_info->adef);
-		  gcc_assert (bb_info->ause);
-		  gcc_assert (bitmap_equal_p (saved_adef, bb_info->adef));
-		  gcc_assert (bitmap_equal_p (saved_ause, bb_info->ause));
-		}
-	      else
-		{
-		  gcc_assert (!bb_info->adef);
-		  gcc_assert (!bb_info->ause);
-		}
 	    }
 	}
       else
@@ -1633,7 +1555,7 @@ df_live_local_finalize (bitmap all_block
 	{
 	  struct df_lr_bb_info *bb_lr_info = df_lr_get_bb_info (bb_index);
 	  struct df_live_bb_info *bb_live_info = df_live_get_bb_info (bb_index);
-	  
+  
 	  /* No register may reach a location where it is not used.  Thus
 	     we trim the rr result to the places where it is used.  */
 	  bitmap_and_into (bb_live_info->in, bb_lr_info->in);
@@ -1913,615 +1835,6 @@ df_live_verify_transfer_functions (void)
   BITMAP_FREE (saved_kill);
   BITMAP_FREE (all_blocks);
 }
-
-
-
-/*----------------------------------------------------------------------------
-   UNINITIALIZED REGISTERS WITH EARLYCLOBBER
-
-   Find the set of uses for registers that are reachable from the entry
-   block without passing thru a definition.  In and out bitvectors are built
-   for each basic block.  The regnum is used to index into these sets.
-   See df.h for details.
-
-   This is a variant of the UR problem above that has a lot of special
-   features just for the register allocation phase.  This problem
-   should go away if someone would fix the interference graph.
-
-   ----------------------------------------------------------------------------*/
-
-/* Private data used to compute the solution for this problem.  These
-   data structures are not accessible outside of this module.  */
-struct df_urec_problem_data
-{
-  bool earlyclobbers_found;     /* True if any instruction contains an
-				   earlyclobber.  */
-#ifdef STACK_REGS
-  bitmap stack_regs;		/* Registers that may be allocated to a STACK_REGS.  */
-#endif
-};
-
-
-/* Set basic block info.  */
-
-static void
-df_urec_set_bb_info (unsigned int index, 
-		     struct df_urec_bb_info *bb_info)
-{
-  gcc_assert (df_urec);
-  gcc_assert (index < df_urec->block_info_size);
-  df_urec->block_info[index] = bb_info;
-}
-
-
-/* Free basic block info.  */
-
-static void
-df_urec_free_bb_info (basic_block bb ATTRIBUTE_UNUSED, 
-		      void *vbb_info)
-{
-  struct df_urec_bb_info *bb_info = (struct df_urec_bb_info *) vbb_info;
-  if (bb_info)
-    {
-      BITMAP_FREE (bb_info->gen);
-      BITMAP_FREE (bb_info->kill);
-      BITMAP_FREE (bb_info->in);
-      BITMAP_FREE (bb_info->out);
-      BITMAP_FREE (bb_info->earlyclobber);
-      pool_free (df_urec->block_pool, bb_info);
-    }
-}
-
-
-/* Allocate or reset bitmaps for DF_UREC blocks. The solution bits are
-   not touched unless the block is new.  */
-
-static void 
-df_urec_alloc (bitmap all_blocks)
-
-{
-  unsigned int bb_index;
-  bitmap_iterator bi;
-  struct df_urec_problem_data *problem_data
-    = (struct df_urec_problem_data *) df_urec->problem_data;
-
-  if (!df_urec->block_pool)
-    df_urec->block_pool = create_alloc_pool ("df_urec_block pool", 
-					   sizeof (struct df_urec_bb_info), 50);
-
-  if (!df_urec->problem_data)
-    {
-      problem_data = XNEW (struct df_urec_problem_data);
-      df_urec->problem_data = problem_data;
-    }
-  problem_data->earlyclobbers_found = false;
-
-  df_grow_bb_info (df_urec);
-
-  EXECUTE_IF_SET_IN_BITMAP (all_blocks, 0, bb_index, bi)
-    {
-      struct df_urec_bb_info *bb_info = df_urec_get_bb_info (bb_index);
-      if (bb_info)
-	{ 
-	  bitmap_clear (bb_info->kill);
-	  bitmap_clear (bb_info->gen);
-	  bitmap_clear (bb_info->earlyclobber);
-	}
-      else
-	{ 
-	  bb_info = (struct df_urec_bb_info *) pool_alloc (df_urec->block_pool);
-	  df_urec_set_bb_info (bb_index, bb_info);
-	  bb_info->kill = BITMAP_ALLOC (NULL);
-	  bb_info->gen = BITMAP_ALLOC (NULL);
-	  bb_info->in = BITMAP_ALLOC (NULL);
-	  bb_info->out = BITMAP_ALLOC (NULL);
-          bb_info->top = BITMAP_ALLOC (NULL);
-	  bb_info->earlyclobber = BITMAP_ALLOC (NULL);
-	}
-    }
-  df_urec->optional_p = true;
-}
-
-
-/* The function modifies local info for register REG being changed in
-   SETTER.  DATA is used to pass the current basic block info.  */
-
-static void
-df_urec_mark_reg_change (rtx reg, const_rtx setter, void *data)
-{
-  int regno;
-  int endregno;
-  int i;
-  struct df_urec_bb_info *bb_info = (struct df_urec_bb_info*) data;
-
-  if (GET_CODE (reg) == SUBREG)
-    reg = SUBREG_REG (reg);
-
-  if (!REG_P (reg))
-    return;
-  
-  regno = REGNO (reg);
-  if (regno < FIRST_PSEUDO_REGISTER)
-    {
-      endregno = END_HARD_REGNO (reg);
-      for (i = regno; i < endregno; i++)
-	{
-	  bitmap_set_bit (bb_info->kill, i);
-	  
-	  if (GET_CODE (setter) != CLOBBER)
-	    bitmap_set_bit (bb_info->gen, i);
-	  else
-	    bitmap_clear_bit (bb_info->gen, i);
-	}
-    }
-  else
-    {
-      bitmap_set_bit (bb_info->kill, regno);
-      
-      if (GET_CODE (setter) != CLOBBER)
-	bitmap_set_bit (bb_info->gen, regno);
-      else
-	bitmap_clear_bit (bb_info->gen, regno);
-    }
-}
-/* Classes of registers which could be early clobbered in the current
-   insn.  */
-
-static VEC(int,heap) *earlyclobber_regclass;
-
-/* This function finds and stores register classes that could be early
-   clobbered in INSN.  If any earlyclobber classes are found, the function
-   returns TRUE, in all other cases it returns FALSE.  */
-
-static bool
-df_urec_check_earlyclobber (rtx insn)
-{
-  int opno;
-  bool found = false;
-
-  extract_insn (insn);
-
-  VEC_truncate (int, earlyclobber_regclass, 0);
-  for (opno = 0; opno < recog_data.n_operands; opno++)
-    {
-      char c;
-      bool amp_p;
-      int i;
-      enum reg_class class;
-      const char *p = recog_data.constraints[opno];
-
-      class = NO_REGS;
-      amp_p = false;
-      for (;;)
-	{
-	  c = *p;
-	  switch (c)
-	    {
-	    case '=':  case '+':  case '?':
-	    case '#':  case '!':
-	    case '*':  case '%':
-	    case 'm':  case '<':  case '>':  case 'V':  case 'o':
-	    case 'E':  case 'F':  case 'G':  case 'H':
-	    case 's':  case 'i':  case 'n':
-	    case 'I':  case 'J':  case 'K':  case 'L':
-	    case 'M':  case 'N':  case 'O':  case 'P':
-	    case 'X':
-	    case '0': case '1':  case '2':  case '3':  case '4':
-	    case '5': case '6':  case '7':  case '8':  case '9':
-	      /* These don't say anything we care about.  */
-	      break;
-
-	    case '&':
-	      amp_p = true;
-	      break;
-	    case '\0':
-	    case ',':
-	      if (amp_p && class != NO_REGS)
-		{
-		  int rc;
-
-		  found = true;
-		  for (i = 0;
-		       VEC_iterate (int, earlyclobber_regclass, i, rc);
-		       i++)
-		    {
-		      if (rc == (int) class)
-			goto found_rc;
-		    }
-
-		  /* We use VEC_quick_push here because
-		     earlyclobber_regclass holds no more than
-		     N_REG_CLASSES elements. */
-		  VEC_quick_push (int, earlyclobber_regclass, (int) class);
-		found_rc:
-		  ;
-		}
-	      
-	      amp_p = false;
-	      class = NO_REGS;
-	      break;
-
-	    case 'r':
-	      class = GENERAL_REGS;
-	      break;
-
-	    default:
-	      class = REG_CLASS_FROM_CONSTRAINT (c, p);
-	      break;
-	    }
-	  if (c == '\0')
-	    break;
-	  p += CONSTRAINT_LEN (c, p);
-	}
-    }
-
-  return found;
-}
-
-/* The function checks that pseudo-register *X has a class
-   intersecting with the class of pseudo-register could be early
-   clobbered in the same insn.
-
-   This function is a no-op if earlyclobber_regclass is empty. 
-
-   Reload can assign the same hard register to uninitialized
-   pseudo-register and early clobbered pseudo-register in an insn if
-   the pseudo-register is used first time in given BB and not lived at
-   the BB start.  To prevent this we don't change life information for
-   such pseudo-registers.  */
-
-static int
-df_urec_mark_reg_use_for_earlyclobber (rtx *x, void *data)
-{
-  enum reg_class pref_class, alt_class;
-  int i, regno;
-  struct df_urec_bb_info *bb_info = (struct df_urec_bb_info*) data;
-
-  if (REG_P (*x) && REGNO (*x) >= FIRST_PSEUDO_REGISTER)
-    {
-      int rc;
-
-      regno = REGNO (*x);
-      if (bitmap_bit_p (bb_info->kill, regno)
-	  || bitmap_bit_p (bb_info->gen, regno))
-	return 0;
-      pref_class = reg_preferred_class (regno);
-      alt_class = reg_alternate_class (regno);
-      for (i = 0; VEC_iterate (int, earlyclobber_regclass, i, rc); i++)
-	{
-	  if (reg_classes_intersect_p (rc, pref_class)
-	      || (rc != NO_REGS
-		  && reg_classes_intersect_p (rc, alt_class)))
-	    {
-	      bitmap_set_bit (bb_info->earlyclobber, regno);
-	      break;
-	    }
-	}
-    }
-  return 0;
-}
-
-/* The function processes all pseudo-registers in *X with the aid of
-   previous function.  */
-
-static void
-df_urec_mark_reg_use_for_earlyclobber_1 (rtx *x, void *data)
-{
-  for_each_rtx (x, df_urec_mark_reg_use_for_earlyclobber, data);
-}
-
-
-/* Compute local uninitialized register info for basic block BB.  */
-
-static void
-df_urec_bb_local_compute (unsigned int bb_index)
-{
-  basic_block bb = BASIC_BLOCK (bb_index);
-  struct df_urec_bb_info *bb_info = df_urec_get_bb_info (bb_index);
-  rtx insn;
-  struct df_ref **def_rec;
-
-  for (def_rec = df_get_artificial_defs (bb_index); *def_rec; def_rec++)
-    {
-      struct df_ref *def = *def_rec;
-      if (DF_REF_FLAGS (def) & DF_REF_AT_TOP)
-	{
-	  unsigned int regno = DF_REF_REGNO (def);
-	  bitmap_set_bit (bb_info->gen, regno);
-	}
-    }
-  
-  FOR_BB_INSNS (bb, insn)
-    {
-      if (INSN_P (insn))
-	{
-	  note_stores (PATTERN (insn), df_urec_mark_reg_change, bb_info);
-	  if (df_urec_check_earlyclobber (insn))
-	    {
-	      struct df_urec_problem_data *problem_data
-		= (struct df_urec_problem_data *) df_urec->problem_data;
-	      problem_data->earlyclobbers_found = true;
-	      note_uses (&PATTERN (insn), 
-			 df_urec_mark_reg_use_for_earlyclobber_1, bb_info);
-	    }
-	}
-    }
-
-  for (def_rec = df_get_artificial_defs (bb_index); *def_rec; def_rec++)
-    {
-      struct df_ref *def = *def_rec;
-      if ((DF_REF_FLAGS (def) & DF_REF_AT_TOP) == 0)
-	{
-	  unsigned int regno = DF_REF_REGNO (def);
-	  bitmap_set_bit (bb_info->gen, regno);
-	}
-    }
-}
-
-
-/* Compute local uninitialized register info.  */
-
-static void
-df_urec_local_compute (bitmap all_blocks)
-{
-  unsigned int bb_index;
-  bitmap_iterator bi;
-#ifdef STACK_REGS
-  int i;
-  HARD_REG_SET stack_hard_regs, used;
-  struct df_urec_problem_data *problem_data
-    = (struct df_urec_problem_data *) df_urec->problem_data;
-  
-  /* Any register that MAY be allocated to a register stack (like the
-     387) is treated poorly.  Each such register is marked as being
-     live everywhere.  This keeps the register allocator and the
-     subsequent passes from doing anything useful with these values.
-
-     FIXME: This seems like an incredibly poor idea.  */
-
-  CLEAR_HARD_REG_SET (stack_hard_regs);
-  for (i = FIRST_STACK_REG; i <= LAST_STACK_REG; i++)
-    SET_HARD_REG_BIT (stack_hard_regs, i);
-  problem_data->stack_regs = BITMAP_ALLOC (NULL);
-  for (i = FIRST_PSEUDO_REGISTER; i < max_regno; i++)
-    {
-      COPY_HARD_REG_SET (used, reg_class_contents[reg_preferred_class (i)]);
-      IOR_HARD_REG_SET (used, reg_class_contents[reg_alternate_class (i)]);
-      AND_HARD_REG_SET (used, stack_hard_regs);
-      if (!hard_reg_set_empty_p (used))
-	bitmap_set_bit (problem_data->stack_regs, i);
-    }
-#endif
-
-  /* We know that earlyclobber_regclass holds no more than
-    N_REG_CLASSES elements.  See df_urec_check_earlyclobber.  */
-  earlyclobber_regclass = VEC_alloc (int, heap, N_REG_CLASSES);
-
-  EXECUTE_IF_SET_IN_BITMAP (all_blocks, 0, bb_index, bi)
-    {
-      df_urec_bb_local_compute (bb_index);
-    }
-
-  VEC_free (int, heap, earlyclobber_regclass);
-}
-
-
-/* Initialize the solution vectors.  */
-
-static void 
-df_urec_init (bitmap all_blocks)
-{
-  unsigned int bb_index;
-  bitmap_iterator bi;
-
-  EXECUTE_IF_SET_IN_BITMAP (all_blocks, 0, bb_index, bi)
-    {
-      struct df_urec_bb_info *bb_info = df_urec_get_bb_info (bb_index);
-
-      bitmap_copy (bb_info->out, bb_info->gen);
-      bitmap_clear (bb_info->in);
-    }
-}
-
-
-/* Or in the stack regs, hard regs and early clobber regs into the
-   urec_in sets of all of the blocks.  */
- 
-
-static void
-df_urec_local_finalize (bitmap all_blocks)
-{
-  bitmap tmp = BITMAP_ALLOC (NULL);
-  bitmap_iterator bi;
-  unsigned int bb_index;
-  struct df_urec_problem_data *problem_data
-    = (struct df_urec_problem_data *) df_urec->problem_data;
-
-  EXECUTE_IF_SET_IN_BITMAP (all_blocks, 0, bb_index, bi)
-    {
-      struct df_urec_bb_info *bb_info = df_urec_get_bb_info (bb_index);
-      struct df_lr_bb_info *bb_lr_info = df_lr_get_bb_info (bb_index);
-
-      if (bb_index != ENTRY_BLOCK && bb_index != EXIT_BLOCK)
-	{
-	  if (problem_data->earlyclobbers_found)
-	    bitmap_ior_into (bb_info->in, bb_info->earlyclobber);
-	
-#ifdef STACK_REGS
-	  /* We can not use the same stack register for uninitialized
-	     pseudo-register and another living pseudo-register
-	     because if the uninitialized pseudo-register dies,
-	     subsequent pass reg-stack will be confused (it will
-	     believe that the other register dies).  */
-	  bitmap_ior_into (bb_info->in, problem_data->stack_regs);
-	  bitmap_ior_into (bb_info->out, problem_data->stack_regs);
-#endif
-	}
-
-      /* No register may reach a location where it is not used.  Thus
-	 we trim the rr result to the places where it is used.  */
-      bitmap_and_into (bb_info->in, bb_lr_info->in);
-      bitmap_and_into (bb_info->out, bb_lr_info->out);
-      bitmap_copy (bb_info->top, bb_info->in);
-      if (bb_lr_info->adef)
-        bitmap_ior_into (bb_info->top, bb_lr_info->adef);
-      bitmap_and_into (bb_info->top, bb_lr_info->top);
-#if 0
-      /* Hard registers may still stick in the ur_out set, but not
-	 be in the ur_in set, if their only mention was in a call
-	 in this block.  This is because a call kills in the lr
-	 problem but does not kill in the rr problem.  To clean
-	 this up, we execute the transfer function on the lr_in
-	 set and then use that to knock bits out of ur_out.  */
-      bitmap_ior_and_compl (tmp, bb_info->gen, bb_lr_info->in, 
-			    bb_info->kill);
-      bitmap_and_into (bb_info->out, tmp);
-#endif
-    }
-  
-#ifdef STACK_REGS
-  BITMAP_FREE (problem_data->stack_regs);
-#endif
-  BITMAP_FREE (tmp);
-}
-
-
-/* Confluence function that ignores fake edges.  */
-
-static void
-df_urec_confluence_n (edge e)
-{
-  bitmap op1 = df_urec_get_bb_info (e->dest->index)->in;
-  bitmap op2 = df_urec_get_bb_info (e->src->index)->out;
- 
-  if (e->flags & EDGE_FAKE) 
-    return;
-
-  bitmap_ior_into (op1, op2);
-} 
-
-
-/* Transfer function.  */
-
-static bool
-df_urec_transfer_function (int bb_index)
-{
-  struct df_urec_bb_info *bb_info = df_urec_get_bb_info (bb_index);
-  bitmap in = bb_info->in;
-  bitmap out = bb_info->out;
-  bitmap gen = bb_info->gen;
-  bitmap kill = bb_info->kill;
-
-  return bitmap_ior_and_compl (out, gen, in, kill);
-}
-
-
-/* Free all storage associated with the problem.  */
-
-static void
-df_urec_free (void)
-{
-  if (df_urec->block_info)
-    {
-      unsigned int i;
-      
-      for (i = 0; i < df_urec->block_info_size; i++)
-	{
-	  struct df_urec_bb_info *bb_info = df_urec_get_bb_info (i);
-	  if (bb_info)
-	    {
-	      BITMAP_FREE (bb_info->gen);
-	      BITMAP_FREE (bb_info->kill);
-	      BITMAP_FREE (bb_info->in);
-	      BITMAP_FREE (bb_info->out);
-	      BITMAP_FREE (bb_info->earlyclobber);
-              BITMAP_FREE (bb_info->top);
-	    }
-	}
-      
-      free_alloc_pool (df_urec->block_pool);
-      
-      df_urec->block_info_size = 0;
-      free (df_urec->block_info);
-      free (df_urec->problem_data);
-    }
-  free (df_urec);
-}
-
-
-/* Debugging info at top of bb.  */
-
-static void
-df_urec_top_dump (basic_block bb, FILE *file)
-{
-  struct df_urec_bb_info *bb_info = df_urec_get_bb_info (bb->index);
-  if (!bb_info || !bb_info->in)
-    return;
-      
-  fprintf (file, ";; urec  in  \t");
-  df_print_regset (file, bb_info->in);
-  fprintf (file, ";; urec  gen \t");
-  df_print_regset (file, bb_info->gen);
-  fprintf (file, ";; urec  kill\t");
-  df_print_regset (file, bb_info->kill);
-  fprintf (file, ";; urec  ec\t");
-  df_print_regset (file, bb_info->earlyclobber);
-}
-
-
-/* Debugging info at bottom of bb.  */
-
-static void
-df_urec_bottom_dump (basic_block bb, FILE *file)
-{
-  struct df_urec_bb_info *bb_info = df_urec_get_bb_info (bb->index);
-  if (!bb_info || !bb_info->out)
-    return;
-  fprintf (file, ";; urec  out \t");
-  df_print_regset (file, bb_info->out);
-}
-
-
-/* All of the information associated with every instance of the problem.  */
-
-static struct df_problem problem_UREC =
-{
-  DF_UREC,                    /* Problem id.  */
-  DF_FORWARD,                 /* Direction.  */
-  df_urec_alloc,              /* Allocate the problem specific data.  */
-  NULL,                       /* Reset global information.  */
-  df_urec_free_bb_info,       /* Free basic block info.  */
-  df_urec_local_compute,      /* Local compute function.  */
-  df_urec_init,               /* Init the solution specific data.  */
-  df_worklist_dataflow,       /* Worklist solver.  */
-  NULL,                       /* Confluence operator 0.  */ 
-  df_urec_confluence_n,       /* Confluence operator n.  */ 
-  df_urec_transfer_function,  /* Transfer function.  */
-  df_urec_local_finalize,     /* Finalize function.  */
-  df_urec_free,               /* Free all of the problem information.  */
-  df_urec_free,               /* Remove this problem from the stack of dataflow problems.  */
-  NULL,                       /* Debugging.  */
-  df_urec_top_dump,           /* Debugging start block.  */
-  df_urec_bottom_dump,        /* Debugging end block.  */
-  NULL,                       /* Incremental solution verify start.  */
-  NULL,                       /* Incremental solution verify end.  */
-  &problem_LR,                /* Dependent problem.  */
-  TV_DF_UREC,                 /* Timing variable.  */ 
-  false                       /* Reset blocks on dropping out of blocks_to_analyze.  */
-};
-
-
-/* Create a new DATAFLOW instance and add it to an existing instance
-   of DF.  The returned structure is what is used to get at the
-   solution.  */
-
-void
-df_urec_add_problem (void)
-{
-  df_add_problem (&problem_UREC);
-}
-
-
 
 /*----------------------------------------------------------------------------
    CREATE DEF_USE (DU) and / or USE_DEF (UD) CHAINS
@@ -3707,7 +3020,7 @@ df_simulate_fixup_sets (basic_block bb, 
 {
   /* These regs are considered always live so if they end up dying
      because of some def, we need to bring the back again.  */
-  if (df_has_eh_preds (bb))
+  if (bb_has_eh_pred (bb))
     bitmap_ior_into (live, df->eh_block_artificial_uses);
   else
     bitmap_ior_into (live, df->regular_block_artificial_uses);
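
One simplification worth calling out in the df-problems.c changes above: with
the AT_TOP artificial defs and uses now folded directly into each block's def
and use sets, df_lr_transfer_function collapses to the standard backward
liveness step.  A sketch of what the single remaining call computes (under
the usual bitmap_ior_and_compl semantics, dst = a | (b & ~c), returning
whether dst changed):

  /* IN(bb) = USE(bb) | (OUT(bb) & ~DEF(bb)); the return value tells the
     worklist solver whether IN(bb) changed.  */
  return bitmap_ior_and_compl (in, use, out, def);
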
Index: Makefile.in
===================================================================
--- Makefile.in	(revision 128885)
+++ Makefile.in	(working copy)
@@ -791,6 +791,7 @@ FUNCTION_H = function.h $(TREE_H) $(HASH
 EXPR_H = expr.h insn-config.h $(FUNCTION_H) $(RTL_H) $(FLAGS_H) $(TREE_H) $(MACHMODE_H) $(EMIT_RTL_H)
 OPTABS_H = optabs.h insn-codes.h
 REGS_H = regs.h varray.h $(MACHMODE_H) $(OBSTACK_H) $(BASIC_BLOCK_H) $(FUNCTION_H)
+RA_H = ra.h $(REGS_H)
 RESOURCE_H = resource.h hard-reg-set.h
 SCHED_INT_H = sched-int.h $(INSN_ATTR_H) $(BASIC_BLOCK_H) $(RTL_H) $(DF_H)
 INTEGRATE_H = integrate.h $(VARRAY_H)
@@ -1100,6 +1101,7 @@ OBJS-common = \
 	print-rtl.o \
 	print-tree.o \
 	profile.o \
+	ra-conflict.o \
 	real.o \
 	recog.o \
 	reg-stack.o \
@@ -2702,7 +2704,11 @@ bitmap.o : bitmap.c $(CONFIG_H) $(SYSTEM
 global.o : global.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(RTL_H) \
    $(FLAGS_H) reload.h $(FUNCTION_H) $(RECOG_H) $(REGS_H) hard-reg-set.h \
    insn-config.h output.h toplev.h $(TM_P_H) $(MACHMODE_H) tree-pass.h \
-   $(TIMEVAR_H) vecprim.h $(DF_H)
+   $(TIMEVAR_H) vecprim.h $(DF_H) $(DBGCNT_H) $(RA_H)
+ra-conflict.o : ra-conflict.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(RTL_H) \
+   $(FLAGS_H) reload.h $(FUNCTION_H) $(RECOG_H) $(REGS_H) hard-reg-set.h \
+   insn-config.h output.h toplev.h $(TM_P_H) $(MACHMODE_H) tree-pass.h \
+   $(TIMEVAR_H) vecprim.h $(DF_H) $(RA_H) sbitmap.h 
 varray.o : varray.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(GGC_H) \
    $(HASHTAB_H) $(BCONFIG_H) $(VARRAY_H) toplev.h
 vec.o : vec.c $(CONFIG_H) $(SYSTEM_H) coretypes.h vec.h $(GGC_H) \
Index: basic-block.h
===================================================================
--- basic-block.h	(revision 128885)
+++ basic-block.h	(working copy)
@@ -1135,6 +1135,21 @@ bb_has_eh_pred (basic_block bb)
   return false;
 }
 
+/* Return true when one of the predecessor edges of BB is marked with EDGE_ABNORMAL.  */
+static inline bool
+bb_has_abnormal_pred (basic_block bb)
+{
+  edge e;
+  edge_iterator ei;
+
+  FOR_EACH_EDGE (e, ei, bb->preds)
+    {
+      if (e->flags & EDGE_ABNORMAL)
+	return true;
+    }
+  return false;
+}
+
 /* In cfgloopmanip.c.  */
 extern edge mfb_kj_edge;
 bool mfb_keep_just (edge);
Index: dce.c
===================================================================
--- dce.c	(revision 128885)
+++ dce.c	(working copy)
@@ -573,8 +573,8 @@ dce_process_block (basic_block bb, bool 
   /* These regs are considered always live so if they end up dying
      because of some def, we need to bring the back again.
      Calling df_simulate_fixup_sets has the disadvantage of calling
-     df_has_eh_preds once per insn, so we cache the information here.  */
-  if (df_has_eh_preds (bb))
+     bb_has_eh_pred once per insn, so we cache the information here.  */
+  if (bb_has_eh_pred (bb))
     au = df->eh_block_artificial_uses;
   else
     au = df->regular_block_artificial_uses;
Index: reload1.c
===================================================================
--- reload1.c	(revision 128885)
+++ reload1.c	(working copy)
@@ -548,7 +548,7 @@ compute_use_by_pseudos (HARD_REG_SET *to
       if (r < 0)
 	{
 	  /* reload_combine uses the information from
-	     DF_RA_LIVE_IN (BASIC_BLOCK), which might still
+	     DF_LIVE_IN (BASIC_BLOCK), which might still
 	     contain registers that have not actually been allocated
 	     since they have an equivalence.  */
 	  gcc_assert (reload_completed);
@@ -1158,10 +1158,7 @@ reload (rtx first, int global)
 
   if (! frame_pointer_needed)
     FOR_EACH_BB (bb)
-      {
-	bitmap_clear_bit (df_get_live_in (bb), HARD_FRAME_POINTER_REGNUM);
-	bitmap_clear_bit (df_get_live_top (bb), HARD_FRAME_POINTER_REGNUM);
-      }
+      bitmap_clear_bit (df_get_live_in (bb), HARD_FRAME_POINTER_REGNUM);
 	
   /* Come here (with failure set nonzero) if we can't get enough spill
      regs.  */
