This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.


[dataflow] PATCH: remove DF_EQUIV_NOTES scanning flag.


This patch gets rid of the DF_EQUIV_NOTES scanning flag.  It is
replaced by a set of scanning data structures (eq_uses) that hold the
refs found in REG_EQUIV/EQUAL notes separately from the refs for the
regular uses.

If the new changeable flag DF_EQ_NOTES is set, the eq_uses are
considered when building the RD problem and both the def-use and
use-def chains.
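
To make this concrete, here is a minimal sketch (not part of the
patch) of how a pass now opts in, using df_init and the flags as they
appear in the df.h changes below; the first argument to df_init
carries the permanent flags, the second the changeable flags:

  /* Build use-def chains and fold the uses found in REG_EQUIV/EQUAL
     notes into them.  */
  df = df_init (DF_UD_CHAIN, DF_EQ_NOTES);
  df_chain_add_problem (df);
  df_analyze (df);

A pass that does not pass DF_EQ_NOTES never sees the eq_uses in its
chains.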

The changes to the various passes are generally of three types:
1) The flags passed to df_init are changed as described above.
2) The df macros are used more consistently.
3) In the passes that need DF_EQ_NOTES, the code that examines the
uses in an insn was extended to also check the eq_uses list of refs
(see the sketch below).
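
For example, a pass that used to walk only the uses of an insn now
also walks the eq_uses chain.  A hand-written sketch (not taken from
the patch; process_use stands in for whatever the pass does with each
ref):

  struct df_ref *use;

  /* The regular uses recorded for INSN.  */
  for (use = DF_INSN_USES (df, insn); use; use = use->next_ref)
    process_use (use);

  /* The uses that appear only inside REG_EQUIV/EQUAL notes; these
     live on the separate eq_uses chain.  */
  for (use = DF_INSN_EQ_USES (df, insn); use; use = use->next_ref)
    process_use (use);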

There is only one scanning flag left.  It is going to be dealt with
by Bonzini, since it is only used by fwprop.c.

Seongbae Park of Google and I are going to attack making the
scanning persistent.

This code has been bootstrapped and regression tested on
powerpc-linux, x86-64-linux and x86-32-linux.

Kenny


2006-10-19  Kenneth Zadeck <zadeck@naturalbridge.com>
    * sched-ebb.c (schedule_ebbs): Changed flags to df_init.
    * fwprop.c (use_killed_between): Changed to use proper macros.
    (all_uses_available_at, try_fwprop_subst): Added support for
    separated reg_equiv/equal df_refs.
    (fwprop_init): Changed flags to df_init.
    (fwprop, fwprop_addr): Changed call to df_reorganize_refs to
    df_maybe_reorganize_use_refs.
    * see.c (see_initialize_data_structures): Changed flags to
    df_init.
    * ddg.c (build_inter_loop_deps): Now skips refs found in
    reg_equal/equiv notes.
    * modulo-sched.c (sms_schedule): Changed flags to df_init.
    * web.c (union_defs): Added support for separated reg_equiv/equal
    df_refs.
    (web_main): Changed flags to df_init and changed call to
    df_reorganize_refs to df_maybe_reorganize_(use|def)_refs.
    * loop-invariant.c (check_dependency): New function split out from
    check_dependencies.
    (record_uses): Added support for separated reg_equiv/equal
    df_refs.
    (move_loop_invariants): Changed flags to df_init.
    * loop-iv.c (iv_analysis_loop_init): Changed flags to df_init.
    (latch_dominating_def): Changed to use proper macros.
    * combine.c (create_log_links): Ditto.
    * sched-rgn.c (schedule_insns): Changed flags to df_init.
    * dce.c (dce_process_block): Changed to use proper macros.
    * df.h (df_insn_info.eq_uses): New field.
    (DF_EQUIV_NOTES): Deleted permanent_flag.
    (DF_EQ_NOTES): New changeable_flag.
    (df_ref_info.regs_size, df_ref_info.regs_inited): Moved to df
    structure.
    (df.def_regs, df.use_regs, df.eq_use_regs): New fields.
    (df_ref_info.begin): Moved from df_reg_info.
    (DF_DEFS_COUNT, DF_DEFS_BEGIN, DF_USES_COUNT, DF_USES_BEGIN,
    DF_REG_EQ_USE_GET, DF_REG_EQ_USE_CHAIN, DF_REG_EQ_USE_COUNT): New
    macros.
    (DF_REG_SIZE, DF_REG_DEF_GET, DF_REG_DEF_CHAIN, DF_REG_DEF_COUNT,
    DF_REG_USE_GET, DF_REG_USE_CHAIN, DF_REG_USE_COUNT): Redefined.
    (df_reorganize_refs): Split into df_maybe_reorganize_use_refs and
    df_maybe_reorganize_def_refs.  
    (df_ref_info.refs_organized): Split into refs_organized_alone and
    refs_organized_with_eq_uses.
    * df-problems.c (df_ru_bb_local_compute_process_def,
    df_ru_local_compute, df_ru_confluence_n, df_ru_transfer_function,
    df_ru_start_dump, df_rd_bb_local_compute_process_def,
    df_rd_local_compute, df_rd_confluence_n, df_rd_transfer_function,
    df_rd_start_dump, df_chain_alloc, df_chain_insn_reset,
    df_chain_create_bb_process_use, df_chain_create_bb,
    df_chain_start_dump): Changed to use proper macros.
    (df_ru_bb_local_compute, df_chain_insn_reset, df_chain_create_bb):
    Added support for separated reg_equiv/equal df_refs.
    (df_ru_local_compute, df_rd_local_compute, df_chain_alloc): Changed
    calls to df_reorganize_refs to df_maybe_reorganize_use_refs and
    df_maybe_reorganize_def_refs.
    * df-scan.c (df_grow_reg_info, df_rescan_blocks, df_ref_create):
    Changed to process all data structures dependent on number of
    registers at once.
    (df_scan_free_internal, df_scan_alloc): Changed to process new
    data structures properly.
    (df_rescan_blocks): Changed to clear refs_organized_alone and
    refs_organized_with_eq_uses.
    (df_reg_chain_unlink): Removed decrement of bitmap_size fields.
    (df_reg_chain_unlink, df_insn_refs_delete,
    df_ref_create_structure): Changed to use proper macros.
    (df_reg_chain_unlink, df_ref_remove, df_insn_refs_delete,
    df_reorganize_refs, df_ref_create_structure, df_insn_refs_record):
    Added support for separated reg_equiv/equal df_refs.
    (df_maybe_reorganize_use_refs, df_maybe_reorganize_def_refs): New
    functions.
    * df-core.c (df_bb_regno_last_use_find,
    df_bb_regno_first_def_find, df_bb_regno_last_def_find,
    df_insn_regno_def_p, df_find_def, df_find_use, df_dump_start,
    df_regno_debug): Changed to use proper macros.
    (df_find_use, df_insn_uid_debug, df_insn_debug_regno): Added
    support for separated reg_equiv/equal df_refs.
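
For reference, the per-register chains can be walked the same way; a
minimal sketch (not from the patch; process_eq_use is just a
placeholder) using the new df.h macros:

  struct df_ref *eq_use;

  /* Walk every use of REGNO that appears only in REG_EQUIV/EQUAL
     notes, following the per-register chain.  */
  for (eq_use = DF_REG_EQ_USE_CHAIN (df, regno); eq_use;
       eq_use = eq_use->next_reg)
    process_eq_use (eq_use);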

    

Index: gcc/sched-ebb.c
===================================================================
--- gcc/sched-ebb.c	(revision 117395)
+++ gcc/sched-ebb.c	(working copy)
@@ -552,8 +552,7 @@ schedule_ebbs (void)
      invoked via sched_init.  */
   current_sched_info = &ebb_sched_info;
 
-  df = df_init (DF_EQUIV_NOTES + DF_SUBREGS + DF_RI_LIFE, 
-		DF_LR_RUN_DCE);
+  df = df_init (DF_RI_LIFE, DF_LR_RUN_DCE);
   df_lr_add_problem (df);
   df_live_add_problem (df);
   df_ri_add_problem (df);
Index: gcc/fwprop.c
===================================================================
--- gcc/fwprop.c	(revision 117330)
+++ gcc/fwprop.c	(working copy)
@@ -473,7 +473,7 @@ use_killed_between (struct df_ref *use, 
   /* Check if the reg in USE has only one definition.  We already
      know that this definition reaches use, or we wouldn't be here.  */
   regno = DF_REF_REGNO (use);
-  def = DF_REG_DEF_GET (df, regno)->reg_chain;
+  def = DF_REG_DEF_CHAIN (df, regno);
   if (def && (def->next_reg == NULL))
     return false;
 
@@ -553,6 +553,9 @@ all_uses_available_at (rtx def_insn, rtx
       for (use = DF_INSN_USES (df, def_insn); use; use = use->next_ref)
         if (rtx_equal_p (use->reg, def_reg))
           return false;
+      for (use = DF_INSN_EQ_USES (df, def_insn); use; use = use->next_ref)
+        if (rtx_equal_p (use->reg, def_reg))
+          return false;
     }
   else
     {
@@ -561,6 +564,9 @@ all_uses_available_at (rtx def_insn, rtx
       for (use = DF_INSN_USES (df, def_insn); use; use = use->next_ref)
 	if (use_killed_between (use, def_insn, target_insn))
 	  return false;
+      for (use = DF_INSN_EQ_USES (df, def_insn); use; use = use->next_ref)
+	if (use_killed_between (use, def_insn, target_insn))
+	  return false;
     }
 
   /* We don't do any analysis of memories or aliasing.  Reject any
@@ -676,8 +682,10 @@ try_fwprop_subst (struct df_ref *use, rt
       /* Unlink the use that we changed.  */
       df_ref_remove (df, use);
       if (!CONSTANT_P (new))
-	update_df (insn, loc, DF_INSN_USES (df, def_insn), type, flags);
-
+	{
+	  update_df (insn, loc, DF_INSN_USES (df, def_insn), type, flags);
+	  update_df (insn, loc, DF_INSN_EQ_USES (df, def_insn), type, flags);
+	}
       return true;
     }
   else
@@ -697,8 +705,12 @@ try_fwprop_subst (struct df_ref *use, rt
 						REG_NOTES (insn));
 
           if (!CONSTANT_P (new))
-	    update_df (insn, loc, DF_INSN_USES (df, def_insn),
-		       type, DF_REF_IN_NOTE);
+	    {
+	      update_df (insn, loc, DF_INSN_USES (df, def_insn),
+			 type, DF_REF_IN_NOTE);
+	      update_df (insn, loc, DF_INSN_EQ_USES (df, def_insn),
+			 type, DF_REF_IN_NOTE);
+	    }
 	}
 
       return false;
@@ -903,7 +915,7 @@ fwprop_init (void)
 
   /* Now set up the dataflow problem (we only want use-def chains) and
      put the dataflow solver to work.  */
-  df = df_init (DF_SUBREGS + DF_EQUIV_NOTES + DF_UD_CHAIN, 0);
+  df = df_init (DF_SUBREGS + DF_UD_CHAIN, DF_EQ_NOTES);
   df_chain_add_problem (df);
   df_analyze (df);
 }
@@ -950,7 +962,7 @@ fwprop (void)
      Do not forward propagate addresses into loops until after unrolling.
      CSE did so because it was able to fix its own mess, but we are not.  */
 
-  df_reorganize_refs (&df->use_info);
+  df_maybe_reorganize_use_refs (df);
   for (i = 0; i < DF_USES_SIZE (df); i++)
     {
       struct df_ref *use = DF_USES_GET (df, i);
@@ -999,7 +1011,7 @@ fwprop_addr (void)
 
   /* Go through all the uses.  update_df will create new ones at the
      end, and we'll go through them as well.  */
-  df_reorganize_refs (&df->use_info);
+  df_maybe_reorganize_use_refs (df);
   for (i = 0; i < DF_USES_SIZE (df); i++)
     {
       struct df_ref *use = DF_USES_GET (df, i);
Index: gcc/see.c
===================================================================
--- gcc/see.c	(revision 117351)
+++ gcc/see.c	(working copy)
@@ -1331,8 +1331,7 @@ static void
 see_initialize_data_structures (void)
 {
   /* Build the df object. */
-  df = df_init (DF_EQUIV_NOTES + DF_SUBREGS + DF_DU_CHAIN + DF_UD_CHAIN, 0);
-  df_rd_add_problem (df);
+  df = df_init (DF_DU_CHAIN + DF_UD_CHAIN, DF_EQ_NOTES);
   df_chain_add_problem (df);
   df_analyze (df);
 
@@ -3339,7 +3338,6 @@ see_update_uses_relevancy (void)
 
   for (i = 0; i < uses_num; i++)
     {
-
       insn = DF_REF_INSN (DF_USES_GET (df, i));
       reg = DF_REF_REAL_REG (DF_USES_GET (df, i));
 
Index: gcc/ddg.c
===================================================================
--- gcc/ddg.c	(revision 117329)
+++ gcc/ddg.c	(working copy)
@@ -334,10 +334,10 @@ build_inter_loop_deps (struct df *df, dd
   EXECUTE_IF_SET_IN_BITMAP (ru_bb_info->kill, 0, u_num, bi)
     {
       struct df_ref *use = DF_USES_GET (df, u_num);
-
-      /* We are interested in uses of this BB.  */
-      if (BLOCK_FOR_INSN (use->insn) == g->bb)
-      	add_deps_for_use (g, df, use);
+      if (!(DF_REF_FLAGS (use) & DF_REF_IN_NOTE))
+	/* We are interested in uses of this BB.  */
+	if (BLOCK_FOR_INSN (use->insn) == g->bb)
+	  add_deps_for_use (g, df, use);
     }
 }
 
Index: gcc/modulo-sched.c
===================================================================
--- gcc/modulo-sched.c	(revision 117395)
+++ gcc/modulo-sched.c	(working copy)
@@ -929,9 +929,7 @@ sms_schedule (void)
   current_sched_info = &sms_sched_info;
 
   /* Init Data Flow analysis, to be used in interloop dep calculation.  */
-  df = df_init (DF_EQUIV_NOTES + DF_SUBREGS + 
-		DF_RI_LIFE + DF_DU_CHAIN + DF_UD_CHAIN, 
-		DF_LR_RUN_DCE);
+  df = df_init (DF_DU_CHAIN + DF_UD_CHAIN, DF_LR_RUN_DCE);
   df_lr_add_problem (df);
   df_rd_add_problem (df);
   df_ru_add_problem (df);
Index: gcc/web.c
===================================================================
--- gcc/web.c	(revision 117351)
+++ gcc/web.c	(working copy)
@@ -109,18 +109,21 @@ union_defs (struct df *df, struct df_ref
   rtx insn = DF_REF_INSN (use);
   struct df_link *link = DF_REF_CHAIN (use);
   struct df_ref *use_link;
+  struct df_ref *eq_use_link;
   struct df_ref *def_link;
   rtx set;
 
   if (insn)
     {
       use_link = DF_INSN_USES (df, insn);
+      eq_use_link = DF_INSN_EQ_USES (df, insn);
       def_link = DF_INSN_DEFS (df, insn);
       set = single_set (insn);
     }
   else
     {
       use_link = NULL;
+      eq_use_link = NULL;
       def_link = NULL;
       set = NULL;
     }
@@ -139,6 +142,15 @@ union_defs (struct df *df, struct df_ref
       use_link = use_link->next_ref;
     }
 
+  while (eq_use_link)
+    {
+      if (use != eq_use_link
+	  && DF_REF_REAL_REG (use) == DF_REF_REAL_REG (eq_use_link))
+ 	(*fun) (use_entry + DF_REF_ID (use),
+ 		use_entry + DF_REF_ID (eq_use_link));
+      eq_use_link = eq_use_link->next_ref;
+    }
+
   /* Recognize trivial noop moves and attempt to keep them as noop.
      While most of noop moves should be removed, we still keep some
      of them at libcall boundaries and such.  */
@@ -253,11 +265,11 @@ web_main (void)
   int max = max_reg_num ();
   char *used;
 
-  df = df_init (DF_EQUIV_NOTES + DF_UD_CHAIN, DF_NO_HARD_REGS);
+  df = df_init (DF_UD_CHAIN, DF_NO_HARD_REGS + DF_EQ_NOTES);
   df_chain_add_problem (df);
   df_analyze (df);
-  df_reorganize_refs (&df->def_info);
-  df_reorganize_refs (&df->use_info);
+  df_maybe_reorganize_def_refs (df);
+  df_maybe_reorganize_use_refs (df);
 
   def_entry = XCNEWVEC (struct web_entry, DF_DEFS_SIZE (df));
   use_entry = XCNEWVEC (struct web_entry, DF_USES_SIZE (df));
Index: gcc/loop-invariant.c
===================================================================
--- gcc/loop-invariant.c	(revision 117351)
+++ gcc/loop-invariant.c	(working copy)
@@ -684,49 +684,66 @@ record_use (struct def *def, rtx *use, r
   def->n_uses++;
 }
 
-/* Finds the invariants INSN depends on and store them to the DEPENDS_ON
-   bitmap.  Returns true if all dependencies of INSN are known to be
+/* Finds the invariants USE depends on and store them to the DEPENDS_ON
+   bitmap.  Returns true if all dependencies of USE are known to be
    loop invariants, false otherwise.  */
 
 static bool
-check_dependencies (rtx insn, bitmap depends_on)
+check_dependency (basic_block bb, struct df_ref *use, bitmap depends_on)
 {
+  struct df_ref *def;
+  basic_block def_bb;
   struct df_link *defs;
-  struct df_ref *use, *def;
-  basic_block bb = BLOCK_FOR_INSN (insn), def_bb;
   struct def *def_data;
   struct invariant *inv;
+  
+  if (use->flags & DF_REF_READ_WRITE)
+    return false;
+  
+  defs = DF_REF_CHAIN (use);
+  if (!defs)
+    return true;
+  
+  if (defs->next)
+    return false;
+  
+  def = defs->ref;
+  inv = DF_REF_DATA (def);
+  if (!inv)
+    return false;
+  
+  def_data = inv->def;
+  gcc_assert (def_data != NULL);
+  
+  def_bb = DF_REF_BB (def);
+  /* Note that in case bb == def_bb, we know that the definition dominates
+     insn, because def has DF_REF_DATA defined and we process the insns
+     in the basic block bb sequentially.  */
+  if (!dominated_by_p (CDI_DOMINATORS, bb, def_bb))
+    return false;
+  
+  bitmap_set_bit (depends_on, def_data->invno);
+  return true;
+}
 
-  for (use = DF_INSN_GET (df, insn)->uses; use; use = use->next_ref)
-    {
-      if (use->flags & DF_REF_READ_WRITE)
-	return false;
-
-      defs = DF_REF_CHAIN (use);
-      if (!defs)
-	continue;
-
-      if (defs->next)
-	return false;
-
-      def = defs->ref;
-      inv = DF_REF_DATA (def);
-      if (!inv)
-	return false;
-
-      def_data = inv->def;
-      gcc_assert (def_data != NULL);
 
-      def_bb = DF_REF_BB (def);
-      /* Note that in case bb == def_bb, we know that the definition dominates
-	 insn, because def has DF_REF_DATA defined and we process the insns
-	 in the basic block bb sequentially.  */
-      if (!dominated_by_p (CDI_DOMINATORS, bb, def_bb))
-	return false;
+/* Finds the invariants INSN depends on and store them to the DEPENDS_ON
+   bitmap.  Returns true if all dependencies of INSN are known to be
+   loop invariants, false otherwise.  */
 
-      bitmap_set_bit (depends_on, def_data->invno);
-    }
+static bool
+check_dependencies (rtx insn, bitmap depends_on)
+{
+  struct df_ref *use;
+  basic_block bb = BLOCK_FOR_INSN (insn);
 
+  for (use = DF_INSN_USES (df, insn); use; use = use->next_ref)
+    if (!check_dependency (bb, use, depends_on))
+      return false;
+  for (use = DF_INSN_EQ_USES (df, insn); use; use = use->next_ref)
+    if (!check_dependency (bb, use, depends_on))
+      return false;
+	
   return true;
 }
 
@@ -807,7 +824,13 @@ record_uses (rtx insn)
   struct df_ref *use;
   struct invariant *inv;
 
-  for (use = DF_INSN_GET (df, insn)->uses; use; use = use->next_ref)
+  for (use = DF_INSN_USES (df, insn); use; use = use->next_ref)
+    {
+      inv = invariant_for_use (use);
+      if (inv)
+	record_use (inv->def, DF_REF_LOC (use), DF_REF_INSN (use));
+    }
+  for (use = DF_INSN_EQ_USES (df, insn); use; use = use->next_ref)
     {
       inv = invariant_for_use (use);
       if (inv)
@@ -1319,7 +1342,7 @@ move_loop_invariants (struct loops *loop
 {
   struct loop *loop;
   unsigned i;
-  df = df_init (DF_EQUIV_NOTES + DF_UD_CHAIN, 0);
+  df = df_init (DF_UD_CHAIN, DF_EQ_NOTES);
   df_chain_add_problem (df);
  
   /* Process the loops, innermost first.  */
Index: gcc/loop-iv.c
===================================================================
--- gcc/loop-iv.c	(revision 117351)
+++ gcc/loop-iv.c	(working copy)
@@ -252,7 +252,7 @@ iv_analysis_loop_init (struct loop *loop
   /* Clear the information from the analysis of the previous loop.  */
   if (first_time)
     {
-      df = df_init (DF_EQUIV_NOTES + DF_UD_CHAIN, 0);
+      df = df_init (DF_UD_CHAIN, DF_EQ_NOTES);
       df_chain_add_problem (df);
       bivs = htab_create (10, biv_hash, biv_eq, free);
     }
@@ -280,10 +280,9 @@ latch_dominating_def (rtx reg, struct df
 {
   struct df_ref *single_rd = NULL, *adef;
   unsigned regno = REGNO (reg);
-  struct df_reg_info *reg_info = DF_REG_DEF_GET (df, regno);
   struct df_rd_bb_info *bb_info = DF_RD_BB_INFO (df, current_loop->latch);
 
-  for (adef = reg_info->reg_chain; adef; adef = adef->next_reg)
+  for (adef = DF_REG_DEF_CHAIN (df, regno); adef; adef = adef->next_reg)
     {
       if (!bitmap_bit_p (bb_info->out, DF_REF_ID (adef)))
 	continue;
Index: gcc/combine.c
===================================================================
--- gcc/combine.c	(revision 117351)
+++ gcc/combine.c	(working copy)
@@ -12830,7 +12830,7 @@ create_log_links (void)
 	  /* Log links are created only once.  */
 	  gcc_assert (!LOG_LINKS (insn));
 
-          for (def = DF_INSN_GET (df, insn)->defs; def; def = def->next_ref)
+          for (def = DF_INSN_DEFS (df, insn); def; def = def->next_ref)
             {
               int regno = DF_REF_REGNO (def);
               rtx use_insn;
@@ -12873,7 +12873,7 @@ create_log_links (void)
               next_use[regno] = NULL_RTX;
             }
 
-          for (use = DF_INSN_GET (df, insn)->uses; use; use = use->next_ref)
+          for (use = DF_INSN_USES (df, insn); use; use = use->next_ref)
             {
               int regno = DF_REF_REGNO (use);
 
Index: gcc/sched-rgn.c
===================================================================
--- gcc/sched-rgn.c	(revision 117395)
+++ gcc/sched-rgn.c	(working copy)
@@ -2898,8 +2898,7 @@ schedule_insns (void)
      invoked via sched_init.  */
   current_sched_info = &region_sched_info;
 
-  df = df_init (DF_EQUIV_NOTES + DF_SUBREGS + DF_RI_LIFE, 
-		DF_LR_RUN_DCE);
+  df = df_init (DF_RI_LIFE, DF_LR_RUN_DCE);
   df_lr_add_problem (df);
   df_live_add_problem (df);
   df_ri_add_problem (df);
Index: gcc/dce.c
===================================================================
--- gcc/dce.c	(revision 117351)
+++ gcc/dce.c	(working copy)
@@ -490,7 +490,7 @@ dce_process_block (basic_block bb, bool 
 		libcall_start = NULL;
 		libcall_id = -1;
 	      }
-	    for (def = DF_INSN_GET (dce_df, insn)->defs; 
+	    for (def = DF_INSN_DEFS (dce_df, insn); 
 		 def; def = def->next_ref)
 	      if (bitmap_bit_p (local_live, DF_REF_REGNO (def)))
 		{
@@ -518,14 +518,14 @@ dce_process_block (basic_block bb, bool 
 	
 	/* No matter if the instruction is needed or not, we remove
 	   any regno in the defs from the live set.  */
-	for (def = DF_INSN_GET (dce_df, insn)->defs; def; def = def->next_ref)
+	for (def = DF_INSN_DEFS (dce_df, insn); def; def = def->next_ref)
 	  {
 	    unsigned int regno = DF_REF_REGNO (def);
 	    if (!(DF_REF_FLAGS (def) & (DF_REF_PARTIAL | DF_REF_CONDITIONAL)))
 	      bitmap_clear_bit (local_live, regno);
 	  }
 	if (marked_insn_p (insn))
-	  for (use = DF_INSN_GET (dce_df, insn)->uses; 
+	  for (use = DF_INSN_USES (dce_df, insn); 
 	       use; use = use->next_ref)
 	    {
 	      unsigned int regno = DF_REF_REGNO (use);
Index: gcc/df.h
===================================================================
--- gcc/df.h	(revision 117351)
+++ gcc/df.h	(working copy)
@@ -91,7 +91,8 @@ enum df_ref_flags
        bottom of the block.  This is never set for regular refs.  */
     DF_REF_AT_TOP = 8,
 
-    /* This flag is set if the use is inside a REG_EQUAL note.  */
+    /* This flag is set if the use is inside a REG_EQUAL or REG_EQUIV
+       note.  */
     DF_REF_IN_NOTE = 16,
 
     /* This flag is set if this ref, generally a def, may clobber the
@@ -259,6 +260,8 @@ struct df_insn_info
   struct df_ref *defs;	        /* Head of insn-def chain.  */
   struct df_ref *uses;	        /* Head of insn-use chain.  */
   struct df_mw_hardreg *mw_hardregs;   
+  /* Head of insn-use chain for uses in REG_EQUAL/EQUIV notes.  */
+  struct df_ref *eq_uses;       
   /* ???? The following luid field should be considered private so that
      we can change it on the fly to accommodate new insns?  */
   int luid;			/* Logical UID.  */
@@ -266,15 +269,6 @@ struct df_insn_info
 };
 
 
-/* Two of these structures are allocated for every pseudo reg, one for
-   the uses and one for the defs.  */
-struct df_reg_info
-{
-  struct df_ref *reg_chain;     /* Head of reg-use or def chain.  */
-  unsigned int begin;           /* First def_index for this pseudo.  */
-  unsigned int n_refs;          /* Number of refs or defs for this pseudo.  */
-};
-
 /* Define a register reference structure.  One of these is allocated
    for every register reference (use or def).  Note some register
    references (e.g., post_inc, subreg) generate both a def and a use.  */
@@ -314,37 +308,17 @@ struct df_link
   struct df_link *next;
 };
 
-/* Two of these structures are allocated, one for the uses and one for
-   the defs.  */
-struct df_ref_info
-{
-  struct df_reg_info **regs;    /* Array indexed by pseudo regno. */
-  unsigned int regs_size;       /* Size of currently allocated regs table.  */
-  unsigned int regs_inited;     /* Number of regs with reg_infos allocated.  */
-  struct df_ref **refs;         /* Ref table, indexed by id.  */
-  unsigned int refs_size;       /* Size of currently allocated refs table.  */
-  unsigned int bitmap_size;	/* Number of refs seen.  */
-
-  /* True if refs table is organized so that every reference for a
-     pseudo is contiguous.  */
-  bool refs_organized;
-  /* True if the next refs should be added immediately or false to
-     defer to later to reorganize the table.  */
-  bool add_refs_inline; 
-};
-
 
 enum df_permanent_flags 
 {
   /* Scanning flags.  */
-  DF_EQUIV_NOTES   =  1, /* Mark uses present in EQUIV/EQUAL notes.  */
-  DF_SUBREGS       =  2, /* Return subregs rather than the inner reg.  */
+  DF_SUBREGS       =  1, /* Return subregs rather than the inner reg.  */
   /* Flags that control the building of chains.  */
-  DF_DU_CHAIN      =  4, /* Build DU chains.  */  
-  DF_UD_CHAIN      =  8, /* Build UD chains.  */
+  DF_DU_CHAIN      =  2, /* Build DU chains.  */  
+  DF_UD_CHAIN      =  4, /* Build UD chains.  */
   /* Flag to control the building of register info.  */
-  DF_RI_LIFE       = 16, /* Build register info.  */
-  DF_RI_SETJMP     = 32  /* Build pseudos that cross setjmp info.  */
+  DF_RI_LIFE       =  8, /* Build register info.  */
+  DF_RI_SETJMP     = 16  /* Build pseudos that cross setjmp info.  */
 };
 
 enum df_changeable_flags 
@@ -352,9 +326,39 @@ enum df_changeable_flags 
   /* Scanning flags.  */
   /* Flag to control the running of dce as a side effect of building LR.  */
   DF_LR_RUN_DCE    = 1,  /* Run DCE.  */
-  DF_NO_HARD_REGS  = 2   /* Skip hard registers in RD and CHAIN Building.  */
+  DF_NO_HARD_REGS  = 2,  /* Skip hard registers in RD and CHAIN Building.  */
+  DF_EQ_NOTES      = 4   /* Build chains with uses present in EQUIV/EQUAL notes. */
+};
+
+/* Two of these structures are inline in df, one for the uses and one
+   for the defs.  */
+struct df_ref_info
+{
+  struct df_ref **refs;         /* Ref table, indexed by id.  */
+  unsigned int *begin;          /* First ref_index for this pseudo.  */
+  unsigned int refs_size;       /* Size of currently allocated refs table.  */
+  unsigned int bitmap_size;	/* Number of refs seen.  */
+
+  /* True if refs table is organized so that every reference for a
+     pseudo is contiguous.  */
+  bool refs_organized_alone;
+  /* True if the next refs should be added immediately or false to
+     defer to later to reorganize the table.  */
+  bool refs_organized_with_eq_uses;
+  /* True if refs table is organized so that every reference for a
+     pseudo, including the eq_uses, is contiguous.  */
+  bool add_refs_inline; 
 };
 
+/* Three of these structures are allocated for every pseudo reg. One
+   for the uses, one for the eq_uses and one for the defs.  */
+struct df_reg_info
+{
+  /* Head of chain for refs of that type and regno.  */
+  struct df_ref *reg_chain;
+  /* Number of refs in the chain.  */
+  unsigned int n_refs;
+};
 
 /*----------------------------------------------------------------------------
    Problem data for the scanning dataflow problem.  Unlike the other
@@ -392,6 +396,16 @@ struct df
      to keep getting it from there.  */
   struct df_ref_info def_info;   /* Def info.  */
   struct df_ref_info use_info;   /* Use info.  */
+
+  /* The following three arrays are allocated in parallel.   They contain
+     the sets of refs of each type for each reg.  */
+  struct df_reg_info **def_regs;       /* Def reg info.  */
+  struct df_reg_info **use_regs;       /* Use reg info.  */
+  struct df_reg_info **eq_use_regs;    /* Eq_use info.  */
+  unsigned int regs_size;       /* Size of currently allocated regs table.  */
+  unsigned int regs_inited;     /* Number of regs with reg_infos allocated.  */
+
+
   struct df_insn_info **insns;   /* Insn table, indexed by insn UID.  */
   unsigned int insns_size;       /* Size of insn table.  */
   bitmap hardware_regs_used;     /* The set of hardware registers used.  */
@@ -477,19 +491,26 @@ struct df
 #define DF_DEFS_SIZE(DF) ((DF)->def_info.bitmap_size)
 #define DF_DEFS_GET(DF,ID) ((DF)->def_info.refs[(ID)])
 #define DF_DEFS_SET(DF,ID,VAL) ((DF)->def_info.refs[(ID)]=(VAL))
+#define DF_DEFS_COUNT(DF,ID) (DF_REG_DEF_COUNT(DF,ID))
+#define DF_DEFS_BEGIN(DF,ID) ((DF)->def_info.begin[(ID)])
 #define DF_USES_SIZE(DF) ((DF)->use_info.bitmap_size)
 #define DF_USES_GET(DF,ID) ((DF)->use_info.refs[(ID)])
 #define DF_USES_SET(DF,ID,VAL) ((DF)->use_info.refs[(ID)]=(VAL))
+#define DF_USES_COUNT(DF,ID) (DF_REG_USE_COUNT(DF,ID)+DF_REG_EQ_USE_COUNT(DF,ID))
+#define DF_USES_BEGIN(DF,ID) ((DF)->use_info.begin[(ID)])
 
 /* Macros to access the register information from scan dataflow record.  */
 
-#define DF_REG_SIZE(DF) ((DF)->def_info.regs_inited)
-#define DF_REG_DEF_GET(DF, REG) ((DF)->def_info.regs[(REG)])
-#define DF_REG_DEF_SET(DF, REG, VAL) ((DF)->def_info.regs[(REG)]=(VAL))
-#define DF_REG_DEF_COUNT(DF, REG) ((DF)->def_info.regs[(REG)]->n_refs)
-#define DF_REG_USE_GET(DF, REG) ((DF)->use_info.regs[(REG)])
-#define DF_REG_USE_SET(DF, REG, VAL) ((DF)->use_info.regs[(REG)]=(VAL))
-#define DF_REG_USE_COUNT(DF, REG) ((DF)->use_info.regs[(REG)]->n_refs)
+#define DF_REG_SIZE(DF) ((DF)->regs_inited)
+#define DF_REG_DEF_GET(DF,REG) ((DF)->def_regs[(REG)])
+#define DF_REG_DEF_CHAIN(DF,REG) ((DF)->def_regs[(REG)]->reg_chain)
+#define DF_REG_DEF_COUNT(DF,REG) ((DF)->def_regs[(REG)]->n_refs)
+#define DF_REG_USE_GET(DF,REG) ((DF)->use_regs[(REG)])
+#define DF_REG_USE_CHAIN(DF,REG) ((DF)->use_regs[(REG)]->reg_chain)
+#define DF_REG_USE_COUNT(DF,REG) ((DF)->use_regs[(REG)]->n_refs)
+#define DF_REG_EQ_USE_GET(DF,REG) ((DF)->eq_use_regs[(REG)])
+#define DF_REG_EQ_USE_CHAIN(DF,REG) ((DF)->eq_use_regs[(REG)]->reg_chain)
+#define DF_REG_EQ_USE_COUNT(DF,REG) ((DF)->eq_use_regs[(REG)]->n_refs)
 
 /* Macros to access the elements within the reg_info structure table.  */
 
@@ -504,15 +525,17 @@ struct df
 #define DF_INSN_GET(DF,INSN) ((DF)->insns[(INSN_UID(INSN))])
 #define DF_INSN_SET(DF,INSN,VAL) ((DF)->insns[(INSN_UID (INSN))]=(VAL))
 #define DF_INSN_CONTAINS_ASM(DF, INSN) (DF_INSN_GET(DF,INSN)->contains_asm)
-#define DF_INSN_LUID(DF, INSN) (DF_INSN_GET(DF,INSN)->luid)
-#define DF_INSN_DEFS(DF, INSN) (DF_INSN_GET(DF,INSN)->defs)
-#define DF_INSN_USES(DF, INSN) (DF_INSN_GET(DF,INSN)->uses)
+#define DF_INSN_LUID(DF,INSN) (DF_INSN_GET(DF,INSN)->luid)
+#define DF_INSN_DEFS(DF,INSN) (DF_INSN_GET(DF,INSN)->defs)
+#define DF_INSN_USES(DF,INSN) (DF_INSN_GET(DF,INSN)->uses)
+#define DF_INSN_EQ_USES(DF,INSN) (DF_INSN_GET(DF,INSN)->eq_uses)
 
 #define DF_INSN_UID_GET(DF,UID) ((DF)->insns[(UID)])
-#define DF_INSN_UID_LUID(DF, INSN) (DF_INSN_UID_GET(DF,INSN)->luid)
-#define DF_INSN_UID_DEFS(DF, INSN) (DF_INSN_UID_GET(DF,INSN)->defs)
-#define DF_INSN_UID_USES(DF, INSN) (DF_INSN_UID_GET(DF,INSN)->uses)
-#define DF_INSN_UID_MWS(DF, INSN) (DF_INSN_UID_GET(DF,INSN)->mw_hardregs)
+#define DF_INSN_UID_LUID(DF,INSN) (DF_INSN_UID_GET(DF,INSN)->luid)
+#define DF_INSN_UID_DEFS(DF,INSN) (DF_INSN_UID_GET(DF,INSN)->defs)
+#define DF_INSN_UID_USES(DF,INSN) (DF_INSN_UID_GET(DF,INSN)->uses)
+#define DF_INSN_UID_EQ_USES(DF,INSN) (DF_INSN_UID_GET(DF,INSN)->eq_uses)
+#define DF_INSN_UID_MWS(DF,INSN) (DF_INSN_UID_GET(DF,INSN)->mw_hardregs)
 
 /* This is a bitmap copy of regs_invalidated_by_call so that we can
    easily add it into bitmaps, etc. */ 
@@ -723,7 +746,8 @@ extern void df_refs_delete (struct dataf
 extern void df_insn_refs_record (struct dataflow *, basic_block, rtx);
 extern bool df_has_eh_preds (basic_block);
 extern void df_recompute_luids (struct df *, basic_block);
-extern void df_reorganize_refs (struct df_ref_info *);
+extern void df_maybe_reorganize_use_refs (struct df *);
+extern void df_maybe_reorganize_def_refs (struct df *);
 extern void df_hard_reg_init (void);
 extern bool df_read_modify_subreg_p (rtx);
 
Index: gcc/df-problems.c
===================================================================
--- gcc/df-problems.c	(revision 117351)
+++ gcc/df-problems.c	(working copy)
@@ -471,8 +471,9 @@ df_ru_bb_local_compute_process_def (stru
 	  && (!(DF_REF_FLAGS (def) & (DF_REF_PARTIAL | DF_REF_CONDITIONAL))))
 	{
 	  unsigned int regno = DF_REF_REGNO (def);
-	  unsigned int begin = DF_REG_USE_GET (df, regno)->begin;
-	  unsigned int n_uses = DF_REG_USE_GET (df, regno)->n_refs;
+	  unsigned int begin = DF_USES_BEGIN (df, regno);
+	  unsigned int n_uses = DF_USES_COUNT (df, regno);
+
 	  if (!bitmap_bit_p (seen_in_block, regno))
 	    {
 	      /* The first def for regno in the insn, causes the kill
@@ -557,6 +558,10 @@ df_ru_bb_local_compute (struct dataflow 
       df_ru_bb_local_compute_process_use (bb_info, 
 					  DF_INSN_UID_USES (df, uid), 0);
 
+      if (df->changeable_flags & DF_EQ_NOTES)
+	df_ru_bb_local_compute_process_use (bb_info, 
+					    DF_INSN_UID_EQ_USES (df, uid), 0);
+
       df_ru_bb_local_compute_process_def (dflow, bb_info, 
 					  DF_INSN_UID_DEFS (df, uid), 0);
 
@@ -591,8 +596,7 @@ df_ru_local_compute (struct dataflow *df
 
   df_set_seen ();
 
-  if (!df->use_info.refs_organized)
-    df_reorganize_refs (&df->use_info);
+  df_maybe_reorganize_use_refs (df);
 
   EXECUTE_IF_SET_IN_BITMAP (all_blocks, 0, bb_index, bi)
     {
@@ -602,13 +606,13 @@ df_ru_local_compute (struct dataflow *df
   /* Set up the knockout bit vectors to be applied across EH_EDGES.  */
   EXECUTE_IF_SET_IN_BITMAP (df_invalidated_by_call, 0, regno, bi)
     {
-      struct df_reg_info *reg_info = DF_REG_USE_GET (df, regno);
-      if (reg_info->n_refs > DF_SPARSE_THRESHOLD)
+      if (DF_USES_COUNT (df, regno) > DF_SPARSE_THRESHOLD)
 	bitmap_set_bit (sparse_invalidated, regno);
       else
 	{
 	  bitmap defs = df_ref_bitmap (problem_data->use_sites, regno, 
-				       reg_info->begin, reg_info->n_refs);
+				       DF_USES_BEGIN (df, regno), 
+				       DF_USES_COUNT (df, regno));
 	  bitmap_ior_into (dense_invalidated, defs);
 	}
     }
@@ -659,8 +663,8 @@ df_ru_confluence_n (struct dataflow *dfl
       EXECUTE_IF_SET_IN_BITMAP (sparse_invalidated, 0, regno, bi)
 	{
  	  bitmap_clear_range (tmp, 
- 			      DF_REG_USE_GET (df, regno)->begin, 
- 			      DF_REG_USE_GET (df, regno)->n_refs);
+ 			      DF_USES_BEGIN (df, regno), 
+ 			      DF_USES_COUNT (df, regno));
 	}
       bitmap_ior_into (op1, tmp);
       BITMAP_FREE (tmp);
@@ -695,8 +699,8 @@ df_ru_transfer_function (struct dataflow
       EXECUTE_IF_SET_IN_BITMAP (sparse_kill, 0, regno, bi)
 	{
 	  bitmap_clear_range (tmp, 
- 			      DF_REG_USE_GET (df, regno)->begin, 
- 			      DF_REG_USE_GET (df, regno)->n_refs);
+ 			      DF_USES_BEGIN (df, regno), 
+ 			      DF_USES_COUNT (df, regno));
 	}
       bitmap_and_compl_into (tmp, kill);
       bitmap_ior_into (tmp, gen);
@@ -780,10 +784,10 @@ df_ru_start_dump (struct dataflow *dflow
   dump_bitmap (file, problem_data->dense_invalidated_by_call);
   
   for (regno = 0; regno < m; regno++)
-    if (DF_REG_USE_GET (df, regno)->n_refs)
+    if (DF_USES_COUNT (df, regno))
       fprintf (file, "%d[%d,%d] ", regno, 
-	       DF_REG_USE_GET (df, regno)->begin, 
-	       DF_REG_USE_GET (df, regno)->n_refs);
+	       DF_USES_BEGIN (df, regno), 
+	       DF_USES_COUNT (df, regno));
   fprintf (file, "\n");
 }
 
@@ -1027,8 +1031,8 @@ df_rd_bb_local_compute_process_def (stru
       if (top_flag == (DF_REF_FLAGS (def) & DF_REF_AT_TOP))
 	{
 	  unsigned int regno = DF_REF_REGNO (def);
-	  unsigned int begin = DF_REG_DEF_GET (df, regno)->begin;
-	  unsigned int n_defs = DF_REG_DEF_GET (df, regno)->n_refs;
+	  unsigned int begin = DF_DEFS_BEGIN (df, regno);
+	  unsigned int n_defs = DF_DEFS_COUNT (df, regno);
 	  
 	  if ((!(df->changeable_flags & DF_NO_HARD_REGS))
 	      || (regno >= FIRST_PSEUDO_REGISTER))
@@ -1143,8 +1147,7 @@ df_rd_local_compute (struct dataflow *df
 
   df_set_seen ();
 
-  if (!df->def_info.refs_organized)
-    df_reorganize_refs (&df->def_info);
+  df_maybe_reorganize_def_refs (df);
 
   EXECUTE_IF_SET_IN_BITMAP (all_blocks, 0, bb_index, bi)
     {
@@ -1154,15 +1157,13 @@ df_rd_local_compute (struct dataflow *df
   /* Set up the knockout bit vectors to be applied across EH_EDGES.  */
   EXECUTE_IF_SET_IN_BITMAP (df_invalidated_by_call, 0, regno, bi)
     {
-      struct df_reg_info *reg_info = DF_REG_DEF_GET (df, regno);
-      if (reg_info->n_refs > DF_SPARSE_THRESHOLD)
-	{
-	  bitmap_set_bit (sparse_invalidated, regno);
-	}
+      if (DF_DEFS_COUNT (df, regno) > DF_SPARSE_THRESHOLD)
+	bitmap_set_bit (sparse_invalidated, regno);
       else
 	{
 	  bitmap defs = df_ref_bitmap (problem_data->def_sites, regno, 
-				       reg_info->begin, reg_info->n_refs);
+				       DF_DEFS_BEGIN (df, regno), 
+				       DF_DEFS_COUNT (df, regno));
 	  bitmap_ior_into (dense_invalidated, defs);
 	}
     }
@@ -1212,8 +1213,8 @@ df_rd_confluence_n (struct dataflow *dfl
       EXECUTE_IF_SET_IN_BITMAP (sparse_invalidated, 0, regno, bi)
  	{
  	  bitmap_clear_range (tmp, 
- 			      DF_REG_DEF_GET (df, regno)->begin, 
- 			      DF_REG_DEF_GET (df, regno)->n_refs);
+ 			      DF_DEFS_BEGIN (df, regno), 
+ 			      DF_DEFS_COUNT (df, regno));
 	}
       bitmap_ior_into (op1, tmp);
       BITMAP_FREE (tmp);
@@ -1248,8 +1249,8 @@ df_rd_transfer_function (struct dataflow
       EXECUTE_IF_SET_IN_BITMAP (sparse_kill, 0, regno, bi)
 	{
 	  bitmap_clear_range (tmp, 
-			      DF_REG_DEF_GET (df, regno)->begin, 
-			      DF_REG_DEF_GET (df, regno)->n_refs);
+			      DF_DEFS_BEGIN (df, regno), 
+			      DF_DEFS_COUNT (df, regno));
 	}
       bitmap_and_compl_into (tmp, kill);
       bitmap_ior_into (tmp, gen);
@@ -1333,10 +1334,10 @@ df_rd_start_dump (struct dataflow *dflow
   dump_bitmap (file, problem_data->dense_invalidated_by_call);
 
   for (regno = 0; regno < m; regno++)
-    if (DF_REG_DEF_GET (df, regno)->n_refs)
+    if (DF_DEFS_COUNT (df, regno))
       fprintf (file, "%d[%d,%d] ", regno, 
-	       DF_REG_DEF_GET (df, regno)->begin, 
-	       DF_REG_DEF_GET (df, regno)->n_refs);
+	       DF_DEFS_BEGIN (df, regno), 
+	       DF_DEFS_COUNT (df, regno));
   fprintf (file, "\n");
 
 }
@@ -3099,24 +3100,22 @@ df_chain_alloc (struct dataflow *dflow, 
 
   if (df->permanent_flags & DF_DU_CHAIN)
     {
-      if (!df->def_info.refs_organized)
-	df_reorganize_refs (&df->def_info);
-      
+      df_maybe_reorganize_def_refs (df);
       /* Clear out the pointers from the refs.  */
       for (i = 0; i < DF_DEFS_SIZE (df); i++)
 	{
-	  struct df_ref *ref = df->def_info.refs[i];
+	  struct df_ref *ref = DF_DEFS_GET(df, i);
 	  DF_REF_CHAIN (ref) = NULL;
 	}
     }
   
   if (df->permanent_flags & DF_UD_CHAIN)
     {
-      if (!df->use_info.refs_organized)
-	df_reorganize_refs (&df->use_info);
+      df_maybe_reorganize_use_refs (df);
+      /* Clear out the pointers from the refs.  */
       for (i = 0; i < DF_USES_SIZE (df); i++)
 	{
-	  struct df_ref *ref = df->use_info.refs[i];
+	  struct df_ref *ref = DF_USES_GET (df, i);
 	  DF_REF_CHAIN (ref) = NULL;
 	}
     }
@@ -3133,7 +3132,7 @@ df_chain_insn_reset (struct dataflow *df
   struct df_insn_info *insn_info = NULL;
   struct df_ref *ref;
 
-  if (uid < df->insns_size)
+  if (uid < DF_INSN_SIZE (df))
     insn_info = DF_INSN_UID_GET (df, uid);
 
   if (insn_info)
@@ -3156,6 +3155,12 @@ df_chain_insn_reset (struct dataflow *df
 	      ref->chain = NULL;
 	      ref = ref->next_ref;
 	    }
+	  ref = insn_info->eq_uses;
+	  while (ref) 
+	    {
+	      ref->chain = NULL;
+	      ref = ref->next_ref;
+	    }
 	}
     }
 }
@@ -3246,12 +3251,12 @@ df_chain_create_bb_process_use (struct d
       if ((!(df->changeable_flags & DF_NO_HARD_REGS))
 	  || (uregno >= FIRST_PSEUDO_REGISTER))
 	{
-	  int count = DF_REG_DEF_GET (df, uregno)->n_refs;
+	  int count = DF_DEFS_COUNT (df, uregno);
 	  if (count)
 	    {
 	      if (top_flag == (DF_REF_FLAGS (use) & DF_REF_AT_TOP))
 		{
-		  unsigned int first_index = DF_REG_DEF_GET (df, uregno)->begin;
+		  unsigned int first_index = DF_DEFS_BEGIN (df, uregno);
 		  unsigned int last_index = first_index + count - 1;
 		  
 		  EXECUTE_IF_SET_IN_BITMAP (local_rd, first_index, def_index, bi)
@@ -3314,8 +3319,8 @@ df_chain_create_bb (struct dataflow *dfl
 	unsigned int dregno = DF_REF_REGNO (def);
 	if (!(DF_REF_FLAGS (def) & (DF_REF_PARTIAL | DF_REF_CONDITIONAL)))
 	  bitmap_clear_range (cpy, 
-			      DF_REG_DEF_GET (df, dregno)->begin, 
-			      DF_REG_DEF_GET (df, dregno)->n_refs);
+			      DF_DEFS_BEGIN (df, dregno), 
+			      DF_DEFS_COUNT (df, dregno));
 	bitmap_set_bit (cpy, DF_REF_ID (def));
       }
   
@@ -3334,6 +3339,11 @@ df_chain_create_bb (struct dataflow *dfl
       df_chain_create_bb_process_use (dflow, cpy,
 				     DF_INSN_UID_USES (df, uid), 0);
 
+      if (df->changeable_flags & DF_EQ_NOTES)
+	df_chain_create_bb_process_use (dflow, cpy,
+					DF_INSN_UID_EQ_USES (df, uid), 0);
+
+
       /* Since we are going forwards, process the defs second.  This
          pass only changes the bits in cpy.  */
       for (def = DF_INSN_UID_DEFS (df, uid); def; def = def->next_ref)
@@ -3344,8 +3354,8 @@ df_chain_create_bb (struct dataflow *dfl
 	    {
 	      if (!(DF_REF_FLAGS (def) & (DF_REF_PARTIAL | DF_REF_CONDITIONAL)))
 		bitmap_clear_range (cpy, 
-				    DF_REG_DEF_GET (df, dregno)->begin, 
-				    DF_REG_DEF_GET (df, dregno)->n_refs);
+				    DF_DEFS_BEGIN (df, dregno), 
+				    DF_DEFS_COUNT (df, dregno));
 	      if (!(DF_REF_FLAGS (def) 
 		    & (DF_REF_MUST_CLOBBER | DF_REF_MAY_CLOBBER)))
 		bitmap_set_bit (cpy, DF_REF_ID (def));
@@ -3401,7 +3411,7 @@ df_chain_start_dump (struct dataflow *df
   if (df->permanent_flags & DF_DU_CHAIN)
     {
       fprintf (file, "Def-use chains:\n");
-      for (j = 0; j < df->def_info.bitmap_size; j++)
+      for (j = 0; j < DF_DEFS_SIZE (df); j++)
 	{
 	  struct df_ref *def = DF_DEFS_GET (df, j);
 	  if (def)
@@ -3424,7 +3434,7 @@ df_chain_start_dump (struct dataflow *df
   if (df->permanent_flags & DF_UD_CHAIN)
     {
       fprintf (file, "Use-def chains:\n");
-      for (j = 0; j < df->use_info.bitmap_size; j++)
+      for (j = 0; j < DF_USES_SIZE (df); j++)
 	{
 	  struct df_ref *use = DF_USES_GET (df, j);
 	  if (use)
Index: gcc/df-scan.c
===================================================================
--- gcc/df-scan.c	(revision 117351)
+++ gcc/df-scan.c	(working copy)
@@ -99,7 +99,7 @@ static struct df_ref *df_ref_create_stru
 					       enum df_ref_flags);
 static void df_record_entry_block_defs (struct dataflow *);
 static void df_record_exit_block_uses (struct dataflow *);
-static void df_grow_reg_info (struct dataflow *, struct df_ref_info *);
+static void df_grow_reg_info (struct dataflow *);
 static void df_grow_ref_info (struct df_ref_info *, unsigned int);
 static void df_grow_insn_info (struct df *);
 
@@ -132,17 +132,26 @@ df_scan_free_internal (struct dataflow *
   struct df_scan_problem_data *problem_data
     = (struct df_scan_problem_data *) dflow->problem_data;
 
-  free (df->def_info.regs);
   free (df->def_info.refs);
+  free (df->def_info.begin);
   memset (&df->def_info, 0, (sizeof (struct df_ref_info)));
 
-  free (df->use_info.regs);
   free (df->use_info.refs);
+  free (df->use_info.begin);
   memset (&df->use_info, 0, (sizeof (struct df_ref_info)));
 
+  free (df->def_regs);
+  df->def_regs = NULL;
+  free (df->use_regs);
+  df->use_regs = NULL;
+  free (df->eq_use_regs);
+  df->eq_use_regs = NULL;
+  df->regs_size = 0;
+  DF_REG_SIZE(df) = 0;
+
   free (df->insns);
   df->insns = NULL;
-  df->insns_size = 0;
+  DF_INSN_SIZE (df) = 0;
 
   free (dflow->block_info);
   dflow->block_info = NULL;
@@ -247,10 +256,8 @@ df_scan_alloc (struct dataflow *dflow, b
 			 sizeof (struct df_link), block_size);
 
   insn_num += insn_num / 4; 
-  df_grow_reg_info (dflow, &df->def_info);
+  df_grow_reg_info (dflow);
   df_grow_ref_info (&df->def_info, insn_num);
-
-  df_grow_reg_info (dflow, &df->use_info);
   df_grow_ref_info (&df->use_info, insn_num *2);
 
   df_grow_insn_info (df);
@@ -365,30 +372,49 @@ df_scan_add_problem (struct df *df)
    filled with reg_info structures.  */
 
 static void 
-df_grow_reg_info (struct dataflow *dflow, struct df_ref_info *ref_info)
+df_grow_reg_info (struct dataflow *dflow)
 {
+  struct df *df = dflow->df;
   unsigned int max_reg = max_reg_num ();
   unsigned int new_size = max_reg;
   struct df_scan_problem_data *problem_data
     = (struct df_scan_problem_data *) dflow->problem_data;
   unsigned int i;
 
-  if (ref_info->regs_size < new_size)
+  if (df->regs_size < new_size)
     {
       new_size += new_size / 4;
-      ref_info->regs = xrealloc (ref_info->regs, 
-				 new_size *sizeof (struct df_reg_info*));
-      ref_info->regs_size = new_size;
+      df->def_regs = xrealloc (df->def_regs, 
+			       new_size *sizeof (struct df_reg_info*));
+      df->use_regs = xrealloc (df->use_regs, 
+			       new_size *sizeof (struct df_reg_info*));
+      df->eq_use_regs = xrealloc (df->eq_use_regs, 
+				  new_size *sizeof (struct df_reg_info*));
+      df->def_info.begin = xrealloc (df->def_info.begin, 
+				      new_size *sizeof (int));
+      df->use_info.begin = xrealloc (df->use_info.begin, 
+				      new_size *sizeof (int));
+      df->regs_size = new_size;
     }
 
-  for (i = ref_info->regs_inited; i < max_reg; i++)
+  for (i = df->regs_inited; i < max_reg; i++)
     {
-      struct df_reg_info *reg_info = pool_alloc (problem_data->reg_pool);
+      struct df_reg_info *reg_info;
+
+      reg_info = pool_alloc (problem_data->reg_pool);
       memset (reg_info, 0, sizeof (struct df_reg_info));
-      ref_info->regs[i] = reg_info;
+      df->def_regs[i] = reg_info;
+      reg_info = pool_alloc (problem_data->reg_pool);
+      memset (reg_info, 0, sizeof (struct df_reg_info));
+      df->use_regs[i] = reg_info;
+      reg_info = pool_alloc (problem_data->reg_pool);
+      memset (reg_info, 0, sizeof (struct df_reg_info));
+      df->eq_use_regs[i] = reg_info;
+      df->def_info.begin[i] = 0;
+      df->use_info.begin[i] = 0;
     }
   
-  ref_info->regs_inited = max_reg;
+  df->regs_inited = max_reg;
 }
 
 
@@ -416,14 +442,14 @@ static void 
 df_grow_insn_info (struct df *df)
 {
   unsigned int new_size = get_max_uid () + 1;
-  if (df->insns_size < new_size)
+  if (DF_INSN_SIZE (df) < new_size)
     {
       new_size += new_size / 4;
       df->insns = xrealloc (df->insns, 
 			    new_size *sizeof (struct df_insn_info *));
       memset (df->insns + df->insns_size, 0,
-	      (new_size - df->insns_size) *sizeof (struct df_insn_info *));
-      df->insns_size = new_size;
+	      (new_size - DF_INSN_SIZE (df)) *sizeof (struct df_insn_info *));
+      DF_INSN_SIZE (df) = new_size;
     }
 }
 
@@ -445,8 +471,10 @@ df_rescan_blocks (struct df *df, bitmap 
   struct dataflow *dflow = df->problems_by_index[DF_SCAN];
   basic_block bb;
 
-  df->def_info.refs_organized = false;
-  df->use_info.refs_organized = false;
+  df->def_info.refs_organized_with_eq_uses = false;
+  df->def_info.refs_organized_alone = false;
+  df->use_info.refs_organized_with_eq_uses = false;
+  df->use_info.refs_organized_alone = false;
 
   if (blocks)
     {
@@ -459,10 +487,9 @@ df_rescan_blocks (struct df *df, bitmap 
       unsigned int insn_num = get_max_uid () + 1;
       insn_num += insn_num / 4;
 
-      df_grow_reg_info (dflow, &df->def_info);
+      df_grow_reg_info (dflow);
+
       df_grow_ref_info (&df->def_info, insn_num);
-      
-      df_grow_reg_info (dflow, &df->use_info);
       df_grow_ref_info (&df->use_info, insn_num *2);
       
       df_grow_insn_info (df);
@@ -557,8 +584,7 @@ df_ref_create (struct df *df, rtx reg, r
   struct dataflow *dflow = df->problems_by_index[DF_SCAN];
   struct df_scan_bb_info *bb_info;
   
-  df_grow_reg_info (dflow, &df->use_info);
-  df_grow_reg_info (dflow, &df->def_info);
+  df_grow_reg_info (dflow);
   df_grow_bb_info (dflow);
   
   /* Make sure there is the bb_info for this block.  */
@@ -680,14 +706,15 @@ df_reg_chain_unlink (struct dataflow *df
   if (DF_REF_TYPE (ref) == DF_REF_REG_DEF)
     {
       reg_info = DF_REG_DEF_GET (df, DF_REF_REGNO (ref));
-      df->def_info.bitmap_size--;
       if (df->def_info.refs && (id < df->def_info.refs_size))
 	DF_DEFS_SET (df, id, NULL);
     }
   else 
     {
-      reg_info = DF_REG_USE_GET (df, DF_REF_REGNO (ref));
-      df->use_info.bitmap_size--;
+      if (DF_REF_FLAGS (ref) & DF_REF_IN_NOTE)
+	reg_info = DF_REG_EQ_USE_GET (df, DF_REF_REGNO (ref));
+      else
+	reg_info = DF_REG_USE_GET (df, DF_REF_REGNO (ref));
       if (df->use_info.refs && (id < df->use_info.refs_size))
 	DF_USES_SET (df, id, NULL);
     }
@@ -749,6 +776,9 @@ df_ref_remove (struct df *df, struct df_
 	  bb_info->artificial_uses 
 	    = df_ref_unlink (bb_info->artificial_uses, ref);
 	}
+      else if (DF_REF_FLAGS (ref) & DF_REF_IN_NOTE)
+	DF_INSN_UID_EQ_USES (df, DF_REF_INSN_UID (ref))
+	  = df_ref_unlink (DF_INSN_UID_EQ_USES (df, DF_REF_INSN_UID (ref)), ref);
       else
 	DF_INSN_UID_USES (df, DF_REF_INSN_UID (ref))
 	  = df_ref_unlink (DF_INSN_UID_USES (df, DF_REF_INSN_UID (ref)), ref);
@@ -792,7 +822,7 @@ df_insn_refs_delete (struct dataflow *df
   struct df_scan_problem_data *problem_data
     = (struct df_scan_problem_data *) dflow->problem_data;
 
-  if (uid < df->insns_size)
+  if (uid < DF_INSN_SIZE (df))
     insn_info = DF_INSN_UID_GET (df, uid);
 
   if (insn_info)
@@ -822,6 +852,10 @@ df_insn_refs_delete (struct dataflow *df
       while (ref) 
 	ref = df_reg_chain_unlink (dflow, ref);
 
+      ref = insn_info->eq_uses;
+      while (ref) 
+	ref = df_reg_chain_unlink (dflow, ref);
+
       pool_free (problem_data->insn_pool, insn_info);
       DF_INSN_SET (df, insn, NULL);
     }
@@ -882,17 +916,17 @@ df_refs_delete (struct dataflow *dflow, 
 /* Take build ref table for either the uses or defs from the reg-use
    or reg-def chains.  */ 
 
-void 
-df_reorganize_refs (struct df_ref_info *ref_info)
+static void 
+df_reorganize_refs (struct df *df,
+		    struct df_ref_info *ref_info,
+		    struct df_reg_info **reg1_info,
+		    struct df_reg_info **reg2_info)
 {
-  unsigned int m = ref_info->regs_inited;
+  unsigned int m = df->regs_inited;
   unsigned int regno;
   unsigned int offset = 0;
   unsigned int size = 0;
 
-  if (ref_info->refs_organized)
-    return;
-
   if (ref_info->refs_size < ref_info->bitmap_size)
     {  
       int new_size = ref_info->bitmap_size + ref_info->bitmap_size / 4;
@@ -901,21 +935,30 @@ df_reorganize_refs (struct df_ref_info *
 
   for (regno = 0; regno < m; regno++)
     {
-      struct df_reg_info *reg_info = ref_info->regs[regno];
-      int count = 0;
-      if (reg_info)
+      struct df_reg_info *reg_info = reg1_info[regno];
+      struct df_ref *ref = reg_info->reg_chain;
+      ref_info->begin[regno] = offset;
+      while (ref) 
+	{
+	  ref_info->refs[offset] = ref;
+	  DF_REF_ID (ref) = offset++;
+	  ref = DF_REF_NEXT_REG (ref);
+	  gcc_assert (size < ref_info->refs_size);
+	  size++;
+	}
+      if (reg2_info)
 	{
-	  struct df_ref *ref = reg_info->reg_chain;
-	  reg_info->begin = offset;
+	  reg_info = reg2_info[regno];
+	  gcc_assert (reg_info);
+	  ref = reg_info->reg_chain;
 	  while (ref) 
 	    {
 	      ref_info->refs[offset] = ref;
 	      DF_REF_ID (ref) = offset++;
 	      ref = DF_REF_NEXT_REG (ref);
-	      count++;
+	      gcc_assert (size < ref_info->refs_size);
 	      size++;
 	    }
-	  reg_info->n_refs = count;
 	}
     }
 
@@ -923,10 +966,45 @@ df_reorganize_refs (struct df_ref_info *
      reset it now that we have squished out all of the empty
      slots.  */
   ref_info->bitmap_size = size;
-  ref_info->refs_organized = true;
+  if (reg2_info)
+    {
+      ref_info->refs_organized_with_eq_uses = true;
+      ref_info->refs_organized_alone = false;
+    }
+  else
+    {
+      ref_info->refs_organized_with_eq_uses = false;
+      ref_info->refs_organized_alone = true;
+    }
   ref_info->add_refs_inline = true;
 }
 
+
+/* If the use refs in DF are not organized, reorganize them.  */
+
+void 
+df_maybe_reorganize_use_refs (struct df *df)
+{
+  if (df->changeable_flags & DF_EQ_NOTES)
+    {
+      if (!df->use_info.refs_organized_with_eq_uses)
+	df_reorganize_refs (df, &df->use_info, 
+			    df->use_regs, df->eq_use_regs);
+    }
+  else if (!df->use_info.refs_organized_alone)
+    df_reorganize_refs (df, &df->use_info, df->use_regs, NULL);
+}
+
+
+/* If the def refs in DF are not organized, reorganize them.  */
+
+void 
+df_maybe_reorganize_def_refs (struct df *df)
+{
+  if (!df->def_info.refs_organized_alone)
+    df_reorganize_refs (df, &df->def_info, df->def_regs, NULL);
+}
+
 
 /*----------------------------------------------------------------------------
    Hard core instruction scanning code.  No external interfaces here,
@@ -969,21 +1047,22 @@ df_ref_create_structure (struct dataflow
 	
 	/* Add the ref to the reg_def chain.  */
 	df_reg_chain_create (reg_info, this_ref);
-	DF_REF_ID (this_ref) = df->def_info.bitmap_size;
+
+	DF_REF_ID (this_ref) = DF_DEFS_SIZE (df);
 	if (df->def_info.add_refs_inline)
 	  {
 	    if (DF_DEFS_SIZE (df) >= df->def_info.refs_size)
 	      {
-		int new_size = df->def_info.bitmap_size 
-		  + df->def_info.bitmap_size / 4;
+		int new_size = DF_DEFS_SIZE (df) 
+		  + DF_DEFS_SIZE (df) / 4;
 		df_grow_ref_info (&df->def_info, new_size);
 	      }
 	    /* Add the ref to the big array of defs.  */
-	    DF_DEFS_SET (df, df->def_info.bitmap_size, this_ref);
-	    df->def_info.refs_organized = false;
+	    DF_DEFS_SET (df, DF_DEFS_SIZE (df), this_ref);
+	    df->def_info.refs_organized_alone = false;
 	  }
 	
-	df->def_info.bitmap_size++;
+	DF_DEFS_SIZE (df)++;
 	
 	if (DF_REF_FLAGS (this_ref) & DF_REF_ARTIFICIAL)
 	  {
@@ -994,8 +1073,8 @@ df_ref_create_structure (struct dataflow
 	  }
 	else
 	  {
-	    this_ref->next_ref = DF_INSN_GET (df, insn)->defs;
-	    DF_INSN_GET (df, insn)->defs = this_ref;
+	    this_ref->next_ref = DF_INSN_DEFS (df, insn);
+	    DF_INSN_DEFS (df, insn) = this_ref;
 	  }
       }
       break;
@@ -1004,26 +1083,32 @@ df_ref_create_structure (struct dataflow
     case DF_REF_REG_MEM_STORE:
     case DF_REF_REG_USE:
       {
-	struct df_reg_info *reg_info = DF_REG_USE_GET (df, regno);
+	struct df_reg_info *reg_info;
+	if (DF_REF_FLAGS (this_ref) & DF_REF_IN_NOTE)
+	  reg_info = DF_REG_EQ_USE_GET (df, regno);
+	else
+	  reg_info = DF_REG_USE_GET (df, regno);
+
 	reg_info->n_refs++;
-	
 	/* Add the ref to the reg_use chain.  */
 	df_reg_chain_create (reg_info, this_ref);
-	DF_REF_ID (this_ref) = df->use_info.bitmap_size;
+
+	DF_REF_ID (this_ref) = DF_USES_SIZE (df);
 	if (df->use_info.add_refs_inline)
 	  {
 	    if (DF_USES_SIZE (df) >= df->use_info.refs_size)
 	      {
-		int new_size = df->use_info.bitmap_size 
-		  + df->use_info.bitmap_size / 4;
+		int new_size = DF_USES_SIZE (df) 
+		  + DF_USES_SIZE (df) / 4;
 		df_grow_ref_info (&df->use_info, new_size);
 	      }
 	    /* Add the ref to the big array of defs.  */
-	    DF_USES_SET (df, df->use_info.bitmap_size, this_ref);
-	    df->use_info.refs_organized = false;
+	    DF_USES_SET (df, DF_USES_SIZE (df), this_ref);
+	    df->use_info.refs_organized_with_eq_uses = false;
+	    df->use_info.refs_organized_alone = false;
 	  }
 	
-	df->use_info.bitmap_size++;
+	DF_USES_SIZE (df)++;
 	if (DF_REF_FLAGS (this_ref) & DF_REF_ARTIFICIAL)
 	  {
 	    struct df_scan_bb_info *bb_info 
@@ -1033,8 +1118,16 @@ df_ref_create_structure (struct dataflow
 	  }
 	else
 	  {
-	    this_ref->next_ref = DF_INSN_GET (df, insn)->uses;
-	    DF_INSN_GET (df, insn)->uses = this_ref;
+	    if (DF_REF_FLAGS (this_ref) & DF_REF_IN_NOTE)
+	      {
+		this_ref->next_ref = DF_INSN_EQ_USES (df, insn);
+		DF_INSN_EQ_USES (df, insn) = this_ref;
+	      }
+	    else
+	      {
+		this_ref->next_ref = DF_INSN_USES (df, insn);
+		DF_INSN_USES (df, insn) = this_ref;
+	      }
 	  }
       }
       break;
@@ -1534,20 +1627,19 @@ df_insn_refs_record (struct dataflow *df
       /* Record register defs.  */
       df_defs_record (dflow, PATTERN (insn), bb, insn, 0);
 
-      if (df->permanent_flags & DF_EQUIV_NOTES)
-	for (note = REG_NOTES (insn); note;
-	     note = XEXP (note, 1))
-	  {
-	    switch (REG_NOTE_KIND (note))
-	      {
-	      case REG_EQUIV:
-	      case REG_EQUAL:
-		df_uses_record (dflow, &XEXP (note, 0), DF_REF_REG_USE,
-				bb, insn, DF_REF_IN_NOTE);
-	      default:
-		break;
-	      }
-	  }
+      for (note = REG_NOTES (insn); note;
+	   note = XEXP (note, 1))
+	{
+	  switch (REG_NOTE_KIND (note))
+	    {
+	    case REG_EQUIV:
+	    case REG_EQUAL:
+	      df_uses_record (dflow, &XEXP (note, 0), DF_REF_REG_USE,
+			      bb, insn, DF_REF_IN_NOTE);
+	    default:
+	      break;
+	    }
+	}
 
       if (CALL_P (insn))
 	{
Index: gcc/df-core.c
===================================================================
--- gcc/df-core.c	(revision 117351)
+++ gcc/df-core.c	(working copy)
@@ -66,7 +66,6 @@ and frees up any allocated memory.
 There are three flags that can be passed to df_init, each of these
 flags controls the scanning of the rtl:
 
-DF_EQUIV_NOTES marks the uses present in EQUIV/EQUAL notes.
 DF_SUBREGS return subregs rather than the inner reg.
 
 
@@ -225,14 +224,21 @@ There are 4 ways to obtain access to ref
      Artificial defs occur at the end of the entry block.  These arise
      from registers that are live at entry to the function.
 
-2) All of the uses and defs associated with each pseudo or hard
-   register are linked in a bidirectional chain.  These are called
-   reg-use or reg_def chains.
-
-   The first use (or def) for a register can be obtained using the
-   DF_REG_USE_GET macro (or DF_REG_DEF_GET macro).  Subsequent uses
-   for the same regno can be obtained by following the next_reg field
-   of the ref.
+2) There are three types of refs: defs, uses and eq_uses.  (Eq_uses are 
+   uses that appear inside a REG_EQUAL or REG_EQUIV note.)
+
+   All of the eq_uses, uses and defs associated with each pseudo or
+   hard register may be linked in a bidirectional chain.  These are
+   called reg-use or reg_def chains.  If the changeable flag
+   DF_EQ_NOTES is set when the chains are built, the eq_uses will be
+   treated like uses.  If it is not set they are ignored.  
+
+   The first use, eq_use or def for a register can be obtained using
+   the DF_REG_USE_CHAIN, DF_REG_EQ_USE_CHAIN or DF_REG_DEF_CHAIN
+   macros.  Subsequent uses for the same regno can be obtained by
+   following the next_reg field of the ref.  The number of elements in
+   each of the chains can be found by using the DF_REG_USE_COUNT,
+   DF_REG_EQ_USE_COUNT or DF_REG_DEF_COUNT macros.
 
    In previous versions of this code, these chains were ordered.  It
    has not been practical to continue this practice.
@@ -948,9 +954,14 @@ df_bb_regno_last_use_find (struct df *df
 	continue;
 
       uid = INSN_UID (insn);
-      for (use = DF_INSN_UID_GET (df, uid)->uses; use; use = use->next_ref)
+      for (use = DF_INSN_UID_USES (df, uid); use; use = use->next_ref)
 	if (DF_REF_REGNO (use) == regno)
 	  return use;
+
+      if (df->changeable_flags & DF_EQ_NOTES)
+	for (use = DF_INSN_UID_EQ_USES (df, uid); use; use = use->next_ref)
+	  if (DF_REF_REGNO (use) == regno)
+	    return use;
     }
   return NULL;
 }
@@ -971,7 +982,7 @@ df_bb_regno_first_def_find (struct df *d
 	continue;
 
       uid = INSN_UID (insn);
-      for (def = DF_INSN_UID_GET (df, uid)->defs; def; def = def->next_ref)
+      for (def = DF_INSN_UID_DEFS (df, uid); def; def = def->next_ref)
 	if (DF_REF_REGNO (def) == regno)
 	  return def;
     }
@@ -994,7 +1005,7 @@ df_bb_regno_last_def_find (struct df *df
 	continue;
 
       uid = INSN_UID (insn);
-      for (def = DF_INSN_UID_GET (df, uid)->defs; def; def = def->next_ref)
+      for (def = DF_INSN_UID_DEFS (df, uid); def; def = def->next_ref)
 	if (DF_REF_REGNO (def) == regno)
 	  return def;
     }
@@ -1011,7 +1022,7 @@ df_insn_regno_def_p (struct df *df, rtx 
   struct df_ref *def;
 
   uid = INSN_UID (insn);
-  for (def = DF_INSN_UID_GET (df, uid)->defs; def; def = def->next_ref)
+  for (def = DF_INSN_UID_DEFS (df, uid); def; def = def->next_ref)
     if (DF_REF_REGNO (def) == regno)
       return true;
   
@@ -1033,7 +1044,7 @@ df_find_def (struct df *df, rtx insn, rt
   gcc_assert (REG_P (reg));
 
   uid = INSN_UID (insn);
-  for (def = DF_INSN_UID_GET (df, uid)->defs; def; def = def->next_ref)
+  for (def = DF_INSN_UID_DEFS (df, uid); def; def = def->next_ref)
     if (rtx_equal_p (DF_REF_REAL_REG (def), reg))
       return def;
 
@@ -1064,9 +1075,13 @@ df_find_use (struct df *df, rtx insn, rt
   gcc_assert (REG_P (reg));
 
   uid = INSN_UID (insn);
-  for (use = DF_INSN_UID_GET (df, uid)->uses; use; use = use->next_ref)
+  for (use = DF_INSN_UID_USES (df, uid); use; use = use->next_ref)
     if (rtx_equal_p (DF_REF_REAL_REG (use), reg))
       return use; 
+  if (df->changeable_flags & DF_EQ_NOTES)
+    for (use = DF_INSN_UID_EQ_USES (df, uid); use; use = use->next_ref)
+      if (rtx_equal_p (DF_REF_REAL_REG (use), reg))
+	return use; 
 
   return NULL;
 }
@@ -1116,7 +1131,7 @@ df_dump_start (struct df *df, FILE *file
   fprintf (file, "\n\n%s\n", current_function_name ());
   fprintf (file, "\nDataflow summary:\n");
   fprintf (file, "def_info->bitmap_size = %d, use_info->bitmap_size = %d\n",
-	   df->def_info.bitmap_size, df->use_info.bitmap_size);
+	   DF_DEFS_SIZE (df), DF_USES_SIZE (df));
 
   for (i = 0; i < df->num_problems_defined; i++)
     {
@@ -1244,6 +1259,8 @@ df_insn_uid_debug (struct df *df, unsign
     bbi = DF_REF_BBNO (DF_INSN_UID_DEFS (df, uid));
   else if (DF_INSN_UID_USES(df, uid))
     bbi = DF_REF_BBNO (DF_INSN_UID_USES (df, uid));
+  else if (DF_INSN_UID_EQ_USES(df, uid))
+    bbi = DF_REF_BBNO (DF_INSN_UID_EQ_USES (df, uid));
   else
     bbi = -1;
 
@@ -1262,6 +1279,12 @@ df_insn_uid_debug (struct df *df, unsign
       df_refs_chain_dump (DF_INSN_UID_USES (df, uid), follow_chain, file);
     }
 
+  if (DF_INSN_UID_EQ_USES (df, uid))
+    {
+      fprintf (file, " uses ");
+      df_refs_chain_dump (DF_INSN_UID_EQ_USES (df, uid), follow_chain, file);
+    }
+
   if (DF_INSN_UID_MWS (df, uid))
     {
       fprintf (file, " mws ");
@@ -1288,6 +1311,8 @@ df_insn_debug_regno (struct df *df, rtx 
     bbi = DF_REF_BBNO (DF_INSN_UID_DEFS (df, uid));
   else if (DF_INSN_UID_USES(df, uid))
     bbi = DF_REF_BBNO (DF_INSN_UID_USES (df, uid));
+  else if (DF_INSN_UID_EQ_USES(df, uid))
+    bbi = DF_REF_BBNO (DF_INSN_UID_EQ_USES (df, uid));
   else
     bbi = -1;
 
@@ -1297,6 +1322,9 @@ df_insn_debug_regno (struct df *df, rtx 
     
   fprintf (file, " uses ");
   df_regs_chain_dump (df, DF_INSN_UID_USES (df, uid), file);
+
+  fprintf (file, " eq_uses ");
+  df_regs_chain_dump (df, DF_INSN_UID_EQ_USES (df, uid), file);
   fprintf (file, "\n");
 }
 
@@ -1304,9 +1332,11 @@ void
 df_regno_debug (struct df *df, unsigned int regno, FILE *file)
 {
   fprintf (file, "reg %d defs ", regno);
-  df_regs_chain_dump (df, DF_REG_DEF_GET (df, regno)->reg_chain, file);
+  df_regs_chain_dump (df, DF_REG_DEF_CHAIN (df, regno), file);
   fprintf (file, " uses ");
-  df_regs_chain_dump (df, DF_REG_USE_GET (df, regno)->reg_chain, file);
+  df_regs_chain_dump (df, DF_REG_USE_CHAIN (df, regno), file);
+  fprintf (file, " eq_uses ");
+  df_regs_chain_dump (df, DF_REG_EQ_USE_CHAIN (df, regno), file);
   fprintf (file, "\n");
 }
 
