This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.



Re: Keep static VTA locs in cselib tables only


On Nov 25, 2011, Jakub Jelinek <jakub@redhat.com> wrote:

> The numbers I got with your patch (RTL checking) are below, seems
> the cumulative numbers other than 100% are all bigger with patched stage2,
> which means unfortunately debug info quality degradation.

Not really.  I found some actual degradation after finally getting back
to it.  In some cases, I failed to reset NO_LOC_P, and this caused
expressions that depended on it to resort to alternate values or end up
unset.  In other cases, we created different cselib values for debug
temps and implicit ptrs, and merging them at dataflow confluences no
longer found a common value because the common value was in cselib's
static equivalence table.  I've fixed (and added an assertion to catch)
left-over NO_LOC_Ps, and arranged for values created for debug exprs,
implicit ptrs, entry values and parameter refs to be preserved across
basic blocks as constants within cselib.

With that, the debug info we get is a strict improvement in terms of
coverage, even though a bunch of .o files still display a decrease in
100% coverage.  In the handful of files I examined, the patched
compiler was emitting a loc list without full coverage, while the
original compiler was emitting a single loc expr that implicitly got
full coverage even though AFAICT it should really cover a narrower range.
Full coverage was a false positive, and less-than-100% coverage in these
cases is not a degradation, but rather an improvement.

Now, the reason we emit additional expressions is that the new
algorithm is more prone to emitting different (and better) expressions
when entering a basic block, because we don't try as hard as before to
stick with the same location expression.  Instead we recompute all the
potentially-changed expressions, which tends to select better
expressions when they are available.

> Otherwise the patch looks good to me.

Thanks.  After the updated comparison data below, you can find the patch
I'm checking in, followed by the small interdiff from the previous
patch.


Happy GNU Year! :-)


The results below can be reproduced with r182723.

stage1 sources are patched, stage2 and stage3 aren't, so
stage2 is built with a patched compiler, stage3 isn't.

$ wc -l obj-{x86_64,i686}-linux-gnu/stage[23]-gcc/cc1plus.ev
  100784 obj-x86_64-linux-gnu/stage2-gcc/cc1plus.ev
  102406 obj-x86_64-linux-gnu/stage3-gcc/cc1plus.ev
   33275 obj-i686-linux-gnu/stage2-gcc/cc1plus.ev
   33944 obj-i686-linux-gnu/stage3-gcc/cc1plus.ev

$ wc -l obj-{x86_64,i686}-linux-gnu/stage[23]-gcc/cc1plus.csv
   523647 obj-x86_64-linux-gnu/stage2-gcc/cc1plus.csv
   523536 obj-x86_64-linux-gnu/stage3-gcc/cc1plus.csv
   521276 obj-i686-linux-gnu/stage2-gcc/cc1plus.csv
   521907 obj-i686-linux-gnu/stage3-gcc/cc1plus.csv

$ diff -yW80 obj-x86_64-linux-gnu/stage[23]-gcc/cc1plus.ls
cov%    samples cumul                   cov%    samples cumul
0.0     150949/30%      150949/30%    | 0.0     150980/30%      150980/30%
0..5    6234/1% 157183/31%            | 0..5    6254/1% 157234/31%
6..10   5630/1% 162813/32%            | 6..10   5641/1% 162875/32%
11..15  4675/0% 167488/33%            | 11..15  4703/0% 167578/33%
16..20  5041/1% 172529/34%            | 16..20  5044/1% 172622/34%
21..25  5435/1% 177964/35%            | 21..25  5466/1% 178088/35%
26..30  4249/0% 182213/36%            | 26..30  4269/0% 182357/36%
31..35  4666/0% 186879/37%            | 31..35  4674/0% 187031/37%
36..40  6939/1% 193818/38%            | 36..40  6982/1% 194013/38%
41..45  7824/1% 201642/40%            | 41..45  7859/1% 201872/40%
46..50  8538/1% 210180/42%            | 46..50  8536/1% 210408/42%
51..55  7585/1% 217765/43%            | 51..55  7611/1% 218019/43%
56..60  6088/1% 223853/44%            | 56..60  6108/1% 224127/44%
61..65  5545/1% 229398/45%            | 61..65  5574/1% 229701/46%
66..70  7151/1% 236549/47%            | 66..70  7195/1% 236896/47%
71..75  8068/1% 244617/49%            | 71..75  8104/1% 245000/49%
76..80  18852/3%        263469/52%    | 76..80  18879/3%        263879/52%
81..85  11958/2%        275427/55%    | 81..85  11954/2%        275833/55%
86..90  15201/3%        290628/58%    | 86..90  15145/3%        290978/58%
91..95  16814/3%        307442/61%    | 91..95  16727/3%        307705/61%
96..99  17121/3%        324563/65%    | 96..99  16991/3%        324696/65%
100     174515/34%      499078/100%   | 100     173994/34%      498690/100%

$ diff -yW80 obj-i686-linux-gnu/stage[23]-gcc/cc1plus.ls
cov%    samples cumul                   cov%    samples cumul
0.0     145453/27%      145453/27%    | 0.0     145480/27%      145480/27%
0..5    6594/1% 152047/29%            | 0..5    6603/1% 152083/29%
6..10   5664/1% 157711/30%            | 6..10   5671/1% 157754/30%
11..15  4982/0% 162693/31%            | 11..15  4997/0% 162751/31%
16..20  6155/1% 168848/32%            | 16..20  6169/1% 168920/32%
21..25  5038/0% 173886/33%            | 21..25  5057/0% 173977/33%
26..30  4925/0% 178811/34%            | 26..30  4918/0% 178895/34%
31..35  4359/0% 183170/35%            | 31..35  4372/0% 183267/35%
36..40  6977/1% 190147/36%            | 36..40  6972/1% 190239/36%
41..45  8138/1% 198285/38%            | 41..45  8148/1% 198387/38%
46..50  8538/1% 206823/39%            | 46..50  8538/1% 206925/39%
51..55  5607/1% 212430/40%            | 51..55  5610/1% 212535/40%
56..60  6629/1% 219059/41%            | 56..60  6629/1% 219164/42%
61..65  5232/1% 224291/42%            | 61..65  5242/1% 224406/43%
66..70  8827/1% 233118/44%            | 66..70  8824/1% 233230/44%
71..75  6240/1% 239358/45%            | 71..75  6245/1% 239475/45%
76..80  8573/1% 247931/47%            | 76..80  8577/1% 248052/47%
81..85  8235/1% 256166/49%            | 81..85  8236/1% 256288/49%
86..90  13385/2%        269551/51%    | 86..90  13365/2%        269653/51%
91..95  21427/4%        290978/55%    | 91..95  21397/4%        291050/55%
96..99  20791/3%        311769/59%    | 96..99  20739/3%        311789/59%
100     209906/40%      521675/100%   | 100     209781/40%      521570/100%

for  gcc/ChangeLog
from  Alexandre Oliva  <aoliva@redhat.com>

	* cselib.h (cselib_add_permanent_equiv): Declare.
	(canonical_cselib_val): New.
	* cselib.c (new_elt_loc_list): Rework to support value
	equivalences.  Adjust all callers.
	(preserve_only_constants): Retain value equivalences.
	(references_value_p): Retain preserved values.
	(rtx_equal_for_cselib_1): Handle value equivalences.
	(cselib_invalidate_regno): Use canonical value.
	(cselib_add_permanent_equiv): New.
	* alias.c (find_base_term): Reset locs lists while recursing.
	* var-tracking.c (val_bind): New.  Don't add equivalences
	present in cselib table, compared with code moved from...
	(val_store): ... here.
	(val_resolve): Use val_bind.
	(VAL_EXPR_HAS_REVERSE): Drop.
	(add_uses): Do not create MOps for addresses.  Do not mark
	non-REG non-MEM expressions as requiring resolution.
	(reverse_op): Record reverse as a cselib equivalence.
	(add_stores): Use it.  Do not create MOps for addresses.
	Do not require resolution for non-REG non-MEM expressions.
	Simplify support for reverse operations.
	(compute_bb_dataflow): Drop reverse support.
	(emit_notes_in_bb): Likewise.
	(create_entry_value): Rename to...
	(record_entry_value): ... this.  Use cselib equivalences.
	(vt_add_function_parameter): Adjust.

Index: gcc/cselib.h
===================================================================
--- gcc/cselib.h.orig	2011-12-29 19:47:25.802168266 -0200
+++ gcc/cselib.h	2011-12-30 20:47:16.725291200 -0200
@@ -96,5 +96,24 @@ extern void cselib_preserve_value (cseli
 extern bool cselib_preserved_value_p (cselib_val *);
 extern void cselib_preserve_only_values (void);
 extern void cselib_preserve_cfa_base_value (cselib_val *, unsigned int);
+extern void cselib_add_permanent_equiv (cselib_val *, rtx, rtx);
 
 extern void dump_cselib_table (FILE *);
+
+/* Return the canonical value for VAL, following the equivalence chain
+   towards the earliest (== lowest uid) equivalent value.  */
+
+static inline cselib_val *
+canonical_cselib_val (cselib_val *val)
+{
+  cselib_val *canon;
+
+  if (!val->locs || val->locs->next
+      || !val->locs->loc || GET_CODE (val->locs->loc) != VALUE
+      || val->uid < CSELIB_VAL_PTR (val->locs->loc)->uid)
+    return val;
+
+  canon = CSELIB_VAL_PTR (val->locs->loc);
+  gcc_checking_assert (canonical_cselib_val (canon) == canon);
+  return canon;
+}
Index: gcc/cselib.c
===================================================================
--- gcc/cselib.c.orig	2011-12-29 19:47:25.802168266 -0200
+++ gcc/cselib.c	2011-12-30 20:52:48.270739948 -0200
@@ -55,7 +55,7 @@ static bool cselib_preserve_constants;
 static int entry_and_rtx_equal_p (const void *, const void *);
 static hashval_t get_value_hash (const void *);
 static struct elt_list *new_elt_list (struct elt_list *, cselib_val *);
-static struct elt_loc_list *new_elt_loc_list (struct elt_loc_list *, rtx);
+static void new_elt_loc_list (cselib_val *, rtx);
 static void unchain_one_value (cselib_val *);
 static void unchain_one_elt_list (struct elt_list **);
 static void unchain_one_elt_loc_list (struct elt_loc_list **);
@@ -223,26 +223,75 @@ new_elt_list (struct elt_list *next, cse
   return el;
 }
 
-/* Allocate a struct elt_loc_list and fill in its two elements with the
-   arguments.  */
+/* Allocate a struct elt_loc_list with LOC and prepend it to VAL's loc
+   list.  */
 
-static inline struct elt_loc_list *
-new_elt_loc_list (struct elt_loc_list *next, rtx loc)
+static inline void
+new_elt_loc_list (cselib_val *val, rtx loc)
 {
-  struct elt_loc_list *el;
-  el = (struct elt_loc_list *) pool_alloc (elt_loc_list_pool);
-  el->next = next;
-  el->loc = loc;
-  el->setting_insn = cselib_current_insn;
-  gcc_assert (!next || !next->setting_insn
-	      || !DEBUG_INSN_P (next->setting_insn));
+  struct elt_loc_list *el, *next = val->locs;
+
+  gcc_checking_assert (!next || !next->setting_insn
+		       || !DEBUG_INSN_P (next->setting_insn)
+		       || cselib_current_insn == next->setting_insn);
 
   /* If we're creating the first loc in a debug insn context, we've
      just created a debug value.  Count it.  */
   if (!next && cselib_current_insn && DEBUG_INSN_P (cselib_current_insn))
     n_debug_values++;
 
-  return el;
+  val = canonical_cselib_val (val);
+  next = val->locs;
+
+  if (GET_CODE (loc) == VALUE)
+    {
+      loc = canonical_cselib_val (CSELIB_VAL_PTR (loc))->val_rtx;
+
+      gcc_checking_assert (PRESERVED_VALUE_P (loc)
+			   == PRESERVED_VALUE_P (val->val_rtx));
+
+      if (val->val_rtx == loc)
+	return;
+      else if (val->uid > CSELIB_VAL_PTR (loc)->uid)
+	{
+	  /* Reverse the insertion.  */
+	  new_elt_loc_list (CSELIB_VAL_PTR (loc), val->val_rtx);
+	  return;
+	}
+
+      gcc_checking_assert (val->uid < CSELIB_VAL_PTR (loc)->uid);
+
+      if (CSELIB_VAL_PTR (loc)->locs)
+	{
+	  /* Bring all locs from LOC to VAL.  */
+	  for (el = CSELIB_VAL_PTR (loc)->locs; el->next; el = el->next)
+	    {
+	      /* Adjust values that have LOC as canonical so that VAL
+		 becomes their canonical.  */
+	      if (el->loc && GET_CODE (el->loc) == VALUE)
+		{
+		  gcc_checking_assert (CSELIB_VAL_PTR (el->loc)->locs->loc
+				       == loc);
+		  CSELIB_VAL_PTR (el->loc)->locs->loc = val->val_rtx;
+		}
+	    }
+	  el->next = val->locs;
+	  next = val->locs = CSELIB_VAL_PTR (loc)->locs;
+	}
+
+      /* Chain LOC back to VAL.  */
+      el = (struct elt_loc_list *) pool_alloc (elt_loc_list_pool);
+      el->loc = val->val_rtx;
+      el->setting_insn = cselib_current_insn;
+      el->next = NULL;
+      CSELIB_VAL_PTR (loc)->locs = el;
+    }
+
+  el = (struct elt_loc_list *) pool_alloc (elt_loc_list_pool);
+  el->loc = loc;
+  el->setting_insn = cselib_current_insn;
+  el->next = next;
+  val->locs = el;
 }
 
 /* Promote loc L to a nondebug cselib_current_insn if L is marked as
@@ -320,6 +369,7 @@ static int
 preserve_only_constants (void **x, void *info ATTRIBUTE_UNUSED)
 {
   cselib_val *v = (cselib_val *)*x;
+  struct elt_loc_list *l;
 
   if (v->locs != NULL
       && v->locs->next == NULL)
@@ -328,6 +378,14 @@ preserve_only_constants (void **x, void 
 	  && (GET_CODE (v->locs->loc) != CONST
 	      || !references_value_p (v->locs->loc, 0)))
 	return 1;
+      /* Although a debug expr may be bound to different expressions,
+	 we can preserve it as if it was constant, to get unification
+	 and proper merging within var-tracking.  */
+      if (GET_CODE (v->locs->loc) == DEBUG_EXPR
+	  || GET_CODE (v->locs->loc) == DEBUG_IMPLICIT_PTR
+	  || GET_CODE (v->locs->loc) == ENTRY_VALUE
+	  || GET_CODE (v->locs->loc) == DEBUG_PARAMETER_REF)
+	return 1;
       if (cfa_base_preserved_val)
 	{
 	  if (v == cfa_base_preserved_val)
@@ -338,14 +396,11 @@ preserve_only_constants (void **x, void 
 	    return 1;
 	}
     }
-  /* Keep around VALUEs that forward function invariant ENTRY_VALUEs
-     to corresponding parameter VALUEs.  */
-  if (v->locs != NULL
-      && v->locs->next != NULL
-      && v->locs->next->next == NULL
-      && GET_CODE (v->locs->next->loc) == ENTRY_VALUE
-      && GET_CODE (v->locs->loc) == VALUE)
-    return 1;
+
+  /* Keep VALUE equivalences around.  */
+  for (l = v->locs; l; l = l->next)
+    if (GET_CODE (l->loc) == VALUE)
+      return 1;
 
   htab_clear_slot (cselib_hash_table, x);
   return 1;
@@ -490,7 +545,8 @@ references_value_p (const_rtx x, int onl
   int i, j;
 
   if (GET_CODE (x) == VALUE
-      && (! only_useless || CSELIB_VAL_PTR (x)->locs == 0))
+      && (! only_useless ||
+	  (CSELIB_VAL_PTR (x)->locs == 0 && !PRESERVED_VALUE_P (x))))
     return 1;
 
   for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
@@ -744,20 +800,22 @@ rtx_equal_for_cselib_1 (rtx x, rtx y, en
   if (x == y)
     return 1;
 
-  if (GET_CODE (x) == VALUE && GET_CODE (y) == VALUE)
-    return CSELIB_VAL_PTR (x) == CSELIB_VAL_PTR (y);
-
   if (GET_CODE (x) == VALUE)
     {
-      cselib_val *e = CSELIB_VAL_PTR (x);
+      cselib_val *e = canonical_cselib_val (CSELIB_VAL_PTR (x));
       struct elt_loc_list *l;
 
+      if (GET_CODE (y) == VALUE)
+	return e == canonical_cselib_val (CSELIB_VAL_PTR (y));
+
       for (l = e->locs; l; l = l->next)
 	{
 	  rtx t = l->loc;
 
-	  /* Avoid infinite recursion.  */
-	  if (REG_P (t) || MEM_P (t))
+	  /* Avoid infinite recursion.  We know we have the canonical
+	     value, so we can just skip any values in the equivalence
+	     list.  */
+	  if (REG_P (t) || MEM_P (t) || GET_CODE (t) == VALUE)
 	    continue;
 	  else if (rtx_equal_for_cselib_1 (t, y, memmode))
 	    return 1;
@@ -765,17 +823,16 @@ rtx_equal_for_cselib_1 (rtx x, rtx y, en
 
       return 0;
     }
-
-  if (GET_CODE (y) == VALUE)
+  else if (GET_CODE (y) == VALUE)
     {
-      cselib_val *e = CSELIB_VAL_PTR (y);
+      cselib_val *e = canonical_cselib_val (CSELIB_VAL_PTR (y));
       struct elt_loc_list *l;
 
       for (l = e->locs; l; l = l->next)
 	{
 	  rtx t = l->loc;
 
-	  if (REG_P (t) || MEM_P (t))
+	  if (REG_P (t) || MEM_P (t) || GET_CODE (t) == VALUE)
 	    continue;
 	  else if (rtx_equal_for_cselib_1 (x, t, memmode))
 	    return 1;
@@ -1217,9 +1274,8 @@ add_mem_for_addr (cselib_val *addr_elt, 
       }
 
   addr_elt->addr_list = new_elt_list (addr_elt->addr_list, mem_elt);
-  mem_elt->locs
-    = new_elt_loc_list (mem_elt->locs,
-			replace_equiv_address_nv (x, addr_elt->val_rtx));
+  new_elt_loc_list (mem_elt,
+		    replace_equiv_address_nv (x, addr_elt->val_rtx));
   if (mem_elt->next_containing_mem == NULL)
     {
       mem_elt->next_containing_mem = first_containing_mem;
@@ -1858,7 +1914,7 @@ cselib_lookup_1 (rtx x, enum machine_mod
 	}
 
       e = new_cselib_val (next_uid, GET_MODE (x), x);
-      e->locs = new_elt_loc_list (e->locs, x);
+      new_elt_loc_list (e, x);
       if (REG_VALUES (i) == 0)
 	{
 	  /* Maintain the invariant that the first entry of
@@ -1901,7 +1957,7 @@ cselib_lookup_1 (rtx x, enum machine_mod
 	      rtx sub = lowpart_subreg (mode, lwider->elt->val_rtx,
 					GET_MODE (lwider->elt->val_rtx));
 	      if (sub)
-		e->locs->next = new_elt_loc_list (e->locs->next, sub);
+		new_elt_loc_list (e, sub);
 	    }
 	}
       REG_VALUES (i)->next = new_elt_list (REG_VALUES (i)->next, e);
@@ -1933,8 +1989,7 @@ cselib_lookup_1 (rtx x, enum machine_mod
      the hash table is inconsistent until we do so, and
      cselib_subst_to_values will need to do lookups.  */
   *slot = (void *) e;
-  e->locs = new_elt_loc_list (e->locs,
-			      cselib_subst_to_values (x, memmode));
+  new_elt_loc_list (e, cselib_subst_to_values (x, memmode));
   return e;
 }
 
@@ -2059,6 +2114,8 @@ cselib_invalidate_regno (unsigned int re
 	  else
 	    unchain_one_elt_list (l);
 
+	  v = canonical_cselib_val (v);
+
 	  had_locs = v->locs != NULL;
 	  setting_insn = v->locs ? v->locs->setting_insn : NULL;
 
@@ -2245,7 +2302,7 @@ cselib_record_set (rtx dest, cselib_val 
 
       if (src_elt->locs == 0 && !PRESERVED_VALUE_P (src_elt->val_rtx))
 	n_useless_values--;
-      src_elt->locs = new_elt_loc_list (src_elt->locs, dest);
+      new_elt_loc_list (src_elt, dest);
     }
   else if (MEM_P (dest) && dest_addr_elt != 0
 	   && cselib_record_memory)
@@ -2256,6 +2313,33 @@ cselib_record_set (rtx dest, cselib_val 
     }
 }
 
+/* Make ELT and X's VALUE equivalent to each other at INSN.  */
+
+void
+cselib_add_permanent_equiv (cselib_val *elt, rtx x, rtx insn)
+{
+  cselib_val *nelt;
+  rtx save_cselib_current_insn = cselib_current_insn;
+
+  gcc_checking_assert (elt);
+  gcc_checking_assert (PRESERVED_VALUE_P (elt->val_rtx));
+  gcc_checking_assert (!side_effects_p (x));
+
+  cselib_current_insn = insn;
+
+  nelt = cselib_lookup (x, GET_MODE (elt->val_rtx), 1, VOIDmode);
+
+  if (nelt != elt)
+    {
+      if (!PRESERVED_VALUE_P (nelt->val_rtx))
+	cselib_preserve_value (nelt);
+
+      new_elt_loc_list (nelt, elt->val_rtx);
+    }
+
+  cselib_current_insn = save_cselib_current_insn;
+}
+
 /* There is no good way to determine how many elements there can be
    in a PARALLEL.  Since it's fairly cheap, use a really large number.  */
 #define MAX_SETS (FIRST_PSEUDO_REGISTER * 2)
Index: gcc/alias.c
===================================================================
--- gcc/alias.c.orig	2011-12-29 19:47:25.802168266 -0200
+++ gcc/alias.c	2011-12-30 20:47:16.729291141 -0200
@@ -1542,7 +1542,8 @@ rtx
 find_base_term (rtx x)
 {
   cselib_val *val;
-  struct elt_loc_list *l;
+  struct elt_loc_list *l, *f;
+  rtx ret;
 
 #if defined (FIND_BASE_TERM)
   /* Try machine-dependent ways to find the base term.  */
@@ -1591,12 +1592,26 @@ find_base_term (rtx x)
 
     case VALUE:
       val = CSELIB_VAL_PTR (x);
+      ret = NULL_RTX;
+
       if (!val)
-	return 0;
-      for (l = val->locs; l; l = l->next)
-	if ((x = find_base_term (l->loc)) != 0)
-	  return x;
-      return 0;
+	return ret;
+
+      f = val->locs;
+      /* Temporarily reset val->locs to avoid infinite recursion.  */
+      val->locs = NULL;
+
+      for (l = f; l; l = l->next)
+	if (GET_CODE (l->loc) == VALUE
+	    && CSELIB_VAL_PTR (l->loc)->locs
+	    && !CSELIB_VAL_PTR (l->loc)->locs->next
+	    && CSELIB_VAL_PTR (l->loc)->locs->loc == x)
+	  continue;
+	else if ((ret = find_base_term (l->loc)) != 0)
+	  break;
+
+      val->locs = f;
+      return ret;
 
     case LO_SUM:
       /* The standard form is (lo_sum reg sym) so look only at the
Index: gcc/var-tracking.c
===================================================================
--- gcc/var-tracking.c.orig	2011-12-29 19:47:25.803168252 -0200
+++ gcc/var-tracking.c	2011-12-30 20:47:16.794290201 -0200
@@ -2027,6 +2027,50 @@ unsuitable_loc (rtx loc)
     }
 }
 
+/* Bind VAL to LOC in SET.  If MODIFIED, detach LOC from any values
+   bound to it.  */
+
+static inline void
+val_bind (dataflow_set *set, rtx val, rtx loc, bool modified)
+{
+  if (REG_P (loc))
+    {
+      if (modified)
+	var_regno_delete (set, REGNO (loc));
+      var_reg_decl_set (set, loc, VAR_INIT_STATUS_INITIALIZED,
+			dv_from_value (val), 0, NULL_RTX, INSERT);
+    }
+  else if (MEM_P (loc))
+    {
+      struct elt_loc_list *l = CSELIB_VAL_PTR (val)->locs;
+
+      if (l && GET_CODE (l->loc) == VALUE)
+	l = canonical_cselib_val (CSELIB_VAL_PTR (l->loc))->locs;
+
+      /* If this MEM is a global constant, we don't need it in the
+	 dynamic tables.  ??? We should test this before emitting the
+	 micro-op in the first place.  */
+      while (l)
+	if (GET_CODE (l->loc) == MEM && XEXP (l->loc, 0) == XEXP (loc, 0))
+	  break;
+	else
+	  l = l->next;
+
+      if (!l)
+	var_mem_decl_set (set, loc, VAR_INIT_STATUS_INITIALIZED,
+			  dv_from_value (val), 0, NULL_RTX, INSERT);
+    }
+  else
+    {
+      /* Other kinds of equivalences are necessarily static, at least
+	 so long as we do not perform substitutions while merging
+	 expressions.  */
+      gcc_unreachable ();
+      set_variable_part (set, loc, dv_from_value (val), 0,
+			 VAR_INIT_STATUS_INITIALIZED, NULL_RTX, INSERT);
+    }
+}
+
 /* Bind a value to a location it was just stored in.  If MODIFIED
    holds, assume the location was modified, detaching it from any
    values bound to it.  */
@@ -2058,21 +2102,7 @@ val_store (dataflow_set *set, rtx val, r
 
   gcc_checking_assert (!unsuitable_loc (loc));
 
-  if (REG_P (loc))
-    {
-      if (modified)
-	var_regno_delete (set, REGNO (loc));
-      var_reg_decl_set (set, loc, VAR_INIT_STATUS_INITIALIZED,
-			dv_from_value (val), 0, NULL_RTX, INSERT);
-    }
-  else if (MEM_P (loc))
-    var_mem_decl_set (set, loc, VAR_INIT_STATUS_INITIALIZED,
-		      dv_from_value (val), 0, NULL_RTX, INSERT);
-  else
-    /* ??? Ideally we wouldn't get these, and use them from the static
-       cselib loc list.  */
-    set_variable_part (set, loc, dv_from_value (val), 0,
-		       VAR_INIT_STATUS_INITIALIZED, NULL_RTX, INSERT);
+  val_bind (set, val, loc, modified);
 }
 
 /* Reset this node, detaching all its equivalences.  Return the slot
@@ -2187,20 +2217,13 @@ val_resolve (dataflow_set *set, rtx val,
 
       /* If we didn't find any equivalence, we need to remember that
 	 this value is held in the named register.  */
-      if (!found)
-	var_reg_decl_set (set, loc, VAR_INIT_STATUS_INITIALIZED,
-			  dv_from_value (val), 0, NULL_RTX, INSERT);
+      if (found)
+	return;
     }
-  else if (MEM_P (loc))
-    /* ??? Merge equivalent MEMs.  */
-    var_mem_decl_set (set, loc, VAR_INIT_STATUS_INITIALIZED,
-		      dv_from_value (val), 0, NULL_RTX, INSERT);
-  else
-    /* ??? Ideally we wouldn't get these, and use them from the static
-       cselib loc list.  */
-    /* ??? Merge equivalent expressions.  */
-    set_variable_part (set, loc, dv_from_value (val), 0,
-		       VAR_INIT_STATUS_INITIALIZED, NULL_RTX, INSERT);
+  /* ??? Attempt to find and merge equivalent MEMs or other
+     expressions too.  */
+
+  val_bind (set, val, loc, false);
 }
 
 /* Initialize dataflow set SET to be empty.
@@ -5046,10 +5069,6 @@ log_op_type (rtx x, basic_block bb, rtx 
    MO_CLOBBER as well.  */
 #define VAL_EXPR_IS_CLOBBERED(x) \
   (RTL_FLAG_CHECK1 ("VAL_EXPR_IS_CLOBBERED", (x), CONCAT)->unchanging)
-/* Whether the location is a CONCAT of the MO_VAL_SET expression and
-   a reverse operation that should be handled afterwards.  */
-#define VAL_EXPR_HAS_REVERSE(x) \
-  (RTL_FLAG_CHECK1 ("VAL_EXPR_HAS_REVERSE", (x), CONCAT)->return_val)
 
 /* All preserved VALUEs.  */
 static VEC (rtx, heap) *preserved_values;
@@ -5129,28 +5148,7 @@ add_uses (rtx *ploc, void *data)
 				 GET_MODE (mloc));
 
 	      if (val && !cselib_preserved_value_p (val))
-		{
-		  micro_operation moa;
-		  preserve_value (val);
-
-		  if (GET_CODE (XEXP (mloc, 0)) != ENTRY_VALUE
-		      && (GET_CODE (XEXP (mloc, 0)) != PLUS
-			  || XEXP (XEXP (mloc, 0), 0) != cfa_base_rtx
-			  || !CONST_INT_P (XEXP (XEXP (mloc, 0), 1))))
-		    {
-		      mloc = cselib_subst_to_values (XEXP (mloc, 0),
-						     GET_MODE (mloc));
-		      moa.type = MO_VAL_USE;
-		      moa.insn = cui->insn;
-		      moa.u.loc = gen_rtx_CONCAT (address_mode,
-						  val->val_rtx, mloc);
-		      if (dump_file && (dump_flags & TDF_DETAILS))
-			log_op_type (moa.u.loc, cui->bb, cui->insn,
-				     moa.type, dump_file);
-		      VEC_safe_push (micro_operation, heap, VTI (bb)->mos,
-				     &moa);
-		    }
-		}
+		preserve_value (val);
 	    }
 
 	  if (CONSTANT_P (vloc)
@@ -5162,7 +5160,11 @@ add_uses (rtx *ploc, void *data)
 	    {
 	      enum machine_mode mode2;
 	      enum micro_operation_type type2;
-	      rtx nloc = replace_expr_with_values (vloc);
+	      rtx nloc = NULL;
+	      bool resolvable = REG_P (vloc) || MEM_P (vloc);
+
+	      if (resolvable)
+		nloc = replace_expr_with_values (vloc);
 
 	      if (nloc)
 		{
@@ -5180,7 +5182,7 @@ add_uses (rtx *ploc, void *data)
 	      if (type2 == MO_CLOBBER
 		  && !cselib_preserved_value_p (val))
 		{
-		  VAL_NEEDS_RESOLUTION (oloc) = 1;
+		  VAL_NEEDS_RESOLUTION (oloc) = resolvable;
 		  preserve_value (val);
 		}
 	    }
@@ -5212,28 +5214,7 @@ add_uses (rtx *ploc, void *data)
 				 GET_MODE (mloc));
 
 	      if (val && !cselib_preserved_value_p (val))
-		{
-		  micro_operation moa;
-		  preserve_value (val);
-
-		  if (GET_CODE (XEXP (mloc, 0)) != ENTRY_VALUE
-		      && (GET_CODE (XEXP (mloc, 0)) != PLUS
-			  || XEXP (XEXP (mloc, 0), 0) != cfa_base_rtx
-			  || !CONST_INT_P (XEXP (XEXP (mloc, 0), 1))))
-		    {
-		      mloc = cselib_subst_to_values (XEXP (mloc, 0),
-						     GET_MODE (mloc));
-		      moa.type = MO_VAL_USE;
-		      moa.insn = cui->insn;
-		      moa.u.loc = gen_rtx_CONCAT (address_mode,
-						  val->val_rtx, mloc);
-		      if (dump_file && (dump_flags & TDF_DETAILS))
-			log_op_type (moa.u.loc, cui->bb, cui->insn,
-				     moa.type, dump_file);
-		      VEC_safe_push (micro_operation, heap, VTI (bb)->mos,
-				     &moa);
-		    }
-		}
+		preserve_value (val);
 	    }
 
 	  type2 = use_type (loc, 0, &mode2);
@@ -5256,6 +5237,7 @@ add_uses (rtx *ploc, void *data)
 
 	  */
 
+	  gcc_checking_assert (REG_P (loc) || MEM_P (loc));
 	  nloc = replace_expr_with_values (loc);
 	  if (!nloc)
 	    nloc = oloc;
@@ -5307,22 +5289,22 @@ add_uses_1 (rtx *x, void *cui)
    representable anyway.  */
 #define EXPR_USE_DEPTH (PARAM_VALUE (PARAM_MAX_VARTRACK_EXPR_DEPTH))
 
-/* Attempt to reverse the EXPR operation in the debug info.  Say for
-   reg1 = reg2 + 6 even when reg2 is no longer live we
-   can express its value as VAL - 6.  */
+/* Attempt to reverse the EXPR operation in the debug info and record
+   it in the cselib table.  Say for reg1 = reg2 + 6 even when reg2 is
+   no longer live we can express its value as VAL - 6.  */
 
-static rtx
-reverse_op (rtx val, const_rtx expr)
+static void
+reverse_op (rtx val, const_rtx expr, rtx insn)
 {
   rtx src, arg, ret;
   cselib_val *v;
   enum rtx_code code;
 
   if (GET_CODE (expr) != SET)
-    return NULL_RTX;
+    return;
 
   if (!REG_P (SET_DEST (expr)) || GET_MODE (val) != GET_MODE (SET_DEST (expr)))
-    return NULL_RTX;
+    return;
 
   src = SET_SRC (expr);
   switch (GET_CODE (src))
@@ -5333,30 +5315,30 @@ reverse_op (rtx val, const_rtx expr)
     case NOT:
     case NEG:
       if (!REG_P (XEXP (src, 0)))
-	return NULL_RTX;
+	return;
       break;
     case SIGN_EXTEND:
     case ZERO_EXTEND:
       if (!REG_P (XEXP (src, 0)) && !MEM_P (XEXP (src, 0)))
-	return NULL_RTX;
+	return;
       break;
     default:
-      return NULL_RTX;
+      return;
     }
 
   if (!SCALAR_INT_MODE_P (GET_MODE (src)) || XEXP (src, 0) == cfa_base_rtx)
-    return NULL_RTX;
+    return;
 
   v = cselib_lookup (XEXP (src, 0), GET_MODE (XEXP (src, 0)), 0, VOIDmode);
   if (!v || !cselib_preserved_value_p (v))
-    return NULL_RTX;
+    return;
 
   switch (GET_CODE (src))
     {
     case NOT:
     case NEG:
       if (GET_MODE (v->val_rtx) != GET_MODE (val))
-	return NULL_RTX;
+	return;
       ret = gen_rtx_fmt_e (GET_CODE (src), GET_MODE (val), val);
       break;
     case SIGN_EXTEND:
@@ -5374,15 +5356,15 @@ reverse_op (rtx val, const_rtx expr)
       goto binary;
     binary:
       if (GET_MODE (v->val_rtx) != GET_MODE (val))
-	return NULL_RTX;
+	return;
       arg = XEXP (src, 1);
       if (!CONST_INT_P (arg) && GET_CODE (arg) != SYMBOL_REF)
 	{
 	  arg = cselib_expand_value_rtx (arg, scratch_regs, 5);
 	  if (arg == NULL_RTX)
-	    return NULL_RTX;
+	    return;
 	  if (!CONST_INT_P (arg) && GET_CODE (arg) != SYMBOL_REF)
-	    return NULL_RTX;
+	    return;
 	}
       ret = simplify_gen_binary (code, GET_MODE (val), val, arg);
       if (ret == val)
@@ -5395,7 +5377,7 @@ reverse_op (rtx val, const_rtx expr)
       gcc_unreachable ();
     }
 
-  return gen_rtx_CONCAT (GET_MODE (v->val_rtx), v->val_rtx, ret);
+  cselib_add_permanent_equiv (v, ret, insn);
 }
 
 /* Add stores (register and memory references) LOC which will be tracked
@@ -5414,7 +5396,6 @@ add_stores (rtx loc, const_rtx expr, voi
   bool track_p = false;
   cselib_val *v;
   bool resolve, preserve;
-  rtx reverse;
 
   if (type == MO_CLOBBER)
     return;
@@ -5479,26 +5460,7 @@ add_stores (rtx loc, const_rtx expr, voi
 					   GET_MODE (mloc));
 
 	  if (val && !cselib_preserved_value_p (val))
-	    {
-	      preserve_value (val);
-
-	      if (GET_CODE (XEXP (mloc, 0)) != ENTRY_VALUE
-		  && (GET_CODE (XEXP (mloc, 0)) != PLUS
-		      || XEXP (XEXP (mloc, 0), 0) != cfa_base_rtx
-		      || !CONST_INT_P (XEXP (XEXP (mloc, 0), 1))))
-		{
-		  mloc = cselib_subst_to_values (XEXP (mloc, 0),
-						 GET_MODE (mloc));
-		  mo.type = MO_VAL_USE;
-		  mo.insn = cui->insn;
-		  mo.u.loc = gen_rtx_CONCAT (address_mode,
-					     val->val_rtx, mloc);
-		  if (dump_file && (dump_flags & TDF_DETAILS))
-		    log_op_type (mo.u.loc, cui->bb, cui->insn,
-				 mo.type, dump_file);
-		  VEC_safe_push (micro_operation, heap, VTI (bb)->mos, &mo);
-		}
-	    }
+	    preserve_value (val);
 	}
 
       if (GET_CODE (expr) == CLOBBER || !track_p)
@@ -5578,7 +5540,10 @@ add_stores (rtx loc, const_rtx expr, voi
     }
   else if (resolve && GET_CODE (mo.u.loc) == SET)
     {
-      nloc = replace_expr_with_values (SET_SRC (expr));
+      if (REG_P (SET_SRC (expr)) || MEM_P (SET_SRC (expr)))
+	nloc = replace_expr_with_values (SET_SRC (expr));
+      else
+	nloc = NULL_RTX;
 
       /* Avoid the mode mismatch between oexpr and expr.  */
       if (!nloc && mode != mode2)
@@ -5587,7 +5552,7 @@ add_stores (rtx loc, const_rtx expr, voi
 	  gcc_assert (oloc == SET_DEST (expr));
 	}
 
-      if (nloc)
+      if (nloc && nloc != SET_SRC (mo.u.loc))
 	oloc = gen_rtx_SET (GET_MODE (mo.u.loc), oloc, nloc);
       else
 	{
@@ -5634,14 +5599,7 @@ add_stores (rtx loc, const_rtx expr, voi
   */
 
   if (GET_CODE (PATTERN (cui->insn)) != COND_EXEC)
-    {
-      reverse = reverse_op (v->val_rtx, expr);
-      if (reverse)
-	{
-	  loc = gen_rtx_CONCAT (GET_MODE (mo.u.loc), loc, reverse);
-	  VAL_EXPR_HAS_REVERSE (loc) = 1;
-	}
-    }
+    reverse_op (v->val_rtx, expr, cui->insn);
 
   mo.u.loc = loc;
 
@@ -6299,14 +6257,9 @@ compute_bb_dataflow (basic_block bb)
 	  case MO_VAL_SET:
 	    {
 	      rtx loc = mo->u.loc;
-	      rtx val, vloc, uloc, reverse = NULL_RTX;
+	      rtx val, vloc, uloc;
 
 	      vloc = loc;
-	      if (VAL_EXPR_HAS_REVERSE (loc))
-		{
-		  reverse = XEXP (loc, 1);
-		  vloc = XEXP (loc, 0);
-		}
 	      uloc = XEXP (vloc, 1);
 	      val = XEXP (vloc, 0);
 	      vloc = uloc;
@@ -6382,10 +6335,6 @@ compute_bb_dataflow (basic_block bb)
 		var_regno_delete (out, REGNO (uloc));
 
 	      val_store (out, val, vloc, insn, true);
-
-	      if (reverse)
-		val_store (out, XEXP (reverse, 0), XEXP (reverse, 1),
-			   insn, false);
 	    }
 	    break;
 
@@ -7698,6 +7647,7 @@ notify_dependents_of_resolved_value (var
 	  /* We won't notify variables that are being expanded,
 	     because their dependency list is cleared before
 	     recursing.  */
+	  NO_LOC_P (value) = false;
 	  VALUE_RECURSED_INTO (value) = false;
 
 	  gcc_checking_assert (dv_changed_p (dv));
@@ -7910,7 +7860,10 @@ vt_expand_loc_callback (rtx x, bitmap re
   gcc_checking_assert (!VALUE_RECURSED_INTO (x) || NO_LOC_P (x));
 
   if (NO_LOC_P (x))
-    return NULL;
+    {
+      gcc_checking_assert (VALUE_RECURSED_INTO (x) || !dv_changed_p (dv));
+      return NULL;
+    }
 
   var = (variable) htab_find_with_hash (elcd->vars, dv, dv_htab_hash (dv));
 
@@ -8709,14 +8662,9 @@ emit_notes_in_bb (basic_block bb, datafl
 	  case MO_VAL_SET:
 	    {
 	      rtx loc = mo->u.loc;
-	      rtx val, vloc, uloc, reverse = NULL_RTX;
+	      rtx val, vloc, uloc;
 
 	      vloc = loc;
-	      if (VAL_EXPR_HAS_REVERSE (loc))
-		{
-		  reverse = XEXP (loc, 1);
-		  vloc = XEXP (loc, 0);
-		}
 	      uloc = XEXP (vloc, 1);
 	      val = XEXP (vloc, 0);
 	      vloc = uloc;
@@ -8787,10 +8735,6 @@ emit_notes_in_bb (basic_block bb, datafl
 
 	      val_store (set, val, vloc, insn, true);
 
-	      if (reverse)
-		val_store (set, XEXP (reverse, 0), XEXP (reverse, 1),
-			   insn, false);
-
 	      emit_notes_for_changes (next_insn, EMIT_NOTE_BEFORE_INSN,
 				      set->vars);
 	    }
@@ -8957,28 +8901,17 @@ vt_get_decl_and_offset (rtx rtl, tree *d
   return false;
 }
 
-/* Mark the value for the ENTRY_VALUE of RTL as equivalent to EQVAL in
-   OUT.  */
+/* Record the value for the ENTRY_VALUE of RTL as a global equivalence
+   of VAL.  */
 
 static void
-create_entry_value (dataflow_set *out, rtx eqval, rtx rtl)
+record_entry_value (cselib_val *val, rtx rtl)
 {
   rtx ev = gen_rtx_ENTRY_VALUE (GET_MODE (rtl));
-  cselib_val *val;
 
   ENTRY_VALUE_EXP (ev) = rtl;
 
-  val = cselib_lookup_from_insn (ev, GET_MODE (ev), true,
-				 VOIDmode, get_insns ());
-
-  if (val->val_rtx != eqval)
-    {
-      preserve_value (val);
-      set_variable_part (out, val->val_rtx, dv_from_value (eqval), 0,
-			 VAR_INIT_STATUS_INITIALIZED, NULL_RTX, INSERT);
-      set_variable_part (out, eqval, dv_from_value (val->val_rtx), 0,
-			 VAR_INIT_STATUS_INITIALIZED, NULL_RTX, INSERT);
-    }
+  cselib_add_permanent_equiv (val, ev, get_insns ());
 }
 
 /* Insert function parameter PARM in IN and OUT sets of ENTRY_BLOCK.  */
@@ -9137,7 +9070,7 @@ vt_add_function_parameter (tree parm)
 			 VAR_INIT_STATUS_INITIALIZED, NULL, INSERT);
       if (dv_is_value_p (dv))
 	{
-	  create_entry_value (out, dv_as_value (dv), incoming);
+	  record_entry_value (CSELIB_VAL_PTR (dv_as_value (dv)), incoming);
 	  if (TREE_CODE (TREE_TYPE (parm)) == REFERENCE_TYPE
 	      && INTEGRAL_TYPE_P (TREE_TYPE (TREE_TYPE (parm))))
 	    {
@@ -9150,9 +9083,9 @@ vt_add_function_parameter (tree parm)
 	      if (val)
 		{
 		  preserve_value (val);
+		  record_entry_value (val, mem);
 		  set_variable_part (out, mem, dv_from_value (val->val_rtx), 0,
 				     VAR_INIT_STATUS_INITIALIZED, NULL, INSERT);
-		  create_entry_value (out, val->val_rtx, mem);
 		}
 	    }
 	}
diff -u gcc/cselib.c gcc/cselib.c
--- gcc/cselib.c	2011-11-21 08:07:25.290452455 -0200
+++ gcc/cselib.c	2011-12-30 20:52:48.270739948 -0200
@@ -378,6 +378,14 @@
 	  && (GET_CODE (v->locs->loc) != CONST
 	      || !references_value_p (v->locs->loc, 0)))
 	return 1;
+      /* Although a debug expr may be bound to different expressions,
+	 we can preserve it as if it were constant, to get unification
+	 and proper merging within var-tracking.  */
+      if (GET_CODE (v->locs->loc) == DEBUG_EXPR
+	  || GET_CODE (v->locs->loc) == DEBUG_IMPLICIT_PTR
+	  || GET_CODE (v->locs->loc) == ENTRY_VALUE
+	  || GET_CODE (v->locs->loc) == DEBUG_PARAMETER_REF)
+	return 1;
       if (cfa_base_preserved_val)
 	{
 	  if (v == cfa_base_preserved_val)
diff -u gcc/var-tracking.c gcc/var-tracking.c
--- gcc/var-tracking.c	2011-11-21 22:13:11.831071308 -0200
+++ gcc/var-tracking.c	2011-12-30 20:47:16.794290201 -0200
@@ -7647,6 +7647,7 @@
 	  /* We won't notify variables that are being expanded,
 	     because their dependency list is cleared before
 	     recursing.  */
+	  NO_LOC_P (value) = false;
 	  VALUE_RECURSED_INTO (value) = false;
 
 	  gcc_checking_assert (dv_changed_p (dv));
@@ -7859,7 +7860,10 @@
   gcc_checking_assert (!VALUE_RECURSED_INTO (x) || NO_LOC_P (x));
 
   if (NO_LOC_P (x))
-    return NULL;
+    {
+      gcc_checking_assert (VALUE_RECURSED_INTO (x) || !dv_changed_p (dv));
+      return NULL;
+    }
 
   var = (variable) htab_find_with_hash (elcd->vars, dv, dv_htab_hash (dv));
 

-- 
Alexandre Oliva, freedom fighter    http://FSFLA.org/~lxoliva/
You must be the change you wish to see in the world. -- Gandhi
Be Free! -- http://FSFLA.org/   FSF Latin America board member
Free Software Evangelist      Red Hat Brazil Compiler Engineer
