[PATCH] More Ranger cache tweaks

Andrew MacLeod amacleod@redhat.com
Tue Nov 3 15:19:34 GMT 2020


The range-on-entry cache is managed by the ranger_cache class.  Part of
the service it provides is auto-filling the cache when a request is made
for the range on entry to a block.
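
As a rough illustration of that service (a self-contained sketch with
made-up toy types, not the GCC data structures), an on-entry request
either hits the cache or triggers the fill first:

#include <map>

// Toy stand-ins, not GCC types: a block id and a simple value range.
typedef int block_id;
struct val_range { long lo, hi; };

struct on_entry_cache
{
  std::map<block_id, val_range> entries;   // block -> range on entry

  // Stand-in for fill_block_cache (): compute and store the range on
  // entry to BB (the real routine propagates it from the def block).
  void fill (block_id bb) { entries[bb] = val_range { -1000, 1000 }; }

  // The auto-fill service: a cache miss triggers the fill, after which
  // the request is answered from the cache.
  val_range range_on_entry (block_id bb)
  {
    if (entries.find (bb) == entries.end ())
      fill (bb);
    return entries[bb];
  }
};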

The Ranger makes sure the global range at the def statement has been
calculated, then submits a request for the range on entry to the desired
block.  fill_block_cache() then propagates the range from the def point,
through successive blocks to the requested block, adjusting that range
based on any outgoing ranges GORI reports in the intervening blocks.
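
A minimal self-contained model of that propagation (toy types and
helper names invented here, not the ranger_cache code itself): the
range at the def point seeds a work list, each outgoing edge narrows
the range by whatever that edge implies, the result is unioned into
the successor's on-entry range, and a successor is revisited whenever
its on-entry range changes.

#include <algorithm>
#include <map>
#include <vector>

// Toy stand-ins, not GCC types.
typedef int block_id;
struct val_range { long lo, hi; };

static val_range
union_ (val_range a, val_range b)
{ return { std::min (a.lo, b.lo), std::max (a.hi, b.hi) }; }

static val_range
intersect (val_range a, val_range b)
{ return { std::max (a.lo, b.lo), std::min (a.hi, b.hi) }; }

// A CFG edge plus the adjustment the edge implies for the value
// (the role GORI's outgoing ranges play in the real cache).
struct cfg_edge { block_id src, dest; val_range filter; };

// Propagate DEF_RANGE from DEF_BB through the CFG, recording a range
// on entry for each block reached, and revisiting successors whenever
// a block's on-entry range changes.
static void
propagate (block_id def_bb, val_range def_range,
           const std::vector<cfg_edge> &edges,
           std::map<block_id, val_range> &on_entry)
{
  std::vector<block_id> worklist (1, def_bb);
  on_entry[def_bb] = def_range;

  while (!worklist.empty ())
    {
      block_id bb = worklist.back ();
      worklist.pop_back ();
      val_range cur = on_entry[bb];

      for (const cfg_edge &e : edges)
        {
          if (e.src != bb)
            continue;
          // Adjust the range by what this outgoing edge implies...
          val_range incoming = intersect (cur, e.filter);
          auto it = on_entry.find (e.dest);
          if (it == on_entry.end ())
            {
              // ...seeding blocks not visited yet...
              on_entry[e.dest] = incoming;
              worklist.push_back (e.dest);
            }
          else
            {
              // ...or widening an existing on-entry range; if it
              // changed, the successor must be processed again.
              val_range merged = union_ (it->second, incoming);
              if (merged.lo != it->second.lo || merged.hi != it->second.hi)
                {
                  it->second = merged;
                  worklist.push_back (e.dest);
                }
            }
        }
    }
}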

During this propagation, no new ranges are calculated, only outgoing
edge adjustments.  If GORI discovers that the outgoing-edge calculation
uses a value which hasn't been evaluated yet, it marks that value as a
"poor value", uses a best guess (VARYING in the worst case), and
continues.
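
In sketch form (names invented here; only the record-and-assume-VARYING
behaviour is being illustrated), the operand fetch inside the
outgoing-edge calculation falls back like this:

#include <utility>
#include <vector>

// Toy stand-ins, not GCC types.
typedef int block_id;
typedef int ssa_name;
struct val_range { long lo, hi; };
static const val_range varying = { -1000000, 1000000 };   // worst case

// Deferred evaluations: (block, name) pairs to revisit afterwards.
static std::vector<std::pair<block_id, ssa_name> > poor_values;

// Fetch the range of NAME while computing an outgoing edge range in BB.
// LOOKUP returns false if NAME has not been evaluated yet.
template <typename Lookup>
static val_range
operand_range (ssa_name name, block_id bb, Lookup lookup)
{
  val_range r;
  if (lookup (name, bb, r))
    return r;                                  // Known value: use it.
  poor_values.push_back (std::make_pair (bb, name));   // Fix up later.
  return varying;                     // Best guess; keep propagating.
}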

Once the propagation is finished, any "poor values" which were
discovered are properly evaluated by a ranger request for that value,
and then the original range that was being propagated is re-evaluated
on that edge to see if the calculated result changed.  If it did, the
new value is re-propagated.

This is what I used to refer to as "iterative updating".  There is an
iterative aspect to it, but it doesn't really continue iterating; it's
really just propagation.

This patch:
  1) Renames the iterative bits to "propagation".
  2) Splits the propagation request out from fill_block_cache into a
routine called propagate_updated_value().  This only propagates the
range to blocks which already have on-entry values, not to places
which haven't been looked at yet.
  3) Makes set_global_range check whether a global range was already
set, and call propagate_updated_value() if it was.  This ensures that
when a new global value arrives and any on-entry values have been set,
the new value is properly propagated through whatever cache entries
there are.

The primary situation in which global values are currently re-evaluated
is during back-edge cycles, where a "poor value" is calculated so that
the evaluation can keep going without having a final resolution.
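
A made-up example of the kind of cycle involved (not from the patch or
the testsuite):

int
f (int n)
{
  int i = 0;
  while (i < n)   // The range of i around the back edge depends on the
    i = i + 1;    // range of i in the loop, which hasn't been resolved
  return i;       // yet, so a worst-case "poor value" is used first and
}                 // the improved global value is re-propagated later.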

Bootstrapped on x86_64-pc-linux-gnu, no regressions.  Pushed.

Andrew






-------------- next part --------------
commit 435b300906629e6ff46bf69ce5fd8b069f4cb731
Author: Andrew MacLeod <amacleod@redhat.com>
Date:   Mon Nov 2 17:04:23 2020 -0500

    More Ranger cache tweaks
    
    This patch splits the individual value propagation out from fill_block_cache,
    and calls it from set_global_value when the global value is updated.
    This ensures the "current" global value is reflected in the on-entry cache.
    
            * gimple-range-cache.cc (ssa_global_cache::get_global_range): Return
            true if there was a previous range set.
            (ranger_cache::ranger_cache): Take a gimple_ranger parameter.
            (ranger_cache::set_global_range): Propagate the value if updating.
            (ranger_cache::propagate_cache): Renamed from iterative_cache_update.
            (ranger_cache::propagate_updated_value): New.  Split from:
            (ranger_cache::fill_block_cache): Split out value propagator.
            * gimple-range-cache.h (ssa_global_cache): Update prototypes.
            (ranger_cache): Update prototypes.

diff --git a/gcc/gimple-range-cache.cc b/gcc/gimple-range-cache.cc
index 574debbc166..cca9025abba 100644
--- a/gcc/gimple-range-cache.cc
+++ b/gcc/gimple-range-cache.cc
@@ -419,8 +419,9 @@ ssa_global_cache::get_global_range (irange &r, tree name) const
 }
 
 // Set the range for NAME to R in the global cache.
+// Return TRUE if there was already a range set, otherwise false.
 
-void
+bool
 ssa_global_cache::set_global_range (tree name, const irange &r)
 {
   unsigned v = SSA_NAME_VERSION (name);
@@ -432,6 +433,7 @@ ssa_global_cache::set_global_range (tree name, const irange &r)
     *m = r;
   else
     m_tab[v] = m_irange_allocator->allocate (r);
+  return m != NULL;
 }
 
 // Set the range for NAME to R in the glonbal cache.
@@ -476,7 +478,7 @@ ssa_global_cache::dump (FILE *f)
 
 // --------------------------------------------------------------------------
 
-ranger_cache::ranger_cache (range_query &q) : query (q)
+ranger_cache::ranger_cache (gimple_ranger &q) : query (q)
 {
   m_workback.create (0);
   m_workback.safe_grow_cleared (last_basic_block_for_fn (cfun));
@@ -532,7 +534,18 @@ ranger_cache::get_global_range (irange &r, tree name) const
 void
 ranger_cache::set_global_range (tree name, const irange &r)
 {
-  m_globals.set_global_range (name, r);
+  if (m_globals.set_global_range (name, r))
+    {
+      // If there was already a range set, propagate the new value.
+      basic_block bb = gimple_bb (SSA_NAME_DEF_STMT (name));
+      if (!bb)
+	bb = ENTRY_BLOCK_PTR_FOR_FN (cfun);
+
+      if (DEBUG_RANGE_CACHE)
+	fprintf (dump_file, "   GLOBAL :");
+
+      propagate_updated_value (name, bb);
+    }
 }
 
 // Push a request for a new lookup in block BB of name.  Return true if
@@ -660,11 +673,11 @@ ranger_cache::add_to_update (basic_block bb)
     m_update_list.quick_push (bb);
 }
 
-// If there is anything in the iterative update_list, continue
+// If there is anything in the propagation update_list, continue
 // processing NAME until the list of blocks is empty.
 
 void
-ranger_cache::iterative_cache_update (tree name)
+ranger_cache::propagate_cache (tree name)
 {
   basic_block bb;
   edge_iterator ei;
@@ -755,6 +768,50 @@ ranger_cache::iterative_cache_update (tree name)
       }
 }
 
+// Check to see if an update to the value for NAME in BB has any effect
+// on values already in the on-entry cache for successor blocks.
+// If it does, update them.  Don't visit any blocks which dont have a cache
+// entry.
+
+void
+ranger_cache::propagate_updated_value (tree name, basic_block bb)
+{
+  edge e;
+  edge_iterator ei;
+
+  // The update work list should be empty at this point.
+  gcc_checking_assert (m_update_list.length () == 0);
+  gcc_checking_assert (bb);
+
+  if (DEBUG_RANGE_CACHE)
+    {
+      fprintf (dump_file, " UPDATE cache for ");
+      print_generic_expr (dump_file, name, TDF_SLIM);
+      fprintf (dump_file, " in BB %d : successors : ", bb->index);
+    }
+  FOR_EACH_EDGE (e, ei, bb->succs)
+    {
+      // Only update active cache entries.
+      if (m_on_entry.bb_range_p (name, e->dest))
+	{
+	  add_to_update (e->dest);
+	  if (DEBUG_RANGE_CACHE)
+	    fprintf (dump_file, " UPDATE: bb%d", e->dest->index);
+	}
+    }
+    if (m_update_list.length () != 0)
+      {
+	if (DEBUG_RANGE_CACHE)
+	  fprintf (dump_file, "\n");
+	propagate_cache (name);
+      }
+    else
+      {
+	if (DEBUG_RANGE_CACHE)
+	  fprintf (dump_file, "  : No updates!\n");
+      }
+}
+
 // Make sure that the range-on-entry cache for NAME is set for block BB.
 // Work back through the CFG to DEF_BB ensuring the range is calculated
 // on the block/edges leading back to that point.
@@ -864,13 +921,13 @@ ranger_cache::fill_block_cache (tree name, basic_block bb, basic_block def_bb)
     fprintf (dump_file, "\n");
 
   // Now fill in the marked blocks with values.
-  iterative_cache_update (name);
+  propagate_cache (name);
   if (DEBUG_RANGE_CACHE)
-    fprintf (dump_file, "  iterative update done.\n");
+    fprintf (dump_file, "  Propagation update done.\n");
 
   // Now that the cache has been updated, check to see if there were any 
   // SSA_NAMES used in filling the cache which were "poor values".
-  // We can evaluate them, and inject any new values into the iteration 
+  // Evaluate them, and inject any new values into the propagation
   // list, and see if it improves any on-entry values.
   if (poor_list_start !=  m_poor_value_list.length ())
     {
@@ -886,50 +943,25 @@ ranger_cache::fill_block_cache (tree name, basic_block bb, basic_block def_bb)
 	  basic_block calc_bb = rec.bb;
 	  int_range_max tmp;
 
-	  // The update work list should be empty at this point.
-	  gcc_checking_assert (m_update_list.length () == 0);
-
 	  if (DEBUG_RANGE_CACHE)
 	    {
 	      fprintf (dump_file, "(%d:%d)Calculating ",
 		       m_poor_value_list.length () + 1, poor_list_start);
 	      print_generic_expr (dump_file, name, TDF_SLIM);
-	      fprintf (dump_file, " used poor value for ");
+	      fprintf (dump_file, " used POOR VALUE for ");
 	      print_generic_expr (dump_file, rec.calc, TDF_SLIM);
 	      fprintf (dump_file, " in bb%d, trying to improve:\n",
 		       calc_bb->index);
 	    }
 
-	  // It must have at least one edge, pick edge 0.  we just want to
-	  // calculate a range at the exit from the block so the caches feeding
-	  // this block will be filled up. 
-	  gcc_checking_assert (EDGE_SUCC (calc_bb, 0));
-	  query.range_on_edge (tmp, EDGE_SUCC (calc_bb, 0), rec.calc);
+	  // Calculate a range at the exit from the block so the caches feeding
+	  // this block will be filled, and we'll get a "better" value.
+	  query.range_on_exit (tmp, calc_bb, rec.calc);
 	  
-	  if (DEBUG_RANGE_CACHE)
-	    fprintf (dump_file, "    Checking successors of bb%d :",
-		     calc_bb->index);
-
-	  // Try recalculating any successor blocks with the new value.
-	  // Note that even if this value is refined from the initial value,
-	  // it may not affect the calculation, but the iterative update
-	  // will resolve that efficently.
-	  FOR_EACH_EDGE (e, ei, calc_bb->succs)
-	    {
-	      if (DEBUG_RANGE_CACHE)
-		fprintf (dump_file, "bb%d: ", e->dest->index);
-	      // Only update active cache entries.
-	      if (m_on_entry.bb_range_p (name, e->dest))
-		{
-		  if (DEBUG_RANGE_CACHE)
-		    fprintf (dump_file, "update ");
-		  add_to_update (e->dest);
-		}
-	    }
-	  if (DEBUG_RANGE_CACHE)
-	    fprintf (dump_file, "\n");
-	  // Now see if there is a new value.
-	  iterative_cache_update (name);
+	  // Then ask for NAME to be re-evaluated on outgoing edges and 
+	  // use any new values.
+	  propagate_updated_value (name, calc_bb);
 	}
     }
 }
+
diff --git a/gcc/gimple-range-cache.h b/gcc/gimple-range-cache.h
index 599a2926b53..0e84ab0c4e6 100644
--- a/gcc/gimple-range-cache.h
+++ b/gcc/gimple-range-cache.h
@@ -74,7 +74,7 @@ public:
   ssa_global_cache ();
   ~ssa_global_cache ();
   bool get_global_range (irange &r, tree name) const;
-  void set_global_range (tree name, const irange &r);
+  bool set_global_range (tree name, const irange &r);
   void clear_global_range (tree name);
   void clear ();
   void dump (FILE *f = stderr);
@@ -90,7 +90,7 @@ private:
 class ranger_cache : public gori_compute_cache
 {
 public:
-  ranger_cache (class range_query &q);
+  ranger_cache (class gimple_ranger &q);
   ~ranger_cache ();
 
   virtual void ssa_range_in_bb (irange &r, tree name, basic_block bb);
@@ -108,7 +108,9 @@ private:
   block_range_cache m_on_entry;
   void add_to_update (basic_block bb);
   void fill_block_cache (tree name, basic_block bb, basic_block def_bb);
-  void iterative_cache_update (tree name);
+  void propagate_cache (tree name);
+
+  void propagate_updated_value (tree name, basic_block bb);
 
   vec<basic_block> m_workback;
   vec<basic_block> m_update_list;
@@ -121,7 +123,7 @@ private:
   };
   bool push_poor_value (basic_block bb, tree name);
   vec<update_record> m_poor_value_list;
-  class range_query &query;
+  class gimple_ranger &query;
 };
 
 #endif // GCC_SSA_RANGE_CACHE_H

