[PATCH] More jump threading cleanups

Jeff Law law@redhat.com
Mon Oct 21 16:32:00 GMT 2013


Two cleanups I noticed late last week while working on the general case 
FSA optimization.

First, we were not cancelling jump threads through joiner blocks when 
another edge on the jump threading path already had a recorded jump 
thread opportunity.  We were not nearly as aggressive at pruning as we 
should have been, resulting in useless block duplication.
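The pruning rule can be sketched as a simple scan over the path: reject the joiner-block thread if any edge on the path already carries a recorded jump thread (in GCC, a non-NULL e->aux).  This is an illustrative stand-in, not the real GCC structures; edge_s and path_has_recorded_thread are hypothetical names.

```c
#include <stddef.h>

/* Hypothetical stand-in for GCC's edge; only the aux field
   (which holds a recorded jump thread path, if any) matters here.  */
struct edge_s { void *aux; };

/* Return 1 if some edge on PATH already has a recorded jump thread,
   in which case the joiner-block thread should be cancelled.  */
static int
path_has_recorded_thread (struct edge_s **path, unsigned len)
{
  for (unsigned j = 0; j < len; j++)
    if (path[j]->aux != NULL)
      return 1;
  return 0;
}
```

Only when the scan runs off the end of the path (no edge claimed) is the path attached to its starting edge, as in the mark_threaded_blocks hunk below.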

Second, in the general form of the FSA optimization, we can have a jump 
threading path which starts with a join point and then traverses a 
normal jump threading block (i.e., one with side effects and a 
compile-time determinable out-edge).  To facilitate these opportunities, 
we want to call thread_through_normal_block after threading through a 
joiner block.  Thus thread_through_normal_block needs to accept the 
bitmap of blocks we have already visited and check it appropriately.
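The new rejection test in thread_through_normal_block amounts to: a threadable jump to DEST is usable only if DEST exists, is not the block we are threading through, and is not already on the path.  A minimal sketch, using a single machine word as a stand-in for GCC's bitmap of basic-block indices (dest_is_threadable, visited_set and the block numbers are illustrative, not GCC API):

```c
/* Hypothetical one-word stand-in for GCC's bitmap of visited
   basic-block indices (assumes indices below 64).  */
typedef unsigned long long visited_set;

static void set_visited (visited_set *v, unsigned idx) { *v |= 1ULL << idx; }
static int  was_visited (visited_set v, unsigned idx)  { return (v >> idx) & 1; }

/* Mirror of the patched check: DEST_IDX < 0 models a computed jump to
   an absolute address (DEST == NULL), CUR_IDX is the block being
   threaded through (e->dest).  */
static int
dest_is_threadable (int dest_idx, int cur_idx, visited_set visited)
{
  if (dest_idx < 0                        /* no known destination */
      || dest_idx == cur_idx              /* trivial self-thread */
      || was_visited (visited, dest_idx)) /* already on the path */
    return 0;
  return 1;
}
```

The caller seeds the set with e->src and e->dest before the first call, matching the new setup in thread_across_edge.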

Bootstrapped & regression tested on x86_64-unknown-linux-gnu.  Installed 
on trunk.


diff --git a/gcc/ChangeLog b/gcc/ChangeLog
index c5b794e..54298e4 100644
--- a/gcc/ChangeLog
+++ b/gcc/ChangeLog
@@ -1,3 +1,15 @@
+2013-10-21  Jeff Law  <law@redhat.com>
+
+	* tree-ssa-threadedge.c (thread_through_normal_block): New argument VISITED.
+	Remove VISITED as a local variable.  When we have a threadable jump, verify
+	the destination of the jump has not been visited.
+	(thread_across_edge): Allocate VISITED bitmap once at function scope and
+	use it throughout.  Make sure to set appropriate bits in VISITED for E (start
+	of jump thread path).
+
+	* tree-ssa-threadupdate.c (mark_threaded_blocks): Reject threading through
+	a joiner if any edge on the path has a recorded jump thread.
+
 2013-10-21  Ian Lance Taylor  <iant@google.com>
 
 	* doc/invoke.texi (Optimize Options): For -fno-toplevel-reorder,
diff --git a/gcc/tree-ssa-threadedge.c b/gcc/tree-ssa-threadedge.c
index f567557..ebd93cb 100644
--- a/gcc/tree-ssa-threadedge.c
+++ b/gcc/tree-ssa-threadedge.c
@@ -883,7 +883,8 @@ thread_through_normal_block (edge e,
 			     bool handle_dominating_asserts,
 			     vec<tree> *stack,
 			     tree (*simplify) (gimple, gimple),
-			     vec<jump_thread_edge *> *path)
+			     vec<jump_thread_edge *> *path,
+			     bitmap visited)
 {
   /* If E is a backedge, then we want to verify that the COND_EXPR,
      SWITCH_EXPR or GOTO_EXPR at the end of e->dest is not affected
@@ -922,11 +923,10 @@ thread_through_normal_block (edge e,
 	{
 	  edge taken_edge = find_taken_edge (e->dest, cond);
 	  basic_block dest = (taken_edge ? taken_edge->dest : NULL);
-	  bitmap visited;
 
 	  /* DEST could be NULL for a computed jump to an absolute
 	     address.  */
-	  if (dest == NULL || dest == e->dest)
+	  if (dest == NULL || dest == e->dest || bitmap_bit_p (visited, dest->index))
 	    return false;
 
           jump_thread_edge *x
@@ -944,7 +944,6 @@ thread_through_normal_block (edge e,
 	    {
 	      /* We don't want to thread back to a block we have already
  		 visited.  This may be overly conservative.  */
-	      visited = BITMAP_ALLOC (NULL);
 	      bitmap_set_bit (visited, dest->index);
 	      bitmap_set_bit (visited, e->dest->index);
 	      thread_around_empty_blocks (taken_edge,
@@ -953,7 +952,6 @@ thread_through_normal_block (edge e,
 					  simplify,
 					  visited,
 					  path);
-	      BITMAP_FREE (visited);
 	    }
 	  return true;
 	}
@@ -995,15 +993,21 @@ thread_across_edge (gimple dummy_cond,
 		    vec<tree> *stack,
 		    tree (*simplify) (gimple, gimple))
 {
+  bitmap visited = BITMAP_ALLOC (NULL);
+
   stmt_count = 0;
 
   vec<jump_thread_edge *> *path = new vec<jump_thread_edge *> ();
+  bitmap_clear (visited);
+  bitmap_set_bit (visited, e->src->index);
+  bitmap_set_bit (visited, e->dest->index);
   if (thread_through_normal_block (e, dummy_cond, handle_dominating_asserts,
-				   stack, simplify, path))
+				   stack, simplify, path, visited))
     {
       propagate_threaded_block_debug_into (path->last ()->e->dest,
 					   e->dest);
       remove_temporary_equivalences (stack);
+      BITMAP_FREE (visited);
       register_jump_thread (path);
       return;
     }
@@ -1030,7 +1034,6 @@ thread_across_edge (gimple dummy_cond,
     edge taken_edge;
     edge_iterator ei;
     bool found;
-    bitmap visited = BITMAP_ALLOC (NULL);
 
     /* Look at each successor of E->dest to see if we can thread through it.  */
     FOR_EACH_EDGE (taken_edge, ei, e->dest->succs)
diff --git a/gcc/tree-ssa-threadupdate.c b/gcc/tree-ssa-threadupdate.c
index e791269..737a6a2 100644
--- a/gcc/tree-ssa-threadupdate.c
+++ b/gcc/tree-ssa-threadupdate.c
@@ -1244,7 +1244,7 @@ mark_threaded_blocks (bitmap threaded_blocks)
      When this occurs ignore the jump thread request with the joiner
      block.  It's totally subsumed by the simpler jump thread request.
 
-     This results in less block copying, simpler CFGs.  More improtantly,
+     This results in less block copying, simpler CFGs.  More importantly,
      when we duplicate the joiner block, B, in this case we will create
      a new threading opportunity that we wouldn't be able to optimize
      until the next jump threading iteration.
@@ -1263,20 +1263,30 @@ mark_threaded_blocks (bitmap threaded_blocks)
 	}
     }
 
-
-  /* Now iterate again, converting cases where we threaded through
-     a joiner block, but ignoring those where we have already
-     threaded through the joiner block.  */
+  /* Now iterate again, converting cases where we want to thread
+     through a joiner block, but only if no other edge on the path
+     already has a jump thread attached to it.  */
   for (i = 0; i < paths.length (); i++)
     {
       vec<jump_thread_edge *> *path = paths[i];
 
-      if ((*path)[1]->type == EDGE_COPY_SRC_JOINER_BLOCK
-	  && (*path)[0]->e->aux == NULL)
+
+      if ((*path)[1]->type == EDGE_COPY_SRC_JOINER_BLOCK)
 	{
-	  edge e = (*path)[0]->e;
-	  e->aux = path;
-	  bitmap_set_bit (tmp, e->dest->index);
+	  unsigned int j;
+
+	  for (j = 0; j < path->length (); j++)
+	    if ((*path)[j]->e->aux != NULL)
+	      break;
+
+	  /* If we iterated through the entire path without exiting the loop,
+	     then we are good to go, attach the path to the starting edge.  */
+	  if (j == path->length ())
+	    {
+	      edge e = (*path)[0]->e;
+	      e->aux = path;
+	      bitmap_set_bit (tmp, e->dest->index);
+	    }
 	}
     }
 

