This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.



[patch] Reenable LOOP_HEADER prediction


Hello,

The predictor stating that the copied condition for the first iteration
of a loop is usually true became unused when loop header copying on
RTL was removed.  This patch makes the tree-level header copying use
it instead.
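
For illustration only (this snippet is not part of the patch and the
identifiers are made up), loop header copying turns a test-at-the-top
loop into a guarded do-while loop; the copied guard in front of the
loop is the condition that PRED_LOOP_HEADER now predicts as taken:

  /* Before header copying: the exit test runs at the top of every
     iteration, and entering the loop costs an extra jump.  */
  while (i < n)
    {
      use (i);
      i++;
    }

  /* After header copying: the test is duplicated in front of the
     loop body.  Whenever the loop is entered at all, this copied
     condition is almost always true, which is what the
     PRED_LOOP_HEADER heuristic expresses.  */
  if (i < n)
    do
      {
        use (i);
        i++;
      }
    while (i < n);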

Bootstrapped & regtested on i686.

Zdenek

	* tree-ssa-loop-ch.c (predict_entry_edges_taken): New function.
	(copy_loop_headers): Call predict_entry_edges_taken.

Index: tree-ssa-loop-ch.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/tree-ssa-loop-ch.c,v
retrieving revision 2.14
diff -c -3 -p -r2.14 tree-ssa-loop-ch.c
*** tree-ssa-loop-ch.c	18 Mar 2005 20:15:06 -0000	2.14
--- tree-ssa-loop-ch.c	29 Mar 2005 08:23:08 -0000
*************** do_while_loop_p (struct loop *loop)
*** 116,121 ****
--- 116,147 ----
    return true;
  }
  
+ /* Predict the edges from COPIED_BBS (N_BBS blocks) that lead to
+    entering the LOOP as taken, by PRED_LOOP_HEADER.  */
+ 
+ static void
+ predict_entry_edges_taken (basic_block copied_bbs[], int n_bbs,
+ 			   struct loop *loop)
+ {
+   int i;
+   edge e;
+ 
+   for (i = 0; i < n_bbs - 1; i++)
+     {
+       e = EDGE_SUCC (copied_bbs[i], 0);
+       if (e->dest != copied_bbs[i + 1])
+ 	e = EDGE_SUCC (copied_bbs[i], 1);
+       gcc_assert (e->dest == copied_bbs[i + 1]);
+       predict_edge_def (e, PRED_LOOP_HEADER, TAKEN);
+     }
+ 
+   e = EDGE_SUCC (copied_bbs[n_bbs - 1], 0);
+   if (e->dest != loop->header)
+     e = EDGE_SUCC (copied_bbs[n_bbs - 1], 1);
+   gcc_assert (e->dest == loop->header);
+   predict_edge_def (e, PRED_LOOP_HEADER, TAKEN);
+ }
+ 
  /* For all loops, copy the condition at the end of the loop body in front
     of the loop.  This is beneficial since it increases efficiency of
     code motion optimizations.  It also saves one jump on entry to the loop.  */
*************** copy_loop_headers (void)
*** 211,216 ****
--- 237,245 ----
  	  continue;
  	}
  
+       /* Predict the edges that lead to the loop as taken.  */
+       predict_entry_edges_taken (copied_bbs, n_bbs, loop);
+ 
        /* Fix profiling info.  Scaling is done in gcov_type arithmetic to
  	 avoid losing information; this is slow, but is done at most
  	 once per loop.  We special case 0 to avoid division by 0;

