This is the mail archive of the mailing list for the GCC project.


[patch] tree-cfg.c: Speed up thread_jumps - Part 4


Attached is a patch to speed up thread_jumps.

Currently, cleanup_tree_cfg calls thread_jumps in a loop, iterating
until thread_jumps finds nothing further to do.

It turns out we need to repeat thread_jumps only when a new forwarder
block arises during the execution of thread_jumps as described in


The patch avoids unnecessary iterations of thread_jumps by restricting
the condition for iterations, which is exactly when a new forwarder
block arises.

Note that thread_jumps keeps "bb_ann (bb)->forwardable" up to date as
it runs, so we do not need to recompute the forwardable marks on each
iteration.  That is why

   FOR_EACH_BB (bb)
     bb_ann (bb)->forwardable = tree_forwarder_block_p (bb);

is placed outside of the main loop of thread_jumps.

Here is the timing for 10 runs of "cc1 -quiet -O2 -o /dev/null fold-const.i"

         original   patched               patched*
time:     181.167   181.214 (0.025% up)   180.809 (0.197% down)
tfbp:      193646    200018 (3.185% up)    126046 (33.796% down)

Here, "original" is GCC with my cleanup patch posted earlier.
"patched" refers to mainline GCC with this patch applied.  "patched*"
refers to mainline GCC with this patch applied but without the new
gcc_assert introduced in cleanup_tree_cfg.  "tfbp" refers to the
number of calls to tree_forwarder_block_p.

Tested on i686-pc-linux-gnu on top of

OK to apply?

To make the patch readable, I used a goto instead of a do-while loop.
If you could preapprove a patch to convert the goto to a do-while
loop, that would be greatly appreciated.

Kazu Hirata

2004-10-13  Kazu Hirata  <>

	* tree-cfg.c (cleanup_tree_cfg): Don't iterate on thread_jumps.
	(thread_jumps): Iterate until no new forwarder block arises.

diff -u tree-cfg.c tree-cfg.c
--- tree-cfg.c	8 Oct 2004 14:09:12 -0000
+++ tree-cfg.c	13 Oct 2004 01:54:39 -0000
@@ -718,18 +718,14 @@
   retval = cleanup_control_flow ();
   retval |= delete_unreachable_blocks ();
-  /* thread_jumps sometimes leaves further transformation
-     opportunities for itself, so iterate on it until nothing
-     changes.  */
-  while (thread_jumps ())
-    retval = true;
+  retval |= thread_jumps ();
   if (retval)
       gcc_assert (!cleanup_control_flow ());
       gcc_assert (!delete_unreachable_blocks ());
+      gcc_assert (!thread_jumps ());
@@ -3766,10 +3762,13 @@
     tree phi;
   int arg;
   bool retval = false;
+  bool rerun;
   FOR_EACH_BB (bb)
     bb_ann (bb)->forwardable = tree_forwarder_block_p (bb);
+ restart:
+  rerun = false;
   FOR_EACH_BB (bb)
       edge_iterator ei;
@@ -3931,8 +3930,10 @@
 	 two arms eventually merge without any intervening
 	 statements.  */
       if (this_jump_threaded && tree_forwarder_block_p (bb))
-	bb_ann (bb)->forwardable = true;
+	bb_ann (bb)->forwardable = rerun = true;
+  if (rerun)
+    goto restart;
   return retval;
