[PATCH] Improve VRP assert creation for loops

Richard Biener rguenther@suse.de
Thu Nov 7 09:44:00 GMT 2013


On Thu, 7 Nov 2013, Jakub Jelinek wrote:

> On Thu, Nov 07, 2013 at 09:48:46AM +0100, Richard Biener wrote:
> > > --- gcc/tree-vrp.c.jj	2013-11-06 08:48:01.000000000 +0100
> > > +++ gcc/tree-vrp.c	2013-11-06 09:32:19.205104029 +0100
> > > @@ -92,6 +92,42 @@ static sbitmap *live;
> > >  static bool
> > >  live_on_edge (edge e, tree name)
> > >  {
> > > +  if (live[e->dest->index] == NULL)
> > > +    {
> > 
> > I'd rather have live[] computed "correctly" than the following
> > which I think is a hack ... as you say the issue is ordering
> > (or rather that there isn't an "order" for CFG cycles unless
> > we want to iterate).  For loop BB order we explicitely handle
> > the latch, maybe just using a different order than
> > RPO order, with special-casing the latch, makes more sense?
> 
> But is there any order that would help?  Sure, we could say just tweak
> the rpo/bb_rpo arrays and put there latch bbs before any other bbs from the
> same loop, but that would only help us in calling find_assert_locations_1
> on the latch before other bbs in the loop.  But that call does nothing,
> and we'd need to special case handling of the live for the latch block
> anyway, just not in live_on_edge, but in find_assert_locations instead.
> 
> Is that what you are looking for?
> 
> Because without that special-casing of latch edges, you'd need to
> process the header rather than the latch first and propagate through
> DFS_BACK edges, but then you couldn't propagate through the other
> edges, and in this testcase the latch's live set still wouldn't be
> computed when processing the header, which is where the condition is.

I'm looking for an adjustment of the liveness computation - either by
changing the algorithm or by special-casing the latches.  For example,
with the following (untested, needs cleanup - the PHI processing can
be factored out of find_assert_locations_1 and reused):

@@ -5904,6 +5898,27 @@ find_assert_locations (void)
   for (i = 0; i < rpo_cnt; ++i)
     bb_rpo[rpo[i]] = i;
 
+  /* Pre-seed loop latch liveness from loop header PHI nodes.  Due to
+     the order we compute liveness and insert asserts we otherwise
+     fail to insert asserts into the loop latch.  */
+  loop_p loop;
+  loop_iterator li;
+  FOR_EACH_LOOP (li, loop, 0)
+    {
+      i = loop->latch->index;
+      live[i] = sbitmap_alloc (num_ssa_names);
+      bitmap_clear (live[i]);
+      for (gimple_stmt_iterator gsi = gsi_start_phis (loop->header);
+          !gsi_end_p (gsi); gsi_next (&gsi))
+       {
+         gimple phi = gsi_stmt (gsi);
+         if (virtual_operand_p (gimple_phi_result (phi)))
+           continue;
+         for (unsigned j = 0; j < gimple_phi_num_args (phi); ++j)
+           if (TREE_CODE (gimple_phi_arg_def (phi, j)) == SSA_NAME)
+             bitmap_set_bit (live[i],
+                             SSA_NAME_VERSION (gimple_phi_arg_def (phi, j)));
+       }
+    }


