[PATCH] Allow MEM_REF lhs on gimple_clobber_p stmts (PR c++/34949)

Jakub Jelinek jakub@redhat.com
Tue Apr 2 12:57:00 GMT 2013


On Tue, Apr 02, 2013 at 01:44:46PM +0200, Richard Biener wrote:
> > @@ -412,7 +412,8 @@ get_mem_ref_of_assignment (const gimple
> >  {
> >    gcc_assert (gimple_assign_single_p (assignment));
> >  
> > -  if (gimple_store_p (assignment))
> > +  if (gimple_store_p (assignment)
> > +      && !gimple_clobber_p (assignment))
> 
> hmm, maybe gimple_store_p should not consider a clobber a store?

gimple_store_p is young, so perhaps we can tweak its meaning.
By changing gimple_store_p we could drop this hunk, but then we would
perhaps want to add an early return for gimple_clobber_p
into will_be_nonconstant_predicate?  Your call.

> >      {
> >        ref->start = gimple_assign_lhs (assignment);
> >        *ref_is_store = true;
> > --- gcc/tree-ssa-dse.c.jj	2013-01-11 09:02:37.000000000 +0100
> > +++ gcc/tree-ssa-dse.c	2013-03-27 17:14:27.424466023 +0100
> > @@ -218,7 +218,10 @@ dse_optimize_stmt (gimple_stmt_iterator
> >    if (is_gimple_call (stmt) && gimple_call_fndecl (stmt))
> >      return;
> >  
> > -  if (gimple_has_volatile_ops (stmt))
> > +  /* Don't return early on *this_2(D) ={v} {CLOBBER}.  */
> > +  if (gimple_has_volatile_ops (stmt)
> > +      && (!gimple_clobber_p (stmt)
> > +	  || TREE_CODE (gimple_assign_lhs (stmt)) != MEM_REF))
> >      return;
> 
> But this only means the clobber was not considered as "dead"
> if there was a later store to the same location.  So, why would
> that change be needed?

Say with -O2:
struct A { virtual ~A (); char a[10]; };
struct B : public A { virtual ~B (); char b[10]; };
A::~A () { a[5] = 7; }
B::~B () { b[5] = 8; }
B b;
we end up in ~B with:
  this_3(D)->D.2229._vptr.A = &MEM[(void *)&_ZTV1B + 16B];
  this_3(D)->b[5] = 8;
  _6 = &this_3(D)->D.2229;
  MEM[(struct A *)this_3(D)]._vptr.A = &MEM[(void *)&_ZTV1A + 16B];
  MEM[(struct A *)this_3(D)].a[5] = 7;
  MEM[(struct A *)this_3(D)] ={v} {CLOBBER};
  *this_3(D) ={v} {CLOBBER};
and the intent of the above hunk (and the one immediately below this)
is that we DSE the first clobber (the one with the smaller clobbered size),
something that IMHO happens very frequently in C++ code and allows us to
better DSE the other stores.  There is no point in keeping the smaller
clobbers around; on the other hand, without the second hunk we'd remove
pretty much all the MEM_REF clobbers at the end of functions, which is
undesirable.

> >    if (is_gimple_assign (stmt))
> > @@ -228,6 +231,12 @@ dse_optimize_stmt (gimple_stmt_iterator
> >        if (!dse_possible_dead_store_p (stmt, &use_stmt))
> >  	return;
> >  
> > +      /* But only remove *this_2(D) ={v} {CLOBBER} if killed by
> > +	 another clobber stmt.  */
> > +      if (gimple_clobber_p (stmt)
> > +	  && !gimple_clobber_p (use_stmt))
> > +	return;
> 
> Ah.  Hmm, can that ever happen?  I presume the issue with non-decl
> clobbers is that we will never remove them, even if the storage
> becomes "dead"?

See above.

> > @@ -827,7 +827,26 @@ remove_unused_locals (void)
> >  	    if (gimple_clobber_p (stmt))
> >  	      {
> >  		tree lhs = gimple_assign_lhs (stmt);
> > +		bool remove = false;
> > +		/* Remove clobbers referencing unused vars, or clobbers
> > +		   with MEM_REF lhs referencing unused vars or uninitialized
> > +		   pointers.  */
> >  		if (TREE_CODE (lhs) == VAR_DECL && !is_used_p (lhs))
> > +		  remove = true;
> > +		else if (TREE_CODE (lhs) == MEM_REF)
> > +		  {
> > +		    ssa_op_iter iter;
> > +		    tree op;
> > +		    FOR_EACH_SSA_TREE_OPERAND (op, stmt, iter, SSA_OP_USE)
> 
> The only SSA use on a clobber stmt can possibly be TREE_OPERAND (lhs, 0)
> when lhs is a MEM_REF, no need to use FOR_EACH_SSA_TREE_OPERAND.
> 
> So just 
> 
> > +             else if (TREE_CODE (lhs) == MEM_REF
>                          && TREE_CODE (TREE_OPERAND (lhs, 0)) == SSA_NAME)
> ...

If MEM_REF[&MEM_REF[...], 0] and similar won't ever get through, perhaps.

> 
> > +		      if (SSA_NAME_VAR (op)
> > +			  && TREE_CODE (SSA_NAME_VAR (op)) == VAR_DECL
> > +			  && !is_used_p (SSA_NAME_VAR (op)))
> > +			remove = true;
> 
> I'm not sure this is really desired ... isn't a better check to do
> has_single_use (op)?  (which means it would be better suited to
> be handled in DCE?)

The above change fixed tons of ICEs: the fnsplit pass ignored clobber stmts
(desirable) when deciding what to split, and ended up copying the clobber
into the *.part.0 function, but because nothing but the clobber used
the this parameter, it wasn't passed.  So, we ended up first referencing
a default definition of a local this variable (just a useless stmt), and later
on, when tree-ssa-live.c ran, we ignored the clobber again for its decisions,
so that local this variable became !is_used_p and we ended up referencing
an in-free-list SSA_NAME, which triggered an assertion failure.  See a few
lines above this spot, where we similarly remove !is_used_p VAR_DECLs.
So, IMHO the !is_used_p code belongs in this spot; we could do the removal
of clobbers with a
SSA_NAME_IS_DEFAULT_DEF (op) && TREE_CODE (SSA_NAME_VAR (op)) != PARM_DECL
MEM_REF lhs in DCE (or in both DCE and here).
Without this code in tree-ssa-live.c we couldn't do:
          if (gimple_clobber_p (stmt))
            {
              have_local_clobbers = true;
              continue;
            }
in remove_unused_locals safely (i.e. don't bother with mark_all_vars_used
on the lhs of the clobber).

> Btw, due to propagation you can end up with MEM[&decl] = {CLOBBER};
> which should be handled the same as the VAR_DECL case.  Eventually
> just use lhs = get_base_address (gimple_assign_lhs (stmt)) here.

Sure, I've seen MEM_REF[&decl] frequently, but unless it is a whole-var
access rather than, say, a clobber of just a portion of the var, we want
to treat it as the patch does for DSE reasons, but not for variable
coalescing reasons during expansion.  Perhaps for MEM_REF[&decl, 0] with
the size of the access the same as decl's size, cfgexpand.c could treat it
as a clobber of the whole decl (but then, won't we have a non-MEM_REF
clobber there after it too, with DSE supposedly having already removed the
first MEM_REF clobber?).  I mean like adding
void bar (B &);
void
foo ()
{
  { B b; bar (b); } { B c; bar (c); } { B d; bar (d); }
}
to the above testcase.

	Jakub


