[RFC PATCH] Optimize ASAN_CHECK checks

Yury Gribov <y.gribov@samsung.com>
Wed Nov 12 09:26:00 GMT 2014


On 11/11/2014 08:42 PM, Jakub Jelinek wrote:
> On Wed, Nov 05, 2014 at 11:50:20AM +0100, Jakub Jelinek wrote:
>> On Wed, Nov 05, 2014 at 11:29:19AM +0100, Marek Polacek wrote:
>>> On Wed, Nov 05, 2014 at 12:54:37PM +0300, Yury Gribov wrote:
>>>> Are you going to work on ASan soon?  I could rebase my patches on top of
>>>> Marek's infrastructure.
>>>
>>> I'm not going to work on ASan today or tomorrow, but it'd be nice to
>>> get this ASan optimization in during this stage1.
>>>
>>> So if you can rebase your patch, I think that will be appreciated.
>>
>> Note, the algorithm we were discussing with Honza for the
>> "is there any possibility of a freeing call on the path between a dominating
>> and dominated ASAN_CHECK"
>> problem was to compute it lazily; have flags for asan per-bb:
>> 1) bb might contain a !nonfreeing_call_p
>> 2) there is a bb with flag 1) set in some path between imm(bb) and bb
>> 3) flag whether 2) has been computed already
>> 4) some temporary "being visited" flag
>> and the algorithm:
>> 1) when walking a bb, if you encounter a !nonfreeing_call_p call, either
>>     immediately nuke previously recorded ASAN_CHECKs from the current bb,
>>     or use gimple_uids for lazily doing that; but in any case, record
>>     the flag 1) for the current bb
>> 2) if you are considering ASAN_CHECK in a different bb than ASAN_CHECK
>>     it is dominating, check the 2) flag on the current bb, then on
>>     get_immediate_dominator (bb) etc. until you reach the bb with the
>>     dominating bb, if the 2) flag is set on any of them, don't optimize;
>>     if the 2) flag is not computed on any of these (i.e. flag 3) unset),
>>     then compute it recursively; set the 4) flag on a bb, for incoming
>>     edges if the src bb is not the imm(bb) of the original bb, and does
>>     not have 4) flag set: if it has 1) set, use 1), if it has 3) flag set,
>>     use 2), otherwise recurse (and OR in the result); unset 4) flag before
>>     returning; or so.
>
> Here is a patch (partly written by Marek) implementing that algorithm,
> only very lightly tested beyond
> make -j16 -k check RUNTESTFLAGS="--target_board=\{-m32,-m64\} asan.exp ubsan.exp tsan.exp"

Yeah, it would be interesting to see how many checks it removes from an
Asan-bootstrapped GCC.

> Honza, do you think you could look over the algorithm to see if it is sane?
> I couldn't decide if it is ok to cache negative results for the
> imm_dom_path_with_freeing_call_p flag (i.e. set
> imm_dom_path_with_freeing_call_computed_p and keep
> imm_dom_path_with_freeing_call_p cleared) for basic blocks encountered
> during the recursion, if they have different immediate dominator than
> the bb I've called the recursive function for originally, and during
> the search some being_visited_p flag has been encountered.
>
> I've been playing with testcase like:
>
> int b[64], b2[128];
> void bar (void);
>
> int
> foo (int *p, int *q, int x, int y)
> {
>    int v = *(char *) p;
>    __builtin_memcpy (b, b2, 17);
>    int w = (*p)++;
>    if (x)
>      *p = 6;
>    int z = *q;
>    if (y)
>      bar ();
>    *q = 8;
>    return v + w + z;
> }
>
> int
> baz (int *p, int x)
> {
>    int v = *p;
>    int i, j = 0;
>    for (i = 0; i < 64; i++)
>      if (b[i]++ < 20)
>        b2[i] += i < 76 ? ({asm ("" : "+r" (j)); 0;}) : i * 4;
>    v += *p;
>    for (i = 0; i < 64; i++)
>      if (b[i]++ < 20)
>        b2[i] += i < 76 ? ({asm ("" : "+r" (j)); 0;}) : i * 4;
>      else if (b2[i] > 17)
>        bar ();
>    v += *p;
>    return v;
> }
>
> but guess that isn't sufficient for proper test coverage.
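
FWIW for test coverage a dump-scanning dejagnu test might be a start.  A
rough sketch (untested; whether the dominated check really gets removed
at -O2 would need verifying against the actual sanopt dump):

/* { dg-do compile } */
/* { dg-options "-fsanitize=address -O2 -fdump-tree-sanopt-details" } */

int
foo (int *p)
{
  int v = *p;	/* This check of *p dominates the one below.  */
  *p = v + 1;	/* Same address, same length, no freeing call in
		   between: the check here should be removable.  */
  return v;
}

/* { dg-final { scan-tree-dump "Optimizing out" "sanopt" } } */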
>
> --- gcc/asan.c.jj	2014-11-11 00:06:18.000000000 +0100
> +++ gcc/asan.c	2014-11-11 11:42:29.327583317 +0100
> @@ -299,15 +299,6 @@ static GTY(()) tree shadow_ptr_types[2];
>   /* Decl for __asan_option_detect_stack_use_after_return.  */
>   static GTY(()) tree asan_detect_stack_use_after_return;
>
> -/* Various flags for Asan builtins.  */
> -enum asan_check_flags
> -{
> -  ASAN_CHECK_STORE = 1 << 0,
> -  ASAN_CHECK_SCALAR_ACCESS = 1 << 1,
> -  ASAN_CHECK_NON_ZERO_LEN = 1 << 2,
> -  ASAN_CHECK_LAST = 1 << 3
> -};
> -
>   /* Hashtable support for memory references used by gimple
>      statements.  */
>
> --- gcc/asan.h.jj	2014-11-11 00:06:18.000000000 +0100
> +++ gcc/asan.h	2014-11-11 11:43:24.999578043 +0100
> @@ -59,6 +59,15 @@ extern alias_set_type asan_shadow_set;
>   #define ASAN_STACK_FRAME_MAGIC		0x41b58ab3
>   #define ASAN_STACK_RETIRED_MAGIC	0x45e0360e
>
> +/* Various flags for Asan builtins.  */
> +enum asan_check_flags
> +{
> +  ASAN_CHECK_STORE = 1 << 0,
> +  ASAN_CHECK_SCALAR_ACCESS = 1 << 1,
> +  ASAN_CHECK_NON_ZERO_LEN = 1 << 2,
> +  ASAN_CHECK_LAST = 1 << 3
> +};
> +
>   /* Return true if DECL should be guarded on the stack.  */
>
>   static inline bool
> --- gcc/gimple.c.jj	2014-11-11 00:06:21.000000000 +0100
> +++ gcc/gimple.c	2014-11-11 10:02:17.385736225 +0100
> @@ -2538,6 +2538,9 @@ nonfreeing_call_p (gimple call)
>   	default:
>   	  return true;
>         }
> +  else if (gimple_call_internal_p (call)
> +	   && gimple_call_flags (call) & ECF_LEAF)
> +    return true;

Nice.

>
>     return false;
>   }
> --- gcc/sanopt.c.jj	2014-11-11 09:13:36.698280115 +0100
> +++ gcc/sanopt.c	2014-11-11 18:07:17.913539517 +0100
> @@ -49,6 +49,7 @@ along with GCC; see the file COPYING3.
>   #include "langhooks.h"
>   #include "ubsan.h"
>   #include "params.h"
> +#include "tree-ssa-operands.h"
>
>
>   /* This is used to carry information about basic blocks.  It is
> @@ -56,7 +57,29 @@ along with GCC; see the file COPYING3.
>
>   struct sanopt_info
>   {
> -  /* True if this BB has been visited.  */
> +  /* True if this BB might call (directly or indirectly) free/munmap
> +     or similar operation.  */

Frankly, I think this is more about memory poisoning status than about freeing.

> +  bool has_freeing_call_p;
> +
> +  /* True if HAS_FREEING_CALL_P flag has been computed.  */
> +  bool has_freeing_call_computed_p;
> +
> +  /* True if there is a block with HAS_FREEING_CALL_P flag set
> +     on any path between imm(BB) and BB.  */
> +  bool imm_dom_path_with_freeing_call_p;
> +
> +  /* True if IMM_DOM_PATH_WITH_FREEING_CALL_P has been computed.  */
> +  bool imm_dom_path_with_freeing_call_computed_p;
> +
> +  /* Number of possibly freeing calls encountered in this bb
> +     (so far).  */
> +  uint64_t freeing_call_events;
> +
> +  /* True if BB is currently being visited during computation
> +     of IMM_DOM_PATH_WITH_FREEING_CALL_P flag.  */
> +  bool being_visited_p;
> +
> +  /* True if this BB has been visited in the dominator walk.  */
>     bool visited_p;
>   };
>
> @@ -69,11 +92,307 @@ struct sanopt_ctx
>        a vector of UBSAN_NULL call statements that check this pointer.  */
>     hash_map<tree, auto_vec<gimple> > null_check_map;
>
> +  /* This map maps a pointer (the second argument of ASAN_CHECK) to
> +     a vector of ASAN_CHECK call statements that check the access.  */
> +  hash_map<tree, auto_vec<gimple> > asan_check_map;

How about using a traits class like the one below for both maps?

struct tree_map_traits : default_hashmap_traits
{
   static inline hashval_t hash (const_tree ref)
     {
       return iterative_hash_expr (ref, 0);
     }

   static inline bool equal_keys (const_tree ref1, const_tree ref2)
     {
       return operand_equal_p (ref1, ref2, 0);
     }
};

Also the hash_map probably deserves a typedef.
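
E.g. (the typedef name is just a suggestion):

typedef hash_map<tree, auto_vec<gimple>, tree_map_traits> sanopt_tree_map;

so both null_check_map and asan_check_map would then also unify keys
that are equal but not pointer-identical.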

> +
>     /* Number of IFN_ASAN_CHECK statements.  */
>     int asan_num_accesses;
>   };
>
>
> +/* Return true if there might be any call to free/munmap operation
> +   on any path in between DOM (which should be imm(BB)) and BB.  */
> +
> +static bool
> +imm_dom_path_with_freeing_call (basic_block bb, basic_block dom)

Perhaps some parameter to bound the search in pathological cases?
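
E.g. a hypothetical budget threaded through the recursion, something
like (untested, body elided):

static bool
imm_dom_path_with_freeing_call (basic_block bb, basic_block dom,
				int *budget)
{
  /* Bail out conservatively once the walk has become too expensive;
     returning true here just means "don't optimize".  */
  if (--*budget < 0)
    return true;
  /* ... the rest as in the patch, passing BUDGET on to the
     recursive calls ...  */
}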

> +{
> +  sanopt_info *info = (sanopt_info *) bb->aux;
> +  edge e;
> +  edge_iterator ei;
> +
> +  if (info->imm_dom_path_with_freeing_call_computed_p)
> +    return info->imm_dom_path_with_freeing_call_p;
> +
> +  info->being_visited_p = true;
> +
> +  FOR_EACH_EDGE (e, ei, bb->preds)
> +    {
> +      sanopt_info *pred_info = (sanopt_info *) e->src->aux;
> +
> +      if (e->src == dom)
> +	continue;
> +
> +      if ((pred_info->imm_dom_path_with_freeing_call_computed_p
> +	  && pred_info->imm_dom_path_with_freeing_call_p)
> +	  || (pred_info->has_freeing_call_computed_p
> +	      && pred_info->has_freeing_call_p))
> +	{
> +	  info->imm_dom_path_with_freeing_call_computed_p = true;
> +	  info->imm_dom_path_with_freeing_call_p = true;
> +	  info->being_visited_p = false;
> +	  return true;
> +	}
> +    }
> +
> +  FOR_EACH_EDGE (e, ei, bb->preds)
> +    {
> +      sanopt_info *pred_info = (sanopt_info *) e->src->aux;
> +
> +      if (e->src == dom)
> +	continue;
> +
> +      if (pred_info->has_freeing_call_computed_p)
> +	continue;
> +
> +      gimple_stmt_iterator gsi;
> +      for (gsi = gsi_start_bb (e->src); !gsi_end_p (gsi); gsi_next (&gsi))
> +	{
> +	  gimple stmt = gsi_stmt (gsi);
> +
> +	  if (is_gimple_call (stmt) && !nonfreeing_call_p (stmt))
> +	    {
> +	      pred_info->has_freeing_call_p = true;
> +	      break;
> +	    }
> +	}
> +
> +      pred_info->has_freeing_call_computed_p = true;
> +      if (pred_info->has_freeing_call_p)
> +	{
> +	  info->imm_dom_path_with_freeing_call_computed_p = true;
> +	  info->imm_dom_path_with_freeing_call_p = true;
> +	  info->being_visited_p = false;
> +	  return true;
> +	}
> +    }
> +
> +  FOR_EACH_EDGE (e, ei, bb->preds)
> +    {
> +      if (e->src == dom)
> +	continue;
> +
> +      basic_block src;
> +      for (src = e->src; src != dom; )
> +	{
> +	  sanopt_info *pred_info = (sanopt_info *) src->aux;
> +	  if (pred_info->being_visited_p)
> +	    break;
> +	  basic_block imm = get_immediate_dominator (CDI_DOMINATORS, src);
> +	  if (imm_dom_path_with_freeing_call (src, imm))
> +	    {
> +	      info->imm_dom_path_with_freeing_call_computed_p = true;
> +	      info->imm_dom_path_with_freeing_call_p = true;
> +	      info->being_visited_p = false;
> +	      return true;
> +	    }
> +	  src = imm;
> +	}
> +    }
> +
> +  info->imm_dom_path_with_freeing_call_computed_p = true;
> +  info->imm_dom_path_with_freeing_call_p = false;
> +  info->being_visited_p = false;
> +  return false;
> +}
> +
> +/* Optimize away redundant UBSAN_NULL calls.  */
> +
> +static bool
> +maybe_optimize_ubsan_null_ifn (struct sanopt_ctx *ctx, gimple stmt)
> +{
> +  gcc_assert (gimple_call_num_args (stmt) == 3);
> +  tree ptr = gimple_call_arg (stmt, 0);
> +  tree cur_align = gimple_call_arg (stmt, 2);
> +  gcc_assert (TREE_CODE (cur_align) == INTEGER_CST);
> +  bool remove = false;
> +
> +  auto_vec<gimple> &v = ctx->null_check_map.get_or_insert (ptr);
> +  if (v.is_empty ())
> +    {
> +      /* For this PTR we don't have any UBSAN_NULL stmts recorded, so there's
> +	 nothing to optimize yet.  */
> +      v.safe_push (stmt);
> +      return false;
> +    }
> +
> +  /* We already have recorded a UBSAN_NULL check for this pointer. Perhaps we
> +     can drop this one.  But only if this check doesn't specify stricter
> +     alignment.  */
> +  while (!v.is_empty ())
> +    {
> +      gimple g = v.last ();
> +      /* Remove statements for BBs that have been already processed.  */
> +      sanopt_info *si = (sanopt_info *) gimple_bb (g)->aux;
> +      if (si->visited_p)
> +	{
> +	  v.pop ();
> +	  continue;
> +	}
> +
> +      /* At this point we shouldn't have any statements that aren't dominating
> +	 the current BB.  */
> +      tree align = gimple_call_arg (g, 2);
> +      int kind = tree_to_shwi (gimple_call_arg (g, 1));
> +      /* If this is a NULL pointer check where we had segv anyway, we can
> +	 remove it.  */
> +      if (integer_zerop (align)
> +	  && (kind == UBSAN_LOAD_OF
> +	      || kind == UBSAN_STORE_OF
> +	      || kind == UBSAN_MEMBER_ACCESS))
> +	remove = true;
> +      /* Otherwise remove the check in non-recovering mode, or if the
> +	 stmts have the same location.  */
> +      else if (integer_zerop (align))
> +	remove = (flag_sanitize_recover & SANITIZE_NULL) == 0
> +		 || flag_sanitize_undefined_trap_on_error
> +		 || gimple_location (g) == gimple_location (stmt);
> +      else if (tree_int_cst_le (cur_align, align))
> +	remove = (flag_sanitize_recover & SANITIZE_ALIGNMENT) == 0
> +		 || flag_sanitize_undefined_trap_on_error
> +		 || gimple_location (g) == gimple_location (stmt);
> +      if (!remove && gimple_bb (g) == gimple_bb (stmt)
> +	  && tree_int_cst_compare (cur_align, align) == 0)
> +	v.pop ();
> +      break;
> +    }
> +
> +  if (!remove)
> +    v.safe_push (stmt);
> +  return remove;
> +}
> +
> +/* Optimize away redundant ASAN_CHECK calls.  */
> +
> +static bool
> +maybe_optimize_asan_check_ifn (struct sanopt_ctx *ctx, gimple stmt)
> +{
> +  gcc_assert (gimple_call_num_args (stmt) == 4);
> +  int flags = tree_to_uhwi (gimple_call_arg (stmt, 0));
> +  tree ptr = gimple_call_arg (stmt, 1);
> +  tree len = gimple_call_arg (stmt, 2);
> +  basic_block bb = gimple_bb (stmt);
> +  sanopt_info *info = (sanopt_info *) bb->aux;
> +
> +  if (TREE_CODE (len) != INTEGER_CST
> +      && (flags & ASAN_CHECK_NON_ZERO_LEN) == 0)
> +    return false;
> +  if (integer_zerop (len))
> +    return false;
> +
> +  gimple_set_uid (stmt, info->freeing_call_events);
> +
> +  auto_vec<gimple> &v = ctx->asan_check_map.get_or_insert (ptr);
> +  if (v.is_empty ())
> +    {
> +      /* For this PTR we don't have any ASAN_CHECK stmts recorded, so there's
> +	 nothing to optimize yet.  */
> +      v.safe_push (stmt);
> +      return false;
> +    }
> +
> +  /* We already have recorded an ASAN_CHECK check for this pointer.  Perhaps
> +     we can drop this one.  But only if this check doesn't specify larger
> +     size.  */
> +  while (!v.is_empty ())
> +    {
> +      gimple g = v.last ();
> +      /* Remove statements for BBs that have been already processed.  */
> +      sanopt_info *si = (sanopt_info *) gimple_bb (g)->aux;
> +      if (si->visited_p)
> +	v.pop ();
> +      else
> +	break;
> +    }

This part seems to be duplicated across maybe_optimize_ubsan_null_ifn
and maybe_optimize_asan_check_ifn.  Perhaps factor it out into some
maybe_get_dominating_check API?
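
Something like the following (untested); then both loops above reduce
to a single call:

static gimple
maybe_get_dominating_check (auto_vec<gimple> &v)
{
  while (!v.is_empty ())
    {
      gimple g = v.last ();
      sanopt_info *si = (sanopt_info *) gimple_bb (g)->aux;
      if (si->visited_p)
	/* GIMPLE_BB (G) has already been left, so G no longer
	   dominates the current bb; drop it.  */
	v.pop ();
      else
	return g;
    }
  return NULL;
}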

> +
> +  unsigned int i;
> +  gimple g;
> +  gimple to_pop = NULL;
> +  bool remove = false;
> +  basic_block last_bb = bb;
> +  bool cleanup = false;
> +
> +  FOR_EACH_VEC_ELT_REVERSE (v, i, g)
> +    {
> +      basic_block gbb = gimple_bb (g);
> +      sanopt_info *si = (sanopt_info *) gbb->aux;
> +      if (gimple_uid (g) < si->freeing_call_events)
> +	{
> +	  /* If there is a potentially freeing call after g in gbb, we should
> +	     remove it from the vector; it can't be used in the optimization.  */
> +	  cleanup = true;
> +	  continue;
> +	}
> +
> +      if (TREE_CODE (len) != INTEGER_CST)
> +	{
> +	  /* If there already are some stmts for ptr in the current bb
> +	     that aren't followed by a freeing call event, no need to
> +	     insert anything.  Non-constant len is treated just as
> +	     length 1.  */
> +	  if (gbb == bb)
> +	    return false;
> +	  break;
> +	}
> +
> +      tree glen = gimple_call_arg (g, 2);
> +      /* If glen is not integer, we'd have added it to the vector only if
> +	 ASAN_CHECK_NON_ZERO_LEN flag is set, so treat it as length 1.  */

Frankly, I don't think we use ASAN_CHECK_NON_ZERO_LEN anymore (it's only
set for trivial cases now).  Perhaps we should just nuke it from asan.c
and sanopt.c altogether?

> +      if (TREE_CODE (glen) != INTEGER_CST)

That's a matter of taste, but why not the higher-level tree_fits_shwi_p
and tree_to_shwi?
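
I.e. something along the lines of (just a sketch):

      HOST_WIDE_INT gsize
	= tree_fits_shwi_p (glen) ? tree_to_shwi (glen) : 1;

with the subsequent tree_int_cst_lt comparison done on plain integers.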

> +	glen = integer_one_node;
> +      /* If we've checked only a smaller length than we want to check now,
> +	 we can't remove the current stmt.  If g is in the same basic block,
> +	 we want to remove it though, as the current stmt is better.  */
> +      if (tree_int_cst_lt (glen, len))
> +	{
> +	  if (gbb == bb)
> +	    {
> +	      to_pop = g;
> +	      cleanup = true;
> +	    }
> +	  continue;
> +	}
> +
> +      while (last_bb != gbb)
> +	{
> +	  /* Paths from last_bb to bb have been checked before.
> +	     gbb is necessarily a dominator of last_bb, but not necessarily
> +	     immediate dominator.  */
> +	  if (((sanopt_info *) last_bb->aux)->freeing_call_events)
> +	    break;
> +
> +	  basic_block imm = get_immediate_dominator (CDI_DOMINATORS, last_bb);
> +	  gcc_assert (imm);
> +	  if (imm_dom_path_with_freeing_call (last_bb, imm))
> +	    break;
> +
> +	  last_bb = imm;
> +	}
> +      if (last_bb == gbb)
> +	remove = true;
> +      break;
> +    }
> +
> +  if (cleanup)
> +    {
> +      unsigned int j = 0, l = v.length ();
> +      for (i = 0; i < l; i++)
> +	if (v[i] != to_pop
> +	    && (gimple_uid (v[i])
> +		== ((sanopt_info *)
> +		    gimple_bb (v[i])->aux)->freeing_call_events))
> +	  {
> +	    if (i != j)
> +	      v[j] = v[i];
> +	    j++;
> +	  }
> +      v.truncate (j);
> +    }
> +
> +  if (!remove)
> +    v.safe_push (stmt);
> +  return remove;
> +}
> +
>   /* Try to optimize away redundant UBSAN_NULL checks.
>
>      We walk blocks in the CFG via a depth first search of the dominator
> @@ -89,111 +408,77 @@ sanopt_optimize_walker (basic_block bb,
>   {
>     basic_block son;
>     gimple_stmt_iterator gsi;
> +  sanopt_info *info = (sanopt_info *) bb->aux;

> +  bool asan_check_optimize
> +    = (flag_sanitize & SANITIZE_ADDRESS)
> +      && ((flag_sanitize & flag_sanitize_recover
> +	   & SANITIZE_KERNEL_ADDRESS) == 0);

Why do we disable check optimizations for KASan?

>     for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi);)
>       {
>         gimple stmt = gsi_stmt (gsi);
>         bool remove = false;
>
> -      if (is_gimple_call (stmt)
> -	  && gimple_call_internal_p (stmt))
> +      if (!is_gimple_call (stmt))
> +	{
> +	  /* Handle asm volatile or asm with "memory" clobber
> +	     the same as a potentially freeing call.  */
> +	  if (gimple_code (stmt) == GIMPLE_ASM
> +	      && asan_check_optimize
> +	      && (gimple_asm_clobbers_memory_p (stmt)
> +		  || gimple_asm_volatile_p (stmt)))
> +	    info->freeing_call_events++;
> +	  gsi_next (&gsi);
> +	  continue;
> +	}
> +
> +      if (asan_check_optimize && !nonfreeing_call_p (stmt))
> +	info->freeing_call_events++;
> +
> +      if (gimple_call_internal_p (stmt))
>   	switch (gimple_call_internal_fn (stmt))
>   	  {
>   	  case IFN_UBSAN_NULL:
> -	    {
> -	      gcc_assert (gimple_call_num_args (stmt) == 3);
> -	      tree ptr = gimple_call_arg (stmt, 0);
> -	      tree cur_align = gimple_call_arg (stmt, 2);
> -	      gcc_assert (TREE_CODE (cur_align) == INTEGER_CST);
> -
> -	      auto_vec<gimple> &v = ctx->null_check_map.get_or_insert (ptr);
> -	      if (v.is_empty ())
> -		/* For this PTR we don't have any UBSAN_NULL stmts
> -		   recorded, so there's nothing to optimize yet.  */
> -		v.safe_push (stmt);
> -	      else
> -		{
> -		  /* We already have recorded a UBSAN_NULL check
> -		     for this pointer.  Perhaps we can drop this one.
> -		     But only if this check doesn't specify stricter
> -		     alignment.  */
> -		  while (!v.is_empty ())
> -		    {
> -		      gimple g = v.last ();
> -		      /* Remove statements for BBs that have been
> -			 already processed.  */
> -		      sanopt_info *si = (sanopt_info *) gimple_bb (g)->aux;
> -		      if (si->visited_p)
> -			v.pop ();
> -		      else
> -			{
> -			  /* At this point we shouldn't have any statements
> -			     that aren't dominating the current BB.  */
> -			  tree align = gimple_call_arg (g, 2);
> -			  int kind = tree_to_shwi (gimple_call_arg (g, 1));
> -			  /* If this is a NULL pointer check where we had segv
> -			     anyway, we can remove it.  */
> -			  if (integer_zerop (align)
> -			      && (kind == UBSAN_LOAD_OF
> -				  || kind == UBSAN_STORE_OF
> -				  || kind == UBSAN_MEMBER_ACCESS))
> -			    remove = true;
> -			  /* Otherwise remove the check in non-recovering
> -			     mode, or if the stmts have same location.  */
> -			  else if (integer_zerop (align))
> -			    remove = !(flag_sanitize_recover & SANITIZE_NULL)
> -				     || flag_sanitize_undefined_trap_on_error
> -				     || gimple_location (g)
> -					== gimple_location (stmt);
> -			  else if (tree_int_cst_le (cur_align, align))
> -			    remove = !(flag_sanitize_recover
> -				       & SANITIZE_ALIGNMENT)
> -				     || flag_sanitize_undefined_trap_on_error
> -				     || gimple_location (g)
> -					== gimple_location (stmt);
> -			  if (!remove && gimple_bb (g) == gimple_bb (stmt)
> -			      && tree_int_cst_compare (cur_align, align) == 0)
> -			    v.pop ();
> -			  break;
> -			}
> -		    }
> -
> -		  if (remove)
> -		    {
> -		      /* Drop this check.  */
> -		      if (dump_file && (dump_flags & TDF_DETAILS))
> -			{
> -			  fprintf (dump_file, "Optimizing out\n  ");
> -			  print_gimple_stmt (dump_file, stmt, 0,
> -					     dump_flags);
> -			  fprintf (dump_file, "\n");
> -			}
> -		      gsi_remove (&gsi, true);
> -		    }
> -		  else
> -		    v.safe_push (stmt);
> -		  }
> -	    }
> +	    remove = maybe_optimize_ubsan_null_ifn (ctx, stmt);
> +	    break;
>   	  case IFN_ASAN_CHECK:
> -	    ctx->asan_num_accesses++;
> +	    if (asan_check_optimize)
> +	      remove = maybe_optimize_asan_check_ifn (ctx, stmt);

It may be useful to also store the base address in the check table:

static tree
maybe_get_single_definition (tree t)
{
   if (TREE_CODE (t) == SSA_NAME)
     {
       gimple g = SSA_NAME_DEF_STMT (t);
       if (gimple_assign_single_p (g))
         return gimple_assign_rhs1 (g);
     }
   return NULL_TREE;
}
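
and then, besides ptr itself, also look the check up under its
definition, roughly (untested):

      tree base = maybe_get_single_definition (ptr);
      if (base)
	{
	  auto_vec<gimple> &bv = ctx->asan_check_map.get_or_insert (base);
	  /* ... apply the same dominating-check logic as for ptr ...  */
	}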

> +	    if (!remove)
> +	      ctx->asan_num_accesses++;
>   	    break;
>   	  default:
>   	    break;
>   	  }
>
> -      /* If we were able to remove the current statement, gsi_remove
> -	 already pointed us to the next statement.  */
> -      if (!remove)
> +      if (remove)
> +	{
> +	  /* Drop this check.  */
> +	  if (dump_file && (dump_flags & TDF_DETAILS))
> +	    {
> +	      fprintf (dump_file, "Optimizing out\n  ");
> +	      print_gimple_stmt (dump_file, stmt, 0, dump_flags);
> +	      fprintf (dump_file, "\n");
> +	    }
> +	  unlink_stmt_vdef (stmt);
> +	  gsi_remove (&gsi, true);
> +	}
> +      else
>   	gsi_next (&gsi);
>       }
>
> +  if (asan_check_optimize)
> +    {
> +      info->has_freeing_call_p = info->freeing_call_events != 0;
> +      info->has_freeing_call_computed_p = true;
> +    }
> +
>     for (son = first_dom_son (CDI_DOMINATORS, bb);
>          son;
>          son = next_dom_son (CDI_DOMINATORS, son))
>       sanopt_optimize_walker (son, ctx);
>
>     /* We're leaving this BB, so mark it to that effect.  */
> -  sanopt_info *info = (sanopt_info *) bb->aux;
>     info->visited_p = true;
>   }
>
> @@ -259,7 +544,8 @@ pass_sanopt::execute (function *fun)
>
>     /* Try to remove redundant checks.  */
>     if (optimize
> -      && (flag_sanitize & (SANITIZE_NULL | SANITIZE_ALIGNMENT)))
> +      && (flag_sanitize
> +	  & (SANITIZE_NULL | SANITIZE_ALIGNMENT | SANITIZE_ADDRESS)))
>       asan_num_accesses = sanopt_optimize (fun);
>     else if (flag_sanitize & SANITIZE_ADDRESS)
>       {
>
>
> 	Jakub
>


