This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.


Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]
Other format: [Raw text]

Re: [RFC] Adjusted VRP


On Wed, Oct 29, 2014 at 4:35 PM, Marat Zakirov <m.zakirov@samsung.com> wrote:
> Hi folks!
>
> During asan performance tuning we tried to use VRP and found that it is
> path-insensitive and thus suboptimal. In the example below, "i" has the
> VRP range [0..1000] across the whole program, but in the loop body it is
> just [0..999].
>
> int a[1000];
> void foo ()
> {
>    for (int i=0;i<1000;i++)
>        a[i] = 0;
> }
>
> I think a path-sensitive approach could significantly increase VRP
> precision. I suggest extending the existing get_range_info (tree t)
> interface to get_range_info (tree t, basic_block bb = NULL). If the user
> does not specify a basic block, the usual range info is returned. If bb
> is specified and hash<pair<tree, basic_block>> has an entry, the
> block-accurate range info is returned; otherwise the usual one. The goal
> of the adjustment algorithm is to fill hash<pair<tree, basic_block>>,
> where tree is an SSA name that has a block-accurate range. Memory usage
> of the hash could be reduced by memorizing only important values such as
> loop iterators, array indexes, etc.
>
> I propose two approaches to fill up hash<pair<tree, basic_block>>. The
> first is built on top of the existing VRP pass: a special iterative RPO
> traversal truncates and stores ranges in every basic block as it walks
> the CFG, merging ranges at join blocks and truncating them at
> conditional blocks. A draft implementation of this approach is in the
> attached patch. The patch was developed for asan, but the VRP
> enhancements are generic.
>
> The second approach adds pre- and post-passes around the VRP pass. The
> pre-pass splits every tree "i" in every conditional block ending with
> "if (i)" and builds new phis to merge the split versions of each tree
> "i". After the ordinary VRP pass has done its job, the post-pass
> extracts path-sensitive info from the tree copies built by the pre-pass
> while gluing the original trees back together with their copies.
>
> The second approach can be illustrated with the IR of the simple example
> above.
> IR before transformation:
>
> int a[1000];
> void foo ()
> {
>    int i_0;
>    int i_1;
>    int i_2;
>
>    bb 0:
>    i_0 = 0;
>
>    bb 1:
>    i_1 = PHI (i_2, i_0);
>    if (i_1 == 1000)
>       goto bb 3;
>    else
>       goto bb 2;
>
>    bb 2:
>    a[i_1] = 1;
>    i_2 = i_1 + 1;
>    goto bb 1;
>
>    bb 3:
>    exit;
> }
>
> IR after pre-pass:
>
> int a[1000];
> void foo ()
> {
>    int i_0;
>    int i_1;
>    int i_2;
>    int i_3;
>    int i_4;
>
>    bb 0:
>    i_0 = 0;
>
>    bb 1:
>    i_1 = PHI (i_2, i_0);
>    if (i_1 == 1000)
>       goto bb 3;
>    else
>       goto bb 2;
>
>    bb 2:
>    i_3 = i_1;        << i_1 was split
>    a[i_3] = 1;
>    i_2 = i_3 + 1;
>    goto bb 1;
>
>    bb 3:
>    i_4 = i_1;      << i_1 was split
>    exit;
> }
>
> After ordinary VRP analysis, the SSA variables would have the following
> ranges:
>
> i_0 = 0
> i_1 = [0..1000]
> i_2 = [1..1000]
> i_3 = [0..999]
> i_4 = 1000
>
> After gathering the VRP adjustments during deletion of the copies, the
> following info becomes available: get_vrp (i_1, bb 2) = get_vrp (i_3) =
> [0..999] and get_vrp (i_1, bb 3) = get_vrp (i_4) = 1000.
>
> The first approach is less precise because it uses only a small, simple
> subset of the ordinary VRP traversals. The second approach requires
> accurate SSA splitting with possibly complex def-use corrections. There
> is also a third approach: fix the existing VRP itself. That would
> require unnecessary complication of well-debugged code, and at the same
> time it would not be more accurate than the second approach.
>
> We can teach CFG transformations (block splitting, etc.) to propagate
> the collected info, but even without that propagation the range info
> remains correct: if there is no entry in the hash, we just return the
> ordinary range info.
>
> I would appreciate your notes and comments. The task does not seem
> trivial, and I need to understand its importance to the community before
> implementing it.

Well, VRP is not path-insensitive - what is path-insensitive is the set of
value-ranges we are able to retain after removing the ASSERT_EXPRs VRP
inserts.
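Concretely, while VRP is running the loop body of the example already sees
the path information through an assertion (a hand-written GIMPLE-style
sketch, not actual pass output):

```
  bb 2:
  # on this edge i_1 != 1000, so the asserted copy gets range [0, 999]
  i_3 = ASSERT_EXPR <i_1, i_1 != 1000>;
  a[i_3] = 0;
  ...
```

When the pass finishes, the ASSERT_EXPRs are stripped and uses of i_3
become uses of i_1 again, whose retained range is the path-merged
[0, 1000] - essentially the information the proposed pre/post-pass
splitting would reintroduce by hand.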

Why can't you do the ASAN optimizations in the VRP transform phase?

Thanks,
Richard.

> --Marat
>
> diff --git a/gcc/asan.c b/gcc/asan.c
> index 2a61a82..49e77eb 100644
> --- a/gcc/asan.c
> +++ b/gcc/asan.c
> @@ -369,6 +369,134 @@ asan_mem_ref_hasher::equal (const asan_mem_ref *m1,
>           && operand_equal_p (m1->start, m2->start, 0));
>  }
>
> +// Return true if the last instruction of BB is an integer comparison
> +inline bool
> +maybe_parse_last_integer_comparison (basic_block bb, tree * op0_p = NULL,
> tree * op1_p = NULL, tree_code * cond_p = NULL);
> +
> +struct asan_tree_hasher : default_hashmap_traits
> +{
> +  static hashval_t hash (tree t)
> +  {
> +    return iterative_hash_expr (t, 0);
> +  }
> +  static bool equal_keys (const tree & t1, const tree & t2)
> +  {
> +    return operand_equal_p (t1, t2, 0);
> +  }
> +};
> +
> +class vrp_bounds
> +{
> +  HOST_WIDE_INT low;
> +  HOST_WIDE_INT up;
> +  inline HOST_WIDE_INT max (const HOST_WIDE_INT a, const HOST_WIDE_INT b)
> +  {
> +    return a > b ? a : b;
> +  }
> +  inline HOST_WIDE_INT min (const HOST_WIDE_INT a, const HOST_WIDE_INT b)
> +  {
> +    return a < b ? a : b;
> +  }
> +public:
> +  void check () { gcc_assert (up >= low); }
> +  vrp_bounds (HOST_WIDE_INT in_low, HOST_WIDE_INT in_up) : low (in_low), up
> (in_up) { ; }
> +  vrp_bounds () : low (HOST_WIDE_INT_MIN), up (HOST_WIDE_INT_MAX) { ; }
> +  vrp_bounds operator &= (const vrp_bounds & vrpb)
> +  {
> +    gcc_assert (!(low > vrpb.up || up < vrpb.low));
> +    low = max (low, vrpb.low);
> +    up  = min (up, vrpb.up);
> +    return *this;
> +  }
> +  vrp_bounds operator &= (HOST_WIDE_INT cval)
> +  {
> +    gcc_assert (cval >= low && cval <= up);
> +    low = cval;
> +    up  = cval;
> +    return *this;
> +  }
> +  vrp_bounds operator &= (tree t)
> +  {
> +    wide_int min_sub, max_sub;
> +    if (get_range_info (t, &min_sub, &max_sub) == VR_RANGE)
> +      {
> +       bool sign = TYPE_SIGN (TREE_TYPE (t)) == SIGNED;
> +       low = max (low, min_sub.slow());
> +       if (sign || max_sub.ulow () < HOST_WIDE_INT_MAX)
> +         up = min (up, max_sub.slow());
> +      }
> +    return *this;
> +  }
> +  bool and_with_edge (edge e, tree t)
> +  {
> +    tree op0, op1;
> +    tree_code cond;
> +
> +    if (!maybe_parse_last_integer_comparison (e->src, &op0, &op1, &cond)
> +       || !operand_equal_p (op0, t, 0)
> +       || !tree_fits_shwi_p (op1))
> +      return true;
> +
> +    HOST_WIDE_INT bnd = tree_to_shwi (op1);
> +
> +    if (e->flags & EDGE_FALSE_VALUE)
> +      cond = invert_tree_comparison (cond, false);
> +
> +    (*this) &= t;
> +    switch (cond)
> +      {
> +      case GT_EXPR: bnd++;
> +      case GE_EXPR: low = max (low, bnd); break;
> +      case LT_EXPR: bnd--;
> +      case LE_EXPR: up = min (up, bnd); break;
> +      case NE_EXPR:
> +      if (low == bnd) low++;
> +      else if (up == bnd) up--;
> +       break;
> +      case EQ_EXPR: low = bnd; up = bnd; break;
> +      default: break;
> +      }
> +    (*this) &= t;
> +    if (low > up)
> +      {
> +       low = up = 0;
> +       return false;
> +      }
> +    return true;
> +  }
> +  vrp_bounds operator |= (const vrp_bounds & vrpb)
> +  {
> +    low = min (low, vrpb.low);
> +    up  = max (up, vrpb.up);
> +    return *this;
> +  }
> +  vrp_bounds operator |= (HOST_WIDE_INT cval)
> +  {
> +    low = min (low, cval);
> +    up  = max (up, cval);
> +    return *this;
> +  }
> +  vrp_bounds operator = (HOST_WIDE_INT cval)
> +  {
> +    low = cval;
> +    up = cval;
> +    return *this;
> +  }
> +  bool includes (const vrp_bounds & other) const
> +  {
> +    return (other.low >= low) && (other.up <= up);
> +  }
> +  bool progress (const vrp_bounds & other) const
> +  {
> +     if (this->includes (other))
> +       if (other.low > low || other.up < up)
> +        return true;
> +     return false;
> +  }
> +};
> +
> +typedef hash_map<tree, vrp_bounds, asan_tree_hasher> node_vrp_t;
> +
>  static hash_table<asan_mem_ref_hasher> *asan_mem_ref_ht;
>
>  /* Returns a reference to the hash table containing memory references.
> @@ -1470,7 +1598,6 @@ create_cond_insert_point (gimple_stmt_iterator *iter,
>     statement pointed to by ITER.  The fallthrough block -- which is the
>     else block of the condition as well as the destination of the
>     outcoming edge of the 'then block' -- starts with the statement
> -   pointed to by ITER.
>
>     COND is the condition of the if.
>
> @@ -1687,6 +1814,33 @@ build_check_stmt (location_t loc, tree base, tree
> len,
>      }
>  }
>
> +/* Return true if the array dereference T at statement S is in bounds.  */
> +static bool
> +deref_is_safe_p (tree t, gimple s)
> +{
> +  wide_int min_sub, max_sub;
> +  tree     low, up, idx = TREE_OPERAND (t, 1);
> +
> +  if (TREE_CODE (idx) != SSA_NAME)
> +    return false;
> +
> +  vrp_bounds index_vrpb;
> +  index_vrpb &= idx;
> +  node_vrp_t * node_vrp = (node_vrp_t *) gimple_bb (s)->aux;
> +  if (node_vrp && node_vrp->get (idx))
> +    index_vrpb &= *(node_vrp->get (idx));
> +
> +  low = array_ref_low_bound (t);
> +  up = array_ref_up_bound (t);
> +  vrp_bounds array_vrpb (low ? ((wide_int)low).slow () : HOST_WIDE_INT_MAX,
> +                        up ? ((wide_int)up).slow () : HOST_WIDE_INT_MIN);
> +
> +  if (array_vrpb.includes (index_vrpb))
> +    return true;
> +
> +  return false;
> +}
> +
>  /* If T represents a memory access, add instrumentation code before ITER.
>     LOCATION is source code location.
>     IS_STORE is either TRUE (for a store) or FALSE (for a load).  */
> @@ -1771,6 +1925,20 @@ instrument_derefs (gimple_stmt_iterator *iter, tree
> t,
>         }
>      }
>
> +   if (TREE_CODE (inner) == VAR_DECL
> +      && !DECL_EXTERNAL (inner)
> +      && TREE_CODE (t) == ARRAY_REF
> +      && bitpos >= 0
> +      && DECL_SIZE (inner)
> +      && tree_fits_shwi_p (DECL_SIZE (inner))
> +      && bitpos + bitsize <= tree_to_shwi (DECL_SIZE (inner)))
> +    {
> +      if (deref_is_safe_p (t, gsi_stmt (*iter)))
> +       {
> +         return;
> +       }
> +    }
> +
>    base = build_fold_addr_expr (t);
>    if (!has_mem_ref_been_instrumented (base, size_in_bytes))
>      {
> @@ -1978,6 +2146,7 @@ maybe_instrument_assignment (gimple_stmt_iterator
> *iter)
>    if (gimple_store_p (s))
>      {
>        ref_expr = gimple_assign_lhs (s);
> +
>        is_store = true;
>        instrument_derefs (iter, ref_expr,
>                          gimple_location (s),
> @@ -1988,6 +2157,7 @@ maybe_instrument_assignment (gimple_stmt_iterator
> *iter)
>    if (gimple_assign_load_p (s))
>      {
>        ref_expr = gimple_assign_rhs1 (s);
> +
>        is_store = false;
>        instrument_derefs (iter, ref_expr,
>                          gimple_location (s),
> @@ -2041,6 +2211,182 @@ maybe_instrument_call (gimple_stmt_iterator *iter)
>    return false;
>  }
>
> +inline bool
> +maybe_parse_last_integer_comparison (basic_block bb, tree * op0_p,
> +                                    tree * op1_p, tree_code * cond_p)
> +{
> +  gimple last_ins = gsi_stmt (gsi_last_bb (bb));
> +
> +  if (!(last_ins && gimple_code (last_ins) == GIMPLE_COND))
> +    return false;
> +
> +  tree op0 = gimple_op (last_ins, 0);
> +  tree op1 = gimple_op (last_ins, 1);
> +
> +  if (op0_p) *op0_p = op0;
> +  if (op1_p) *op1_p = op1;
> +  if (cond_p) *cond_p = gimple_cond_code (last_ins);
> +
> +  if (TREE_CODE (TREE_TYPE (op0)) != INTEGER_TYPE)
> +    return false;
> +
> +  if (!tree_fits_shwi_p (op1))
> +    return false;
> +
> +  vrp_bounds vrpb;
> +
> +  node_vrp_t * node_vrp = (node_vrp_t *) bb->aux;
> +  if (!node_vrp->get (op0))
> +    node_vrp->put (op0, vrpb);
> +
> +  return true;
> +}
> +
> +inline bool
> +is_phi_appropriate (gimple phi)
> +{
> +  for (unsigned arg = 0; arg < gimple_phi_num_args (phi); arg++)
> +    {
> +      tree op = gimple_phi_arg (phi, arg)->def;
> +      if (TREE_CODE (op) == INTEGER_CST && !tree_fits_shwi_p (op))
> +       return false;
> +    }
> +  return true;
> +}
> +
> +static bool
> +traverse_basic_block_vrp (basic_block bb)
> +{
> +  bool changed = false;
> +  node_vrp_t * node_vrp = (node_vrp_t *)bb->aux;
> +  edge_iterator ei;
> +  edge e;
> +
> +  for (gimple_stmt_iterator i = gsi_start_phis (bb); !gsi_end_p (i);
> gsi_next (&i))
> +    {
> +      gimple s = gsi_stmt (i);
> +      tree res = gimple_phi_result (s);
> +      vrp_bounds old_vrpb;
> +
> +      if (TREE_CODE (TREE_TYPE (res)) != INTEGER_TYPE)
> +       continue;
> +
> +      if (!is_phi_appropriate (s))
> +       continue;
> +
> +      if (node_vrp->get (res))
> +       old_vrpb = *(node_vrp->get (res));
> +      else
> +       node_vrp->put (res, old_vrpb);
> +
> +      bool first = true;
> +      for (unsigned arg = 0; arg < gimple_phi_num_args (s); arg++)
> +       {
> +         edge e = gimple_phi_arg_edge (s, arg);
> +         tree op = gimple_phi_arg (s, arg)->def;
> +
> +         if (tree_fits_shwi_p (op))
> +           {
> +             if (first)
> +               { *(node_vrp->get (res)) = tree_to_shwi (op); first=false; }
> +             else
> +               *(node_vrp->get (res)) |= tree_to_shwi (op);
> +           }
> +         else
> +           {
> +             vrp_bounds pred_vrpb;
> +
> +             if (e->src->aux && ((node_vrp_t *)e->src->aux)->get (op))
> +               pred_vrpb = *((node_vrp_t *)e->src->aux)->get (op);
> +
> +             if (!pred_vrpb.and_with_edge (e, op))
> +               continue;
> +
> +             if (first)
> +               { *(node_vrp->get (res)) = pred_vrpb; first = false; }
> +             else
> +               *(node_vrp->get (res)) |= pred_vrpb;
> +           }
> +       }
> +      *(node_vrp->get (res)) &= res;
> +      if (old_vrpb.progress (*(node_vrp->get (res))))
> +       changed = true;
> +    }
> +
> +  FOR_EACH_EDGE (e, ei, bb->preds)
> +    {
> +      node_vrp_t * pred_vrp = (node_vrp_t *) e->src->aux;
> +      for (node_vrp_t::iterator iter = pred_vrp->begin ();
> +          iter != pred_vrp->end (); ++iter)
> +       if (!node_vrp->get (iter.key()))
> +         {
> +           gimple def = SSA_NAME_DEF_STMT (iter.key ());
> +
> +           // propagate only through the SSA live range
> +           if (gimple_code (def) == GIMPLE_NOP
> +               || dominated_by_p (CDI_DOMINATORS, bb, gimple_bb (def)))
> +             node_vrp->put (iter.key(), vrp_bounds());
> +         }
> +    }
> +
> +  for (node_vrp_t::iterator iter = node_vrp->begin ();
> +       iter != node_vrp->end (); ++iter)
> +    {
> +      bool first = true;
> +      tree t = iter.key ();
> +      vrp_bounds temp;
> +
> +      FOR_EACH_EDGE (e, ei, bb->preds)
> +       {
> +         node_vrp_t * pred_vrp = (node_vrp_t *) e->src->aux;
> +         vrp_bounds pred_vrpb;
> +
> +         if (pred_vrp->get (t))
> +           pred_vrpb = *pred_vrp->get (t);
> +         else if (dominated_by_p (CDI_DOMINATORS, e->src, bb))
> +           continue;
> +
> +         if (!pred_vrpb.and_with_edge (e, t))
> +           continue;
> +
> +         if (first)
> +           { temp = pred_vrpb; first = false; }
> +         else
> +           temp |= pred_vrpb;
> +       }
> +      temp &= t;
> +      if (node_vrp->get (t)->progress (temp))
> +       *(node_vrp->get (t)) = temp;
> +    }
> +
> +   return changed;
> +}
> +
> +/* Refine value ranges using path information.  */
> +static void
> +traverse_collect_bounds (void)
> +{
> +  int * rev_post_order = XALLOCAVEC (int, n_basic_blocks_for_fn (cfun));
> +  int n_blocks = pre_and_rev_post_order_compute_fn (cfun, NULL,
> rev_post_order, true);
> +
> +  for (int i = 0; i < n_blocks; i++)
> +    {
> +      basic_block bb = BASIC_BLOCK_FOR_FN(cfun, rev_post_order[i]);
> +      bb->aux = new node_vrp_t;
> +    }
> +  bool changed = true;
> +  while (changed)
> +    {
> +      changed = false;
> +      for (int i = 0; i < n_blocks; i++)
> +        {
> +         basic_block bb = BASIC_BLOCK_FOR_FN(cfun, rev_post_order[i]);
> +         changed |= traverse_basic_block_vrp (bb);
> +         maybe_parse_last_integer_comparison (bb);
> +        }
> +    }
> +}
> +
>  /* Walk each instruction of all basic block and instrument those that
>     represent memory references: loads, stores, or function calls.
>     In a given basic block, this function avoids instrumenting memory
> @@ -2053,6 +2399,9 @@ transform_statements (void)
>    gimple_stmt_iterator i;
>    int saved_last_basic_block = last_basic_block_for_fn (cfun);
>
> +  if (n_basic_blocks_for_fn (cfun) < ASAN_VRP_THRESHOLD)
> +    traverse_collect_bounds ();
> +
>    FOR_EACH_BB_FN (bb, cfun)
>      {
>        basic_block prev_bb = bb;
> @@ -2102,6 +2451,12 @@ transform_statements (void)
>               gsi_next (&i);
>             }
>         }
> +       if (bb->aux)
> +        {
> +          ((node_vrp_t *) bb->aux)->empty ();
> +          delete (node_vrp_t *) bb->aux;
> +          bb->aux = NULL;
> +        }
>      }
>    free_mem_ref_resources ();
>  }
> diff --git a/gcc/hash-map.h b/gcc/hash-map.h
> index 4988fe4..d69a96d 100644
> --- a/gcc/hash-map.h
> +++ b/gcc/hash-map.h
> @@ -98,7 +98,7 @@ private:
>    static void
>    mark_key_empty (T *&k)
>      {
> -      k = static_cast<T *> (0);
> +      k = reinterpret_cast<T *> (0);
>      }
>  };
>
> @@ -264,6 +264,43 @@ public:
>           break;
>      }
>
> +  /* Clear all entries.  */
> +
> +  void empty ()
> +    {
> +      m_table.empty ();
> +    }
> +
> +  class iterator
> +  {
> +  public:
> +    iterator () {}
> +
> +    iterator (hash_table<hash_entry> &ht) { hti = ht.begin (); }
> +
> +    inline iterator &operator ++ () { ++hti; return *this; }
> +    bool operator != (const iterator &other) const { return hti !=
> other.hti; }
> +
> +    inline const Key &key () const { return (*hti).m_key; }
> +
> +    inline const Value &operator * () const { return (*hti).m_value; }
> +
> +    inline Value &operator * () { return (*hti).m_value; }
> +
> +    inline const Value *operator -> () const { return &(*hti).m_value; }
> +
> +    inline Value *operator -> () { return &(*hti).m_value; }
> +
> +  private:
> +    typename hash_table<hash_entry>::iterator hti;
> +  };
> +
> +  iterator begin () { return iterator (m_table); }
> +
> +  iterator end () { return iterator (); }
> +
> +  unsigned elements () const { return m_table.elements (); }
> +
>  private:
>
>    template<typename T, typename U, typename V> friend void gt_ggc_mx
> (hash_map<T, U, V> *);
> diff --git a/gcc/hash-set.h b/gcc/hash-set.h
> index 0a500cb..85f0dfa 100644
> --- a/gcc/hash-set.h
> +++ b/gcc/hash-set.h
> @@ -230,6 +230,32 @@ public:
>
>    size_t elements () const { return m_table.elements (); }
>
> +  class iterator
> +  {
> +  public:
> +    iterator () {}
> +
> +    iterator (hash_table<hash_entry> &ht) { hti = ht.begin (); }
> +
> +    inline iterator &operator ++ () { ++hti; return *this; }
> +    bool operator != (const iterator &other) const { return hti !=
> other.hti; }
> +
> +    inline const Key &operator * () const { return (*hti).m_key; }
> +
> +    inline Key &operator * () { return (*hti).m_key; }
> +
> +    inline const Key *operator -> () const { return &(*hti).m_key; }
> +
> +    inline Key *operator -> () { return &(*hti).m_key; }
> +
> +  private:
> +    typename hash_table<hash_entry>::iterator hti;
> +  };
> +
> +  iterator begin () { return iterator (m_table); }
> +
> +  iterator end () { return iterator (); }
> +
>  private:
>
>    template<typename T, typename U> friend void gt_ggc_mx (hash_set<T, U>
> *);
> diff --git a/gcc/hash-table.h b/gcc/hash-table.h
> index 2493f2e..76c9916 100644
> --- a/gcc/hash-table.h
> +++ b/gcc/hash-table.h
> @@ -1124,6 +1124,7 @@ public:
>        m_slot (slot), m_limit (limit) {}
>
>      inline value_type &operator * () { return *m_slot; }
> +    inline const value_type &operator * () const { return *m_slot; }
>      void slide ();
>      inline iterator &operator ++ ();
>      bool operator != (const iterator &other) const
> @@ -1512,7 +1513,7 @@ hash_table<Descriptor, Allocator, true>
>  ::remove_elt_with_hash (const compare_type &comparable, hashval_t hash)
>  {
>    value_type *slot = find_slot_with_hash (comparable, hash, NO_INSERT);
> -  if (is_empty (*slot))
> +  if (slot == NULL || is_empty (*slot))
>      return;
>
>    Descriptor::remove (*slot);
> diff --git a/gcc/params.def b/gcc/params.def
> index beff7e6..5f05a7e 100644
> --- a/gcc/params.def
> +++ b/gcc/params.def
> @@ -1102,6 +1102,11 @@ DEFPARAM
> (PARAM_ASAN_INSTRUMENTATION_WITH_CALL_THRESHOLD,
>           "in function becomes greater or equal to this number",
>           7000, 0, INT_MAX)
>
> +DEFPARAM (PARAM_ASAN_VRP_THRESHOLD,
> +         "asan-vrp-threshold",
> +         "Skip asan VRP analysis if the CFG has at least this many blocks",
> +         1000, 0, INT_MAX)
> +
>  DEFPARAM (PARAM_UNINIT_CONTROL_DEP_ATTEMPTS,
>           "uninit-control-dep-attempts",
>           "Maximum number of nested calls to search for control dependencies
> "
> diff --git a/gcc/params.h b/gcc/params.h
> index d488e32..d1dd159 100644
> --- a/gcc/params.h
> +++ b/gcc/params.h
> @@ -234,5 +234,7 @@ extern void init_param_values (int *params);
>    PARAM_VALUE (PARAM_ASAN_USE_AFTER_RETURN)
>  #define ASAN_INSTRUMENTATION_WITH_CALL_THRESHOLD \
>    PARAM_VALUE (PARAM_ASAN_INSTRUMENTATION_WITH_CALL_THRESHOLD)
> +#define ASAN_VRP_THRESHOLD \
> +  PARAM_VALUE (PARAM_ASAN_VRP_THRESHOLD)
>
>  #endif /* ! GCC_PARAMS_H */
> diff --git a/gcc/testsuite/c-c++-common/asan/no-redundant-check-1.c
> b/gcc/testsuite/c-c++-common/asan/no-redundant-check-1.c
> new file mode 100644
> index 0000000..da123f2
> --- /dev/null
> +++ b/gcc/testsuite/c-c++-common/asan/no-redundant-check-1.c
> @@ -0,0 +1,14 @@
> +/* { dg-options "-fdump-tree-asan" } */
> +/* { dg-do compile } */
> +/* { dg-skip-if "" { *-*-* } { "*" } { "-O3" } } */
> +int a[1000];
> +
> +void foo () {
> +
> +  int i = 0;
> +  for (; i < 1000; i++)
> +    a[i] = 0;
> +}
> +
> +/* { dg-final { scan-tree-dump-times "ASAN_CHECK" 0 "asan1" } } */
> +/* { dg-final { cleanup-tree-dump "asan1" } } */
> diff --git a/gcc/testsuite/c-c++-common/asan/no-redundant-check-2.c
> b/gcc/testsuite/c-c++-common/asan/no-redundant-check-2.c
> new file mode 100644
> index 0000000..96e46d2
> --- /dev/null
> +++ b/gcc/testsuite/c-c++-common/asan/no-redundant-check-2.c
> @@ -0,0 +1,14 @@
> +/* { dg-options "-fdump-tree-asan" } */
> +/* { dg-do compile } */
> +/* { dg-skip-if "" { *-*-* } { "*" } { "-O3" } } */
> +
> +int a[1000];
> +
> +void foo () {
> +  int i;
> +  for (i = 999; i >= 0; i--)
> +    a[i] = 0;
> +}
> +
> +/* { dg-final { scan-tree-dump-times "ASAN_CHECK" 0 "asan1" } } */
> +/* { dg-final { cleanup-tree-dump "asan1" } } */
> diff --git a/gcc/testsuite/c-c++-common/asan/no-redundant-check-3.c
> b/gcc/testsuite/c-c++-common/asan/no-redundant-check-3.c
> new file mode 100644
> index 0000000..181af16
> --- /dev/null
> +++ b/gcc/testsuite/c-c++-common/asan/no-redundant-check-3.c
> @@ -0,0 +1,14 @@
> +/* { dg-options "-fdump-tree-asan" } */
> +/* { dg-do compile } */
> +/* { dg-skip-if "" { *-*-* } { "*" } { "-O3" } } */
> +
> +int a[1000];
> +
> +void foo () {
> +  unsigned i = 0;
> +  for (; i < 1000; i++)
> +    a[i] = 0;
> +}
> +
> +/* { dg-final { scan-tree-dump-times "ASAN_CHECK" 0 "asan1" } } */
> +/* { dg-final { cleanup-tree-dump "asan1" } } */
> diff --git a/gcc/testsuite/c-c++-common/asan/no-redundant-check-4.c
> b/gcc/testsuite/c-c++-common/asan/no-redundant-check-4.c
> new file mode 100644
> index 0000000..479b5d7
> --- /dev/null
> +++ b/gcc/testsuite/c-c++-common/asan/no-redundant-check-4.c
> @@ -0,0 +1,14 @@
> +/* { dg-options "-fdump-tree-asan" } */
> +/* { dg-do compile } */
> +/* { dg-skip-if "" { *-*-* } { "*" } { "-O3" } } */
> +
> +int a[1000];
> +
> +void foo () {
> +  unsigned i;
> +  for (i = 999; i >= 1; i--)
> +    a[i] = 0;
> +}
> +
> +/* { dg-final { scan-tree-dump-times "ASAN_CHECK" 0 "asan1" } } */
> +/* { dg-final { cleanup-tree-dump "asan1" } } */
> diff --git a/gcc/testsuite/c-c++-common/asan/no-redundant-check-5.c
> b/gcc/testsuite/c-c++-common/asan/no-redundant-check-5.c
> new file mode 100644
> index 0000000..039cd78
> --- /dev/null
> +++ b/gcc/testsuite/c-c++-common/asan/no-redundant-check-5.c
> @@ -0,0 +1,15 @@
> +/* { dg-options "-fdump-tree-asan" } */
> +/* { dg-do compile } */
> +/* { dg-skip-if "" { *-*-* } { "*" } { "-O3" } } */
> +int a[1000];
> +extern int f ();
> +
> +void foo () {
> +
> +  int i = 0;
> +  for (; i < f (); i++)
> +    a[i] = 0;
> +}
> +
> +/* { dg-final { scan-tree-dump-times "ASAN_CHECK" 1 "asan1" } } */
> +/* { dg-final { cleanup-tree-dump "asan1" } } */
> diff --git a/gcc/tree-vrp.c b/gcc/tree-vrp.c
> index 7ca0528..dac95cc 100644
> --- a/gcc/tree-vrp.c
> +++ b/gcc/tree-vrp.c
> @@ -8634,7 +8634,8 @@ vrp_visit_phi_node (gimple phi)
>        if ((cmp_min > 0 || cmp_min < 0
>            || cmp_max < 0 || cmp_max > 0)
>           && (l = loop_containing_stmt (phi))
> -         && l->header == gimple_bb (phi))
> +         && l->header == gimple_bb (phi)
> +          && !(flag_sanitize & SANITIZE_ADDRESS))
>         adjust_range_with_scev (&vr_result, l, phi, lhs);
>
>        /* If we will end up with a (-INF, +INF) range, set it to
>

