[PATCH] Use bit-CCP in range-ops.

Aldy Hernandez <aldyh@redhat.com>
Tue Nov 8 14:19:09 GMT 2022


Pushed.

I'd still love to hear feedback though ;-).

Aldy

On Sun, Nov 6, 2022 at 5:14 PM Aldy Hernandez <aldyh@redhat.com> wrote:
>
> After Jakub and Richi's suggestion of using the same representation
> for tracking known bits as we do in CCP, I took a peek at the code and
> realized there's a plethora of bit-tracking code there that we could
> be sharing with range-ops.  For example, the multiplication
> optimizations are way better than what I had cobbled together.  For
> that matter, our maybe-nonzero bit tracking as a whole has a lot of room
> for improvement.  Being the lazy ass that I am, I think we should just
> use one code base (CCP's).
>
> This patch provides a thin wrapper for converting the irange
> maybe-nonzero bits into what CCP requires, and uses that to call into
> bit_value_binop().  I have so far converted only the MULT_EXPR
> range-op entry to use it; the DIV_EXPR entry we have handles a case
> CCP doesn't, so I'd like to contribute that enhancement to CCP before
> converting it over.
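>
> For the curious, here's a rough standalone sketch (plain C++, purely
> illustrative, not GCC code) of the value/mask convention that
> bit_value_binop() works with: a mask bit of 1 means "this bit is
> unknown", a mask bit of 0 means "this bit is known and equals the
> corresponding value bit".
>
>   #include <cstdint>
>   #include <cassert>
>
>   int main ()
>   {
>     // A range whose maybe-nonzero bits are 0x0f maps to value = 0,
>     // mask = 0x0f: the low four bits are unknown, everything else
>     // is known zero.
>     uint64_t value = 0x00, mask = 0x0f;
>
>     // A singleton constant (say 6) maps to value = 6, mask = 0:
>     // every bit is known.
>     uint64_t s_value = 6, s_mask = 0;
>
>     // Once bit_value_binop() has computed a result value/mask pair,
>     // the new maybe-nonzero bits are simply (value | mask): any bit
>     // that is known one or still unknown may end up set.
>     assert ((value | mask) == 0x0f);
>     assert ((s_value | s_mask) == 6);
>     return 0;
>   }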
>
> I'd like to use this approach with the dozen or so tree_codes that
> are handled in CCP, thus saving us from having to implement any of
> them :).
>
> Early next season I'd like to change irange's internal representation
> to a value / mask pair, and start tracking all known bits.  This
> ties in nicely with our plan for tracking known set bits.
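>
> As a sketch of why a value / mask pair is strictly richer than the
> nonzero-bits mask we keep today (again plain C++, just for
> illustration): it can also express known set bits.
>
>   #include <cstdint>
>
>   int main ()
>   {
>     // "Odd value, low byte otherwise unknown, upper bits zero":
>     // bit 0 is known one, bits 1-7 are unknown, the rest known zero.
>     uint64_t value = 0x01;   // known-one bits
>     uint64_t mask  = 0xfe;   // unknown bits
>
>     // A nonzero-bits mask alone can only record (value | mask), i.e.
>     // "some of the low byte may be set"; the known set bit 0 is lost.
>     return (value | mask) == 0xff ? 0 : 1;
>   }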
>
> Perhaps if the stars align, we could merge the bit twiddling in CCP
> into range-ops and have a central repository for it.  That is, once we
> make the switch to wide-ints, and assuming there are no performance
> issues.  Note that range-ops is our lowest-level abstraction; i.e.,
> it's just the math: there's no GORI or ranger, or even the concept of
> a symbolic or an SSA name.
>
> I'd love to hear comments and ideas, and if no one objects, I'll push this.
> Please let me know if I missed anything.
>
> Tested on x86-64 Linux.
>
> gcc/ChangeLog:
>
>         * range-op.cc (irange_to_masked_value): New.
>         (update_known_bitmask): New.
>         (operator_mult::fold_range): Call update_known_bitmask.
> ---
>  gcc/range-op.cc | 63 +++++++++++++++++++++++++++++++++++++++----------
>  1 file changed, 50 insertions(+), 13 deletions(-)
>
> diff --git a/gcc/range-op.cc b/gcc/range-op.cc
> index 25c004d8287..6d9914d8d12 100644
> --- a/gcc/range-op.cc
> +++ b/gcc/range-op.cc
> @@ -46,6 +46,54 @@ along with GCC; see the file COPYING3.  If not see
>  #include "wide-int.h"
>  #include "value-relation.h"
>  #include "range-op.h"
> +#include "tree-ssa-ccp.h"
> +
> +// Convert irange bitmasks into a VALUE MASK pair suitable for calling CCP.
> +
> +static void
> +irange_to_masked_value (const irange &r, widest_int &value, widest_int &mask)
> +{
> +  if (r.singleton_p ())
> +    {
> +      mask = 0;
> +      value = widest_int::from (r.lower_bound (), TYPE_SIGN (r.type ()));
> +    }
> +  else
> +    {
> +      mask = widest_int::from (r.get_nonzero_bits (), TYPE_SIGN (r.type ()));
> +      value = 0;
> +    }
> +}
> +
> +// Update the known bitmasks in R when applying the operation CODE to
> +// LH and RH.
> +
> +static void
> +update_known_bitmask (irange &r, tree_code code,
> +                     const irange &lh, const irange &rh)
> +{
> +  if (r.undefined_p ())
> +    return;
> +
> +  widest_int value, mask, lh_mask, rh_mask, lh_value, rh_value;
> +  tree type = r.type ();
> +  signop sign = TYPE_SIGN (type);
> +  int prec = TYPE_PRECISION (type);
> +  signop lh_sign = TYPE_SIGN (lh.type ());
> +  signop rh_sign = TYPE_SIGN (rh.type ());
> +  int lh_prec = TYPE_PRECISION (lh.type ());
> +  int rh_prec = TYPE_PRECISION (rh.type ());
> +
> +  irange_to_masked_value (lh, lh_value, lh_mask);
> +  irange_to_masked_value (rh, rh_value, rh_mask);
> +  bit_value_binop (code, sign, prec, &value, &mask,
> +                  lh_sign, lh_prec, lh_value, lh_mask,
> +                  rh_sign, rh_prec, rh_value, rh_mask);
> +
> +  int_range<2> tmp (type);
> +  tmp.set_nonzero_bits (value | mask);
> +  r.intersect (tmp);
> +}
>
>  // Return the upper limit for a type.
>
> @@ -1774,21 +1822,10 @@ operator_mult::fold_range (irange &r, tree type,
>    if (!cross_product_operator::fold_range (r, type, lh, rh, trio))
>      return false;
>
> -  if (lh.undefined_p ())
> +  if (lh.undefined_p () || rh.undefined_p ())
>      return true;
>
> -  tree t;
> -  if (rh.singleton_p (&t))
> -    {
> -      wide_int w = wi::to_wide (t);
> -      int shift = wi::exact_log2 (w);
> -      if (shift != -1)
> -       {
> -         wide_int nz = lh.get_nonzero_bits ();
> -         nz = wi::lshift (nz, shift);
> -         r.set_nonzero_bits (nz);
> -       }
> -    }
> +  update_known_bitmask (r, MULT_EXPR, lh, rh);
>    return true;
>  }
>
> --
> 2.38.1
>


