Extend fold_vec_perm to fold VEC_PERM_EXPR in VLA manner
Prathamesh Kulkarni
prathamesh.kulkarni@linaro.org
Mon Oct 17 10:32:02 GMT 2022
On Mon, 10 Oct 2022 at 16:18, Prathamesh Kulkarni
<prathamesh.kulkarni@linaro.org> wrote:
>
> On Fri, 30 Sept 2022 at 21:38, Richard Sandiford
> <richard.sandiford@arm.com> wrote:
> >
> > Richard Sandiford via Gcc-patches <gcc-patches@gcc.gnu.org> writes:
> > > Prathamesh Kulkarni <prathamesh.kulkarni@linaro.org> writes:
> > >> Sorry to ask a silly question, but in which case shall we select the 2nd vector ?
> > >> For num_poly_int_coeffs == 2,
> > >> a1 /trunc n1 == (a1 + 0x) / (n1.coeffs[0] + n1.coeffs[1]*x)
> > >> If a1/trunc n1 succeeds,
> > >> 0 / n1.coeffs[1] == a1/n1.coeffs[0] == 0.
> > >> So, a1 has to be < n1.coeffs[0] ?
> > >
> > > Remember that a1 is itself a poly_int. It's not necessarily a constant.
> > >
> > > E.g. the TRN1 .D instruction maps to a VEC_PERM_EXPR with the selector:
> > >
> > > { 0, 2 + 2x, 1, 4 + 2x, 2, 6 + 2x, ... }
> >
> > Sorry, should have been:
> >
> > { 0, 2 + 2x, 2, 4 + 2x, 4, 6 + 2x, ... }
> Hi Richard,
> Thanks for the clarifications, and sorry for the late reply.
> I have attached a POC patch that tries to implement the above approach.
> Passes bootstrap+test on x86_64-linux-gnu and aarch64-linux-gnu for VLS vectors.
>
> For VLA vectors, I have only done limited testing so far.
> It seems to pass a couple of tests written in the patch for
> nelts_per_pattern == 3,
> and folds the following svld1rq test:
> int32x4_t v = {1, 2, 3, 4};
> return svld1rq_s32 (svptrue_b8 (), &v[0]);
> into:
> return {1, 2, 3, 4, ...};
> I will try to bootstrap+test it on an SVE machine to test VLA folding further.
With the attached patch, it seems to pass bootstrap+test with SVE enabled.
The only difference w.r.t. the previous patch is that it adds a check in
get_vector_for_pattern for whether S is constant, and returns NULL_TREE
otherwise.
I added this check because 930325-1.c ICE'd with the previous patch,
since it contained the following VEC_PERM_EXPR, where S is non-constant:
vect__16.13_70 = VEC_PERM_EXPR <vect__16.12_69, vect__16.12_69, {
POLY_INT_CST [3, 4], POLY_INT_CST [6, 8], POLY_INT_CST [9, 12], ...
}>;
I am not sure how to proceed in this case, so I chose to bail out.
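
FWIW, the step for that selector is itself a poly_int:
(6 + 8x) - (3 + 4x) == 3 + 4x, which has no compile-time value.
The new check in get_vector_for_pattern boils down to the fragment
below (sketched from the attached patch, omitting the surrounding
sel_nelts_per_pattern == 3 guard):

  poly_uint64 a1 = sel[pattern + sel_npatterns];
  poly_uint64 a2 = sel[pattern + 2 * sel_npatterns];
  poly_uint64 diff = a2 - a1;
  if (!diff.is_constant ())
    /* Step varies with the runtime vector length, so bail out.  */
    return NULL_TREE;
  int64_t S = diff.to_constant ();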
Thanks,
Prathamesh
>
> I have a couple of questions:
> 1] When the mask selects elements from the same vector but from
> different patterns:
> For example:
> arg0 = {1, 11, 2, 12, 3, 13, ...},
> arg1 = {21, 31, 22, 32, 23, 33, ...},
> mask = {0, 0, 0, 1, 0, 2, ... },
> All have npatterns = 2, nelts_per_pattern = 3.
>
> With the above mask,
> Pattern {0, ...} selects arg0[0], i.e. {1, ...}
> Pattern {0, 1, 2, ...} selects arg0[0], arg0[1], arg0[2], i.e. {1, 11, 2, ...}
> While arg0[0] and arg0[2] belong to the same pattern, arg0[1] belongs to
> a different pattern in arg0.
> The result is:
> res = {1, 1, 1, 11, 1, 2, ...}
> In this case, res's 2nd pattern {1, 11, 2, ...} is encoded with
> a0 = 1, a1 = 11, S = -9.
> Is that expected though ? It seems to create a new encoding which
> wasn't present in the input vector. For instance, the next elem in the
> sequence would be -7,
> which is not present in arg0 originally.
> I suppose it's fine, since if the user defines the mask to have pattern {0,
> 1, 2, ...},
> they intended the result to have a pattern with the above encoding.
> Just wanted to confirm whether this is correct ?
>
> 2] Could you please suggest a test case for S < 0 ?
> I am not able to come up with one :/
>
> Thanks,
> Prathamesh
> >
> > > which is an interleaving of the two patterns:
> > >
> > > { 0, 2, 4, ... } a0 = 0, a1 = 2, S = 2
> > > { 2 + 2x, 4 + 2x, 6 + 2x } a0 = 2 + 2x, a1 = 4 + 2x, S = 2
-------------- next part --------------
diff --git a/gcc/fold-const.cc b/gcc/fold-const.cc
index 9f7beae14e5..e93f2c7b592 100644
--- a/gcc/fold-const.cc
+++ b/gcc/fold-const.cc
@@ -85,6 +85,9 @@ along with GCC; see the file COPYING3. If not see
#include "vec-perm-indices.h"
#include "asan.h"
#include "gimple-range.h"
+#include <algorithm>
+#include "tree-pretty-print.h"
+#include "print-tree.h"
/* Nonzero if we are folding constants inside an initializer or a C++
manifestly-constant-evaluated context; zero otherwise.
@@ -10494,38 +10497,56 @@ fold_mult_zconjz (location_t loc, tree type, tree expr)
build_zero_cst (itype));
}
+/* Check if PATTERN in SEL selects either ARG0 or ARG1,
+ and return the selected arg, otherwise return NULL_TREE. */
-/* Helper function for fold_vec_perm. Store elements of VECTOR_CST or
- CONSTRUCTOR ARG into array ELTS, which has NELTS elements, and return
- true if successful. */
-
-static bool
-vec_cst_ctor_to_array (tree arg, unsigned int nelts, tree *elts)
+static tree
+get_vector_for_pattern (tree arg0, tree arg1,
+ const vec_perm_indices &sel, unsigned pattern)
{
- unsigned HOST_WIDE_INT i, nunits;
+ unsigned sel_npatterns = sel.encoding ().npatterns ();
+ unsigned sel_nelts_per_pattern = sel.encoding ().nelts_per_pattern ();
- if (TREE_CODE (arg) == VECTOR_CST
- && VECTOR_CST_NELTS (arg).is_constant (&nunits))
+ poly_uint64 n1 = TYPE_VECTOR_SUBPARTS (TREE_TYPE (arg0));
+ poly_uint64 nsel = sel.length ();
+ poly_uint64 esel;
+
+ if (!multiple_p (nsel, sel_npatterns, &esel))
+ return NULL_TREE;
+
+ poly_uint64 a1 = sel[pattern + sel_npatterns];
+ int64_t S = 0;
+ if (sel_nelts_per_pattern == 3)
{
- for (i = 0; i < nunits; ++i)
- elts[i] = VECTOR_CST_ELT (arg, i);
+ poly_uint64 a2 = sel[pattern + 2 * sel_npatterns];
+ poly_uint64 diff = a2 - a1;
+ if (!diff.is_constant ())
+ return NULL_TREE;
+ S = diff.to_constant ();
}
- else if (TREE_CODE (arg) == CONSTRUCTOR)
+
+ poly_uint64 ae = a1 + (esel - 2) * S;
+ uint64_t q1, qe;
+ poly_uint64 r1, re;
+
+ if (!can_div_trunc_p (a1, n1, &q1, &r1)
+ || !can_div_trunc_p (ae, n1, &qe, &re)
+ || (q1 != qe))
+ return NULL_TREE;
+
+ tree arg = ((q1 & 1) == 0) ? arg0 : arg1;
+
+ if (S < 0)
{
- constructor_elt *elt;
+ poly_uint64 a0 = sel[pattern];
+ if (!known_eq (S, a1 - a0))
+ return NULL_TREE;
- FOR_EACH_VEC_SAFE_ELT (CONSTRUCTOR_ELTS (arg), i, elt)
- if (i >= nelts || TREE_CODE (TREE_TYPE (elt->value)) == VECTOR_TYPE)
- return false;
- else
- elts[i] = elt->value;
+ if (!known_gt (re, VECTOR_CST_NPATTERNS (arg)))
+ return NULL_TREE;
}
- else
- return false;
- for (; i < nelts; i++)
- elts[i]
- = fold_convert (TREE_TYPE (TREE_TYPE (arg)), integer_zero_node);
- return true;
+
+ return arg;
}
/* Attempt to fold vector permutation of ARG0 and ARG1 vectors using SEL
@@ -10539,41 +10560,112 @@ fold_vec_perm (tree type, tree arg0, tree arg1, const vec_perm_indices &sel)
unsigned HOST_WIDE_INT nelts;
bool need_ctor = false;
- if (!sel.length ().is_constant (&nelts))
- return NULL_TREE;
- gcc_assert (known_eq (TYPE_VECTOR_SUBPARTS (type), nelts)
- && known_eq (TYPE_VECTOR_SUBPARTS (TREE_TYPE (arg0)), nelts)
- && known_eq (TYPE_VECTOR_SUBPARTS (TREE_TYPE (arg1)), nelts));
+ gcc_assert (known_eq (TYPE_VECTOR_SUBPARTS (type), sel.length ())
+ && known_eq (TYPE_VECTOR_SUBPARTS (TREE_TYPE (arg0)),
+ TYPE_VECTOR_SUBPARTS (TREE_TYPE (arg1))));
if (TREE_TYPE (TREE_TYPE (arg0)) != TREE_TYPE (type)
|| TREE_TYPE (TREE_TYPE (arg1)) != TREE_TYPE (type))
return NULL_TREE;
- tree *in_elts = XALLOCAVEC (tree, nelts * 2);
- if (!vec_cst_ctor_to_array (arg0, nelts, in_elts)
- || !vec_cst_ctor_to_array (arg1, nelts, in_elts + nelts))
+ unsigned res_npatterns = 0;
+ unsigned res_nelts_per_pattern = 0;
+ unsigned sel_npatterns = 0;
+ tree *vector_for_pattern = NULL;
+
+ if (TREE_CODE (arg0) == VECTOR_CST
+ && TREE_CODE (arg1) == VECTOR_CST
+ && !sel.length ().is_constant ())
+ {
+ sel_npatterns = sel.encoding ().npatterns ();
+ vector_for_pattern = XALLOCAVEC (tree, sel_npatterns);
+ for (unsigned i = 0; i < sel_npatterns; i++)
+ {
+ tree op = get_vector_for_pattern (arg0, arg1, sel, i);
+ if (!op)
+ return NULL_TREE;
+ vector_for_pattern[i] = op;
+ }
+
+ unsigned arg0_npatterns = VECTOR_CST_NPATTERNS (arg0);
+ unsigned arg1_npatterns = VECTOR_CST_NPATTERNS (arg1);
+
+ res_npatterns
+ = least_common_multiple (sel_npatterns,
+ least_common_multiple (arg0_npatterns,
+ arg1_npatterns));
+ res_nelts_per_pattern
+ = std::max(sel.encoding ().nelts_per_pattern (),
+ std::max (VECTOR_CST_NELTS_PER_PATTERN (arg0),
+ VECTOR_CST_NELTS_PER_PATTERN (arg1)));
+ }
+ else if (sel.length ().is_constant (&nelts)
+ && TYPE_VECTOR_SUBPARTS (TREE_TYPE (arg0)).is_constant ()
+ && TYPE_VECTOR_SUBPARTS (TREE_TYPE (arg0)).to_constant () == nelts)
+ {
+ /* For VLS vectors, treat all vectors with
+ npatterns = nelts, nelts_per_pattern = 1. */
+ res_npatterns = sel_npatterns = nelts;
+ res_nelts_per_pattern = 1;
+ vector_for_pattern = XALLOCAVEC (tree, nelts);
+ for (unsigned i = 0; i < nelts; i++)
+ {
+ HOST_WIDE_INT index;
+ if (!sel[i].is_constant (&index))
+ return NULL_TREE;
+ vector_for_pattern[i] = (index < nelts) ? arg0 : arg1;
+ }
+ }
+ else
return NULL_TREE;
- tree_vector_builder out_elts (type, nelts, 1);
- for (i = 0; i < nelts; i++)
+ tree_vector_builder out_elts (type, res_npatterns,
+ res_nelts_per_pattern);
+ unsigned res_nelts = res_npatterns * res_nelts_per_pattern;
+ for (unsigned i = 0; i < res_nelts; i++)
{
- HOST_WIDE_INT index;
- if (!sel[i].is_constant (&index))
+ poly_uint64 n1 = TYPE_VECTOR_SUBPARTS (TREE_TYPE (arg0));
+ uint64_t q;
+ poly_uint64 r;
+
+ /* Divide sel[i] by input vector length, to obtain remainder,
+ which would be the index for either input vector. */
+ if (!can_div_trunc_p (sel[i], n1, &q, &r))
return NULL_TREE;
- if (!CONSTANT_CLASS_P (in_elts[index]))
- need_ctor = true;
- out_elts.quick_push (unshare_expr (in_elts[index]));
+
+ unsigned HOST_WIDE_INT index;
+ if (!r.is_constant (&index))
+ return NULL_TREE;
+
+ /* For VLA vectors, i % sel_npatterns would give the pattern
+ in sel that ith elem belongs to.
+ For VLS vectors, sel_npatterns == res_nelts == nelts,
+ so i % sel_npatterns == i since i < nelts */
+ tree arg = vector_for_pattern[i % sel_npatterns];
+ tree elem;
+ if (TREE_CODE (arg) == CONSTRUCTOR)
+ {
+ gcc_assert (index < nelts);
+ if (index >= vec_safe_length (CONSTRUCTOR_ELTS (arg)))
+ return NULL_TREE;
+ elem = CONSTRUCTOR_ELT (arg, index)->value;
+ if (VECTOR_TYPE_P (TREE_TYPE (elem)))
+ return NULL_TREE;
+ need_ctor = true;
+ }
+ else
+ elem = vector_cst_elt (arg, index);
+ out_elts.quick_push (elem);
}
if (need_ctor)
{
vec<constructor_elt, va_gc> *v;
- vec_alloc (v, nelts);
- for (i = 0; i < nelts; i++)
+ vec_alloc (v, res_nelts);
+ for (i = 0; i < res_nelts; i++)
CONSTRUCTOR_APPEND_ELT (v, NULL_TREE, out_elts[i]);
return build_constructor (type, v);
}
- else
- return out_elts.build ();
+ return out_elts.build ();
}
/* Try to fold a pointer difference of type TYPE two address expressions of
@@ -16910,6 +17002,97 @@ test_vec_duplicate_folding ()
ASSERT_TRUE (operand_equal_p (dup5_expr, dup5_cst, 0));
}
+static tree
+build_vec_int_cst (unsigned npatterns, unsigned nelts_per_pattern,
+ int *encoded_elems)
+{
+ scalar_int_mode int_mode = SCALAR_INT_TYPE_MODE (integer_type_node);
+ machine_mode vmode = targetm.vectorize.preferred_simd_mode (int_mode);
+ //machine_mode vmode = VNx4SImode;
+ poly_uint64 nunits = GET_MODE_NUNITS (vmode);
+ tree vectype = build_vector_type (integer_type_node, nunits);
+
+ tree_vector_builder builder (vectype, npatterns, nelts_per_pattern);
+ for (unsigned i = 0; i < npatterns * nelts_per_pattern; i++)
+ builder.quick_push (build_int_cst (integer_type_node, encoded_elems[i]));
+ return builder.build ();
+}
+
+static void
+test_vec_perm_vla_folding ()
+{
+ int arg0_elems[] = { 1, 11, 2, 12, 3, 13 };
+ tree arg0 = build_vec_int_cst (2, 3, arg0_elems);
+
+ int arg1_elems[] = { 21, 31, 22, 32, 23, 33 };
+ tree arg1 = build_vec_int_cst (2, 3, arg1_elems);
+
+ if (TYPE_VECTOR_SUBPARTS (TREE_TYPE (arg0)).is_constant ()
+ || TYPE_VECTOR_SUBPARTS (TREE_TYPE (arg1)).is_constant ())
+ return;
+
+ /* Case 1: For mask: {0, 1, 2, ...}, npatterns == 1, nelts_per_pattern == 3,
+ should select arg0. */
+ {
+ int mask_elems[] = {0, 1, 2};
+ tree mask = build_vec_int_cst (1, 3, mask_elems);
+ tree res = fold_ternary (VEC_PERM_EXPR, TREE_TYPE (arg0), arg0, arg1, mask);
+ ASSERT_TRUE (VECTOR_CST_NPATTERNS (res) == 2);
+ ASSERT_TRUE (VECTOR_CST_NELTS_PER_PATTERN (res) == 3);
+
+ unsigned res_nelts = vector_cst_encoded_nelts (res);
+ for (unsigned i = 0; i < res_nelts; i++)
+ ASSERT_TRUE (operand_equal_p (VECTOR_CST_ELT (res, i),
+ VECTOR_CST_ELT (arg0, i), 0));
+ }
+
+ /* Case 2: For mask: {4, 5, 6, ...}, npatterns == 1, nelts_per_pattern == 3,
+ should return NULL because for len = 4 + 4x,
+ if x == 0, we select from arg1
+ if x > 0, we select from arg0
+ and thus cannot determine result at compile time. */
+ {
+ int mask_elems[] = {4, 5, 6};
+ tree mask = build_vec_int_cst (1, 3, mask_elems);
+ tree res = fold_ternary (VEC_PERM_EXPR, TREE_TYPE (arg0), arg0, arg1, mask);
+ gcc_assert (res == NULL_TREE);
+ }
+
+ /* Case 3:
+ mask: {0, 0, 0, 1, 0, 2, ...}
+ npatterns == 2, nelts_per_pattern == 3
+ Pattern {0, ...} should select arg0[0], ie, 1.
+ Pattern {0, 1, 2, ...} should select arg0: {1, 11, 2, ...},
+ so res = {1, 1, 1, 11, 1, 2, ...}. */
+ {
+ int mask_elems[] = {0, 0, 0, 1, 0, 2};
+ tree mask = build_vec_int_cst (2, 3, mask_elems);
+ tree res = fold_ternary (VEC_PERM_EXPR, TREE_TYPE (arg0), arg0, arg1, mask);
+
+ ASSERT_TRUE (VECTOR_CST_NPATTERNS (res) == 2);
+ ASSERT_TRUE (VECTOR_CST_NELTS_PER_PATTERN (res) == 3);
+
+ /* Check encoding: {1, 11, 2, ...} */
+ int res_encoded_elems[] = {1, 1, 1, 11, 1, 2};
+ for (unsigned i = 0; i < vector_cst_encoded_nelts (res); i++)
+ ASSERT_TRUE (wi::to_wide(VECTOR_CST_ELT (res, i)) == res_encoded_elems[i]);
+ }
+
+ /* Case 4:
+ mask: {0, 4 + 4x, 0, 5 + 4x, 0, 6 + 4x, ...}
+ npatterns == 2, nelts_per_pattern == 3
+ Pattern {0, ...} should select arg0[1]
+ Pattern {4 + 4x, 5 + 4x, 6 + 4x, ...} should select from arg1, since:
+ a1 = 5 + 4x
+ ae = (5 + 4x) + ((4 + 4x) / 2 - 2) * 1
+ = 5 + 6x
+ Since a1/4+4x == ae/4+4x == 1, we select arg1[0], arg1[1], arg1[2], ...
+ res: {1, 21, 1, 31, 1, 22, ... }
+ FIXME: How to build vector with poly_int elems ? */
+
+ /* Case 5: S < 0. */
+}
+
/* Run all of the selftests within this file. */
void
@@ -16918,6 +17101,7 @@ fold_const_cc_tests ()
test_arithmetic_folding ();
test_vector_folding ();
test_vec_duplicate_folding ();
+ test_vec_perm_vla_folding ();
}
} // namespace selftest