The vectoriser supports peeling for alignment using predication:
we move back to the previous aligned boundary and make the skipped
elements inactive in the first loop iteration. As it happens,
the cost tables for existing CPUs give equal cost to aligned and
unaligned accesses, so this feature is rarely used.
However, the PR shows that when the feature was forced on, we were
still trying to align to a full-vector boundary even when using
partial vectors.
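The peeling-for-alignment transformation described above can be modelled in
scalar C. This is a hedged sketch, not GCC output or GCC internals: VF,
add_one, and the explicit predicate test are illustrative assumptions.

```c
#include <stddef.h>
#include <stdint.h>

enum { VF = 8 };  /* assumed number of elements per vector */

/* Scalar model of a predicated vector loop that adds 1 to x[0..n-1].
   Instead of entering the loop at x[0], we move back to the previous
   VF-element boundary and predicate out the skipped elements, so every
   vector access in the loop body is aligned.  */
void
add_one (unsigned short *x, size_t n)
{
  /* How many elements lie between the previous aligned boundary
     and x[0].  */
  size_t skipped = ((uintptr_t) x / sizeof *x) % VF;

  for (ptrdiff_t i = -(ptrdiff_t) skipped; i < (ptrdiff_t) n; i += VF)
    for (int lane = 0; lane < VF; lane++)
      {
        ptrdiff_t elt = i + lane;
        /* Lanes before the original start (first iteration) and past
           the end (final iteration) are inactive.  */
        if (elt >= 0 && elt < (ptrdiff_t) n)
          x[elt] += 1;
      }
}
```

Each element of x[0..n-1] is visited exactly once, regardless of how x is
aligned relative to the VF-element boundary.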
gcc/
PR target/98119
* config/aarch64/aarch64.c
(aarch64_vectorize_preferred_vector_alignment): Query the size
of the provided SVE vector; do not assume that all SVE vectors
have the same size.
gcc/testsuite/
PR target/98119
* gcc.target/aarch64/sve/pr98119.c: New test.
{
if (aarch64_sve_data_mode_p (TYPE_MODE (type)))
{
- /* If the length of the vector is fixed, try to align to that length,
- otherwise don't try to align at all. */
+ /* If the length of the vector is a fixed power of 2, try to align
+ to that length, otherwise don't try to align at all. */
HOST_WIDE_INT result;
- if (!BITS_PER_SVE_VECTOR.is_constant (&result))
+ if (!GET_MODE_BITSIZE (TYPE_MODE (type)).is_constant (&result)
+ || !pow2p_hwi (result))
result = TYPE_ALIGN (TREE_TYPE (type));
return result;
}
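As a standalone illustration of the hunk above, here is a hedged sketch of
the fixed decision with GCC's internal queries (GET_MODE_BITSIZE, pow2p_hwi,
TYPE_ALIGN) modelled as plain parameters; all names below are hypothetical.

```c
#include <stdint.h>

/* Analogous to GCC's pow2p_hwi: nonzero if x is a power of 2.  */
static int
pow2p (int64_t x)
{
  return x > 0 && (x & (x - 1)) == 0;
}

/* Preferred alignment in bits.  mode_bits models the size of the
   vector mode actually being used -- which for a partial SVE vector
   can be smaller than the full vector length -- and is meaningful
   only when bits_are_constant is nonzero.  Otherwise fall back to
   the element alignment, as the patch does via TYPE_ALIGN.  */
int64_t
preferred_vector_alignment (int bits_are_constant, int64_t mode_bits,
                            int64_t elt_align_bits)
{
  if (!bits_are_constant || !pow2p (mode_bits))
    return elt_align_bits;
  return mode_bits;
}
```

With -msve-vector-bits=512 this yields 512 for a full vector but 256 for a
half-width partial vector, whereas the old code used BITS_PER_SVE_VECTOR
(512) in both cases.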
--- /dev/null
+/* { dg-options "-O3 -msve-vector-bits=512 -mtune=thunderx" } */
+
+void
+f (unsigned short *x)
+{
+ for (int i = 0; i < 1000; ++i)
+ x[i] += x[i - 16];
+}
+
+/* { dg-final { scan-assembler-not {\tubfx\t[wx][0-9]+, [wx][0-9]+, #?1, #?5\n} } } */
+/* { dg-final { scan-assembler-not {\tand\tx[0-9]+, x[0-9]+, #?-64\n} } } */
+/* { dg-final { scan-assembler {\tubfx\t[wx][0-9]+, [wx][0-9]+, #?1, #?4\n} } } */
+/* { dg-final { scan-assembler {\tand\tx[0-9]+, x[0-9]+, #?-32\n} } } */