[committed][AArch64] Prefer FPRs over GPRs for CLASTB
Richard Sandiford
richard.sandiford@arm.com
Wed Aug 7 19:17:00 GMT 2019
This patch makes the SVE CLASTB GPR alternative more expensive than the
FPR alternative in order to avoid unnecessary cross-register-file moves.
It also fixes the prefix used to print the FPR; <vw> only handles 32-bit
and 64-bit elements.
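For reference, the semantics the fold_extract_last_<mode> pattern
implements (set the result to the last active element, or to the tied
input if no elements are active) can be modelled in scalar C roughly as
below; the function name and signature are illustrative only, not taken
from GCC:

```c
#include <stdint.h>
#include <stddef.h>

/* Scalar model of CLASTB / fold_extract_last: return the value from
   the last lane whose predicate bit is set, or TIED if no lane is
   active.  Purely illustrative; names are not from the GCC sources.  */
static uint8_t
fold_extract_last_u8 (uint8_t tied, const _Bool *pred,
                      const uint8_t *vals, size_t n)
{
  uint8_t result = tied;
  for (size_t i = 0; i < n; i++)
    if (pred[i])
      result = vals[i];  /* Last active element wins.  */
  return result;
}
```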
Tested on aarch64-linux-gnu (with and without SVE) and aarch64_be-elf.
Applied as r274191.
Richard
2019-08-07 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* config/aarch64/aarch64-sve.md (fold_extract_last_<mode>):
Disparage the GPR alternative relative to the FPR one.
Fix handling of 8-bit and 16-bit FPR values.
gcc/testsuite/
* gcc.target/aarch64/sve/clastb_8.c: New test.
Index: gcc/config/aarch64/aarch64-sve.md
===================================================================
--- gcc/config/aarch64/aarch64-sve.md 2019-08-07 20:05:39.025879238 +0100
+++ gcc/config/aarch64/aarch64-sve.md 2019-08-07 20:07:56.256858738 +0100
@@ -3104,7 +3104,7 @@ (define_insn "ptest_ptrue<mode>"
;; Set operand 0 to the last active element in operand 3, or to tied
;; operand 1 if no elements are active.
(define_insn "fold_extract_last_<mode>"
- [(set (match_operand:<VEL> 0 "register_operand" "=r, w")
+ [(set (match_operand:<VEL> 0 "register_operand" "=?r, w")
(unspec:<VEL>
[(match_operand:<VEL> 1 "register_operand" "0, 0")
(match_operand:<VPRED> 2 "register_operand" "Upl, Upl")
@@ -3113,7 +3113,7 @@ (define_insn "fold_extract_last_<mode>"
"TARGET_SVE"
"@
clastb\t%<vwcore>0, %2, %<vwcore>0, %3.<Vetype>
- clastb\t%<vw>0, %2, %<vw>0, %3.<Vetype>"
+ clastb\t%<Vetype>0, %2, %<Vetype>0, %3.<Vetype>"
)
;; -------------------------------------------------------------------------
Index: gcc/testsuite/gcc.target/aarch64/sve/clastb_8.c
===================================================================
--- /dev/null 2019-07-30 08:53:31.317691683 +0100
+++ gcc/testsuite/gcc.target/aarch64/sve/clastb_8.c 2019-08-07 20:07:56.256858738 +0100
@@ -0,0 +1,25 @@
+/* { dg-do assemble { target aarch64_asm_sve_ok } } */
+/* { dg-options "-O2 -ftree-vectorize -msve-vector-bits=256 --save-temps" } */
+
+#include <stdint.h>
+
+#define TEST_TYPE(TYPE) \
+ void \
+ test_##TYPE (TYPE *ptr, TYPE *a, TYPE *b, TYPE min_v) \
+ { \
+ TYPE last = *ptr; \
+ for (int i = 0; i < 1024; i++) \
+ if (a[i] < min_v) \
+ last = b[i]; \
+ *ptr = last; \
+ }
+
+TEST_TYPE (uint8_t);
+TEST_TYPE (uint16_t);
+TEST_TYPE (uint32_t);
+TEST_TYPE (uint64_t);
+
+/* { dg-final { scan-assembler {\tclastb\t(b[0-9]+), p[0-7], \1, z[0-9]+\.b\n} } } */
+/* { dg-final { scan-assembler {\tclastb\t(h[0-9]+), p[0-7], \1, z[0-9]+\.h\n} } } */
+/* { dg-final { scan-assembler {\tclastb\t(s[0-9]+), p[0-7], \1, z[0-9]+\.s\n} } } */
+/* { dg-final { scan-assembler {\tclastb\t(d[0-9]+), p[0-7], \1, z[0-9]+\.d\n} } } */