[PATCH] [og10] vect: Add target hook to prefer gather/scatter instructions

Julian Brown <julian@codesourcery.com>
Wed Jan 13 23:48:42 GMT 2021


For AMD GCN, the instructions available for loading/storing vectors are
always scatter/gather operations (i.e. there are separate addresses for
each vector lane), so the current heuristic in get_group_load_store_type
that avoids gather/scatter operations with too many elements is
counterproductive. Avoiding such operations in that function can
subsequently lead to a missed vectorization opportunity: later analyses
in the vectorizer try to use a very wide array type which is not
available on this target, and vectorization fails.
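
As a concrete illustration (a hypothetical example, not taken from the
testsuite), a strided access such as the one below forms a single-element
interleaving group and so falls under that heuristic; on GCN it maps
directly onto a gather load, while the elementwise fallback emits one
scalar load per vector lane:

  void
  f (float *restrict dst, float *restrict src, int n)
  {
    for (int i = 0; i < n; i++)
      dst[i] = src[i * 4];  /* Stride-4 load: a gather on GCN.  */
  }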

The attached patch adds a target hook to override the "single_element_p"
heuristic in that function, and activates the hook for GCN. This allows
much better code to be generated for affected loops.
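
For reference, any other port with the same property could opt in with
the usual hook-definition idiom, mirroring the GCN change in the patch
below:

  /* In the port's <target>.c, alongside its other hook definitions.  */
  #undef  TARGET_VECTORIZE_PREFER_GATHER_SCATTER
  #define TARGET_VECTORIZE_PREFER_GATHER_SCATTER true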

Tested with offloading to AMD GCN. I will apply it to the og10 branch
shortly.

Julian

2021-01-13  Julian Brown  <julian@codesourcery.com>

gcc/
	* doc/tm.texi.in (TARGET_VECTORIZE_PREFER_GATHER_SCATTER): Document
	new hook.
	* doc/tm.texi: Regenerate.
	* target.def (prefer_gather_scatter): New target hook.
	* tree-vect-stmts.c (get_group_load_store_type): Optionally prefer
	gather/scatter instructions to scalar/elementwise fallback.
	* config/gcn/gcn.c (TARGET_VECTORIZE_PREFER_GATHER_SCATTER): Define
	hook.
---
 gcc/config/gcn/gcn.c  | 2 ++
 gcc/doc/tm.texi       | 5 +++++
 gcc/doc/tm.texi.in    | 2 ++
 gcc/target.def        | 8 ++++++++
 gcc/tree-vect-stmts.c | 9 +++++++--
 5 files changed, 24 insertions(+), 2 deletions(-)

diff --git a/gcc/config/gcn/gcn.c b/gcc/config/gcn/gcn.c
index ee9f00558305..ea88b5e91244 100644
--- a/gcc/config/gcn/gcn.c
+++ b/gcc/config/gcn/gcn.c
@@ -6501,6 +6501,8 @@ gcn_dwarf_register_span (rtx rtl)
   gcn_vector_alignment_reachable
 #undef  TARGET_VECTOR_MODE_SUPPORTED_P
 #define TARGET_VECTOR_MODE_SUPPORTED_P gcn_vector_mode_supported_p
+#undef  TARGET_VECTORIZE_PREFER_GATHER_SCATTER
+#define TARGET_VECTORIZE_PREFER_GATHER_SCATTER true
 
 struct gcc_target targetm = TARGET_INITIALIZER;
 
diff --git a/gcc/doc/tm.texi b/gcc/doc/tm.texi
index 581b7b51eeb0..bd0b2eea477a 100644
--- a/gcc/doc/tm.texi
+++ b/gcc/doc/tm.texi
@@ -6122,6 +6122,11 @@ The default is @code{NULL_TREE} which means to not vectorize scatter
 stores.
 @end deftypefn
 
+@deftypevr {Target Hook} bool TARGET_VECTORIZE_PREFER_GATHER_SCATTER
+Define this hook to @code{true} if gather loads or scatter stores are
+cheaper on this target than a sequence of elementwise loads or stores.
+@end deftypevr
+
 @deftypefn {Target Hook} int TARGET_SIMD_CLONE_COMPUTE_VECSIZE_AND_SIMDLEN (struct cgraph_node *@var{}, struct cgraph_simd_clone *@var{}, @var{tree}, @var{int})
 This hook should set @var{vecsize_mangle}, @var{vecsize_int}, @var{vecsize_float}
 fields in @var{simd_clone} structure pointed by @var{clone_info} argument and also
diff --git a/gcc/doc/tm.texi.in b/gcc/doc/tm.texi.in
index afa19d4ac63c..c0883e5da82c 100644
--- a/gcc/doc/tm.texi.in
+++ b/gcc/doc/tm.texi.in
@@ -4195,6 +4195,8 @@ address;  but often a machine-dependent strategy can generate better code.
 
 @hook TARGET_VECTORIZE_BUILTIN_SCATTER
 
+@hook TARGET_VECTORIZE_PREFER_GATHER_SCATTER
+
 @hook TARGET_SIMD_CLONE_COMPUTE_VECSIZE_AND_SIMDLEN
 
 @hook TARGET_SIMD_CLONE_ADJUST
diff --git a/gcc/target.def b/gcc/target.def
index 00421f3a6acd..0b34ab5c3d52 100644
--- a/gcc/target.def
+++ b/gcc/target.def
@@ -2027,6 +2027,14 @@ all zeros.  GCC can then try to branch around the instruction instead.",
  (unsigned ifn),
  default_empty_mask_is_expensive)
 
+/* Prefer gather/scatter loads/stores to e.g. elementwise accesses if
+   we cannot use a contiguous access.  */
+DEFHOOKPOD
+(prefer_gather_scatter,
+ "This hook is set to TRUE if gather loads or scatter stores are cheaper on\n\
+this target than a sequence of elementwise loads or stores.",
+ bool, false)
+
 /* Target builtin that implements vector gather operation.  */
 DEFHOOK
 (builtin_gather,
diff --git a/gcc/tree-vect-stmts.c b/gcc/tree-vect-stmts.c
index 9ace345fc5e2..e117d3d16afc 100644
--- a/gcc/tree-vect-stmts.c
+++ b/gcc/tree-vect-stmts.c
@@ -2444,9 +2444,14 @@ get_group_load_store_type (stmt_vec_info stmt_info, tree vectype, bool slp,
 	 it probably isn't a win to use separate strided accesses based
 	 on nearby locations.  Or, even if it's a win over scalar code,
 	 it might not be a win over vectorizing at a lower VF, if that
-	 allows us to use contiguous accesses.  */
+	 allows us to use contiguous accesses.
+
+	 On some targets (e.g. AMD GCN), always use gather/scatter accesses
+	 here since those are the only types of vector loads/stores available,
+	 and the fallback case of using elementwise accesses is very
+	 inefficient.  */
       if (*memory_access_type == VMAT_ELEMENTWISE
-	  && single_element_p
+	  && (targetm.vectorize.prefer_gather_scatter || single_element_p)
 	  && loop_vinfo
 	  && vect_use_strided_gather_scatters_p (stmt_info, loop_vinfo,
 						 masked_p, gs_info))
-- 
2.29.2
