[PATCH] Improve vectorizer peeling for alignment costmodel


The following extends the very simplistic cost modeling I added
sometime late in the release process: when all DRs have unknown
misalignment, that model is now also applied to loops containing
stores.

The model basically says it's useless to peel for alignment if only
a single DR is affected, or if, in case we'd end up using hw-supported
misaligned loads, a misaligned load costs the same as an aligned one.
Previously we'd usually align one of the stores, on the theory that
this improves (precious) store bandwidth.
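
Concretely, the new testcase below is the canonical case: in a plain
copy loop the store is the only DR that peeling could align, and
aligning it does not help the load:

void func(double * __restrict__ v1, double * v2, unsigned n)
{
  /* Only the store to v1 could be aligned by peeling; the load
     from v2 stays at unknown misalignment either way, so with
     the new model we no longer peel here.  */
  for (unsigned i = 0; i < n; ++i)
    v1[i] = v2[i];
}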

Note this is only ever so slightly more conservative (aka less
peeling).  We'll still apply peeling for alignment if you make the
testcase use +=, because then we align both the load and the store
from v1, as shown below.
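
That variant would look like this (the same function as the new
testcase, with the store turned into a read-modify-write):

void func(double * __restrict__ v1, double * v2, unsigned n)
{
  /* v1 is now both loaded and stored, so peeling for v1's
     alignment aligns two DRs and is still considered useful.  */
  for (unsigned i = 0; i < n; ++i)
    v1[i] += v2[i];
}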

Bootstrap / regtest running on x86_64-unknown-linux-gnu.

Richard.

2017-05-03  Richard Biener  <rguenther@suse.de>

	* tree-vect-data-refs.c (vect_enhance_data_refs_alignment):
	When all DRs have unknown misalignment do not always peel
	when there is a store but apply the same costing model as if
	there were only loads.

	* gcc.dg/vect/costmodel/x86_64/costmodel-alignpeel.c: New testcase.

Index: gcc/tree-vect-data-refs.c
===================================================================
--- gcc/tree-vect-data-refs.c	(revision 247498)
+++ gcc/tree-vect-data-refs.c	(working copy)
@@ -1715,18 +1741,18 @@ vect_enhance_data_refs_alignment (loop_v
             dr0 = first_store;
         }
 
-      /* In case there are only loads with different unknown misalignments, use
-         peeling only if it may help to align other accesses in the loop or
+      /* Use peeling only if it may help to align other accesses in the loop or
 	 if it may help improving load bandwith when we'd end up using
 	 unaligned loads.  */
       tree dr0_vt = STMT_VINFO_VECTYPE (vinfo_for_stmt (DR_STMT (dr0)));
-      if (!first_store
-	  && !STMT_VINFO_SAME_ALIGN_REFS (
-		  vinfo_for_stmt (DR_STMT (dr0))).length ()
+      if (STMT_VINFO_SAME_ALIGN_REFS
+	    (vinfo_for_stmt (DR_STMT (dr0))).length () == 0
 	  && (vect_supportable_dr_alignment (dr0, false)
 	      != dr_unaligned_supported
-	      || (builtin_vectorization_cost (vector_load, dr0_vt, 0)
-		  == builtin_vectorization_cost (unaligned_load, dr0_vt, -1))))
+	      || (DR_IS_READ (dr0)
+		  && (builtin_vectorization_cost (vector_load, dr0_vt, 0)
+		      == builtin_vectorization_cost (unaligned_load,
+						     dr0_vt, -1)))))
         do_peeling = false;
     }
 

Index: gcc/testsuite/gcc.dg/vect/costmodel/x86_64/costmodel-alignpeel.c
===================================================================
--- gcc/testsuite/gcc.dg/vect/costmodel/x86_64/costmodel-alignpeel.c	(nonexistent)
+++ gcc/testsuite/gcc.dg/vect/costmodel/x86_64/costmodel-alignpeel.c	(working copy)
@@ -0,0 +1,9 @@
+/* { dg-do compile } */
+
+void func(double * __restrict__ v1, double * v2, unsigned n)
+{
+  for (unsigned i = 0; i < n; ++i)
+    v1[i] = v2[i];
+}
+
+/* { dg-final { scan-tree-dump-not "Alignment of access forced using peeling" "vect" } } */

