This is the mail archive of the gcc-bugs@gcc.gnu.org mailing list for the GCC project.


[Bug tree-optimization/47001] segmentation fault in vect_mark_slp_stmts


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=47001

Ira Rosen <irar at il dot ibm.com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|UNCONFIRMED                 |ASSIGNED
   Last reconfirmed|                            |2010.12.20 09:00:13
                 CC|                            |irar at il dot ibm.com
         AssignedTo|unassigned at gcc dot       |irar at gcc dot gnu.org
                   |gnu.org                     |
     Ever Confirmed|0                           |1

--- Comment #1 from Ira Rosen <irar at il dot ibm.com> 2010-12-20 09:00:13 UTC ---
The vectorizer tries to apply SLP to the reduction here, but there is only one
scalar load for both reductions. This is not supported, and it is also not
checked. I am testing a patch that adds such a check.
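For context, a minimal loop of the kind that can hit this (an illustrative
sketch only, not the testcase attached to the PR): two reduction chains that
both consume the same single scalar load, so the SLP load permutation contains
the same load index twice and the new check has to reject it.

/* Hypothetical reduced example (illustration, not the PR testcase):
   both reductions sum the same load a[i], so the SLP group of loads
   has only one distinct scalar load for two reduction chains.  */
int a[64];

int
foo (void)
{
  int s1 = 0, s2 = 0;
  int i;

  for (i = 0; i < 64; i++)
    {
      s1 += a[i];
      s2 += a[i];
    }

  return s1 + s2;
}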

Index: tree-vect-slp.c
===================================================================
--- tree-vect-slp.c     (revision 167365)
+++ tree-vect-slp.c     (working copy)
@@ -1002,7 +1002,35 @@ vect_supported_load_permutation_p (slp_i

       if (!bad_permutation)
         {
-          /* This permutaion is valid for reduction.  Since the order of the
+          /* Check that all the loads are different.  */
+          load_index = sbitmap_alloc (group_size);
+          sbitmap_zero (load_index);
+          for (k = 0; k < group_size; k++)
+            {
+              first_group_load_index = VEC_index (int, load_permutation, k);
+              if (TEST_BIT (load_index, first_group_load_index))
+                {
+                  bad_permutation = true;
+                  break;
+                }
+
+              SET_BIT (load_index, first_group_load_index);
+            }
+
+          if (!bad_permutation)
+            for (k = 0; k < group_size; k++)
+              if (!TEST_BIT (load_index, k))
+                {
+                  bad_permutation = true;
+                  break;
+                }
+
+          sbitmap_free (load_index);
+        }
+
+      if (!bad_permutation)
+        {
+          /* This permutation is valid for reduction.  Since the order of the
              statements in the nodes is not important unless they are memory
              accesses, we can rearrange the statements in all the nodes
              according to the order of the loads.  */


Ira
