unsigned short a[2], b[2];
void foo (void)
{
  int i;
  for (i = 0; i < 2; ++i)
    a[i] = b[i];
}

unsigned char x[4], y[4];
void bar (void)
{
  int i;
  for (i = 0; i < 4; ++i)
    x[i] = y[i];
}

Vectorizing this on i?86 (without SSE) fails for the first testcase at -O3 because we unroll the loop and SLP refuses to handle the "unaligned" load.  For the second case we loop-vectorize it but apply versioning for alignment.  The alignment checks in the vectorizer do not account for non-vector modes.

If we fix that, the first loop still fails to SLP vectorize because of a bogus cost calculation:

t.c:6:13: note: Cost model analysis:
  Vector inside of basic block cost: 4
  Vector prologue cost: 0
  Vector epilogue cost: 0
  Scalar cost of basic block: 4
t.c:6:13: note: not vectorized: vectorization is not profitable.

caused by the unaligned load/store cost:

t.c:6:13: note: vect_model_load_cost: unaligned supported by hardware.
t.c:6:13: note: vect_model_load_cost: inside_cost = 2, prologue_cost = 0 .
...
t.c:6:13: note: vect_model_store_cost: unaligned supported by hardware.
t.c:6:13: note: vect_model_store_cost: inside_cost = 2, prologue_cost = 0 .

That's a backend bug: ix86_builtin_vectorization_cost does not consider !VECTOR_MODE_P vector types.  On the other hand, for SLP vectorization, if the vector and scalar costs are equal we can assume fewer statements will be emitted, so we could just vectorize anyway in that case.

The real issue, of course, is that generic vectorization is not attempted when a vector ISA is available - but that fails to vectorize the above cases, where SLP vectorization would take care of combining the small loads and stores.  So we'd need to support HImode, SImode (and DImode on x86_64) vectorization sizes, which probably comes at too big a cost to consider in general, though basic-block vectorization (knowing the size of the loads) could try anyway.  But that needs some re-org of the analysis.
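For illustration, here is a minimal sketch of the transformation the report asks for on the first testcase: the two 16-bit copies become a single 32-bit load/store pair.  foo_merged is a hypothetical hand-written equivalent, not compiler output; memcpy is used so the wider access stays valid regardless of alignment.

    /* Sketch only: what combining the small loads and stores in foo
       into one 32-bit access looks like (hypothetical, hand-written).  */
    #include <stdio.h>
    #include <string.h>

    unsigned short a[2], b[2];

    /* Original scalar loop: two 16-bit loads and two 16-bit stores.  */
    void foo (void)
    {
      int i;
      for (i = 0; i < 2; ++i)
        a[i] = b[i];
    }

    /* Merged form: one 32-bit load and one 32-bit store.  */
    void foo_merged (void)
    {
      unsigned int tmp;
      memcpy (&tmp, b, sizeof tmp);   /* single 4-byte load  */
      memcpy (a, &tmp, sizeof tmp);   /* single 4-byte store */
    }

    int main (void)
    {
      b[0] = 0x1234;
      b[1] = 0x5678;
      foo_merged ();
      printf ("%x %x\n", a[0], a[1]);  /* prints: 1234 5678 */
      return 0;
    }

This is essentially the access pattern that store merging (GCC 8+) and generic SLP (GCC 10+) produce for such copy loops, modulo the exact IL they emit.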
Mine.  The alignment issue is easily fixed (I have a patch); the cost model issue is, well, a cost model issue, and also easily fixed.  A bigger required change is to re-structure basic-block vectorization to perform SLP analysis independently of vector types/sizes and to vectorize independent SLP instances separately (allowing different vector sizes within a BB).  Loop vectorization could also do SLP analysis first (basically splitting it out) to reduce the number of applicable vectorization factors.  Other analysis phases could contribute to that as well, and it would also help compile time to not re-do data-ref and dependence analysis for each size.
Created attachment 34545 [details] patch for the alignment issue
Both cases are caught in GCC 10+ now for SLP.  Note that store merging is able to catch them in GCC 8+ too.  So closing as fixed in GCC 10 for the SLP part of the bug.