[Bug target/105066] New: GCC thinks pinsrw xmm, mem, 0 requires SSE4.1, not SSE2? _mm_loadu_si16 bounces through integer reg
peter at cordes dot ca
gcc-bugzilla@gcc.gnu.org
Sat Mar 26 22:42:35 GMT 2022
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105066
            Bug ID: 105066
           Summary: GCC thinks pinsrw xmm, mem, 0 requires SSE4.1, not SSE2?
                    _mm_loadu_si16 bounces through integer reg
           Product: gcc
           Version: 12.0
            Status: UNCONFIRMED
          Keywords: missed-optimization
          Severity: normal
          Priority: P3
         Component: target
          Assignee: unassigned at gcc dot gnu.org
          Reporter: peter at cordes dot ca
  Target Milestone: ---
            Target: x86_64-*-*, i?86-*-*
PR99754 fixed the wrong-code for _mm_loadu_si16, but the resulting asm is not
efficient without -msse4.1 (which most -march= settings imply). It seems GCC
thinks that pinsrw with a memory operand requires SSE4.1, like pinsr/pextr
with b/d/q operand-size do. But a 16-bit PINSRW from memory only needs SSE2.
(We're also not efficiently folding the load into a memory source operand for
PMOVZXBQ; see below.)
https://godbolt.org/z/dYchb6hec shows GCC trunk 12.0.1 20220321
__m128i load16(void *p){
    return _mm_loadu_si16( p );
}
load16(void*):   # no options, or -march=core2 or -mssse3
        movzwl  (%rdi), %eax
        pxor    %xmm1, %xmm1
        pinsrw  $0, %eax, %xmm1    # should be MOVD %eax, or PINSRW from mem
        movdqa  %xmm1, %xmm0
        ret
vs.
load16(void*):   # -msse4.1
        pxor    %xmm1, %xmm1
        pinsrw  $0, (%rdi), %xmm1
        movdqa  %xmm1, %xmm0
        ret
The second version is actually 100% fine with SSE2:
https://www.felixcloutier.com/x86/pinsrw shows that there's only a single
opcode for PINSRW xmm, r32/m16, imm8, and it requires only SSE2; a register
vs. memory source is just a matter of the ModRM byte.
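For illustration, here are the two forms encoded by hand from that page's
66 0F C4 /r ib pattern (my own hand assembly, worth double-checking against a
real assembler):

   66 0F C4 C0 00        pinsrw $0, %eax, %xmm0      # ModRM = 0xC0: register source
   66 0F C4 07 00        pinsrw $0, (%rdi), %xmm0    # ModRM = 0x07: memory source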
_mm_storeu_si16 has a related problem: it bounces through EAX, insanely still
with PEXTRW instead of MOVD. (Unlike PINSRW's memory source, PEXTRW with a
memory destination genuinely is SSE4.1-only, the separate 66 0F 3A 15
encoding, so avoiding it without -msse4.1 is correct.)
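A minimal reproducer for the store side (the asm in the comments is my sketch
of what the description above implies, not pasted compiler output):

#include <immintrin.h>

void store16(void *p, __m128i v){
    _mm_storeu_si16(p, v);
    /* currently (per the above):  pextrw $0, %xmm0, %eax ; movw %ax, (%rdi)
       better without SSE4.1:      movd   %xmm0, %eax     ; movw %ax, (%rdi)
       with -msse4.1:              pextrw $0, %xmm0, (%rdi)                  */
}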
----
There is a choice of strategy here, but pinsrw/pextrw between EAX and XMM0 is
clearly sub-optimal everywhere. Once we factor out the dumb register
allocation that wastes a movdqa, the interesting options are:
        movzwl  (%rdi), %eax       # 1 uop on everything
        movd    %eax, %xmm0        # 1 uop on everything
vs.
        pxor    %xmm0, %xmm0       # 1 uop for the front-end, eliminated on Intel
        pinsrw  $0, (%rdi), %xmm0  # 2 uops (load + shuffle/merge)
Similarly for extract,
        pextrw  $0, %xmm0, (%rdi)  # 2 uops on most CPUs (needs SSE4.1)
vs.
        movd    %xmm0, %eax        # 1 uop, only 1/clock even on Ice Lake
        movw    %ax, (%rdi)        # 1 uop
On Bulldozer-family, bouncing through an integer reg adds a lot of latency vs.
loading straight into the SIMD unit. (Two integer cores share a SIMD/FP unit,
so movd between XMM and GP-integer regs is higher latency there than on most
CPUs.) So that would definitely favour pinsrw/pextrw with memory.
On Ice Lake, pextrw to mem is 2/clock throughput: the SIMD shuffle can run on
p1/p5. But MOVD r,v is still p0 only, and MOVD v,r is still p5 only. So that
also favours pinsrw/pextrw with memory, despite the extra front-end uop for
pxor-zeroing the destination on load.
Of course, if _mm_storeu_si16 is used on a temporary that's later reloaded,
being able to optimize it to a movd (and optionally a movzx) is very good.
Similarly for _mm_loadu_si16 on a value we already have in an integer reg:
especially if we know it's already zero-extended to 32 bits, we'd like to be
able to use just a movd.
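A hypothetical illustration of the reload case (the function is mine, not
from the report; the ideal asm follows from the options listed above):

#include <immintrin.h>

unsigned short store_reload(__m128i v){
    unsigned short tmp;
    _mm_storeu_si16(&tmp, v);   /* store the low 16 bits to a temporary... */
    return tmp;                 /* ...that is immediately reloaded */
    /* ideal: movd %xmm0, %eax (plus movzwl if zero-extension is needed),
       with no store/reload through memory at all */
}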
---
It's also essential that these loads fold efficiently into memory source
operands for PMOVZX; pmovzxbq is one of the major use-cases for a 16-bit load.
That may be a separate bug, IDK.
https://godbolt.org/z/3a9T55n3q shows that _mm_cvtepu8_epi32(_mm_loadu_si32(p))
does fold a 32-bit memory source operand nicely into pmovzxbd (%rdi), %xmm0,
which can micro-fuse into a single uop on Intel CPUs (for the 128-bit
destination version, not YMM). But it's a disaster with 16-bit loads:
__m128i pmovzxbq(void *p){
    return _mm_cvtepu8_epi64(_mm_loadu_si16(p));
}
pmovzxbq(void*):   # -O3 -msse4.1 -mtune=haswell
        pxor     %xmm0, %xmm0        # 1 uop
        pinsrw   $0, (%rdi), %xmm0   # 2 uops, one for the shuffle port
        pmovzxbq %xmm0, %xmm0        # 1 uop for the same shuffle port
        ret
(_mm_cvtepu8_epi64 requires SSE4.1 so there's no interaction with the
-mno-sse4.1 implementation of the load.)
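For comparison, a sketch of the 32-bit case from the Godbolt link above, which
already folds the way we'd like (asm reconstructed from the description, not
pasted compiler output):

#include <immintrin.h>

__m128i pmovzxbd32(void *p){
    return _mm_cvtepu8_epi32(_mm_loadu_si32(p));
}

pmovzxbd32(void*):
        pmovzxbd (%rdi), %xmm0   # load folded; micro-fuses into 1 uop on Intel (xmm dest)
        ret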