[Bug target/35767] x86 backend uses aligned load on unaligned memory
ppluzhnikov at google dot com
gcc-bugzilla@gcc.gnu.org
Tue Feb 4 21:26:00 GMT 2014
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=35767
Paul Pluzhnikov <ppluzhnikov at google dot com> changed:
           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |ppluzhnikov at google dot com
--- Comment #5 from Paul Pluzhnikov <ppluzhnikov at google dot com> ---
gcc.target/i386/pr35767-5.c is failing for me in both -m32 and -m64 modes on
trunk: xgcc (GCC) 4.9.0 20140204 (experimental).
The assembly produced:
test:
        subq    $24, %rsp
        movaps  .LC0(%rip), %xmm0
        movups  %xmm0, (%rsp)
        movaps  %xmm0, %xmm7
        movaps  %xmm0, %xmm6
        movaps  %xmm0, %xmm5
        movaps  %xmm0, %xmm4
        movaps  %xmm0, %xmm3
        movaps  %xmm0, %xmm2
        movaps  %xmm0, %xmm1
        call    foo
        movl    $0, %eax
        addq    $24, %rsp
        ret
The movups appears to be especially bogus, since it stores to 0(%rsp), which
is guaranteed to be 16-byte aligned by the ABI at that point: at entry %rsp
is 8 mod 16 (the call pushed an 8-byte return address), and subtracting 24
makes it 0 mod 16, so movaps could have been used.