This is the mail archive of the gcc-bugs@gcc.gnu.org mailing list for the GCC project.
[Bug target/35767] x86 backend uses aligned load on unaligned memory
- From: "ppluzhnikov at google dot com" <gcc-bugzilla at gcc dot gnu dot org>
- To: gcc-bugs at gcc dot gnu dot org
- Date: Tue, 04 Feb 2014 21:26:06 +0000
- Subject: [Bug target/35767] x86 backend uses aligned load on unaligned memory
- Auto-submitted: auto-generated
- References: <bug-35767-4 at http dot gcc dot gnu dot org/bugzilla/>
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=35767
Paul Pluzhnikov <ppluzhnikov at google dot com> changed:
           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |ppluzhnikov at google dot com
--- Comment #5 from Paul Pluzhnikov <ppluzhnikov at google dot com> ---
gcc.target/i386/pr35767-5.c is failing for me in both -m32 and -m64 modes on
trunk: xgcc (GCC) 4.9.0 20140204 (experimental)
The assembly produced:
test:
        subq    $24, %rsp
        movaps  .LC0(%rip), %xmm0
        movups  %xmm0, (%rsp)
        movaps  %xmm0, %xmm7
        movaps  %xmm0, %xmm6
        movaps  %xmm0, %xmm5
        movaps  %xmm0, %xmm4
        movaps  %xmm0, %xmm3
        movaps  %xmm0, %xmm2
        movaps  %xmm0, %xmm1
        call    foo
        movl    $0, %eax
        addq    $24, %rsp
        ret
The movups appears to be especially bogus since it's storing to 0(%rsp), which is
guaranteed to be 16-byte aligned by the ABI: the psABI keeps %rsp 16-byte aligned
at each call site, so %rsp is at 8 mod 16 on function entry (after the return
address is pushed), and subq $24, %rsp brings it back to a 16-byte boundary.