This is the mail archive of the gcc-bugs@gcc.gnu.org mailing list for the GCC project.



[Bug target/35767] x86 backend uses aligned load on unaligned memory


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=35767

Paul Pluzhnikov <ppluzhnikov at google dot com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |ppluzhnikov at google dot com

--- Comment #5 from Paul Pluzhnikov <ppluzhnikov at google dot com> ---
gcc.target/i386/pr35767-5.c is failing for me in both -m32 and -m64 modes on
trunk: xgcc (GCC) 4.9.0 20140204 (experimental)

The assembly produced:

test:
        subq    $24, %rsp
        movaps  .LC0(%rip), %xmm0
        movups  %xmm0, (%rsp)
        movaps  %xmm0, %xmm7
        movaps  %xmm0, %xmm6
        movaps  %xmm0, %xmm5
        movaps  %xmm0, %xmm4
        movaps  %xmm0, %xmm3
        movaps  %xmm0, %xmm2
        movaps  %xmm0, %xmm1
        call    foo
        movl    $0, %eax
        addq    $24, %rsp
        ret

The movups appears especially bogus: it stores to 0(%rsp), which is guaranteed
to be 16-byte aligned here. Per the ABI, %rsp is congruent to 8 (mod 16) on
function entry (the call pushed the 8-byte return address), so after
subq $24, %rsp the stack pointer is 16-byte aligned and an aligned movaps
would suffice.

