[Bug rtl-optimization/54421] Extra movdqa when accessing quadwords in a 128-bit SSE register
law at redhat dot com
gcc-bugzilla@gcc.gnu.org
Thu Dec 8 19:51:00 GMT 2016
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=54421
Jeffrey A. Law <law at redhat dot com> changed:
           What    |Removed                     |Added
----------------------------------------------------------------------------
         Status    |NEW                         |RESOLVED
             CC    |                            |law at redhat dot com
     Resolution    |---                         |FIXED
--- Comment #2 from Jeffrey A. Law <law at redhat dot com> ---
It looks like we went through a series of improvements resulting in the current
compiler generating:
        movhlps %xmm0, %xmm1
        movq    %xmm0, %rdx
        movq    %xmm1, %rax
        orq     %rax, %rdx
        sete    %al
        movzbl  %al, %eax
        ret
i.e., it operates solely on registers and thus avoids the unnecessary stack
loads/stores.
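For reference, the sequence above is the register-only idiom for testing whether
a 128-bit SSE value is zero: OR the two 64-bit halves together and set a flag.
A minimal C sketch of that same idiom using SSE2 intrinsics (the function name
and exact source that triggered the original bug are assumptions, not taken
from the report):

```c
#include <stdint.h>
#include <emmintrin.h>  /* SSE2 intrinsics */

/* Hypothetical example: return 1 if all 128 bits of v are zero.
   Mirrors the generated assembly: extract both quadwords into
   general-purpose registers, OR them, and test the result --
   no stack spill of the vector register is needed. */
static int is_zero_128(__m128i v)
{
    /* low quadword of v (corresponds to the first movq) */
    uint64_t lo = (uint64_t)_mm_cvtsi128_si64(v);
    /* move the high quadword into the low position (movhlps),
       then extract it (the second movq) */
    uint64_t hi = (uint64_t)_mm_cvtsi128_si64(_mm_unpackhi_epi64(v, v));
    /* orq + sete */
    return (lo | hi) == 0;
}
```

A good compiler keeps `lo` and `hi` in `%rdx`/`%rax` throughout, producing
essentially the seven-instruction sequence shown above.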
Given it was a progression over time, I'm not going to bisect each individual
improvement; it's just not worth the effort.