This is the mail archive of the gcc-bugs@gcc.gnu.org mailing list for the GCC project.
[Bug middle-end/24929] long long shift/mask operations should be better optimized
- From: "ian at airs dot com" <gcc-bugzilla at gcc dot gnu dot org>
- To: gcc-bugs at gcc dot gnu dot org
- Date: 2 Feb 2006 18:14:29 -0000
- Subject: [Bug middle-end/24929] long long shift/mask operations should be better optimized
- References: <bug-24929-11148@http.gcc.gnu.org/bugzilla/>
- Reply-to: gcc-bugzilla at gcc dot gnu dot org
------- Comment #2 from ian at airs dot com 2006-02-02 18:14 -------
With an updated version of RTH's subreg lowering pass, I get this instruction
sequence:
f:
movl 16(%esp), %eax
movl 4(%esp), %edx
movl 8(%esp), %ecx
shrl $16, %eax
andl $255, %eax
shldl $8, %edx, %ecx
sall $8, %edx
orl %edx, %eax
movl %ecx, %edx
ret
This is one instruction shorter than the icc sequence, due to the use of shldl.
It could be improved further by switching the roles of %ecx and %edx to avoid the
final move, although that is complex to implement given the way the register
allocator currently handles pseudo-registers larger than word mode.
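(For reference, the exact test case is not quoted in this comment; the following C function is an assumption, reconstructed from the assembly above: %edx:%ecx holds the first long long shifted left by 8, and %eax picks out byte 6 of the second. A minimal sketch of a function that would compile to this sequence on ia32, with the names `a` and `b` chosen here for illustration:)

```c
/* Hypothetical reconstruction of the shift/mask test case from the
   generated code: shift the first 64-bit value left by 8 and OR in
   one masked byte extracted from the second value.  */
unsigned long long
f (unsigned long long a, unsigned long long b)
{
  /* a << 8 corresponds to the shldl/sall pair on the %ecx:%edx halves;
     (b >> 48) & 0xff corresponds to the shrl $16 / andl $255 on %eax.  */
  return (a << 8) | ((b >> 48) & 0xff);
}
```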
--
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=24929