This is the mail archive of the gcc-bugs@gcc.gnu.org mailing list for the GCC project.
[Bug tree-optimization/64308] Missed optimization: 64-bit divide used when 32-bit divide would work
- From: "olegendo at gcc dot gnu.org" <gcc-bugzilla at gcc dot gnu dot org>
- To: gcc-bugs at gcc dot gnu dot org
- Date: Sat, 20 Dec 2014 01:29:28 +0000
- Subject: [Bug tree-optimization/64308] Missed optimization: 64-bit divide used when 32-bit divide would work
- Auto-submitted: auto-generated
- References: <bug-64308-4 at http dot gcc dot gnu dot org/bugzilla/>
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=64308
Oleg Endo <olegendo at gcc dot gnu.org> changed:
           What            |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |olegendo at gcc dot gnu.org
--- Comment #4 from Oleg Endo <olegendo at gcc dot gnu.org> ---
(In reply to Marc Glisse from comment #2)
> You would need symbolic ranges, b and ret are in [0,m-1]. And then you are
> using that very specific x86 instruction that divides 64 bits by 32 bits but
> only works if the result fits in 32 bits. It works here because
> (m-1)*(m-1)/m<m is small enough, but that's very hard for the compiler to
> prove, and I don't know of another architecture with a similar instruction.
On SH, a 64/32 -> 32-bit unsigned division can be done by repeating the rotcl, div1
pair 32 times (once per result bit). It's one of the examples for the 1-step
division insn 'div1' in the manuals. Effectively it's the same as the x86 divl
insn.
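What the repeated 1-step division computes is ordinary restoring long division: each step shifts in one dividend bit and does a trial subtract. A minimal C sketch of that loop (the function name udiv64by32 is my own; this models the behavior, not the actual rotcl/div1 encoding), with the same precondition as x86 divl, namely that the quotient must fit in 32 bits:

```c
#include <stdint.h>

/* 64/32 -> 32-bit unsigned division, one quotient bit per iteration,
 * analogous to 32 repetitions of SH's rotcl/div1 (or x86 divl).
 * Precondition: (n >> 32) < d, so the quotient fits in 32 bits. */
static uint32_t udiv64by32(uint64_t n, uint32_t d)
{
    uint64_t rem = n >> 32;      /* high half seeds the partial remainder */
    uint32_t lo  = (uint32_t)n;  /* low half is shifted in bit by bit */
    uint32_t q   = 0;

    for (int i = 31; i >= 0; i--) {
        rem = (rem << 1) | ((lo >> i) & 1); /* bring in next dividend bit */
        if (rem >= d) {                     /* 1-step: trial subtract */
            rem -= d;
            q |= 1u << i;                   /* record quotient bit */
        }
    }
    return q;
}
```

Since rem starts below d, it stays below 2*d after each shift, so the single compare-and-subtract per bit suffices; that is why the hardware can do it as a 1-step insn.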