This is the mail archive of the gcc-bugs@gcc.gnu.org mailing list for the GCC project.

[Bug tree-optimization/64308] Missed optimization: 64-bit divide used when 32-bit divide would work


https://gcc.gnu.org/bugzilla/show_bug.cgi?id=64308

Oleg Endo <olegendo at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |olegendo at gcc dot gnu.org

--- Comment #4 from Oleg Endo <olegendo at gcc dot gnu.org> ---
(In reply to Marc Glisse from comment #2)
> You would need symbolic ranges: b and ret are in [0,m-1]. And then you are
> using that very specific x86 instruction that divides 64 bits by 32 bits but
> only works if the quotient fits in 32 bits. It works here because the
> quotient is at most (m-1)*(m-1)/m < m, so it is small enough, but that's
> very hard for the compiler to prove, and I don't know of another
> architecture with a similar instruction.
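
[Editor's note: for context, a minimal C sketch of the kind of pattern being
discussed.  The function name and exact shape are illustrative, not the
testcase attached to this PR; the point is only that both operands of the
product are known to be below m, so the quotient of the 64-by-32 division is
bounded by (m-1)*(m-1)/m < m and fits in 32 bits, which is what divl requires.]

    /* Illustrative sketch only -- not the testcase from this PR.
       a and b are both < m, so (a*b)/m <= (m-1)*(m-1)/m < m, i.e. the
       quotient and remainder of the 64-by-32 division fit in 32 bits.  */
    unsigned
    mulmod (unsigned a, unsigned b, unsigned m)
    {
      unsigned long long prod = (unsigned long long) a * b;
      /* Per the bug title, GCC emits a full 64-bit division here even
         though a 32-bit-result divide (e.g. x86 divl) would work.  */
      return (unsigned) (prod % m);
    }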

On SH, a 64/32 -> 32 bit unsigned division can be done by repeating the
rotcl, div1 pair 32 times (once for each result bit).  It's one of the
examples for the 1-step division insn 'div1' in the manuals.  Effectively
it's the same as the x86 divl insn.
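
[Editor's note: a rough C model of what the repeated rotcl/div1 sequence
computes, one long-division step per quotient bit.  This is a sketch of the
algorithm only, not compiler output and not the manual's exact code; like
x86 divl, it assumes the quotient fits in 32 bits (hi < d) and d != 0.]

    /* Rough C model of a 64/32 -> 32 bit unsigned division done one bit
       at a time, as the repeated rotcl/div1 pair does on SH.  Assumes
       d != 0 and hi < d so the quotient fits in 32 bits.  Sketch only.  */
    unsigned
    udiv64_32 (unsigned hi, unsigned lo, unsigned d)
    {
      unsigned long long rem = hi;  /* running remainder, kept < d */
      unsigned q = 0;
      int i;

      for (i = 31; i >= 0; i--)
        {
          rem = (rem << 1) | ((lo >> i) & 1);  /* shift in next dividend bit (rotcl) */
          q <<= 1;
          if (rem >= d)                        /* one trial-subtract step (div1) */
            {
              rem -= d;
              q |= 1;
            }
        }
      return q;
    }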

