This is the mail archive of the gcc-bugs@gcc.gnu.org mailing list for the GCC project.
[Bug tree-optimization/64308] Missed optimization: 64-bit divide used when 32-bit divide would work
- From: "glisse at gcc dot gnu.org" <gcc-bugzilla at gcc dot gnu dot org>
- To: gcc-bugs at gcc dot gnu dot org
- Date: Mon, 15 Dec 2014 10:34:16 +0000
- Subject: [Bug tree-optimization/64308] Missed optimization: 64-bit divide used when 32-bit divide would work
- Auto-submitted: auto-generated
- References: <bug-64308-4 at http dot gcc dot gnu dot org/bugzilla/>
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=64308
--- Comment #2 from Marc Glisse <glisse at gcc dot gnu.org> ---
You would need symbolic ranges: b and ret are in [0,m-1]. And then you are
relying on that very specific x86 instruction that divides 64 bits by 32 bits
but only works (without faulting) if the quotient fits in 32 bits. It works
here because the quotient (m-1)*(m-1)/m < m is small enough, but that's very
hard for the compiler to prove, and I don't know of another architecture with
a similar instruction.
(Related: PR58897, PR53100)
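The original testcase is not quoted in this message, but from the variable names (b, ret, m) and the bound (m-1)*(m-1)/m < m, the pattern being discussed is plausibly a 32-bit modular multiplication widened to 64 bits. A hypothetical reconstruction:

```c
#include <stdint.h>

/* Hypothetical reconstruction of the testcase (not from the bug report):
   ret and b are both in [0, m-1], so the 64-bit product ret*b is at most
   (m-1)*(m-1), and the quotient (m-1)*(m-1)/m < m fits in 32 bits. */
uint32_t modmul(uint32_t ret, uint32_t b, uint32_t m)
{
    /* GCC emits a full 64-bit division here, even though x86's 32-bit
       DIV (64-bit dividend in EDX:EAX, 32-bit divisor) would suffice:
       proving that the quotient fits requires the symbolic range
       information discussed above. */
    return (uint64_t)ret * b % m;
}
```

Note that x86's 32-bit DIV raises #DE when the quotient overflows 32 bits, which is why the compiler must prove the range fact before substituting it.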