This is the mail archive of the
gcc-bugs@gcc.gnu.org
mailing list for the GCC project.
[Bug tree-optimization/62173] [5.0 regression] 64bit Arch can't ivopt while 32bit Arch can
- From: "amker at gcc dot gnu.org" <gcc-bugzilla at gcc dot gnu dot org>
- To: gcc-bugs at gcc dot gnu dot org
- Date: Fri, 30 Jan 2015 06:41:42 +0000
- Subject: [Bug tree-optimization/62173] [5.0 regression] 64bit Arch can't ivopt while 32bit Arch can
- Auto-submitted: auto-generated
- References: <bug-62173-4 at http dot gcc dot gnu dot org/bugzilla/>
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=62173
--- Comment #31 from amker at gcc dot gnu.org ---
So cand_value_at (loop, cand, use->stmt, desc->niter, &bnd) is called with the
arguments below:
cand->iv->base:
(unsigned long) ((char *) &A + (sizetype) i_6(D))
cand->iv->step:
0xFFFFFFFFFFFFFFFF
desc->niter:
(unsigned int)(i_6(D) + -1)
use->stmt:
the use is after the candidate increment
The computed value should be:
iv->base + iv->step * (unsigned long)niter + iv->step
<=>
(unsigned long) ((char *) &A + (sizetype) i_6(D))
+
0xFFFFFFFFFFFFFFFF * ((unsigned long)(unsigned int)(i_6(D) + -1))
+
0xFFFFFFFFFFFFFFFF
Even with range information [1, 10] for i_6(D), from which we can prove that
niter's range is [0, 9], we can't prove the expression equals:
(unsigned long)((char *) &A)
Is this because we use an unsigned type for the step and thus lose the sign
information? If so, this can't be fixed even with proper range information.
Is this understanding correct? Or is there anything I should do to achieve
that?
Thanks very much.