This is the mail archive of the gcc-bugs@gcc.gnu.org mailing list for the GCC project.
[Bug tree-optimization/87954] VRP can transform a * b where a,b are [0,1] to a & b
- From: "aldyh at gcc dot gnu.org" <gcc-bugzilla at gcc dot gnu dot org>
- To: gcc-bugs at gcc dot gnu dot org
- Date: Fri, 09 Nov 2018 10:38:47 +0000
- Subject: [Bug tree-optimization/87954] VRP can transform a * b where a,b are [0,1] to a & b
- Auto-submitted: auto-generated
- References: <bug-87954-4@http.gcc.gnu.org/bugzilla/>
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=87954
Aldy Hernandez <aldyh at gcc dot gnu.org> changed:
           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|UNCONFIRMED                 |NEW
   Last reconfirmed|                            |2018-11-09
     Ever confirmed|0                           |1
--- Comment #1 from Aldy Hernandez <aldyh at gcc dot gnu.org> ---
Indeed, if you compile imul() with -fdump-tree-all-details-alias -O2 and
look at the vrp1 dump, you can see:
# RANGE [0, 1] NONZERO 1
is_rec_12 = (int) _4;
...
# RANGE [0, 1] NONZERO 1
_6 = (int) _15;
# RANGE [0, 1] NONZERO 1
_7 = _6 * is_rec_12;
This pattern persists throughout the optimization pipeline, so any later
pass could still see the [0, 1] ranges of the operands and strength-reduce
the multiplication to a bitwise AND.
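For concreteness, the transform is sound because multiplication and
bitwise AND agree on every input drawn from {0, 1}; a minimal
self-contained check (illustrative C, not taken from the bug's testcase):

#include <assert.h>

/* For x and y constrained to the range [0, 1], x * y and x & y
   produce identical results, so the multiply can be strength-reduced
   to a bitwise AND.  */
int
main (void)
{
  for (int x = 0; x <= 1; x++)
    for (int y = 0; y <= 1; y++)
      assert ((x * y) == (x & y));
  return 0;
}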
What would be the best place to do this?
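One conceivable home, sketched purely for illustration and not a
confirmed answer, would be a rule in gcc/match.pd; the predicate below
is hypothetical and stands in for whatever query of the value-range
machinery a real rule would need:

/* Illustrative sketch only -- operands_in_0_1_p is a made-up
   predicate, not an existing match.pd helper.  */
(simplify
 (mult @0 @1)
 (if (operands_in_0_1_p (@0) && operands_in_0_1_p (@1))
  (bit_and @0 @1)))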