This is the mail archive of the gcc-bugs@gcc.gnu.org mailing list for the GCC project.
[Bug regression/71231] [7 Regression]: 300% runtime increase for rnflow
- From: "glisse at gcc dot gnu.org" <gcc-bugzilla at gcc dot gnu dot org>
- To: gcc-bugs at gcc dot gnu dot org
- Date: Mon, 23 May 2016 21:33:27 +0000
- Subject: [Bug regression/71231] [7 Regression]: 300% runtime increase for rnflow
- Auto-submitted: auto-generated
- References: <bug-71231-4 at http dot gcc dot gnu dot org/bugzilla/>
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=71231
--- Comment #4 from Marc Glisse <glisse at gcc dot gnu.org> ---
(In reply to Andrew Pinski from comment #3)
> Maybe a missing :s or this could be just increasing register pressure.
:s would have no effect; you would need to add the check manually:
(simplify
(bit_and SSA_NAME@0 INTEGER_CST@1)
(if (INTEGRAL_TYPE_P (TREE_TYPE (@0))
+ && single_use (@0)
&& (get_nonzero_bits (@0) & wi::bit_not (@1)) == 0)
@0))
but that seems wrong to me.
This is a transformation that only removes an operation; as far as I can tell
it shouldn't even increase register pressure... Maybe some other optimization
relies on the presence of the & cst and could be improved to use
get_nonzero_bits?
Someone with access to the source needs to see how the missing bit_and changes
the dumps after later passes.