[Bug tree-optimization/105374] [12 Regression] ICE in fold_convert_loc, at fold-const.cc:2580 during GIMPLE pass: reassoc since r12-7338-g884f77b4222289510e1df9db2889b60c5df6fcda
jakub at gcc dot gnu.org
gcc-bugzilla@gcc.gnu.org
Mon Apr 25 13:43:59 GMT 2022
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105374
Jakub Jelinek <jakub at gcc dot gnu.org> changed:
           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |jakub at gcc dot gnu.org
--- Comment #2 from Jakub Jelinek <jakub at gcc dot gnu.org> ---
This can (and IMHO should no matter what) be fixed in reassoc by:
--- gcc/tree-ssa-reassoc.cc.jj 2022-04-14 13:46:59.690140053 +0200
+++ gcc/tree-ssa-reassoc.cc 2022-04-25 15:34:03.811473537 +0200
@@ -2254,7 +2254,11 @@ eliminate_redundant_comparison (enum tre
 	 BIT_AND_EXPR or BIT_IOR_EXPR was of a wider integer type,
 	 we need to convert.  */
       if (!useless_type_conversion_p (TREE_TYPE (curr->op), TREE_TYPE (t)))
-	t = fold_convert (TREE_TYPE (curr->op), t);
+	{
+	  if (!fold_convertible_p (TREE_TYPE (curr->op), t))
+	    continue;
+	  t = fold_convert (TREE_TYPE (curr->op), t);
+	}
       if (TREE_CODE (t) != INTEGER_CST
 	  && !operand_equal_p (t, curr->op, 0))
But another question is whether we shouldn't actually optimize this case
rather than punting.
The reason this happens is that, while eliminate_redundant_comparison
indirectly passes the V4BImode vector type as the TYPE argument, the function
doesn't actually use it; it uses truth_type (V4SImode) instead.
The truth_type use was introduced in r0-119133-gae22ac3c62db451bae,
but at that point the type argument didn't exist yet;
it was only introduced in r10-3154-g5f487a349de62613d7fa429 .
I wonder if we can't just kill the truth_type computation and replace all its
uses with type...