This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.
Re: [PATCH] Use bitwise AND/OR/XOR if possible to implement &&, ||, !
- From: Paolo Bonzini <bonzini at gnu dot org>
- To: gcc-patches at gcc dot gnu dot org
- Date: Tue, 26 Aug 2008 12:50:37 +0200
- Subject: Re: [PATCH] Use bitwise AND/OR/XOR if possible to implement &&, ||, !
Paolo Bonzini wrote:
> This patch allows GCC to compile boolean &&, ||, ! to simple bitwise
> AND, OR and XOR operations, when the operands are known to be truth
> values. For example, this sequence
>
> testl %eax, %eax
> setne %dl
> xorl %eax, %eax
> testl %ecx, %ecx
> setne %al
> andl %edx, %eax
>
> becomes simply
>
> andl %ecx, %eax
>
> Or this sequence
>
> orl 12(%ebp), %eax
> setne %al
> movzbl %al, %eax
>
> becomes just a single orl. Likewise, compiling ! changes from
> these four instructions:
>
> movl 8(%ebp), %edx
> xorl %eax, %eax
> testl %edx, %edx
> sete %al
>
> to just two:
>
> movl 8(%ebp), %eax
> xorl $1, %eax
>
> One problem is that folding TRUTH_NOT_EXPR to BIT_XOR_EXPR requires
> reallocating the statement, and this in turn requires reordering the
> code in tree-ssa-propagate.c's substitute_and_fold to update the
> immediate uses properly.
>
> The new usage of BIT_AND_EXPR and BIT_IOR_EXPR also requires a little
> tweaking here and there. We need to extract simple ranges from bitwise
> ORs (I restricted it to nonnegative ranges for simplicity); this part can
> go away if the more complete patch from Denis goes in.
>
> We also need to cope with "if" conditions being BIT_AND_EXPR or
> BIT_IOR_EXPR, and compile them efficiently. Luckily, the code is
> already there as part of the fix for PR tree-optimization/19672
> (which I had fixed in 2005): we "convert" them back to
> TRUTH_AND_EXPR and TRUTH_OR_EXPR simply by falling through to the
> appropriate part of do_jump. The BIT_AND_EXPR case is restricted to
> boolean arguments, while the BIT_IOR_EXPR part applies in general:
>
> while (a | b)
> f ();
>
> can always be compiled to
>
> xx:
> if (a == 0) goto end;
> if (b == 0) goto end;
> f ();
> goto xx;
>
> which is beneficial if the BRANCH_COST is low enough.
>
> The testcase is duplicated in gcc.dg/tree-ssa and gcc.target/i386:
> the first exercises the tree-* changes, and the second also exercises
> the dojump.c changes. The original testcase is in a well-known
> proprietary embedded systems benchmark.
>
> Bootstrapped/regtested i686-pc-linux-gnu, all languages including Ada, ok?
Call me "the fastest Send button of the West".
Paolo
2008-08-26 Paolo Bonzini <bonzini@gnu.org>
* dojump.c (do_jump) [BIT_AND_EXPR]: Move below. Fall through to
TRUTH_AND_EXPR for boolean (1-bit precision) expressions.
(do_jump) [BIT_IOR_EXPR]: Compile as TRUTH_OR_EXPR.
* tree-flow.h (simplify_stmt_using_ranges): Accept a GSI, return a bool.
* tree-ssa-propagate.c (substitute_and_fold): Pass a GSI to
VRP's simplify_stmt_using_ranges. Do simplify_stmt_using_ranges
before finalizing the changes.
* tree-vrp.c (extract_range_from_binary_expr): Add limited support
for BIT_IOR_EXPR.
(simplify_truth_ops_using_ranges): New.
(simplify_div_or_mod_using_ranges, simplify_abs_using_ranges,
simplify_cond_using_ranges, simplify_switch_using_ranges): Return
whether a simplification was made.
(simplify_stmt_using_ranges): Ditto, and accept a GSI. For GIMPLE_ASSIGN,
use a switch statement and also call simplify_truth_ops_using_ranges.
testsuite:
2008-08-26 Paolo Bonzini <bonzini@gnu.org>
* gcc.dg/tree-ssa/vrp46.c: New.
* gcc.target/i386/andor-2.c: New.
Index: tree-flow.h
===================================================================
--- tree-flow.h (revision 139423)
+++ tree-flow.h (working copy)
@@ -911,7 +911,7 @@ tree fold_const_aggregate_ref (tree);
/* In tree-vrp.c */
tree vrp_evaluate_conditional (enum tree_code, tree, tree, gimple);
-void simplify_stmt_using_ranges (gimple);
+bool simplify_stmt_using_ranges (gimple_stmt_iterator *);
/* In tree-ssa-dom.c */
extern void dump_dominator_optimization_stats (FILE *);
Index: tree-ssa-propagate.c
===================================================================
--- tree-ssa-propagate.c (revision 139423)
+++ tree-ssa-propagate.c (working copy)
@@ -1263,6 +1263,7 @@ substitute_and_fold (prop_value_t *prop_
{
bool did_replace;
gimple stmt = gsi_stmt (i);
+ gimple old_stmt;
enum gimple_code code = gimple_code (stmt);
/* Ignore ASSERT_EXPRs. They are used by VRP to generate
@@ -1333,12 +1334,24 @@ substitute_and_fold (prop_value_t *prop_
did_replace |= replace_vuses_in (stmt, prop_value);
}
- /* If we made a replacement, fold and cleanup the statement. */
+ /* If we made a replacement, fold the statement. */
+
+ old_stmt = stmt;
if (did_replace)
- {
- gimple old_stmt = stmt;
+ fold_stmt (&i);
+
+ /* Some statements may be simplified using ranges. For
+ example, division may be replaced by shifts, modulo
+ replaced with bitwise and, etc. Do this after
+ substituting constants, folding, etc so that we're
+ presented with a fully propagated, canonicalized
+ statement. */
+ if (use_ranges_p)
+ did_replace |= simplify_stmt_using_ranges (&i);
- fold_stmt (&i);
+ /* Now cleanup. */
+ if (did_replace)
+ {
stmt = gsi_stmt (i);
/* If we cleaned up EH information from the statement,
@@ -1378,15 +1391,6 @@ substitute_and_fold (prop_value_t *prop_
fprintf (dump_file, "Not folded\n");
}
- /* Some statements may be simplified using ranges. For
- example, division may be replaced by shifts, modulo
- replaced with bitwise and, etc. Do this after
- substituting constants, folding, etc so that we're
- presented with a fully propagated, canonicalized
- statement. */
- if (use_ranges_p)
- simplify_stmt_using_ranges (stmt);
-
gsi_prev (&i);
}
}
Index: tree-vrp.c
===================================================================
--- tree-vrp.c (revision 139423)
+++ tree-vrp.c (working copy)
@@ -2054,6 +2054,7 @@ extract_range_from_binary_expr (value_ra
&& code != MIN_EXPR
&& code != MAX_EXPR
&& code != BIT_AND_EXPR
+ && code != BIT_IOR_EXPR
&& code != TRUTH_AND_EXPR
&& code != TRUTH_OR_EXPR)
{
@@ -2411,6 +2412,26 @@ extract_range_from_binary_expr (value_ra
return;
}
}
+ else if (code == BIT_IOR_EXPR)
+ {
+ if (vr0.type == VR_RANGE
+ && vr1.type == VR_RANGE
+ && TREE_CODE (vr0.min) == INTEGER_CST
+ && TREE_CODE (vr1.min) == INTEGER_CST
+ && TREE_CODE (vr0.max) == INTEGER_CST
+ && TREE_CODE (vr1.max) == INTEGER_CST
+ && tree_int_cst_sgn (vr0.min) >= 0
+ && tree_int_cst_sgn (vr1.min) >= 0)
+ {
+ min = vrp_int_const_binop (BIT_IOR_EXPR, vr0.min, vr1.min);
+ max = vrp_int_const_binop (BIT_IOR_EXPR, vr0.max, vr1.max);
+ }
+ else
+ {
+ set_value_range_to_varying (vr);
+ return;
+ }
+ }
else
gcc_unreachable ();
@@ -6209,11 +6230,129 @@ varying:
return SSA_PROP_VARYING;
}
+/* Simplify boolean operations if the source is known
+ to be already a boolean. */
+static bool
+simplify_truth_ops_using_ranges (gimple_stmt_iterator *gsi, gimple stmt)
+{
+ enum tree_code rhs_code = gimple_assign_rhs_code (stmt);
+ tree val = NULL;
+ tree op0, op1;
+ value_range_t *vr;
+ bool sop = false;
+
+ op0 = gimple_assign_rhs1 (stmt);
+ vr = get_value_range (op0);
+ if (TYPE_PRECISION (TREE_TYPE (op0)) != 1)
+ {
+ val = compare_range_with_value (GE_EXPR, vr, integer_zero_node, &sop);
+ if (!val || !integer_onep (val))
+ return false;
+
+ val = compare_range_with_value (LE_EXPR, vr, integer_one_node, &sop);
+ if (!val || !integer_onep (val))
+ return false;
+ }
+
+ if (rhs_code == TRUTH_NOT_EXPR)
+ {
+ rhs_code = NE_EXPR;
+ op1 = integer_one_node;
+ }
+
+ else
+ {
+ op1 = gimple_assign_rhs2 (stmt);
+
+ /* Reduce number of cases to handle. */
+ if (is_gimple_min_invariant (op1))
+ {
+ /* Exclude anything that should have been already folded. */
+ gcc_assert (rhs_code == EQ_EXPR || rhs_code == NE_EXPR
+ || rhs_code == TRUTH_XOR_EXPR);
+ gcc_assert (integer_zerop (op1) || integer_onep (op1));
+
+ /* Limit the number of cases we have to consider. */
+ if (rhs_code == EQ_EXPR)
+ {
+ rhs_code = NE_EXPR;
+ op1 = fold_unary (TRUTH_NOT_EXPR, TREE_TYPE (op1), op1);
+ }
+ }
+ else
+ {
+ /* Punt on A == B as there is no BIT_XNOR_EXPR. */
+ if (rhs_code == EQ_EXPR)
+ return false;
+
+ if (TYPE_PRECISION (TREE_TYPE (op1)) != 1)
+ {
+ vr = get_value_range (op1);
+ val = compare_range_with_value (GE_EXPR, vr, integer_zero_node, &sop);
+ if (!val || !integer_onep (val))
+ return false;
+
+ val = compare_range_with_value (LE_EXPR, vr, integer_one_node, &sop);
+ if (!val || !integer_onep (val))
+ return false;
+ }
+ }
+ }
+
+ if (sop && issue_strict_overflow_warning (WARN_STRICT_OVERFLOW_MISC))
+ {
+ location_t location;
+
+ if (!gimple_has_location (stmt))
+ location = input_location;
+ else
+ location = gimple_location (stmt);
+
+ if (rhs_code == TRUTH_AND_EXPR || rhs_code == TRUTH_OR_EXPR)
+ warning_at (location, OPT_Wstrict_overflow,
+ _("assuming signed overflow does not occur when "
+ "simplifying && or || to & or |"));
+ else
+ warning_at (location, OPT_Wstrict_overflow,
+ _("assuming signed overflow does not occur when "
+ "simplifying ==, != or ! to identity or ^"));
+ }
+
+ switch (rhs_code)
+ {
+ case TRUTH_AND_EXPR:
+ gimple_assign_set_rhs_code (stmt, BIT_AND_EXPR);
+ break;
+ case TRUTH_OR_EXPR:
+ gimple_assign_set_rhs_code (stmt, BIT_IOR_EXPR);
+ break;
+ case TRUTH_XOR_EXPR:
+ case NE_EXPR:
+ if (integer_zerop (op1))
+ {
+ bool need_conversion =
+ !useless_type_conversion_p (TREE_TYPE (gimple_assign_lhs (stmt)),
+ TREE_TYPE (op0));
+ gimple_assign_set_rhs_with_ops (gsi,
+ need_conversion ? NOP_EXPR : SSA_NAME,
+ op0, NULL);
+ }
+ else
+ gimple_assign_set_rhs_with_ops (gsi, BIT_XOR_EXPR, op0, op1);
+ break;
+ default:
+ gcc_unreachable ();
+ }
+
+ update_stmt (gsi_stmt (*gsi));
+ return true;
+}
+
/* Simplify a division or modulo operator to a right shift or
bitwise and if the first operand is unsigned or is greater
than zero and the second operand is an exact power of two. */
-static void
+static bool
simplify_div_or_mod_using_ranges (gimple stmt)
{
enum tree_code rhs_code = gimple_assign_rhs_code (stmt);
@@ -6273,14 +6412,17 @@ simplify_div_or_mod_using_ranges (gimple
}
update_stmt (stmt);
+ return true;
}
+
+ return false;
}
/* If the operand to an ABS_EXPR is >= 0, then eliminate the
ABS_EXPR. If the operand is <= 0, then simplify the
ABS_EXPR into a NEGATE_EXPR. */
-static void
+static bool
simplify_abs_using_ranges (gimple stmt)
{
tree val = NULL;
@@ -6335,8 +6477,11 @@ simplify_abs_using_ranges (gimple stmt)
else
gimple_assign_set_rhs_code (stmt, SSA_NAME);
update_stmt (stmt);
+ return true;
}
}
+
+ return false;
}
/* We are comparing trees OP0 and OP1 using COND_CODE. OP0 has
@@ -6411,7 +6556,7 @@ test_for_singularity (enum tree_code con
test if the range information indicates only one value can satisfy
the original conditional. */
-static void
+static bool
simplify_cond_using_ranges (gimple stmt)
{
tree op0 = gimple_cond_lhs (stmt);
@@ -6452,8 +6597,8 @@ simplify_cond_using_ranges (gimple stmt)
print_gimple_stmt (dump_file, stmt, 0, 0);
fprintf (dump_file, "\n");
}
- return;
+ return true;
}
/* Try again after inverting the condition. We only deal
@@ -6482,17 +6627,19 @@ simplify_cond_using_ranges (gimple stmt)
print_gimple_stmt (dump_file, stmt, 0, 0);
fprintf (dump_file, "\n");
}
- return;
+ return true;
}
}
}
+
+ return false;
}
/* Simplify a switch statement using the value range of the switch
argument. */
-static void
+static bool
simplify_switch_using_ranges (gimple stmt)
{
tree op = gimple_switch_index (stmt);
@@ -6505,14 +6652,14 @@ simplify_switch_using_ranges (gimple stm
switch_update su;
if (TREE_CODE (op) != SSA_NAME)
- return;
+ return false;
vr = get_value_range (op);
/* We can only handle integer ranges. */
if (vr->type != VR_RANGE
|| symbolic_range_p (vr))
- return;
+ return false;
/* Find case label for min/max of the value range. */
n = gimple_switch_num_labels (stmt);
@@ -6522,7 +6669,7 @@ simplify_switch_using_ranges (gimple stm
if (i == 1
&& j == n - 1
&& take_default)
- return;
+ return false;
/* Build a new vector of taken case labels. */
vec2 = make_tree_vec (j - i + 1 + (int)take_default);
@@ -6563,35 +6710,62 @@ simplify_switch_using_ranges (gimple stm
su.stmt = stmt;
su.vec = vec2;
VEC_safe_push (switch_update, heap, to_update_switch_stmts, &su);
+ return false;
}
/* Simplify STMT using ranges if possible. */
-void
-simplify_stmt_using_ranges (gimple stmt)
+bool
+simplify_stmt_using_ranges (gimple_stmt_iterator *gsi)
{
+ gimple stmt = gsi_stmt (*gsi);
if (is_gimple_assign (stmt))
{
enum tree_code rhs_code = gimple_assign_rhs_code (stmt);
+ switch (rhs_code)
+ {
+ case EQ_EXPR:
+ case NE_EXPR:
+ case TRUTH_NOT_EXPR:
+ case TRUTH_AND_EXPR:
+ case TRUTH_OR_EXPR:
+ case TRUTH_XOR_EXPR:
+ /* Transform EQ_EXPR, NE_EXPR, TRUTH_NOT_EXPR into BIT_XOR_EXPR
+ or identity if the RHS is zero or one, and the LHS are known
+ to be boolean values. Transform all TRUTH_*_EXPR into
+ BIT_*_EXPR if both arguments are known to be boolean values. */
+ if (INTEGRAL_TYPE_P (TREE_TYPE (gimple_assign_rhs1 (stmt))))
+ return simplify_truth_ops_using_ranges (gsi, stmt);
+ break;
+
/* Transform TRUNC_DIV_EXPR and TRUNC_MOD_EXPR into RSHIFT_EXPR
and BIT_AND_EXPR respectively if the first operand is greater
than zero and the second operand is an exact power of two. */
- if ((rhs_code == TRUNC_DIV_EXPR || rhs_code == TRUNC_MOD_EXPR)
- && INTEGRAL_TYPE_P (TREE_TYPE (gimple_assign_rhs1 (stmt)))
- && integer_pow2p (gimple_assign_rhs2 (stmt)))
- simplify_div_or_mod_using_ranges (stmt);
+ case TRUNC_DIV_EXPR:
+ case TRUNC_MOD_EXPR:
+ if (INTEGRAL_TYPE_P (TREE_TYPE (gimple_assign_rhs1 (stmt)))
+ && integer_pow2p (gimple_assign_rhs2 (stmt)))
+ return simplify_div_or_mod_using_ranges (stmt);
+ break;
/* Transform ABS (X) into X or -X as appropriate. */
- if (rhs_code == ABS_EXPR
- && TREE_CODE (gimple_assign_rhs1 (stmt)) == SSA_NAME
- && INTEGRAL_TYPE_P (TREE_TYPE (gimple_assign_rhs1 (stmt))))
- simplify_abs_using_ranges (stmt);
+ case ABS_EXPR:
+ if (TREE_CODE (gimple_assign_rhs1 (stmt)) == SSA_NAME
+ && INTEGRAL_TYPE_P (TREE_TYPE (gimple_assign_rhs1 (stmt))))
+ return simplify_abs_using_ranges (stmt);
+ break;
+
+ default:
+ break;
+ }
}
else if (gimple_code (stmt) == GIMPLE_COND)
- simplify_cond_using_ranges (stmt);
+ return simplify_cond_using_ranges (stmt);
else if (gimple_code (stmt) == GIMPLE_SWITCH)
- simplify_switch_using_ranges (stmt);
+ return simplify_switch_using_ranges (stmt);
+
+ return false;
}
/* Stack of dest,src equivalency pairs that need to be restored after
Index: dojump.c
===================================================================
--- dojump.c (revision 139423)
+++ dojump.c (working copy)
@@ -207,79 +207,6 @@ do_jump (tree exp, rtx if_false_label, r
do_jump (TREE_OPERAND (exp, 0), if_false_label, if_true_label);
break;
- case BIT_AND_EXPR:
- /* fold_single_bit_test() converts (X & (1 << C)) into (X >> C) & 1.
- See if the former is preferred for jump tests and restore it
- if so. */
- if (integer_onep (TREE_OPERAND (exp, 1)))
- {
- tree exp0 = TREE_OPERAND (exp, 0);
- rtx set_label, clr_label;
-
- /* Strip narrowing integral type conversions. */
- while (CONVERT_EXPR_P (exp0)
- && TREE_OPERAND (exp0, 0) != error_mark_node
- && TYPE_PRECISION (TREE_TYPE (exp0))
- <= TYPE_PRECISION (TREE_TYPE (TREE_OPERAND (exp0, 0))))
- exp0 = TREE_OPERAND (exp0, 0);
-
- /* "exp0 ^ 1" inverts the sense of the single bit test. */
- if (TREE_CODE (exp0) == BIT_XOR_EXPR
- && integer_onep (TREE_OPERAND (exp0, 1)))
- {
- exp0 = TREE_OPERAND (exp0, 0);
- clr_label = if_true_label;
- set_label = if_false_label;
- }
- else
- {
- clr_label = if_false_label;
- set_label = if_true_label;
- }
-
- if (TREE_CODE (exp0) == RSHIFT_EXPR)
- {
- tree arg = TREE_OPERAND (exp0, 0);
- tree shift = TREE_OPERAND (exp0, 1);
- tree argtype = TREE_TYPE (arg);
- if (TREE_CODE (shift) == INTEGER_CST
- && compare_tree_int (shift, 0) >= 0
- && compare_tree_int (shift, HOST_BITS_PER_WIDE_INT) < 0
- && prefer_and_bit_test (TYPE_MODE (argtype),
- TREE_INT_CST_LOW (shift)))
- {
- HOST_WIDE_INT mask = (HOST_WIDE_INT) 1
- << TREE_INT_CST_LOW (shift);
- do_jump (build2 (BIT_AND_EXPR, argtype, arg,
- build_int_cst_type (argtype, mask)),
- clr_label, set_label);
- break;
- }
- }
- }
-
- /* If we are AND'ing with a small constant, do this comparison in the
- smallest type that fits. If the machine doesn't have comparisons
- that small, it will be converted back to the wider comparison.
- This helps if we are testing the sign bit of a narrower object.
- combine can't do this for us because it can't know whether a
- ZERO_EXTRACT or a compare in a smaller mode exists, but we do. */
-
- if (! SLOW_BYTE_ACCESS
- && TREE_CODE (TREE_OPERAND (exp, 1)) == INTEGER_CST
- && TYPE_PRECISION (TREE_TYPE (exp)) <= HOST_BITS_PER_WIDE_INT
- && (i = tree_floor_log2 (TREE_OPERAND (exp, 1))) >= 0
- && (mode = mode_for_size (i + 1, MODE_INT, 0)) != BLKmode
- && (type = lang_hooks.types.type_for_mode (mode, 1)) != 0
- && TYPE_PRECISION (type) < TYPE_PRECISION (TREE_TYPE (exp))
- && (optab_handler (cmp_optab, TYPE_MODE (type))->insn_code
- != CODE_FOR_nothing))
- {
- do_jump (fold_convert (type, exp), if_false_label, if_true_label);
- break;
- }
- goto normal;
-
case TRUTH_NOT_EXPR:
do_jump (TREE_OPERAND (exp, 0), if_true_label, if_false_label);
break;
@@ -503,8 +430,86 @@ do_jump (tree exp, rtx if_false_label, r
do_jump (cmp0, 0, if_true_label);
do_jump (cmp1, if_false_label, if_true_label);
}
- }
break;
+ }
+
+ case BIT_AND_EXPR:
+ /* fold_single_bit_test() converts (X & (1 << C)) into (X >> C) & 1.
+ See if the former is preferred for jump tests and restore it
+ if so. */
+ if (integer_onep (TREE_OPERAND (exp, 1)))
+ {
+ tree exp0 = TREE_OPERAND (exp, 0);
+ rtx set_label, clr_label;
+
+ /* Strip narrowing integral type conversions. */
+ while (CONVERT_EXPR_P (exp0)
+ && TREE_OPERAND (exp0, 0) != error_mark_node
+ && TYPE_PRECISION (TREE_TYPE (exp0))
+ <= TYPE_PRECISION (TREE_TYPE (TREE_OPERAND (exp0, 0))))
+ exp0 = TREE_OPERAND (exp0, 0);
+
+ /* "exp0 ^ 1" inverts the sense of the single bit test. */
+ if (TREE_CODE (exp0) == BIT_XOR_EXPR
+ && integer_onep (TREE_OPERAND (exp0, 1)))
+ {
+ exp0 = TREE_OPERAND (exp0, 0);
+ clr_label = if_true_label;
+ set_label = if_false_label;
+ }
+ else
+ {
+ clr_label = if_false_label;
+ set_label = if_true_label;
+ }
+
+ if (TREE_CODE (exp0) == RSHIFT_EXPR)
+ {
+ tree arg = TREE_OPERAND (exp0, 0);
+ tree shift = TREE_OPERAND (exp0, 1);
+ tree argtype = TREE_TYPE (arg);
+ if (TREE_CODE (shift) == INTEGER_CST
+ && compare_tree_int (shift, 0) >= 0
+ && compare_tree_int (shift, HOST_BITS_PER_WIDE_INT) < 0
+ && prefer_and_bit_test (TYPE_MODE (argtype),
+ TREE_INT_CST_LOW (shift)))
+ {
+ HOST_WIDE_INT mask = (HOST_WIDE_INT) 1
+ << TREE_INT_CST_LOW (shift);
+ do_jump (build2 (BIT_AND_EXPR, argtype, arg,
+ build_int_cst_type (argtype, mask)),
+ clr_label, set_label);
+ break;
+ }
+ }
+ }
+
+ /* If we are AND'ing with a small constant, do this comparison in the
+ smallest type that fits. If the machine doesn't have comparisons
+ that small, it will be converted back to the wider comparison.
+ This helps if we are testing the sign bit of a narrower object.
+ combine can't do this for us because it can't know whether a
+ ZERO_EXTRACT or a compare in a smaller mode exists, but we do. */
+
+ if (! SLOW_BYTE_ACCESS
+ && TREE_CODE (TREE_OPERAND (exp, 1)) == INTEGER_CST
+ && TYPE_PRECISION (TREE_TYPE (exp)) <= HOST_BITS_PER_WIDE_INT
+ && (i = tree_floor_log2 (TREE_OPERAND (exp, 1))) >= 0
+ && (mode = mode_for_size (i + 1, MODE_INT, 0)) != BLKmode
+ && (type = lang_hooks.types.type_for_mode (mode, 1)) != 0
+ && TYPE_PRECISION (type) < TYPE_PRECISION (TREE_TYPE (exp))
+ && (optab_handler (cmp_optab, TYPE_MODE (type))->insn_code
+ != CODE_FOR_nothing))
+ {
+ do_jump (fold_convert (type, exp), if_false_label, if_true_label);
+ break;
+ }
+
+ if (TYPE_PRECISION (TREE_TYPE (exp)) > 1
+ || TREE_CODE (TREE_OPERAND (exp, 1)) == INTEGER_CST)
+ goto normal;
+
+ /* Boolean comparisons can be compiled as TRUTH_AND_EXPR. */
case TRUTH_AND_EXPR:
/* High branch cost, expand as the bitwise AND of the conditions.
@@ -527,6 +532,7 @@ do_jump (tree exp, rtx if_false_label, r
}
break;
+ case BIT_IOR_EXPR:
case TRUTH_OR_EXPR:
/* High branch cost, expand as the bitwise OR of the conditions.
Do the same if the RHS has side effects, because we're effectively
Index: gcc.dg/tree-ssa/vrp46.c
===================================================================
--- gcc.dg/tree-ssa/vrp46.c (revision 0)
+++ gcc.dg/tree-ssa/vrp46.c (revision 0)
@@ -0,0 +1,42 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-tree-vrp -fdump-tree-dom" } */
+
+int h(int x, int y)
+{
+ if ((x >= 0 && x <= 1) && (y >= 0 && y <= 1))
+ return x && y;
+ else
+ return -1;
+}
+
+int g(int x, int y)
+{
+ if ((x >= 0 && x <= 1) && (y >= 0 && y <= 1))
+ return x || y;
+ else
+ return -1;
+}
+
+int f(int x)
+{
+ if (x != 0 && x != 1)
+ return -2;
+
+ else
+ return !x;
+}
+
+/* Test that x and y are never compared to 0 -- they're always known to be
+ 0 or 1. */
+/* { dg-final { scan-tree-dump-times "\[xy\]\[^ \]* !=" 0 "vrp1" } } */
+
+/* This one needs more copy propagation that only happens in dom1. */
+/* { dg-final { scan-tree-dump-times "x\[^ \]* & y" 1 "dom1" } } */
+/* { dg-final { scan-tree-dump-times "x\[^ \]* & y" 1 "vrp1" { xfail *-*-* } } } */
+
+/* These two are fully simplified by VRP. */
+/* { dg-final { scan-tree-dump-times "x\[^ \]* \[|\] y" 1 "vrp1" } } */
+/* { dg-final { scan-tree-dump-times "x\[^ \]* \\^ 1" 1 "vrp1" } } */
+
+/* { dg-final { cleanup-tree-dump "vrp\[0-9\]" } } */
+/* { dg-final { cleanup-tree-dump "dom\[0-9\]" } } */
Index: gcc.target/i386/andor-2.c
===================================================================
--- gcc.target/i386/andor-2.c (revision 0)
+++ gcc.target/i386/andor-2.c (revision 0)
@@ -0,0 +1,30 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -mtune=i686" } */
+
+int h(int x, int y)
+{
+ if ((x >= 0 && x <= 1) && (y >= 0 && y <= 1))
+ return x && y;
+ else
+ return -1;
+}
+
+int g(int x, int y)
+{
+ if ((x >= 0 && x <= 1) && (y >= 0 && y <= 1))
+ return x || y;
+ else
+ return -1;
+}
+
+int f(int x, int y)
+{
+ if (x != 0 && x != 1)
+ return -2;
+
+ else
+ return !x;
+}
+
+/* { dg-final { scan-assembler-not "setne" } } */
+/* { dg-final { scan-assembler-not "sete" } } */