This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.
[PATCH][3/n] Merge from match-and-simplify, first patterns and questions
- From: Richard Biener <rguenther at suse dot de>
- To: gcc-patches at gcc dot gnu dot org
- Date: Wed, 15 Oct 2014 13:40:49 +0200 (CEST)
- Subject: [PATCH][3/n] Merge from match-and-simplify, first patterns and questions
This adds a bunch of simplifications with constant operands
or ones that simplify to constants, such as a + 0, x * 1.
It's a patch mainly to get a few questions answered for further
pattern merges:
- The branch uses multiple .pd files and includes them from
match.pd, trying to group related stuff together. It has
become somewhat difficult to do that grouping in a
sensible manner, so I am not sure this is the best approach.
Any opinions? Alternatively, we could simply put everything
into match.pd and group visually by overall comments.
- Each pattern I add is already implemented in some form
in fold-const.c or tree-ssa-forwprop.c. Once the machinery
is exercised from fold-const.c and tree-ssa-forwprop.c
I can remove the duplicates at the same time I add a
pattern. Should I do that?
Caveat: as you can see in the comments below, the fold-const.c
parts sometimes do "frontend" stuff like wrapping things
in non_lvalue_expr (but only the C++ frontend seems to care,
and only in a very few select cases).
Caveat2: the GENERIC code path of match-and-simplify does
not handle everything fold-const.c does - for example,
it does nothing with operands that have side effects, so
foo () * 0 is not simplified to (foo (), 0). It also does not
benefit from the "loose" type matching fold-const.c gets
from the STRIP_[SIGN_]NOPS it performs on operands
before doing its pattern matching. This means that
when I remove stuff from fold-const.c there may be
regressions that are not anticipated (in frontend code
and at -O0 only - with optimization the patterns should
apply on GIMPLE later).
So - are we happy to lose some oddball cases of GENERIC
folding? (hopefully oddball cases only...)
The patch below is for illustration purposes only - I'll
insert two patches before it (one enabling the machinery from
fold-const.c, the other from tree-ssa-forwprop.c).
Thanks,
Richard.
2014-10-15 Richard Biener <rguenther@suse.de>
* match.pd: Add constant folding patterns.
Index: trunk/gcc/match.pd
===================================================================
*** trunk.orig/gcc/match.pd 2014-10-15 12:27:41.354241352 +0200
--- trunk/gcc/match.pd 2014-10-15 13:26:51.133996954 +0200
*************** along with GCC; see the file COPYING3.
*** 27,29 ****
--- 27,86 ----
integer_onep integer_zerop integer_all_onesp
real_zerop real_onep
CONSTANT_CLASS_P)
+
+
+ /* Simplifications of operations with one constant operand and
+ simplifications to constants. */
+
+ (for op (plus pointer_plus minus bit_ior bit_xor)
+ (simplify
+ (op @0 integer_zerop)
+ (if (GENERIC && !in_gimple_form)
+ /* ??? fold_binary adds non_lvalue here and "fixes" the C++
+ run of Wsizeof-pointer-memaccess1.c, preserving enough of
+ sizeof (&a) + 0 because sizeof (&a) is maybe_lvalue_p ()
+ for no good reason. The C frontend is fine as it doesn't
+ fold too early. */
+ (non_lvalue @0))
+ @0))
+
+ (simplify
+ (minus @0 @0)
+ (if (!HONOR_NANS (TYPE_MODE (type)))
+ { build_zero_cst (type); }))
+
+ (simplify
+ (mult @0 integer_zerop@1)
+ @1)
+
+ /* Make sure to preserve divisions by zero. This is the reason why
+ we don't simplify x / x to 1 or 0 / x to 0. */
+ (for op (mult trunc_div ceil_div floor_div round_div)
+ (simplify
+ (op @0 integer_onep)
+ @0))
+
+ /* Same applies to modulo operations, but fold is inconsistent here
+ and simplifies 0 % x to 0, only preserving literal 0 % 0. */
+ (simplify
+ (trunc_mod integer_zerop@0 @1)
+ (if (!integer_zerop (@1))
+ @0))
+ (simplify
+ (trunc_mod @0 integer_onep)
+ { build_zero_cst (type); })
+
+ /* x | ~0 -> ~0 */
+ (simplify
+ (bit_ior @0 integer_all_onesp@1)
+ @1)
+
+ /* x & 0 -> 0 */
+ (simplify
+ (bit_and @0 integer_zerop@1)
+ @1)
+
+ /* x ^ x -> 0 */
+ (simplify
+ (bit_xor @0 @0)
+ { build_zero_cst (type); })