author    Roger Sayle <roger@nextmovesoftware.com>
          Wed, 15 Jun 2022 07:31:13 +0000 (09:31 +0200)
committer Roger Sayle <roger@nextmovesoftware.com>
          Wed, 15 Jun 2022 07:31:13 +0000 (09:31 +0200)
commit    acb1e6f43dc2bbedd1248ea61c7ab537a11fe59b
tree      03dd0822fbb02776d3141bd9f0f80c517a6dcf68
parent    4b1a827f024234aaf83ecfe90415e88b525d3969

Fold truncations of left shifts in match.pd

Whilst investigating PR 55278, I noticed that the tree-ssa optimizers
weren't eliminating the promotions of shifts to "int" inserted by the
c-family front-ends, instead leaving this simplification to the RTL
optimizers.  This patch allows match.pd to perform this narrowing itself
earlier, rewriting (T)(X << C) to (T)X << C when the constant C is known
to be valid for the (narrower) type T.
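
For contrast, a shift count that is valid only in the promoted type must
not be narrowed.  A hypothetical example (not taken from the testsuite),
on a typical target where "int" is 32 bits and "short" is 16:

short bar(short x) { return x << 20; }

Here the shift is well-defined once x has been promoted to "int", but
shifting a 16-bit "short" by 20 would be undefined, so the promotion has
to remain; only shift counts below the precision of T qualify.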

Hence for this simple test case:
short foo(short x) { return x << 5; }

the .optimized dump currently looks like:

short int foo (short int x)
{
  int _1;
  int _2;
  short int _4;

  <bb 2> [local count: 1073741824]:
  _1 = (int) x_3(D);
  _2 = _1 << 5;
  _4 = (short int) _2;
  return _4;
}

but with this patch, now becomes:

short int foo (short int x)
{
  short int _2;

  <bb 2> [local count: 1073741824]:
  _2 = x_1(D) << 5;
  return _2;
}

This is always reasonable as RTL expansion knows how to use
widening optabs if it makes sense at the RTL level to perform
this shift in a wider mode.

Of course, there's often a catch.  The above simplification not only
reduces the number of statements in gimple, but also enables further
optimizations, for example the recognition of rotate idioms and bswap16
(an example follows below).  Alas, optimizing things earlier than
anticipated requires several testsuite changes [though all these tests
have been confirmed to generate identical assembly code on x86_64].
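
For instance, a 16-bit byte-swap idiom of this kind (illustrative; not
one of the affected tests):

unsigned short swap16(unsigned short x)
{
  /* With the shifts narrowed back to the 16-bit type, the bswap pass
     can recognize this as a single byte swap (a rotate by 8) earlier
     in the pipeline.  */
  return (unsigned short) ((x << 8) | (x >> 8));
}
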
The only significant change is that the vectorizer's pattern recognition
previously wouldn't lower rotations of signed integer types.  Hence this
patch includes a refinement to tree-vect-patterns to also allow signed
types, by lowering them using the equivalent unsigned shifts, as
sketched below.
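
Conceptually, the rotate is performed in the corresponding unsigned
type, where both shifts are well-defined.  A scalar C sketch of the idea
(the actual transformation in vect_recog_rotate_pattern operates on
vector types; the function and names here are illustrative):

short rotl16(short x, int n)
{
  unsigned short ux = (unsigned short) x;  /* reinterpret bits as unsigned */
  n &= 15;                                 /* keep the rotate count in range */
  /* An unsigned left shift and right shift combine into a rotate;
     the result is converted back to the signed type afterwards.  */
  return (short) (unsigned short) ((ux << n) | (ux >> ((16 - n) & 15)));
}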

2022-06-15  Roger Sayle  <roger@nextmovesoftware.com>
            Richard Biener  <rguenther@suse.de>

gcc/ChangeLog
        * match.pd (convert (lshift @1 INTEGER_CST@2)): Narrow integer
        left shifts by a constant when the result is truncated, and the
        shift constant is well-defined.
        * tree-vect-patterns.cc (vect_recog_rotate_pattern): Add
        support for rotations of signed integer types, by lowering
        using unsigned vector shifts.

gcc/testsuite/ChangeLog
        * gcc.dg/fold-convlshift-4.c: New test case.
        * gcc.dg/optimize-bswaphi-1.c: Update found bswap count.
        * gcc.dg/tree-ssa/pr61839_3.c: Shift is now optimized before VRP.
        * gcc.dg/vect/vect-over-widen-1-big-array.c: Remove obsolete tests.
        * gcc.dg/vect/vect-over-widen-1.c: Likewise.
        * gcc.dg/vect/vect-over-widen-3-big-array.c: Likewise.
        * gcc.dg/vect/vect-over-widen-3.c: Likewise.
        * gcc.dg/vect/vect-over-widen-4-big-array.c: Likewise.
        * gcc.dg/vect/vect-over-widen-4.c: Likewise.