[Bug rtl-optimization/55278] [10/11/12/13 Regression] Botan performance regressions, other compilers generate better code than gcc

cvs-commit at gcc dot gnu.org <gcc-bugzilla@gcc.gnu.org>
Wed Jun 15 07:32:29 GMT 2022


--- Comment #30 from CVS Commits <cvs-commit at gcc dot gnu.org> ---
The master branch has been updated by Roger Sayle <sayle@gcc.gnu.org>:


commit r13-1100-gacb1e6f43dc2bbedd1248ea61c7ab537a11fe59b
Author: Roger Sayle <roger@nextmovesoftware.com>
Date:   Wed Jun 15 09:31:13 2022 +0200

    Fold truncations of left shifts in match.pd

    Whilst investigating PR 55278, I noticed that the tree-ssa optimizers
    aren't eliminating the promotions of shifts to "int" inserted by the
    c-family front-ends, instead leaving this simplification to the RTL
    optimizers.  This patch allows match.pd to perform it earlier,
    narrowing (T)(X << C) to (T)X << C when the constant C is known to be
    valid for the (narrower) type T.

    Hence for this simple test case:
    short foo(short x) { return x << 5; }

    the .optimized dump currently looks like:

    short int foo (short int x)
      int _1;
      int _2;
      short int _4;

      <bb 2> [local count: 1073741824]:
      _1 = (int) x_3(D);
      _2 = _1 << 5;
      _4 = (short int) _2;
      return _4;

    but with this patch, now becomes:

    short int foo (short int x)
      short int _2;

      <bb 2> [local count: 1073741824]:
      _2 = x_1(D) << 5;
      return _2;

    This is always reasonable as RTL expansion knows how to use
    widening optabs if it makes sense at the RTL level to perform
    this shift in a wider mode.
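    The validity condition can be checked directly in C.  The following is a
    hedged sketch (not the match.pd implementation): for a shift count C
    smaller than the width of the narrow type T, truncating after a wide
    shift yields the same bits as shifting within T, because the truncated
    low bits never depend on bits shifted beyond T.  The helper names are
    illustrative, and unsigned arithmetic is used to keep the example free
    of signed-shift undefined behaviour.

```c
#include <stdint.h>

/* What the front-end emits: promote to int, shift, truncate back.
   The promotion is modelled through unsigned to avoid signed-overflow UB. */
static int16_t shift_wide(int16_t x)
{
    return (int16_t)(uint16_t)((uint32_t)(int32_t)x << 5);
}

/* What the narrowed form computes: the shift performed in 16 bits.
   Valid because the count (5) is less than the width of int16_t (16). */
static int16_t shift_narrow(int16_t x)
{
    return (int16_t)(uint16_t)((uint16_t)x << 5);
}
```

    A count of 16 or more would not satisfy the condition, so such shifts
    are left in the promoted form.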

    Of course, there's often a catch.  The above simplification not only
    reduces the number of statements in gimple, but also allows further
    optimizations, for example the recognition of rotate idioms
    and bswap16.  Alas, optimizing things earlier than anticipated
    requires several testsuite changes [though all these tests have
    been confirmed to generate identical assembly code on x86_64].
    The only significant change is that the vectorization pass wouldn't
    previously lower rotations of signed integer types.  Hence this
    patch includes a refinement to tree-vect-patterns to allow signed
    types, by using the equivalent unsigned shifts.
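    The lowering described above can be sketched in C.  This is a hedged
    illustration of the idea, not the vect_recog_rotate_pattern code: a
    rotate of a signed value is expressed through the equivalent unsigned
    shifts, since a right shift of the signed operand would be arithmetic
    rather than logical.  The function name is illustrative.

```c
#include <stdint.h>

/* Rotate a signed 32-bit value left by n (1 <= n <= 31) by casting to
   unsigned, so the right shift is logical and the OR reassembles the
   rotated bit pattern exactly. */
static int32_t rotl_signed(int32_t x, unsigned n)
{
    uint32_t u = (uint32_t)x;
    return (int32_t)((u << n) | (u >> (32 - n)));
}
```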

    2022-06-15  Roger Sayle  <roger@nextmovesoftware.com>
                Richard Biener  <rguenther@suse.de>

            * match.pd (convert (lshift @1 INTEGER_CST@2)): Narrow integer
            left shifts by a constant when the result is truncated, and the
            shift constant is well-defined.
            * tree-vect-patterns.cc (vect_recog_rotate_pattern): Add
            support for rotations of signed integer types, by lowering
            using unsigned vector shifts.

            * gcc.dg/fold-convlshift-4.c: New test case.
            * gcc.dg/optimize-bswaphi-1.c: Update found bswap count.
            * gcc.dg/tree-ssa/pr61839_3.c: Shift is now optimized before VRP.
            * gcc.dg/vect/vect-over-widen-1-big-array.c: Remove obsolete tests.
            * gcc.dg/vect/vect-over-widen-1.c: Likewise.
            * gcc.dg/vect/vect-over-widen-3-big-array.c: Likewise.
            * gcc.dg/vect/vect-over-widen-3.c: Likewise.
            * gcc.dg/vect/vect-over-widen-4-big-array.c: Likewise.
            * gcc.dg/vect/vect-over-widen-4.c: Likewise.
