This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.
Re: [RFA] [target/87369] Prefer "bit" over "bfxil"
- From: "Richard Earnshaw (lists)" <Richard dot Earnshaw at arm dot com>
- To: Jeff Law <law at redhat dot com>, gcc-patches <gcc-patches at gcc dot gnu dot org>, James Greenhalgh <james dot greenhalgh at arm dot com>
- Date: Fri, 7 Dec 2018 17:31:02 +0000
- Subject: Re: [RFA] [target/87369] Prefer "bit" over "bfxil"
- References: <firstname.lastname@example.org>
On 07/12/2018 15:52, Jeff Law wrote:
> As I suggested in the BZ, this patch rejects constants with just the
> high bit set for the recently added "bfxil" pattern. As a result we'll
> return to using "bit" for the test in the BZ.
> I'm not versed enough in aarch64 performance tuning to know if "bit" is
> actually a better choice than "bfxil". "bit" results in better code for
> the testcase, but that seems more a function of register allocation than
> "bit" being inherently better than "bfxil". Obviously someone with
> more aarch64 knowledge needs to make a decision here.
> My first iteration of the patch changed "aarch64_high_bits_all_ones_p".
> We could still go that way too, though the name probably needs to change.
> I've bootstrapped and regression tested on aarch64-linux-gnu and it
> fixes the regression. I've also bootstrapped aarch64_be-linux-gnu, but
> haven't done any kind of regression testing on that platform.
> OK for the trunk?
The problem here is that the optimum solution depends on the register
classes involved and we don't know this during combine. If we have
a general register, then we want bfi/bfxil to be used; if we have a vector
register, then bit is preferable as it changes 3 inter-bank register
copies to a single inter-bank copy; and that copy might be hoisted out
of a loop.
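To make the register-bank argument concrete, here is a C sketch of the copysign-style idiom at issue (my own naming, not the BZ testcase): when the inputs arrive in FP/vector registers, a vector "bit" only needs the mask moved across the banks once, whereas routing the data through "bfxil" costs a copy in each direction.

```c
#include <stdint.h>
#include <string.h>

/* Illustrative sketch: transplant the sign bit of sgn onto the
   magnitude of mag, the operation behind PR target/87369.  The
   memcpy calls type-pun double <-> uint64_t without aliasing UB.  */
static double
copy_sign_bits (double mag, double sgn)
{
  uint64_t m, s;
  memcpy (&m, &mag, sizeof m);
  memcpy (&s, &sgn, sizeof s);
  m = (m & 0x7fffffffffffffffULL) | (s & 0x8000000000000000ULL);
  memcpy (&mag, &m, sizeof m);
  return mag;
}
```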
For example, this case:
f (unsigned long a, unsigned long b)
{
  return (b & 0x7fffffffffffffff) | (a & 0x8000000000000000);
}
before your patch this expands to just a single bfxil instruction and
that's exactly what we'd want here. With it, however, I'm now seeing
and x1, x1, 9223372036854775807
and x0, x0, -9223372036854775808
orr x0, x1, x0
which seems to be even worse than gcc-8 where we got a bfi instruction.
Ultimately, the best solution here will probably depend on which we
think is more likely, copysign or the example I give above.
It might be that for copysign we'll need to expand initially to some
unspec that uses a register initialized with a suitable immediate, but
otherwise hides the operation from combine until after that has run,
thus preventing the compiler from doing the otherwise right thing. We'd
lose in the (hopefully) rare case where the operands really were in
general registers, but otherwise win for the more common case where
they're not.
> PR target/87369
> * config/aarch64/aarch64.md (aarch64_bfxil<mode>): Do not accept
> constant with just the high bit set. That's better handled by
> the "bit" pattern.
> diff --git a/gcc/config/aarch64/aarch64.md b/gcc/config/aarch64/aarch64.md
> index 88f66104db3..ad6822410c2 100644
> --- a/gcc/config/aarch64/aarch64.md
> +++ b/gcc/config/aarch64/aarch64.md
> @@ -5342,9 +5342,11 @@
> (match_operand:GPI 3 "const_int_operand" "n, Ulc"))
> (and:GPI (match_operand:GPI 2 "register_operand" "0,r")
> (match_operand:GPI 4 "const_int_operand" "Ulc, n"))))]
> -  "(INTVAL (operands[3]) == ~INTVAL (operands[4]))
> -   && (aarch64_high_bits_all_ones_p (INTVAL (operands[3]))
> -       || aarch64_high_bits_all_ones_p (INTVAL (operands[4])))"
> +  "(INTVAL (operands[3]) == ~INTVAL (operands[4])
> +    && ((aarch64_high_bits_all_ones_p (INTVAL (operands[3]))
> +         && popcount_hwi (INTVAL (operands[3])) != 1)
> +        || (aarch64_high_bits_all_ones_p (INTVAL (operands[4]))
> +            && popcount_hwi (INTVAL (operands[4])) != 1)))"
> switch (which_alternative)
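The constant classification the quoted hunk performs can be sketched in plain C (my helper names; GCC's actual predicates are aarch64_high_bits_all_ones_p and popcount_hwi): a bfxil-style mask is a run of ones at the top of the word, and the patch now also rejects the degenerate run of length one, i.e. just the sign bit, so that case falls through to the "bit" pattern.

```c
#include <stdint.h>

/* Kernighan popcount: clear the lowest set bit each iteration.  */
static int
popcount64 (uint64_t x)
{
  int n = 0;
  for (; x; x &= x - 1)
    n++;
  return n;
}

/* True when x has the shape 1...10...0, i.e. its complement is a
   contiguous low mask 0...01...1 (sketch, not GCC's exact test).  */
static int
high_bits_all_ones (uint64_t x)
{
  return x != 0 && (~x & (~x + 1)) == 0;
}

/* The patched acceptance test: a top run of ones, but not the
   single-bit run consisting of only the sign bit.  */
static int
ok_for_bfxil (uint64_t mask)
{
  return high_bits_all_ones (mask) && popcount64 (mask) != 1;
}
```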