This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.
RE: [PATCH] [Aarch64] Optimize subtract in shift counts
This is exactly the approach that was taken with this patch. An earlier patch actually contains the patterns that match the truncation:
https://gcc.gnu.org/ml/gcc-patches/2017-06/msg01095.html
-----Original Message-----
From: Richard Biener [mailto:richard.guenther@gmail.com]
Sent: Monday, August 14, 2017 1:27 AM
To: Richard Kenner <kenner@vlsi1.ultra.nyu.edu>
Cc: Michael Collison <Michael.Collison@arm.com>; GCC Patches <gcc-patches@gcc.gnu.org>; nd <nd@arm.com>; Andrew Pinski <pinskia@gmail.com>
Subject: Re: [PATCH] [Aarch64] Optimize subtract in shift counts
On Tue, Aug 8, 2017 at 10:20 PM, Richard Kenner <kenner@vlsi1.ultra.nyu.edu> wrote:
>> Correct. It is truncated for integer shift, but not simd shift
>> instructions. We generate a pattern in the split that only generates
>> the integer shift instructions.
>
> That's unfortunate, because it would be nice to do this in
> simplify_rtx, since it's machine-independent, but that has to be
> conditioned on SHIFT_COUNT_TRUNCATED, so you wouldn't get the benefit of it.
SHIFT_COUNT_TRUNCATED should go ... you should express this in the patterns, for example with
(define_insn "ashlsi3"
  [(set (match_operand 0 "")
        (ashl:SI (match_operand ...)
                 (subreg:QI (match_operand:SI ...))))]
  ...)
or with an explicit and:SI, and combine / simplify_rtx should apply the magic optimization we expect.
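The and:SI alternative mentioned here could look roughly like the following sketch. This is an illustration only, not the actual aarch64.md pattern (the real patterns are in the linked msg01095 patch); the pattern name, predicates, and constraints are placeholders:

```
;; Sketch: express the count truncation with an explicit AND of the
;; count register, instead of the subreg form shown above.  combine
;; can then match "x << ((32 - y) & 31)" against this without any
;; reliance on SHIFT_COUNT_TRUNCATED.
(define_insn "*ashlsi3_mask"
  [(set (match_operand:SI 0 "register_operand" "=r")
        (ashl:SI (match_operand:SI 1 "register_operand" "r")
                 (and:SI (match_operand:SI 2 "register_operand" "r")
                         (const_int 31))))]
  ""
  "lsl\t%w0, %w1, %w2"
)
```

Because the AND is part of the insn's RTL, the middle end can prove the mask redundant per-pattern rather than per-target, which is the point of retiring the global SHIFT_COUNT_TRUNCATED macro.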
Richard.