[AArch64, PATCH] Improve Neon store of zero
James Greenhalgh
james.greenhalgh@arm.com
Tue Sep 12 16:28:00 GMT 2017
On Wed, Sep 06, 2017 at 10:02:52AM +0100, Jackson Woodruff wrote:
> Hi all,
>
> I've attached a new patch that addresses some of the issues raised with
> my original patch.
>
> On 08/23/2017 03:35 PM, Wilco Dijkstra wrote:
> > Richard Sandiford wrote:
> >>
> >> Sorry for only noticing now, but the call to aarch64_legitimate_address_p
> >> is asking whether the MEM itself is a legitimate LDP/STP address. Also,
> >> it might be better to pass false for strict_p, since this can be called
> >> before RA. So maybe:
> >>
> >>   if (GET_CODE (operands[0]) == MEM
> >>       && !(aarch64_simd_imm_zero (operands[1], <MODE>mode)
> >>            && aarch64_mem_pair_operand (operands[0], <MODE>mode)))
>
> There were also some issues with the choice of mode for the call to
> aarch64_mem_pair_operand.
>
> For a 128-bit wide mode, we want to check `aarch64_mem_pair_operand
> (operands[0], DImode)` since that's what the stp will be.
>
> For a 64-bit wide mode, we don't need to do that check because a normal
> `str` can be issued.
>
> I've updated the condition as such.
>
> >
> > Is there any reason for doing this check at all (or at least this early during
> > expand)?
>
> Not doing this check means that the zero is forced into a register, so
> we then carry around a bit more RTL and rely on combine to merge things.
>
> >
> > There is a similar issue with this part:
> >
> > (define_insn "*aarch64_simd_mov<mode>"
> >   [(set (match_operand:VQ 0 "nonimmediate_operand"
> > -		"=w, m,  w, ?r, ?w, ?r, w")
> > +		"=w, Ump,  m,  w, ?r, ?w, ?r, w")
> >
> > The Ump causes the instruction to always split off the address offset. Ump
> > cannot be used in patterns that are generated before register allocation as it
> > also calls aarch64_legitimate_address_p with strict_p set to true.
>
> I've changed the constraint to a new constraint 'Umq', that acts the
> same as Ump, but calls aarch64_legitimate_address_p with strict_p set to
> false and uses DImode for the mode to pass.
This looks mostly OK to me, but this conditional:
> +  if (GET_CODE (operands[0]) == MEM
> +      && !(aarch64_simd_imm_zero (operands[1], <MODE>mode)
> +           && ((GET_MODE_SIZE (<MODE>mode) == 16
> +                && aarch64_mem_pair_operand (operands[0], DImode))
> +               || GET_MODE_SIZE (<MODE>mode) == 8)))
has grown a bit too big for such a general pattern to live without a comment
explaining what is going on.
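Something along these lines would do (a sketch only; the exact wording is up
to you):

```
  /* Unless we are storing a vector of zeros, which can be emitted
     directly (as an stp of xzr for 128-bit modes, or a plain str for
     64-bit modes, provided the address is valid for that access),
     force the value into a register first.  */
  if (GET_CODE (operands[0]) == MEM
      && !(aarch64_simd_imm_zero (operands[1], <MODE>mode)
           && ((GET_MODE_SIZE (<MODE>mode) == 16
                && aarch64_mem_pair_operand (operands[0], DImode))
               || GET_MODE_SIZE (<MODE>mode) == 8)))
```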
> +(define_memory_constraint "Umq"
> +  "@internal
> +   A memory address which uses a base register with an offset small enough for
> +   a load/store pair operation in DI mode."
> +  (and (match_code "mem")
> +       (match_test "aarch64_legitimate_address_p (DImode, XEXP (op, 0),
> +						   PARALLEL, 0)")))
And here you want 'false' rather than '0'.
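That is, something like:

```
(define_memory_constraint "Umq"
  "@internal
   A memory address which uses a base register with an offset small enough for
   a load/store pair operation in DI mode."
  (and (match_code "mem")
       (match_test "aarch64_legitimate_address_p (DImode, XEXP (op, 0),
						   PARALLEL, false)")))
```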
I'll happily merge the patch with those changes, please send an update.
Thanks,
James
>
> ChangeLog:
>
> gcc/
>
> 2017-08-29 Jackson Woodruff <jackson.woodruff@arm.com>
>
> * config/aarch64/constraints.md (Umq): New constraint.
> * config/aarch64/aarch64-simd.md (*aarch64_simd_mov<mode>):
> Change to use Umq.
> (mov<mode>): Update condition.
>
> gcc/testsuite
>
> 2017-08-29 Jackson Woodruff <jackson.woodruff@arm.com>
>
> * gcc.target/aarch64/simd/vect_str_zero.c:
> Update testcase.