Re: Optimise constant IFN_WHILE_ULTs
- From: Jeff Law <law@redhat.com>
- To: gcc-patches@gcc.gnu.org, richard.sandiford@arm.com
- Date: Tue, 13 Aug 2019 12:25:22 -0600
- Subject: Re: Optimise constant IFN_WHILE_ULTs
- References: <mptblwts5ch.fsf@arm.com>
On 8/13/19 4:54 AM, Richard Sandiford wrote:
> This patch is a combination of two changes that have to be committed as
> a single unit, one target-independent and one target-specific:
>
> (1) Try to fold IFN_WHILE_ULTs with constant arguments to a VECTOR_CST
> (which is always possible for fixed-length vectors but is not
> necessarily so for variable-length vectors)
>
> (2) Make the SVE port recognise constants that map to PTRUE VLn,
> which includes those generated by the new fold.
>
> (2) can't be tested without (1), and (1) would be a significant
> pessimisation without (2).
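>
> For concreteness: lane I of WHILE_ULT (A, B) is active iff A + I < B
> (unsigned), so with constant operands the folded result is just a run
> of leading ones followed by zeros.  A rough standalone model of the
> fold, with made-up names rather than the GCC code itself:
>
>   /* Model only: element I of WHILE_ULT (A, B) is true iff A + I < B
>      (unsigned), so the constant result has NONES leading ones.  */
>   #include <stdint.h>
>
>   static void
>   model_while_ult_fold (uint64_t a, uint64_t b, unsigned int nunits,
>                         uint8_t *mask)
>   {
>     uint64_t nones = b > a ? b - a : 0;
>     for (unsigned int i = 0; i < nunits; ++i)
>       mask[i] = (i < nones);
>   }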
>
> The target-specific parts also start moving towards doing predicate
> manipulation in a canonical VNx16BImode form, using rtx_vector_builders.
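>
> In the canonical VNx16BImode form there is one predicate bit per byte
> lane, and for wider elements only the first bit of each element-sized
> group is set.  As a rough standalone model (again not GCC's code):
>
>   /* Model only: expand an element-granular mask to byte granularity,
>      setting just the first bit of each ELT_BYTES-wide group.  */
>   static void
>   model_canonical_pred (const uint8_t *elt_mask, unsigned int nelts,
>                         unsigned int elt_bytes, uint8_t *byte_mask)
>   {
>     for (unsigned int i = 0; i < nelts; ++i)
>       for (unsigned int j = 0; j < elt_bytes; ++j)
>         byte_mask[i * elt_bytes + j] = (j == 0 && elt_mask[i]);
>   }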
>
> Tested on aarch64-linux-gnu (with and without SVE), aarch64_be-elf and
> x86_64-linux-gnu. OK for the generic bits (= the first three files
> in the diff)?
>
> Thanks,
> Richard
>
>
> 2019-08-13 Richard Sandiford <richard.sandiford@arm.com>
>
> gcc/
> * tree.h (build_vector_a_then_b): Declare.
> * tree.c (build_vector_a_then_b): New function.
> * fold-const-call.c (fold_while_ult): Likewise.
> (fold_const_call): Use it to handle IFN_WHILE_ULT.
> * config/aarch64/aarch64-protos.h (AARCH64_FOR_SVPATTERN): New macro.
> (aarch64_svpattern): New enum.
> * config/aarch64/aarch64-sve.md (mov<PRED_ALL:mode>): Pass
> constants through aarch64_expand_mov_immediate.
> (*aarch64_sve_mov<PRED_ALL:mode>): Use aarch64_mov_operand rather
> than general_operand as the predicate for operand 1.
> (while_ult<GPI:mode><PRED_ALL:mode>): Add a '@' marker.
> * config/aarch64/aarch64.c (simd_immediate_info::PTRUE): New
> insn_type.
> (simd_immediate_info::simd_immediate_info): New overload that
> takes a scalar_int_mode and an svpattern.
> (simd_immediate_info::u): Add a "pattern" field.
> (svpattern_token): New function.
> (aarch64_get_sve_pred_bits, aarch64_widest_sve_pred_elt_size)
> (aarch64_partial_ptrue_length, aarch64_svpattern_for_vl)
> (aarch64_sve_move_pred_via_while): New functions.
> (aarch64_expand_mov_immediate): Try using
> aarch64_sve_move_pred_via_while for predicates that contain N ones
> followed by M zeros but that do not correspond to a VLnnn pattern.
> (aarch64_sve_pred_valid_immediate): New function.
> (aarch64_simd_valid_immediate): Use it instead of dealing directly
> with PTRUE and PFALSE.
> (aarch64_output_sve_mov_immediate): Handle new simd_immediate_info
> forms.
>
> gcc/testsuite/
> * gcc.target/aarch64/sve/spill_2.c: Increase iteration counts
> beyond the range of a PTRUE.
> * gcc.target/aarch64/sve/while_6.c: New test.
> * gcc.target/aarch64/sve/while_7.c: Likewise.
> * gcc.target/aarch64/sve/while_8.c: Likewise.
> * gcc.target/aarch64/sve/while_9.c: Likewise.
> * gcc.target/aarch64/sve/while_10.c: Likewise.
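>
> To illustrate the aarch64_svpattern_for_vl and
> aarch64_sve_move_pred_via_while entries above: PTRUE only has named
> VL patterns for 1-8 and the powers of two from 16 up to 256, so a
> predicate of eight leading ones can be loaded with "ptrue p0.s, vl8",
> whereas nine leading ones has no VLnnn pattern and is instead set up
> via a WHILE, e.g. "mov x0, #9" followed by "whilelo p0.s, xzr, x0"
> (register choices here are arbitrary).  A rough model of the pattern
> check, standalone rather than the GCC code:
>
>   /* Model only: true if a PTRUE VL pattern exists for N leading
>      ones (VL1-VL8 and the powers of two VL16 up to VL256).  */
>   static int
>   model_vl_pattern_exists (uint64_t n)
>   {
>     return (n >= 1 && n <= 8)
>            || n == 16 || n == 32 || n == 64 || n == 128 || n == 256;
>   }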
>
Generic bits are fine.
jeff