Re: [PATCH][GCC][AArch64] optimize integer immediate moves with partial masks.


On Wed, Jun 07, 2017 at 12:38:22PM +0100, Tamar Christina wrote:
> Hi All, 
> 
> This patch optimizes integer moves for cases where the move could be done
> more efficiently using a smaller mode.
> 
> For example:
> 
> long long f1(void)
> {
>   return 0xffff6666;
> }
> 
> long f2(void)
> {
>   return 0x11110000ffff6666;
> }
> 
> generates:
> 
> 
> f1:
> 	mov	w0, 4294927974
> 	ret
> 
> f2:
> 	mov	w0, 4294927974
> 	movk	x0, 0x1111, lsl 48
> 	ret
> 
> instead of:
> 
> f1:
> 	mov	x0, 26214
> 	movk	x0, 0xffff, lsl 16
> 	ret
> 
> f2:
> 	mov	x0, 26214
> 	movk	x0, 0xffff, lsl 16
> 	movk	x0, 0x1111, lsl 48
> 	ret
> 
> This works because, when the low 32 bits are either 0xffffXXXX or 0xXXXXffff (with
> XXXX non-zero), a 32-bit MOVN instruction can be used as if the type were int rather
> than long: writing the 32-bit register implicitly zero-extends the result to 64 bits.
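
For reference, here is a minimal, self-contained sketch of that condition
(simplified stand-ins only, not the GCC code itself: si_movz_movn_p
approximates aarch64_move_imm for SImode and ignores bitmask immediates):

  #include <stdbool.h>
  #include <stdint.h>

  /* A 32-bit MOVZ covers values with at most one non-zero 16-bit chunk;
     a 32-bit MOVN covers values whose complement has that property.  */
  static bool
  si_movz_movn_p (uint32_t v)
  {
    return (v & 0xffff0000u) == 0 || (v & 0x0000ffffu) == 0
           || (~v & 0xffff0000u) == 0 || (~v & 0x0000ffffu) == 0;
  }

  /* The low 32 bits must be a single 32-bit move, and at most one
     16-bit chunk of the high 32 bits may be non-zero.  */
  static bool
  two_insn_candidate_p (uint64_t val)
  {
    return si_movz_movn_p ((uint32_t) (val & 0xffffffff))
           && (((val >> 32) & 0xffff) == 0 || (val >> 48) == 0);
  }

For f1's 0xffff6666 the complement of the low word is 0x00009999, so a
single MOVN suffices; f2's 0x11110000ffff6666 also passes, since bits
[32,48) of the constant are zero.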
> 
> Regression tested on aarch64-none-linux-gnu with no regressions.
> 
> OK for trunk?

I have a couple of comments below asking for comments on the new code
you're adding, and one on an unrelated whitespace change, but otherwise
this patch is OK.

> gcc/
> 2017-06-07  Tamar Christina  <tamar.christina@arm.com>
> 
> 	* config/aarch64/aarch64.c
> 	(aarch64_internal_mov_immediate): Add new special pattern.
> 	* config/aarch64/aarch64.md (*movdi_aarch64):
> 	Add reg/32bit const mov case.
> 
> gcc/testsuite/
> 2017-06-07  Tamar Christina  <tamar.christina@arm.com>
> 
> 	* gcc.target/aarch64/int_mov_immediate_1.c: New.

> diff --git a/gcc/config/aarch64/aarch64.c b/gcc/config/aarch64/aarch64.c
> index a99a13460c2314ca9b40f82a466b6d492c49db97..e91586fa03c64b22c4c8efdf7b98d48c0086126d 100644
> --- a/gcc/config/aarch64/aarch64.c
> +++ b/gcc/config/aarch64/aarch64.c
> @@ -1794,6 +1794,27 @@ aarch64_internal_mov_immediate (rtx dest, rtx imm, bool generate,
>        return 1;
>      }

A comment similar to the one in your cover letter, describing what pattern
you are catching, would be useful for future reference.

> +  val2 = val & 0xffffffff;
> +  if (mode == DImode
> +      && aarch64_move_imm (val2, SImode)
> +      && (((val >> 32) & 0xffff) == 0 || (val >> 48) == 0))
> +    {
> +      if (generate)
> +	emit_insn (gen_rtx_SET (dest, GEN_INT (val2)));
> +
> +      /* Check if we have to emit a second instruction.  */

Likewise here, for which conditions would require a second instruction.

> +      if (val == val2)
> +	return 1;
> +
> +      i = (val >> 48) ? 48 : 32;
> +
> +      if (generate)
> +	 emit_insn (gen_insv_immdi (dest, GEN_INT (i),
> +				    GEN_INT ((val >> i) & 0xffff)));
> +
> +      return 2;
> +    }
> +
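
To make the new two-instruction path concrete, here is a hypothetical
stand-alone driver (not part of the patch) that simulates the sequence
chosen for f2's constant:

  #include <stdint.h>
  #include <stdio.h>

  int
  main (void)
  {
    uint64_t val = 0x11110000ffff6666ull;
    uint64_t val2 = val & 0xffffffff;  /* mov w0, 4294927974 (MOVN form)  */
    int i = (val >> 48) ? 48 : 32;     /* 0x1111 != 0, so i == 48 here  */
    uint64_t reg = val2;               /* a 32-bit write zero-extends  */

    /* movk x0, 0x1111, lsl 48: replace only bits [i, i + 16).  */
    reg = (reg & ~(0xffffull << i)) | (((val >> i) & 0xffff) << i);

    printf ("0x%016llx\n", (unsigned long long) reg);
    return 0;
  }

This prints 0x11110000ffff6666. Had the high 16 bits been zero (say
0x0000ffffffff6666), i would be 32 and the MOVK would patch bits [32,48)
instead.
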
>    if ((val >> 32) == 0 || mode == SImode)
>      {
>        if (generate)
> @@ -1810,7 +1831,6 @@ aarch64_internal_mov_immediate (rtx dest, rtx imm, bool generate,
>      }
>  
>    /* Remaining cases are all for DImode.  */
> -

Unrelated change?

>    mask = 0xffff;
>    zero_match = ((val & mask) == 0) + ((val & (mask << 16)) == 0) +
>      ((val & (mask << 32)) == 0) + ((val & (mask << 48)) == 0);
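
For context, zero_match counts how many of the four 16-bit chunks of val
are zero, which the surrounding code uses when choosing between MOVZ- and
MOVN-based sequences. A quick illustration (hypothetical helper mirroring
the quoted lines, not GCC code):

  #include <stdint.h>

  /* Count the zero 16-bit chunks of a 64-bit constant.  */
  static int
  count_zero_chunks (uint64_t val)
  {
    uint64_t mask = 0xffff;
    return ((val & mask) == 0) + ((val & (mask << 16)) == 0)
           + ((val & (mask << 32)) == 0) + ((val & (mask << 48)) == 0);
  }

  /* count_zero_chunks (0x11110000ffff6666ull) == 1 (the 0x0000 chunk).  */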

Thanks,
James

