This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.



Re: [PATCH 3/4][PR target/65697][5.1][Aarch64] Backport stronger barriers for __sync compare-and-swap builtins.


On Fri, Jun 26, 2015 at 01:08:50PM +0100, Matthew Wahab wrote:
> This patch backports the changes made to strengthen the barriers emitted for
> the __sync compare-and-swap builtins.
> 
> The trunk patch submission is at
> https://gcc.gnu.org/ml/gcc-patches/2015-05/msg01990.html
> The commit is at https://gcc.gnu.org/ml/gcc-cvs/2015-06/msg00077.html
> 
> Tested the series for aarch64-none-linux-gnu with check-gcc
> 
> Ok for the branch?
> Matthew

OK.

Thanks,
James

> 
> 2015-06-26  Matthew Wahab  <matthew.wahab@arm.com>
> 
> 	Backport from trunk.
> 	2015-06-01  Matthew Wahab  <matthew.wahab@arm.com>
> 
> 	PR target/65697
> 	* config/aarch64/aarch64.c (aarch64_split_compare_and_swap): Check
> 	for __sync memory models, emit initial loads and final barriers as
> 	appropriate.
> 
> 

> From 5fbfcc46e6eb2b8b61aa96c9c96da9a572bc4d12 Mon Sep 17 00:00:00 2001
> From: mwahab <mwahab@138bc75d-0d04-0410-961f-82ee72b054a4>
> Date: Mon, 1 Jun 2015 15:21:02 +0000
> Subject: [PATCH 3/4] [Aarch64][5.1] Strengthen barriers for sync-compare-swap
>  builtins
> 
> 	PR target/65697
> 	* config/aarch64/aarch64.c (aarch64_split_compare_and_swap): Check
> 	for __sync memory models, emit initial loads and final barriers as
> 	appropriate.
> 
> Change-Id: I65d8000c081d582246b81c7f3892c509a64b136c
> git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/trunk@223984 138bc75d-0d04-0410-961f-82ee72b054a4
> ---
>  gcc/config/aarch64/aarch64.c | 18 ++++++++++++++++--
>  1 file changed, 16 insertions(+), 2 deletions(-)
> 
> diff --git a/gcc/config/aarch64/aarch64.c b/gcc/config/aarch64/aarch64.c
> index 708fc23..59d2e3a 100644
> --- a/gcc/config/aarch64/aarch64.c
> +++ b/gcc/config/aarch64/aarch64.c
> @@ -9093,14 +9093,18 @@ aarch64_split_compare_and_swap (rtx operands[])
>    bool is_weak;
>    rtx_code_label *label1, *label2;
>    rtx x, cond;
> +  enum memmodel model;
> +  rtx model_rtx;
>  
>    rval = operands[0];
>    mem = operands[1];
>    oldval = operands[2];
>    newval = operands[3];
>    is_weak = (operands[4] != const0_rtx);
> +  model_rtx = operands[5];
>    scratch = operands[7];
>    mode = GET_MODE (mem);
> +  model = memmodel_from_int (INTVAL (model_rtx));
>  
>    label1 = NULL;
>    if (!is_weak)
> @@ -9110,7 +9114,13 @@ aarch64_split_compare_and_swap (rtx operands[])
>      }
>    label2 = gen_label_rtx ();
>  
> -  aarch64_emit_load_exclusive (mode, rval, mem, operands[5]);
> +  /* The initial load can be relaxed for a __sync operation since a final
> +     barrier will be emitted to stop code hoisting.  */
> +  if (is_mm_sync (model))
> +    aarch64_emit_load_exclusive (mode, rval, mem,
> +				 GEN_INT (MEMMODEL_RELAXED));
> +  else
> +    aarch64_emit_load_exclusive (mode, rval, mem, model_rtx);
>  
>    cond = aarch64_gen_compare_reg (NE, rval, oldval);
>    x = gen_rtx_NE (VOIDmode, cond, const0_rtx);
> @@ -9118,7 +9128,7 @@ aarch64_split_compare_and_swap (rtx operands[])
>  			    gen_rtx_LABEL_REF (Pmode, label2), pc_rtx);
>    aarch64_emit_unlikely_jump (gen_rtx_SET (VOIDmode, pc_rtx, x));
>  
> -  aarch64_emit_store_exclusive (mode, scratch, mem, newval, operands[5]);
> +  aarch64_emit_store_exclusive (mode, scratch, mem, newval, model_rtx);
>  
>    if (!is_weak)
>      {
> @@ -9135,6 +9145,10 @@ aarch64_split_compare_and_swap (rtx operands[])
>      }
>  
>    emit_label (label2);
> +
> +  /* Emit any final barrier needed for a __sync operation.  */
> +  if (is_mm_sync (model))
> +    aarch64_emit_post_barrier (model);
>  }
>  
>  /* Split an atomic operation.  */
> -- 
> 1.9.1
> 
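(Editorial note, not part of the archived message.)  For context, PR
target/65697 concerns the legacy __sync builtins, which GCC documents
as acting as full memory barriers.  A minimal illustration of the
builtin affected by this patch, with variable names invented for the
example:

  #include <stdio.h>

  static int lock;

  int
  main (void)
  {
    /* Atomically: if lock == 0, store 1; return the previous value.
       As a __sync builtin this must behave as a full barrier, so no
       memory access may be hoisted above or sunk below it.  */
    int old = __sync_val_compare_and_swap (&lock, 0, 1);
    printf ("previous value: %d\n", old);
    return 0;
  }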

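(Editorial note, not part of the archived message.)  The design choice
visible in the hunks above: for the C11-style __atomic memory models,
ordering is carried by the exclusive load and store themselves, while a
__sync operation additionally promises full-barrier semantics.  The
patch therefore emits a trailing barrier (aarch64_emit_post_barrier)
for __sync models and can in exchange relax the initial load-exclusive.
A rough comparison using the modern builtin, again with invented names:

  static int lock;

  void
  c11_style_cas (void)
  {
    int expected = 0;
    /* Ordering comes from the SEQ_CST models on the load/store pair;
       no extra trailing barrier is implied beyond them.  */
    __atomic_compare_exchange_n (&lock, &expected, 1, 0 /* strong */,
                                 __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
  }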
