This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.

[PATCH] Fix 65697. Add memory model support for stronger __sync operations.


There has been some debate over how strong the barriers for __sync operations are required to be; the discussion is documented in the PR.

Originally, __sync was supposed to be synonymous with SEQ_CST, but the language lawyers have since slackened the barrier-ness of SEQ_CST slightly. Under some circumstances it is possible to move loads or stores past a SEQ_CST barrier, which means that since __sync is documented as being a "full barrier", using SEQ_CST is technically not always the same thing. There are similar issues with ACQUIRE and __sync_lock_test_and_set.
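
As a rough illustration of the gap (this is my reading of the PR, with hypothetical variable names):

    /* Suppose SEQ_CST expands to a load-acquire/store-release pair, as on
       aarch64.  The acquire half only orders later accesses and the release
       half only orders earlier ones, so the two plain accesses below may
       both slide into the atomic region and be reordered with respect to
       each other -- something a documented "full barrier" must forbid.  */
    int x, before, after;

    void
    f (void)
    {
      before = 1;                    /* may sink below the acquire load */
      __sync_fetch_and_add (&x, 1);  /* documented as a full barrier */
      int r = after;                 /* may hoist above the release store */
      (void) r;
    }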

__sync and __atomic processing were previously merged such that __sync is now implemented in terms of __atomic under the covers, making unwinding this a bit tricky.

In any case, I settled on adding a bit to the memory model field indicating that the atomic call originated as a __sync. I used the upper bit of the 16-bit field and added specific entries in enum memmodel for the 3 possibilities: MEMMODEL_SYNC_SEQ_CST, MEMMODEL_SYNC_ACQUIRE, and MEMMODEL_SYNC_RELEASE. These are *not* exposed to the user, and are only created internally when expanding __sync built-ins.
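
For reference, the encoding condenses to this (excerpted from the coretypes.h hunk below; the base values come from the existing enum):

    #define MEMMODEL_SYNC (1<<15)

    MEMMODEL_SYNC_ACQUIRE = MEMMODEL_ACQUIRE | MEMMODEL_SYNC,  /* 2 | 0x8000 */
    MEMMODEL_SYNC_RELEASE = MEMMODEL_RELEASE | MEMMODEL_SYNC,  /* 3 | 0x8000 */
    MEMMODEL_SYNC_SEQ_CST = MEMMODEL_SEQ_CST | MEMMODEL_SYNC   /* 5 | 0x8000 */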

In order to make this transparent to targets which do not care (which is all of them except aarch64 right now), I provided access routines to check the model, and converted the generic and target code to use these routines instead of the existing masking and comparisons, i.e.:

    if ((model & MEMMODEL_MASK) == MEMMODEL_SEQ_CST)
becomes
    if (is_mm_seq_cst (model))

These routines ignore the sync bit, so is_mm_seq_cst, for instance, returns true for both MEMMODEL_SEQ_CST and MEMMODEL_SYNC_SEQ_CST, making the bit transparent to existing code. Ports like aarch64 that do care about the bit can check for it with is_mm_sync (model) or look for a specific model such as MEMMODEL_SYNC_SEQ_CST.
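
Concretely, with the tree.h accessors from the patch:

    is_mm_seq_cst (MEMMODEL_SEQ_CST);       /* true */
    is_mm_seq_cst (MEMMODEL_SYNC_SEQ_CST);  /* true: SYNC bit masked off */
    is_mm_sync (MEMMODEL_SEQ_CST);          /* false */
    is_mm_sync (MEMMODEL_SYNC_SEQ_CST);     /* true */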

This bootstraps on x86_64-unknown-linux-gnu with no new regressions. It has also been tested on aarch64, together with follow-up patches verifying that it enables the appropriate changes to be made in the target, again with no runtime regressions. I have also built all the targets in config-list.mk with no new compile errors. I hope I caught everything... :-)

OK for trunk?

Andrew

	PR target/65697
	* coretypes.h (MEMMODEL_SYNC, MEMMODEL_BASE_MASK): New macros.
	(enum memmodel): Add SYNC_{ACQUIRE,RELEASE,SEQ_CST}.
	* tree.h (memmodel_from_int, memmodel_base, is_mm_relaxed,
	is_mm_consume, is_mm_acquire, is_mm_release, is_mm_acq_rel,
	is_mm_seq_cst, is_mm_sync): New accessor functions.
	* builtins.c (expand_builtin_sync_operation,
	expand_builtin_compare_and_swap): Use MEMMODEL_SYNC_SEQ_CST.
	(expand_builtin_sync_lock_release): Use MEMMODEL_SYNC_RELEASE.
	(get_memmodel, expand_builtin_atomic_compare_exchange,
	expand_builtin_atomic_load, expand_builtin_atomic_store,
	expand_builtin_atomic_clear): Use new accessor routines.
	(expand_builtin_sync_synchronize): Use MEMMODEL_SYNC_SEQ_CST.
	* optabs.c (expand_compare_and_swap_loop): Use MEMMODEL_SYNC_SEQ_CST.
	(maybe_emit_sync_lock_test_and_set): Use new accessors and
	MEMMODEL_SYNC_ACQUIRE.
	(expand_sync_lock_test_and_set): Use MEMMODEL_SYNC_ACQUIRE.
	(expand_mem_thread_fence, expand_mem_signal_fence, expand_atomic_load,
	expand_atomic_store): Use new accessors.
	* emit-rtl.c (need_atomic_barrier_p): Add additional enum cases.
	* tsan.c (instrument_builtin_call): Check the base memory model
	against MEMMODEL_LAST rather than MEMMODEL_SEQ_CST.
	* config/aarch64/aarch64.c (aarch64_expand_compare_and_swap): Use new
	accessors.
	* config/aarch64/atomics.md (atomic_load<mode>, atomic_store<mode>,
	aarch64_load_exclusive<mode>, aarch64_store_exclusive<mode>,
	mem_thread_fence, *dmb): Likewise.
	* config/alpha/alpha.c (alpha_split_compare_and_swap,
	alpha_split_compare_and_swap_12): Likewise.
	* config/arm/arm.c (arm_expand_compare_and_swap,
	arm_split_compare_and_swap, arm_split_atomic_op): Likewise.
	* config/arm/sync.md (atomic_load<mode>, atomic_store<mode>,
	atomic_loaddi): Likewise.
	* config/i386/i386.c (ix86_destroy_cost_data, ix86_memmodel_check):
	Likewise.
	* config/i386/sync.md (mem_thread_fence, atomic_store<mode>): Likewise.
	* config/ia64/ia64.c (ia64_expand_atomic_op): Add new memmodel cases and
	use new accessors.
	* config/ia64/sync.md (mem_thread_fence, atomic_load<mode>,
	atomic_store<mode>, atomic_compare_and_swap<mode>,
	atomic_exchange<mode>): Use new accessors.
	* config/mips/mips.c (mips_process_sync_loop): Likewise.
	* config/pa/pa.md (atomic_loaddi, atomic_storedi): Likewise.
	* config/rs6000/rs6000.c (rs6000_pre_atomic_barrier,
	rs6000_post_atomic_barrier): Add new cases.
	(rs6000_expand_atomic_compare_and_swap): Use new accessors.
	* config/rs6000/sync.md (mem_thread_fence): Add new cases.
	(atomic_load<mode>): Add new cases and use new accessors.
	(store_quadpti): Add new cases.
	* config/s390/s390.md (mem_thread_fence, atomic_store<mode>): Use new
	accessors.
	* config/sparc/sparc.c (sparc_emit_membar_for_model): Use new accessors.

	* doc/extend.texi: Update docs to indicate 16 bits are used for memory
	model, not 8.

	* c-family/c-common.c (get_atomic_generic_size): Use memmodel_base.


Index: coretypes.h
===================================================================
*** coretypes.h	(revision 222579)
--- coretypes.h	(working copy)
*************** enum function_class {
*** 263,268 ****
--- 263,280 ----
    function_c11_misc
  };
  
+ /* Assume that higher bits are target dependent.  */
+ #define MEMMODEL_MASK ((1<<16)-1)
+ 
+ /* Legacy sync operations set this upper flag in the memory model.  This allows
+    targets that need to do something stronger for sync operations to
+    differentiate with their target patterns and issue a more appropriate insn
+    sequence.  See bugzilla 65697 for background.  */
+ #define MEMMODEL_SYNC (1<<15)
+ 
+ /* Memory model without SYNC bit for targets/operations that do not care.  */
+ #define MEMMODEL_BASE_MASK (MEMMODEL_SYNC-1)
+ 
  /* Memory model types for the __atomic* builtins. 
     This must match the order in libstdc++-v3/include/bits/atomic_base.h.  */
  enum memmodel
*************** enum memmodel
*** 273,284 ****
    MEMMODEL_RELEASE = 3,
    MEMMODEL_ACQ_REL = 4,
    MEMMODEL_SEQ_CST = 5,
!   MEMMODEL_LAST = 6
  };
  
- /* Suppose that higher bits are target dependent. */
- #define MEMMODEL_MASK ((1<<16)-1)
- 
  /* Support for user-provided GGC and PCH markers.  The first parameter
     is a pointer to a pointer, the second a cookie.  */
  typedef void (*gt_pointer_operator) (void *, void *);
--- 285,296 ----
    MEMMODEL_RELEASE = 3,
    MEMMODEL_ACQ_REL = 4,
    MEMMODEL_SEQ_CST = 5,
!   MEMMODEL_LAST = 6,
!   MEMMODEL_SYNC_ACQUIRE = MEMMODEL_ACQUIRE | MEMMODEL_SYNC,
!   MEMMODEL_SYNC_RELEASE = MEMMODEL_RELEASE | MEMMODEL_SYNC,
!   MEMMODEL_SYNC_SEQ_CST = MEMMODEL_SEQ_CST | MEMMODEL_SYNC
  };
  
  /* Support for user-provided GGC and PCH markers.  The first parameter
     is a pointer to a pointer, the second a cookie.  */
  typedef void (*gt_pointer_operator) (void *, void *);
Index: tree.h
===================================================================
*** tree.h	(revision 222579)
--- tree.h	(working copy)
*************** extern void assign_assembler_name_if_nee
*** 4378,4383 ****
--- 4378,4446 ----
  extern void warn_deprecated_use (tree, tree);
  extern void cache_integer_cst (tree);
  
+ /* Return the memory model from a host integer.  */
+ static inline enum memmodel
+ memmodel_from_int (unsigned HOST_WIDE_INT val)
+ {
+   return (enum memmodel) (val & MEMMODEL_MASK);
+ }
+ 
+ /* Return the base memory model from a host integer.  */
+ static inline enum memmodel
+ memmodel_base (unsigned HOST_WIDE_INT val)
+ {
+   return (enum memmodel) (val & MEMMODEL_BASE_MASK);
+ }
+ 
+ /* Return TRUE if the memory model is RELAXED.  */
+ static inline bool
+ is_mm_relaxed (enum memmodel model)
+ {
+   return (model & MEMMODEL_BASE_MASK) == MEMMODEL_RELAXED;
+ }
+ 
+ /* Return TRUE if the memory model is CONSUME.  */
+ static inline bool
+ is_mm_consume (enum memmodel model)
+ {
+   return (model & MEMMODEL_BASE_MASK) == MEMMODEL_CONSUME;
+ }
+ 
+ /* Return TRUE if the memory model is ACQUIRE.  */
+ static inline bool
+ is_mm_acquire (enum memmodel model)
+ {
+   return (model & MEMMODEL_BASE_MASK) == MEMMODEL_ACQUIRE;
+ }
+ 
+ /* Return TRUE if the memory model is RELEASE.  */
+ static inline bool
+ is_mm_release (enum memmodel model)
+ {
+   return (model & MEMMODEL_BASE_MASK) == MEMMODEL_RELEASE;
+ }
+ 
+ /* Return TRUE if the memory model is ACQ_REL.  */
+ static inline bool
+ is_mm_acq_rel (enum memmodel model)
+ {
+   return (model & MEMMODEL_BASE_MASK) == MEMMODEL_ACQ_REL;
+ }
+ 
+ /* Return TRUE if the memory model is SEQ_CST.  */
+ static inline bool
+ is_mm_seq_cst (enum memmodel model)
+ {
+   return (model & MEMMODEL_BASE_MASK) == MEMMODEL_SEQ_CST;
+ }
+ 
+ /* Return TRUE if the memory model is a SYNC variant.  */
+ static inline bool
+ is_mm_sync (enum memmodel model)
+ {
+   return (model & MEMMODEL_SYNC);
+ }
+ 
  /* Compare and hash for any structure which begins with a canonical
     pointer.  Assumes all pointers are interchangeable, which is sort
     of already assumed by gcc elsewhere IIRC.  */
Index: builtins.c
===================================================================
*** builtins.c	(revision 222579)
--- builtins.c	(working copy)
*************** expand_builtin_sync_operation (machine_m
*** 5271,5277 ****
    mem = get_builtin_sync_mem (CALL_EXPR_ARG (exp, 0), mode);
    val = expand_expr_force_mode (CALL_EXPR_ARG (exp, 1), mode);
  
!   return expand_atomic_fetch_op (target, mem, val, code, MEMMODEL_SEQ_CST,
  				 after);
  }
  
--- 5271,5277 ----
    mem = get_builtin_sync_mem (CALL_EXPR_ARG (exp, 0), mode);
    val = expand_expr_force_mode (CALL_EXPR_ARG (exp, 1), mode);
  
!   return expand_atomic_fetch_op (target, mem, val, code, MEMMODEL_SYNC_SEQ_CST,
  				 after);
  }
  
*************** expand_builtin_compare_and_swap (machine
*** 5301,5308 ****
  	poval = &target;
      }
    if (!expand_atomic_compare_and_swap (pbool, poval, mem, old_val, new_val,
! 				       false, MEMMODEL_SEQ_CST,
! 				       MEMMODEL_SEQ_CST))
      return NULL_RTX;
  
    return target;
--- 5301,5308 ----
  	poval = &target;
      }
    if (!expand_atomic_compare_and_swap (pbool, poval, mem, old_val, new_val,
! 				       false, MEMMODEL_SYNC_SEQ_CST,
! 				       MEMMODEL_SYNC_SEQ_CST))
      return NULL_RTX;
  
    return target;
*************** expand_builtin_sync_lock_release (machin
*** 5337,5343 ****
    /* Expand the operands.  */
    mem = get_builtin_sync_mem (CALL_EXPR_ARG (exp, 0), mode);
  
!   expand_atomic_store (mem, const0_rtx, MEMMODEL_RELEASE, true);
  }
  
  /* Given an integer representing an ``enum memmodel'', verify its
--- 5337,5343 ----
    /* Expand the operands.  */
    mem = get_builtin_sync_mem (CALL_EXPR_ARG (exp, 0), mode);
  
!   expand_atomic_store (mem, const0_rtx, MEMMODEL_SYNC_RELEASE, true);
  }
  
  /* Given an integer representing an ``enum memmodel'', verify its
*************** get_memmodel (tree exp)
*** 5366,5372 ****
        return MEMMODEL_SEQ_CST;
      }
  
!   if ((INTVAL (op) & MEMMODEL_MASK) >= MEMMODEL_LAST)
      {
        warning (OPT_Winvalid_memory_model,
  	       "invalid memory model argument to builtin");
--- 5366,5373 ----
        return MEMMODEL_SEQ_CST;
      }
  
!   /* Should never see a user-supplied SYNC memory model, so >= LAST works. */
!   if (memmodel_base (val) >= MEMMODEL_LAST)
      {
        warning (OPT_Winvalid_memory_model,
  	       "invalid memory model argument to builtin");
*************** expand_builtin_atomic_compare_exchange (
*** 5433,5440 ****
        success = MEMMODEL_SEQ_CST;
      }
   
!   if ((failure & MEMMODEL_MASK) == MEMMODEL_RELEASE
!       || (failure & MEMMODEL_MASK) == MEMMODEL_ACQ_REL)
      {
        warning (OPT_Winvalid_memory_model,
  	       "invalid failure memory model for "
--- 5434,5440 ----
        success = MEMMODEL_SEQ_CST;
      }
   
!   if (is_mm_release (failure) || is_mm_acq_rel (failure))
      {
        warning (OPT_Winvalid_memory_model,
  	       "invalid failure memory model for "
*************** expand_builtin_atomic_load (machine_mode
*** 5496,5503 ****
    enum memmodel model;
  
    model = get_memmodel (CALL_EXPR_ARG (exp, 1));
!   if ((model & MEMMODEL_MASK) == MEMMODEL_RELEASE
!       || (model & MEMMODEL_MASK) == MEMMODEL_ACQ_REL)
      {
        warning (OPT_Winvalid_memory_model,
  	       "invalid memory model for %<__atomic_load%>");
--- 5496,5502 ----
    enum memmodel model;
  
    model = get_memmodel (CALL_EXPR_ARG (exp, 1));
!   if (is_mm_release (model) || is_mm_acq_rel (model))
      {
        warning (OPT_Winvalid_memory_model,
  	       "invalid memory model for %<__atomic_load%>");
*************** expand_builtin_atomic_store (machine_mod
*** 5526,5534 ****
    enum memmodel model;
  
    model = get_memmodel (CALL_EXPR_ARG (exp, 2));
!   if ((model & MEMMODEL_MASK) != MEMMODEL_RELAXED
!       && (model & MEMMODEL_MASK) != MEMMODEL_SEQ_CST
!       && (model & MEMMODEL_MASK) != MEMMODEL_RELEASE)
      {
        warning (OPT_Winvalid_memory_model,
  	       "invalid memory model for %<__atomic_store%>");
--- 5525,5532 ----
    enum memmodel model;
  
    model = get_memmodel (CALL_EXPR_ARG (exp, 2));
!   if (!(is_mm_relaxed (model) || is_mm_seq_cst (model)
! 	|| is_mm_release (model)))
      {
        warning (OPT_Winvalid_memory_model,
  	       "invalid memory model for %<__atomic_store%>");
*************** expand_builtin_atomic_clear (tree exp)
*** 5635,5643 ****
    mem = get_builtin_sync_mem (CALL_EXPR_ARG (exp, 0), mode);
    model = get_memmodel (CALL_EXPR_ARG (exp, 1));
  
!   if ((model & MEMMODEL_MASK) == MEMMODEL_CONSUME
!       || (model & MEMMODEL_MASK) == MEMMODEL_ACQUIRE
!       || (model & MEMMODEL_MASK) == MEMMODEL_ACQ_REL)
      {
        warning (OPT_Winvalid_memory_model,
  	       "invalid memory model for %<__atomic_store%>");
--- 5633,5639 ----
    mem = get_builtin_sync_mem (CALL_EXPR_ARG (exp, 0), mode);
    model = get_memmodel (CALL_EXPR_ARG (exp, 1));
  
!   if (is_mm_consume (model) || is_mm_acquire (model) || is_mm_acq_rel (model))
      {
        warning (OPT_Winvalid_memory_model,
  	       "invalid memory model for %<__atomic_store%>");
*************** expand_builtin_atomic_signal_fence (tree
*** 5833,5839 ****
  static void
  expand_builtin_sync_synchronize (void)
  {
!   expand_mem_thread_fence (MEMMODEL_SEQ_CST);
  }
  
  static rtx
--- 5829,5835 ----
  static void
  expand_builtin_sync_synchronize (void)
  {
!   expand_mem_thread_fence (MEMMODEL_SYNC_SEQ_CST);
  }
  
  static rtx
Index: optabs.c
===================================================================
*** optabs.c	(revision 222579)
--- optabs.c	(working copy)
*************** expand_compare_and_swap_loop (rtx mem, r
*** 7178,7184 ****
    success = NULL_RTX;
    oldval = cmp_reg;
    if (!expand_atomic_compare_and_swap (&success, &oldval, mem, old_reg,
! 				       new_reg, false, MEMMODEL_SEQ_CST,
  				       MEMMODEL_RELAXED))
      return false;
  
--- 7178,7184 ----
    success = NULL_RTX;
    oldval = cmp_reg;
    if (!expand_atomic_compare_and_swap (&success, &oldval, mem, old_reg,
! 				       new_reg, false, MEMMODEL_SYNC_SEQ_CST,
  				       MEMMODEL_RELAXED))
      return false;
  
*************** maybe_emit_sync_lock_test_and_set (rtx t
*** 7239,7247 ****
       exists, and the memory model is stronger than acquire, add a release 
       barrier before the instruction.  */
  
!   if ((model & MEMMODEL_MASK) == MEMMODEL_SEQ_CST
!       || (model & MEMMODEL_MASK) == MEMMODEL_RELEASE
!       || (model & MEMMODEL_MASK) == MEMMODEL_ACQ_REL)
      expand_mem_thread_fence (model);
  
    if (icode != CODE_FOR_nothing)
--- 7239,7245 ----
       exists, and the memory model is stronger than acquire, add a release 
       barrier before the instruction.  */
  
!   if (is_mm_seq_cst (model) || is_mm_release (model) || is_mm_acq_rel (model))
      expand_mem_thread_fence (model);
  
    if (icode != CODE_FOR_nothing)
*************** expand_sync_lock_test_and_set (rtx targe
*** 7348,7358 ****
    rtx ret;
  
    /* Try an atomic_exchange first.  */
!   ret = maybe_emit_atomic_exchange (target, mem, val, MEMMODEL_ACQUIRE);
    if (ret)
      return ret;
  
!   ret = maybe_emit_sync_lock_test_and_set (target, mem, val, MEMMODEL_ACQUIRE);
    if (ret)
      return ret;
  
--- 7346,7357 ----
    rtx ret;
  
    /* Try an atomic_exchange first.  */
!   ret = maybe_emit_atomic_exchange (target, mem, val, MEMMODEL_SYNC_ACQUIRE);
    if (ret)
      return ret;
  
!   ret = maybe_emit_sync_lock_test_and_set (target, mem, val,
! 					   MEMMODEL_SYNC_ACQUIRE);
    if (ret)
      return ret;
  
*************** expand_sync_lock_test_and_set (rtx targe
*** 7363,7369 ****
    /* If there are no other options, try atomic_test_and_set if the value
       being stored is 1.  */
    if (val == const1_rtx)
!     ret = maybe_emit_atomic_test_and_set (target, mem, MEMMODEL_ACQUIRE);
  
    return ret;
  }
--- 7362,7368 ----
    /* If there are no other options, try atomic_test_and_set if the value
       being stored is 1.  */
    if (val == const1_rtx)
!     ret = maybe_emit_atomic_test_and_set (target, mem, MEMMODEL_SYNC_ACQUIRE);
  
    return ret;
  }
*************** expand_mem_thread_fence (enum memmodel m
*** 7620,7626 ****
  {
    if (HAVE_mem_thread_fence)
      emit_insn (gen_mem_thread_fence (GEN_INT (model)));
!   else if ((model & MEMMODEL_MASK) != MEMMODEL_RELAXED)
      {
        if (HAVE_memory_barrier)
  	emit_insn (gen_memory_barrier ());
--- 7619,7625 ----
  {
    if (HAVE_mem_thread_fence)
      emit_insn (gen_mem_thread_fence (GEN_INT (model)));
!   else if (!is_mm_relaxed (model))
      {
        if (HAVE_memory_barrier)
  	emit_insn (gen_memory_barrier ());
*************** expand_mem_signal_fence (enum memmodel m
*** 7644,7650 ****
  {
    if (HAVE_mem_signal_fence)
      emit_insn (gen_mem_signal_fence (GEN_INT (model)));
!   else if ((model & MEMMODEL_MASK) != MEMMODEL_RELAXED)
      {
        /* By default targets are coherent between a thread and the signal
  	 handler running on the same thread.  Thus this really becomes a
--- 7643,7649 ----
  {
    if (HAVE_mem_signal_fence)
      emit_insn (gen_mem_signal_fence (GEN_INT (model)));
!   else if (!is_mm_relaxed (model))
      {
        /* By default targets are coherent between a thread and the signal
  	 handler running on the same thread.  Thus this really becomes a
*************** expand_atomic_load (rtx target, rtx mem,
*** 7699,7705 ****
      target = gen_reg_rtx (mode);
  
    /* For SEQ_CST, emit a barrier before the load.  */
!   if ((model & MEMMODEL_MASK) == MEMMODEL_SEQ_CST)
      expand_mem_thread_fence (model);
  
    emit_move_insn (target, mem);
--- 7698,7704 ----
      target = gen_reg_rtx (mode);
  
    /* For SEQ_CST, emit a barrier before the load.  */
!   if (is_mm_seq_cst (model))
      expand_mem_thread_fence (model);
  
    emit_move_insn (target, mem);
*************** expand_atomic_store (rtx mem, rtx val, e
*** 7745,7751 ****
  	  if (maybe_expand_insn (icode, 2, ops))
  	    {
  	      /* lock_release is only a release barrier.  */
! 	      if ((model & MEMMODEL_MASK) == MEMMODEL_SEQ_CST)
  		expand_mem_thread_fence (model);
  	      return const0_rtx;
  	    }
--- 7744,7750 ----
  	  if (maybe_expand_insn (icode, 2, ops))
  	    {
  	      /* lock_release is only a release barrier.  */
! 	      if (is_mm_seq_cst (model))
  		expand_mem_thread_fence (model);
  	      return const0_rtx;
  	    }
*************** expand_atomic_store (rtx mem, rtx val, e
*** 7772,7778 ****
    emit_move_insn (mem, val);
  
    /* For SEQ_CST, also emit a barrier after the store.  */
!   if ((model & MEMMODEL_MASK) == MEMMODEL_SEQ_CST)
      expand_mem_thread_fence (model);
  
    return const0_rtx;
--- 7771,7777 ----
    emit_move_insn (mem, val);
  
    /* For SEQ_CST, also emit a barrier after the store.  */
!   if (is_mm_seq_cst (model))
      expand_mem_thread_fence (model);
  
    return const0_rtx;
Index: emit-rtl.c
===================================================================
*** emit-rtl.c	(revision 222579)
--- emit-rtl.c	(working copy)
*************** need_atomic_barrier_p (enum memmodel mod
*** 6296,6306 ****
--- 6296,6309 ----
      case MEMMODEL_CONSUME:
        return false;
      case MEMMODEL_RELEASE:
+     case MEMMODEL_SYNC_RELEASE:
        return pre;
      case MEMMODEL_ACQUIRE:
+     case MEMMODEL_SYNC_ACQUIRE:
        return !pre;
      case MEMMODEL_ACQ_REL:
      case MEMMODEL_SEQ_CST:
+     case MEMMODEL_SYNC_SEQ_CST:
        return true;
      default:
        gcc_unreachable ();
Index: tsan.c
===================================================================
*** tsan.c	(revision 222579)
--- tsan.c	(working copy)
*************** instrument_builtin_call (gimple_stmt_ite
*** 535,541 ****
  	  case fetch_op:
  	    last_arg = gimple_call_arg (stmt, num - 1);
  	    if (!tree_fits_uhwi_p (last_arg)
! 		|| tree_to_uhwi (last_arg) > MEMMODEL_SEQ_CST)
  	      return;
  	    gimple_call_set_fndecl (stmt, decl);
  	    update_stmt (stmt);
--- 535,541 ----
  	  case fetch_op:
  	    last_arg = gimple_call_arg (stmt, num - 1);
  	    if (!tree_fits_uhwi_p (last_arg)
! 		|| memmodel_base (tree_to_uhwi (last_arg)) >= MEMMODEL_LAST)
  	      return;
  	    gimple_call_set_fndecl (stmt, decl);
  	    update_stmt (stmt);
*************** instrument_builtin_call (gimple_stmt_ite
*** 600,609 ****
  	    for (j = 0; j < 6; j++)
  	      args[j] = gimple_call_arg (stmt, j);
  	    if (!tree_fits_uhwi_p (args[4])
! 		|| tree_to_uhwi (args[4]) > MEMMODEL_SEQ_CST)
  	      return;
  	    if (!tree_fits_uhwi_p (args[5])
! 		|| tree_to_uhwi (args[5]) > MEMMODEL_SEQ_CST)
  	      return;
  	    update_gimple_call (gsi, decl, 5, args[0], args[1], args[2],
  				args[4], args[5]);
--- 600,609 ----
  	    for (j = 0; j < 6; j++)
  	      args[j] = gimple_call_arg (stmt, j);
  	    if (!tree_fits_uhwi_p (args[4])
! 		|| memmodel_base (tree_to_uhwi (args[4])) >= MEMMODEL_LAST)
  	      return;
  	    if (!tree_fits_uhwi_p (args[5])
! 		|| memmodel_base (tree_to_uhwi (args[5])) >= MEMMODEL_LAST)
  	      return;
  	    update_gimple_call (gsi, decl, 5, args[0], args[1], args[2],
  				args[4], args[5]);
Index: c-family/c-common.c
===================================================================
*** c-family/c-common.c	(revision 222579)
--- c-family/c-common.c	(working copy)
*************** get_atomic_generic_size (location_t loc,
*** 10767,10773 ****
        if (TREE_CODE (p) == INTEGER_CST)
          {
  	  int i = tree_to_uhwi (p);
! 	  if (i < 0 || (i & MEMMODEL_MASK) >= MEMMODEL_LAST)
  	    {
  	      warning_at (loc, OPT_Winvalid_memory_model,
  			  "invalid memory model argument %d of %qE", x + 1,
--- 10767,10773 ----
        if (TREE_CODE (p) == INTEGER_CST)
          {
  	  int i = tree_to_uhwi (p);
! 	  if (i < 0 || (memmodel_base (i) >= MEMMODEL_LAST))
  	    {
  	      warning_at (loc, OPT_Winvalid_memory_model,
  			  "invalid memory model argument %d of %qE", x + 1,
Index: config/aarch64/aarch64.c
===================================================================
*** config/aarch64/aarch64.c	(revision 222579)
--- config/aarch64/aarch64.c	(working copy)
*************** aarch64_expand_compare_and_swap (rtx ope
*** 9027,9034 ****
       unlikely event of fail being ACQUIRE and succ being RELEASE we need to
       promote succ to ACQ_REL so that we don't lose the acquire semantics.  */
  
!   if (INTVAL (mod_f) == MEMMODEL_ACQUIRE
!       && INTVAL (mod_s) == MEMMODEL_RELEASE)
      mod_s = GEN_INT (MEMMODEL_ACQ_REL);
  
    switch (mode)
--- 9027,9034 ----
       unlikely event of fail being ACQUIRE and succ being RELEASE we need to
       promote succ to ACQ_REL so that we don't lose the acquire semantics.  */
  
!   if (is_mm_acquire (memmodel_from_int (INTVAL (mod_f)))
!       && is_mm_release (memmodel_from_int (INTVAL (mod_s))))
      mod_s = GEN_INT (MEMMODEL_ACQ_REL);
  
    switch (mode)
Index: config/aarch64/atomics.md
===================================================================
*** config/aarch64/atomics.md	(revision 222579)
--- config/aarch64/atomics.md	(working copy)
***************
*** 260,269 ****
        UNSPECV_LDA))]
    ""
    {
!     enum memmodel model = (enum memmodel) INTVAL (operands[2]);
!     if (model == MEMMODEL_RELAXED
! 	|| model == MEMMODEL_CONSUME
! 	|| model == MEMMODEL_RELEASE)
        return "ldr<atomic_sfx>\t%<w>0, %1";
      else
        return "ldar<atomic_sfx>\t%<w>0, %1";
--- 260,267 ----
        UNSPECV_LDA))]
    ""
    {
!     enum memmodel model = memmodel_from_int (INTVAL (operands[2]));
!     if (is_mm_relaxed (model) || is_mm_consume (model) || is_mm_release (model))
        return "ldr<atomic_sfx>\t%<w>0, %1";
      else
        return "ldar<atomic_sfx>\t%<w>0, %1";
***************
*** 278,287 ****
        UNSPECV_STL))]
    ""
    {
!     enum memmodel model = (enum memmodel) INTVAL (operands[2]);
!     if (model == MEMMODEL_RELAXED
! 	|| model == MEMMODEL_CONSUME
! 	|| model == MEMMODEL_ACQUIRE)
        return "str<atomic_sfx>\t%<w>1, %0";
      else
        return "stlr<atomic_sfx>\t%<w>1, %0";
--- 276,283 ----
        UNSPECV_STL))]
    ""
    {
!     enum memmodel model = memmodel_from_int (INTVAL (operands[2]));
!     if (is_mm_relaxed (model) || is_mm_consume (model) || is_mm_acquire (model))
        return "str<atomic_sfx>\t%<w>1, %0";
      else
        return "stlr<atomic_sfx>\t%<w>1, %0";
***************
*** 297,306 ****
  	UNSPECV_LX)))]
    ""
    {
!     enum memmodel model = (enum memmodel) INTVAL (operands[2]);
!     if (model == MEMMODEL_RELAXED
! 	|| model == MEMMODEL_CONSUME
! 	|| model == MEMMODEL_RELEASE)
        return "ldxr<atomic_sfx>\t%w0, %1";
      else
        return "ldaxr<atomic_sfx>\t%w0, %1";
--- 293,300 ----
  	UNSPECV_LX)))]
    ""
    {
!     enum memmodel model = memmodel_from_int (INTVAL (operands[2]));
!     if (is_mm_relaxed (model) || is_mm_consume (model) || is_mm_release (model))
        return "ldxr<atomic_sfx>\t%w0, %1";
      else
        return "ldaxr<atomic_sfx>\t%w0, %1";
***************
*** 315,324 ****
        UNSPECV_LX))]
    ""
    {
!     enum memmodel model = (enum memmodel) INTVAL (operands[2]);
!     if (model == MEMMODEL_RELAXED
! 	|| model == MEMMODEL_CONSUME
! 	|| model == MEMMODEL_RELEASE)
        return "ldxr\t%<w>0, %1";
      else
        return "ldaxr\t%<w>0, %1";
--- 309,316 ----
        UNSPECV_LX))]
    ""
    {
!     enum memmodel model = memmodel_from_int (INTVAL (operands[2]));
!     if (is_mm_relaxed (model) || is_mm_consume (model) || is_mm_release (model))
        return "ldxr\t%<w>0, %1";
      else
        return "ldaxr\t%<w>0, %1";
***************
*** 335,344 ****
        UNSPECV_SX))]
    ""
    {
!     enum memmodel model = (enum memmodel) INTVAL (operands[3]);
!     if (model == MEMMODEL_RELAXED
! 	|| model == MEMMODEL_CONSUME
! 	|| model == MEMMODEL_ACQUIRE)
        return "stxr<atomic_sfx>\t%w0, %<w>2, %1";
      else
        return "stlxr<atomic_sfx>\t%w0, %<w>2, %1";
--- 327,334 ----
        UNSPECV_SX))]
    ""
    {
!     enum memmodel model = memmodel_from_int (INTVAL (operands[3]));
!     if (is_mm_relaxed (model) || is_mm_consume (model) || is_mm_acquire (model))
        return "stxr<atomic_sfx>\t%w0, %<w>2, %1";
      else
        return "stlxr<atomic_sfx>\t%w0, %<w>2, %1";
***************
*** 349,356 ****
    [(match_operand:SI 0 "const_int_operand" "")]
    ""
    {
!     enum memmodel model = (enum memmodel) INTVAL (operands[0]);
!     if (model != MEMMODEL_RELAXED && model != MEMMODEL_CONSUME)
        emit_insn (gen_dmb (operands[0]));
      DONE;
    }
--- 339,346 ----
    [(match_operand:SI 0 "const_int_operand" "")]
    ""
    {
!     enum memmodel model = memmodel_from_int (INTVAL (operands[0]));
!     if (!(is_mm_relaxed (model) || is_mm_consume (model)))
        emit_insn (gen_dmb (operands[0]));
      DONE;
    }
***************
*** 373,380 ****
       UNSPEC_MB))]
    ""
    {
!     enum memmodel model = (enum memmodel) INTVAL (operands[1]);
!     if (model == MEMMODEL_ACQUIRE)
        return "dmb\\tishld";
      else
        return "dmb\\tish";
--- 363,370 ----
       UNSPEC_MB))]
    ""
    {
!     enum memmodel model = memmodel_from_int (INTVAL (operands[1]));
!     if (is_mm_acquire (model))
        return "dmb\\tishld";
      else
        return "dmb\\tish";
Index: config/alpha/alpha.c
===================================================================
*** config/alpha/alpha.c	(revision 222579)
--- config/alpha/alpha.c	(working copy)
*************** alpha_split_compare_and_swap (rtx operan
*** 4542,4549 ****
    oldval = operands[3];
    newval = operands[4];
    is_weak = (operands[5] != const0_rtx);
!   mod_s = (enum memmodel) INTVAL (operands[6]);
!   mod_f = (enum memmodel) INTVAL (operands[7]);
    mode = GET_MODE (mem);
  
    alpha_pre_atomic_barrier (mod_s);
--- 4542,4549 ----
    oldval = operands[3];
    newval = operands[4];
    is_weak = (operands[5] != const0_rtx);
!   mod_s = memmodel_from_int (INTVAL (operands[6]));
!   mod_f = memmodel_from_int (INTVAL (operands[7]));
    mode = GET_MODE (mem);
  
    alpha_pre_atomic_barrier (mod_s);
*************** alpha_split_compare_and_swap (rtx operan
*** 4581,4592 ****
        emit_unlikely_jump (x, label1);
      }
  
!   if (mod_f != MEMMODEL_RELAXED)
      emit_label (XEXP (label2, 0));
  
    alpha_post_atomic_barrier (mod_s);
  
!   if (mod_f == MEMMODEL_RELAXED)
      emit_label (XEXP (label2, 0));
  }
  
--- 4581,4592 ----
        emit_unlikely_jump (x, label1);
      }
  
!   if (!is_mm_relaxed (mod_f))
      emit_label (XEXP (label2, 0));
  
    alpha_post_atomic_barrier (mod_s);
  
!   if (is_mm_relaxed (mod_f))
      emit_label (XEXP (label2, 0));
  }
  
*************** alpha_split_compare_and_swap_12 (rtx ope
*** 4647,4654 ****
    newval = operands[4];
    align = operands[5];
    is_weak = (operands[6] != const0_rtx);
!   mod_s = (enum memmodel) INTVAL (operands[7]);
!   mod_f = (enum memmodel) INTVAL (operands[8]);
    scratch = operands[9];
    mode = GET_MODE (orig_mem);
    addr = XEXP (orig_mem, 0);
--- 4647,4654 ----
    newval = operands[4];
    align = operands[5];
    is_weak = (operands[6] != const0_rtx);
!   mod_s = memmodel_from_int (INTVAL (operands[7]));
!   mod_f = memmodel_from_int (INTVAL (operands[8]));
    scratch = operands[9];
    mode = GET_MODE (orig_mem);
    addr = XEXP (orig_mem, 0);
*************** alpha_split_compare_and_swap_12 (rtx ope
*** 4700,4711 ****
        emit_unlikely_jump (x, label1);
      }
  
!   if (mod_f != MEMMODEL_RELAXED)
      emit_label (XEXP (label2, 0));
  
    alpha_post_atomic_barrier (mod_s);
  
!   if (mod_f == MEMMODEL_RELAXED)
      emit_label (XEXP (label2, 0));
  }
  
--- 4700,4711 ----
        emit_unlikely_jump (x, label1);
      }
  
!   if (!is_mm_relaxed (mod_f))
      emit_label (XEXP (label2, 0));
  
    alpha_post_atomic_barrier (mod_s);
  
!   if (is_mm_relaxed (mod_f))
      emit_label (XEXP (label2, 0));
  }
  
Index: config/arm/arm.c
===================================================================
*** config/arm/arm.c	(revision 222579)
--- config/arm/arm.c	(working copy)
*************** arm_expand_compare_and_swap (rtx operand
*** 27447,27454 ****
       promote succ to ACQ_REL so that we don't lose the acquire semantics.  */
  
    if (TARGET_HAVE_LDACQ
!       && INTVAL (mod_f) == MEMMODEL_ACQUIRE
!       && INTVAL (mod_s) == MEMMODEL_RELEASE)
      mod_s = GEN_INT (MEMMODEL_ACQ_REL);
  
    switch (mode)
--- 27447,27454 ----
       promote succ to ACQ_REL so that we don't lose the acquire semantics.  */
  
    if (TARGET_HAVE_LDACQ
!       && is_mm_acquire (memmodel_from_int (INTVAL (mod_f)))
!       && is_mm_release (memmodel_from_int (INTVAL (mod_s))))
      mod_s = GEN_INT (MEMMODEL_ACQ_REL);
  
    switch (mode)
*************** arm_split_compare_and_swap (rtx operands
*** 27521,27540 ****
    oldval = operands[2];
    newval = operands[3];
    is_weak = (operands[4] != const0_rtx);
!   mod_s = (enum memmodel) INTVAL (operands[5]);
!   mod_f = (enum memmodel) INTVAL (operands[6]);
    scratch = operands[7];
    mode = GET_MODE (mem);
  
    bool use_acquire = TARGET_HAVE_LDACQ
!                      && !(mod_s == MEMMODEL_RELAXED
!                           || mod_s == MEMMODEL_CONSUME
!                           || mod_s == MEMMODEL_RELEASE);
! 
    bool use_release = TARGET_HAVE_LDACQ
!                      && !(mod_s == MEMMODEL_RELAXED
!                           || mod_s == MEMMODEL_CONSUME
!                           || mod_s == MEMMODEL_ACQUIRE);
  
    /* Checks whether a barrier is needed and emits one accordingly.  */
    if (!(use_acquire || use_release))
--- 27521,27538 ----
    oldval = operands[2];
    newval = operands[3];
    is_weak = (operands[4] != const0_rtx);
!   mod_s = memmodel_from_int (INTVAL (operands[5]));
!   mod_f = memmodel_from_int (INTVAL (operands[6]));
    scratch = operands[7];
    mode = GET_MODE (mem);
  
    bool use_acquire = TARGET_HAVE_LDACQ
!                      && !(is_mm_relaxed (mod_s) || is_mm_consume (mod_s)
! 			  || is_mm_release (mod_s));
! 		
    bool use_release = TARGET_HAVE_LDACQ
!                      && !(is_mm_relaxed (mod_s) || is_mm_consume (mod_s)
! 			  || is_mm_acquire (mod_s));
  
    /* Checks whether a barrier is needed and emits one accordingly.  */
    if (!(use_acquire || use_release))
*************** arm_split_compare_and_swap (rtx operands
*** 27572,27585 ****
        emit_unlikely_jump (gen_rtx_SET (VOIDmode, pc_rtx, x));
      }
  
!   if (mod_f != MEMMODEL_RELAXED)
      emit_label (label2);
  
    /* Checks whether a barrier is needed and emits one accordingly.  */
    if (!(use_acquire || use_release))
      arm_post_atomic_barrier (mod_s);
  
!   if (mod_f == MEMMODEL_RELAXED)
      emit_label (label2);
  }
  
--- 27570,27583 ----
        emit_unlikely_jump (gen_rtx_SET (VOIDmode, pc_rtx, x));
      }
  
!   if (!is_mm_relaxed (mod_f))
      emit_label (label2);
  
    /* Checks whether a barrier is needed and emits one accordingly.  */
    if (!(use_acquire || use_release))
      arm_post_atomic_barrier (mod_s);
  
!   if (is_mm_relaxed (mod_f))
      emit_label (label2);
  }
  
*************** void
*** 27587,27607 ****
  arm_split_atomic_op (enum rtx_code code, rtx old_out, rtx new_out, rtx mem,
  		     rtx value, rtx model_rtx, rtx cond)
  {
!   enum memmodel model = (enum memmodel) INTVAL (model_rtx);
    machine_mode mode = GET_MODE (mem);
    machine_mode wmode = (mode == DImode ? DImode : SImode);
    rtx_code_label *label;
    rtx x;
  
    bool use_acquire = TARGET_HAVE_LDACQ
!                      && !(model == MEMMODEL_RELAXED
!                           || model == MEMMODEL_CONSUME
!                           || model == MEMMODEL_RELEASE);
  
    bool use_release = TARGET_HAVE_LDACQ
!                      && !(model == MEMMODEL_RELAXED
!                           || model == MEMMODEL_CONSUME
!                           || model == MEMMODEL_ACQUIRE);
  
    /* Checks whether a barrier is needed and emits one accordingly.  */
    if (!(use_acquire || use_release))
--- 27585,27603 ----
  arm_split_atomic_op (enum rtx_code code, rtx old_out, rtx new_out, rtx mem,
  		     rtx value, rtx model_rtx, rtx cond)
  {
!   enum memmodel model = memmodel_from_int (INTVAL (model_rtx));
    machine_mode mode = GET_MODE (mem);
    machine_mode wmode = (mode == DImode ? DImode : SImode);
    rtx_code_label *label;
    rtx x;
  
    bool use_acquire = TARGET_HAVE_LDACQ
!                      && !(is_mm_relaxed (model) || is_mm_consume (model)
! 			  || is_mm_release (model));
  
    bool use_release = TARGET_HAVE_LDACQ
!                      && !(is_mm_relaxed (model) || is_mm_consume (model)
! 			  || is_mm_acquire (model));
  
    /* Checks whether a barrier is needed and emits one accordingly.  */
    if (!(use_acquire || use_release))
Index: config/arm/sync.md
===================================================================
*** config/arm/sync.md	(revision 222579)
--- config/arm/sync.md	(working copy)
***************
*** 73,82 ****
        VUNSPEC_LDA))]
    "TARGET_HAVE_LDACQ"
    {
!     enum memmodel model = (enum memmodel) INTVAL (operands[2]);
!     if (model == MEMMODEL_RELAXED
!         || model == MEMMODEL_CONSUME
!         || model == MEMMODEL_RELEASE)
        return \"ldr<sync_sfx>\\t%0, %1\";
      else
        return \"lda<sync_sfx>\\t%0, %1\";
--- 73,80 ----
        VUNSPEC_LDA))]
    "TARGET_HAVE_LDACQ"
    {
!     enum memmodel model = memmodel_from_int (INTVAL (operands[2]));
!     if (is_mm_relaxed (model) || is_mm_consume (model) || is_mm_release (model))
        return \"ldr<sync_sfx>\\t%0, %1\";
      else
        return \"lda<sync_sfx>\\t%0, %1\";
***************
*** 91,100 ****
        VUNSPEC_STL))]
    "TARGET_HAVE_LDACQ"
    {
!     enum memmodel model = (enum memmodel) INTVAL (operands[2]);
!     if (model == MEMMODEL_RELAXED
!         || model == MEMMODEL_CONSUME
!         || model == MEMMODEL_ACQUIRE)
        return \"str<sync_sfx>\t%1, %0\";
      else
        return \"stl<sync_sfx>\t%1, %0\";
--- 89,96 ----
        VUNSPEC_STL))]
    "TARGET_HAVE_LDACQ"
    {
!     enum memmodel model = memmodel_from_int (INTVAL (operands[2]));
!     if (is_mm_relaxed (model) || is_mm_consume (model) || is_mm_acquire (model))
        return \"str<sync_sfx>\t%1, %0\";
      else
        return \"stl<sync_sfx>\t%1, %0\";
***************
*** 110,119 ****
     (match_operand:SI 2 "const_int_operand")]		;; model
    "TARGET_HAVE_LDREXD && ARM_DOUBLEWORD_ALIGN"
  {
!   enum memmodel model = (enum memmodel) INTVAL (operands[2]);
    expand_mem_thread_fence (model);
    emit_insn (gen_atomic_loaddi_1 (operands[0], operands[1]));
!   if (model == MEMMODEL_SEQ_CST)
      expand_mem_thread_fence (model);
    DONE;
  })
--- 106,115 ----
     (match_operand:SI 2 "const_int_operand")]		;; model
    "TARGET_HAVE_LDREXD && ARM_DOUBLEWORD_ALIGN"
  {
!   enum memmodel model = memmodel_from_int (INTVAL (operands[2]));
    expand_mem_thread_fence (model);
    emit_insn (gen_atomic_loaddi_1 (operands[0], operands[1]));
!   if (is_mm_seq_cst (model))
      expand_mem_thread_fence (model);
    DONE;
  })
Index: config/i386/i386.c
===================================================================
*** config/i386/i386.c	(revision 222579)
--- config/i386/i386.c	(working copy)
*************** ix86_destroy_cost_data (void *data)
*** 51301,51307 ****
  static unsigned HOST_WIDE_INT
  ix86_memmodel_check (unsigned HOST_WIDE_INT val)
  {
!   unsigned HOST_WIDE_INT model = val & MEMMODEL_MASK;
    bool strong;
  
    if (val & ~(unsigned HOST_WIDE_INT)(IX86_HLE_ACQUIRE|IX86_HLE_RELEASE
--- 51301,51307 ----
  static unsigned HOST_WIDE_INT
  ix86_memmodel_check (unsigned HOST_WIDE_INT val)
  {
!   enum memmodel model = memmodel_from_int (val);
    bool strong;
  
    if (val & ~(unsigned HOST_WIDE_INT)(IX86_HLE_ACQUIRE|IX86_HLE_RELEASE
*************** ix86_memmodel_check (unsigned HOST_WIDE_
*** 51312,51325 ****
  	       "Unknown architecture specific memory model");
        return MEMMODEL_SEQ_CST;
      }
!   strong = (model == MEMMODEL_ACQ_REL || model == MEMMODEL_SEQ_CST);
!   if (val & IX86_HLE_ACQUIRE && !(model == MEMMODEL_ACQUIRE || strong))
      {
        warning (OPT_Winvalid_memory_model,
                "HLE_ACQUIRE not used with ACQUIRE or stronger memory model");
        return MEMMODEL_SEQ_CST | IX86_HLE_ACQUIRE;
      }
!    if (val & IX86_HLE_RELEASE && !(model == MEMMODEL_RELEASE || strong))
      {
        warning (OPT_Winvalid_memory_model,
                "HLE_RELEASE not used with RELEASE or stronger memory model");
--- 51312,51325 ----
  	       "Unknown architecture specific memory model");
        return MEMMODEL_SEQ_CST;
      }
!   strong = (is_mm_acq_rel (model) || is_mm_seq_cst (model));
!   if (val & IX86_HLE_ACQUIRE && !(is_mm_acquire (model) || strong))
      {
        warning (OPT_Winvalid_memory_model,
                "HLE_ACQUIRE not used with ACQUIRE or stronger memory model");
        return MEMMODEL_SEQ_CST | IX86_HLE_ACQUIRE;
      }
!   if (val & IX86_HLE_RELEASE && !(is_mm_release (model) || strong))
      {
        warning (OPT_Winvalid_memory_model,
                "HLE_RELEASE not used with RELEASE or stronger memory model");
Index: config/i386/sync.md
===================================================================
*** config/i386/sync.md	(revision 222579)
--- config/i386/sync.md	(working copy)
***************
*** 105,115 ****
    [(match_operand:SI 0 "const_int_operand")]		;; model
    ""
  {
!   enum memmodel model = (enum memmodel) (INTVAL (operands[0]) & MEMMODEL_MASK);
  
    /* Unless this is a SEQ_CST fence, the i386 memory model is strong
       enough not to require barriers of any kind.  */
!   if (model == MEMMODEL_SEQ_CST)
      {
        rtx (*mfence_insn)(rtx);
        rtx mem;
--- 105,115 ----
    [(match_operand:SI 0 "const_int_operand")]		;; model
    ""
  {
!   enum memmodel model = memmodel_from_int (INTVAL (operands[0]));
  
    /* Unless this is a SEQ_CST fence, the i386 memory model is strong
       enough not to require barriers of any kind.  */
!   if (is_mm_seq_cst (model))
      {
        rtx (*mfence_insn)(rtx);
        rtx mem;
***************
*** 217,223 ****
  		       UNSPEC_STA))]
    ""
  {
!   enum memmodel model = (enum memmodel) (INTVAL (operands[2]) & MEMMODEL_MASK);
  
    if (<MODE>mode == DImode && !TARGET_64BIT)
      {
--- 217,223 ----
  		       UNSPEC_STA))]
    ""
  {
!   enum memmodel model = memmodel_from_int (INTVAL (operands[2]));
  
    if (<MODE>mode == DImode && !TARGET_64BIT)
      {
***************
*** 233,239 ****
        operands[1] = force_reg (<MODE>mode, operands[1]);
  
        /* For seq-cst stores, when we lack MFENCE, use XCHG.  */
!       if (model == MEMMODEL_SEQ_CST && !(TARGET_64BIT || TARGET_SSE2))
  	{
  	  emit_insn (gen_atomic_exchange<mode> (gen_reg_rtx (<MODE>mode),
  						operands[0], operands[1],
--- 233,239 ----
        operands[1] = force_reg (<MODE>mode, operands[1]);
  
        /* For seq-cst stores, when we lack MFENCE, use XCHG.  */
!       if (is_mm_seq_cst (model) && !(TARGET_64BIT || TARGET_SSE2))
  	{
  	  emit_insn (gen_atomic_exchange<mode> (gen_reg_rtx (<MODE>mode),
  						operands[0], operands[1],
***************
*** 246,252 ****
  					   operands[2]));
      }
    /* ... followed by an MFENCE, if required.  */
!   if (model == MEMMODEL_SEQ_CST)
      emit_insn (gen_mem_thread_fence (operands[2]));
    DONE;
  })
--- 246,252 ----
  					   operands[2]));
      }
    /* ... followed by an MFENCE, if required.  */
!   if (is_mm_seq_cst (model))
      emit_insn (gen_mem_thread_fence (operands[2]));
    DONE;
  })
Index: config/ia64/ia64.c
===================================================================
*** config/ia64/ia64.c	(revision 222579)
--- config/ia64/ia64.c	(working copy)
*************** ia64_expand_atomic_op (enum rtx_code cod
*** 2389,2398 ****
--- 2389,2400 ----
  	{
  	case MEMMODEL_ACQ_REL:
  	case MEMMODEL_SEQ_CST:
+ 	case MEMMODEL_SYNC_SEQ_CST:
  	  emit_insn (gen_memory_barrier ());
  	  /* FALLTHRU */
  	case MEMMODEL_RELAXED:
  	case MEMMODEL_ACQUIRE:
+ 	case MEMMODEL_SYNC_ACQUIRE:
  	case MEMMODEL_CONSUME:
  	  if (mode == SImode)
  	    icode = CODE_FOR_fetchadd_acq_si;
*************** ia64_expand_atomic_op (enum rtx_code cod
*** 2400,2405 ****
--- 2402,2408 ----
  	    icode = CODE_FOR_fetchadd_acq_di;
  	  break;
  	case MEMMODEL_RELEASE:
+ 	case MEMMODEL_SYNC_RELEASE:
  	  if (mode == SImode)
  	    icode = CODE_FOR_fetchadd_rel_si;
  	  else
*************** ia64_expand_atomic_op (enum rtx_code cod
*** 2426,2433 ****
       front half of the full barrier.  The end half is the cmpxchg.rel.
       For relaxed and release memory models, we don't need this.  But we
       also don't bother trying to prevent it either.  */
!   gcc_assert (model == MEMMODEL_RELAXED
! 	      || model == MEMMODEL_RELEASE
  	      || MEM_VOLATILE_P (mem));
  
    old_reg = gen_reg_rtx (DImode);
--- 2429,2435 ----
       front half of the full barrier.  The end half is the cmpxchg.rel.
       For relaxed and release memory models, we don't need this.  But we
       also don't bother trying to prevent it either.  */
!   gcc_assert (is_mm_relaxed (model) || is_mm_release (model)
  	      || MEM_VOLATILE_P (mem));
  
    old_reg = gen_reg_rtx (DImode);
*************** ia64_expand_atomic_op (enum rtx_code cod
*** 2471,2476 ****
--- 2473,2479 ----
      {
      case MEMMODEL_RELAXED:
      case MEMMODEL_ACQUIRE:
+     case MEMMODEL_SYNC_ACQUIRE:
      case MEMMODEL_CONSUME:
        switch (mode)
  	{
*************** ia64_expand_atomic_op (enum rtx_code cod
*** 2484,2491 ****
--- 2487,2496 ----
        break;
  
      case MEMMODEL_RELEASE:
+     case MEMMODEL_SYNC_RELEASE:
      case MEMMODEL_ACQ_REL:
      case MEMMODEL_SEQ_CST:
+     case MEMMODEL_SYNC_SEQ_CST:
        switch (mode)
  	{
  	case QImode: icode = CODE_FOR_cmpxchg_rel_qi;  break;
Index: config/ia64/sync.md
===================================================================
*** config/ia64/sync.md	(revision 222579)
--- config/ia64/sync.md	(working copy)
***************
*** 33,39 ****
    [(match_operand:SI 0 "const_int_operand" "")]		;; model
    ""
  {
!   if (INTVAL (operands[0]) == MEMMODEL_SEQ_CST)
      emit_insn (gen_memory_barrier ());
    DONE;
  })
--- 33,39 ----
    [(match_operand:SI 0 "const_int_operand" "")]		;; model
    ""
  {
!   if (is_mm_seq_cst (memmodel_from_int (INTVAL (operands[0]))))
      emit_insn (gen_memory_barrier ());
    DONE;
  })
***************
*** 60,70 ****
     (match_operand:SI 2 "const_int_operand" "")]			;; model
    ""
  {
!   enum memmodel model = (enum memmodel) INTVAL (operands[2]);
  
    /* Unless the memory model is relaxed, we want to emit ld.acq, which
       will happen automatically for volatile memories.  */
!   gcc_assert (model == MEMMODEL_RELAXED || MEM_VOLATILE_P (operands[1]));
    emit_move_insn (operands[0], operands[1]);
    DONE;
  })
--- 60,70 ----
     (match_operand:SI 2 "const_int_operand" "")]			;; model
    ""
  {
!   enum memmodel model = memmodel_from_int (INTVAL (operands[2]));
  
    /* Unless the memory model is relaxed, we want to emit ld.acq, which
       will happen automatically for volatile memories.  */
!   gcc_assert (is_mm_relaxed (model) || MEM_VOLATILE_P (operands[1]));
    emit_move_insn (operands[0], operands[1]);
    DONE;
  })
***************
*** 75,91 ****
     (match_operand:SI 2 "const_int_operand" "")]			;; model
    ""
  {
!   enum memmodel model = (enum memmodel) INTVAL (operands[2]);
  
    /* Unless the memory model is relaxed, we want to emit st.rel, which
       will happen automatically for volatile memories.  */
!   gcc_assert (model == MEMMODEL_RELAXED || MEM_VOLATILE_P (operands[0]));
    emit_move_insn (operands[0], operands[1]);
  
    /* Sequentially consistent stores need a subsequent MF.  See
       http://www.decadent.org.uk/pipermail/cpp-threads/2008-December/001952.html
       for a discussion of why a MF is needed here, but not for atomic_load.  */
!   if (model == MEMMODEL_SEQ_CST)
      emit_insn (gen_memory_barrier ());
    DONE;
  })
--- 75,91 ----
     (match_operand:SI 2 "const_int_operand" "")]			;; model
    ""
  {
!   enum memmodel model = memmodel_from_int (INTVAL (operands[2]));
  
    /* Unless the memory model is relaxed, we want to emit st.rel, which
       will happen automatically for volatile memories.  */
!   gcc_assert (is_mm_relaxed (model) || MEM_VOLATILE_P (operands[0]));
    emit_move_insn (operands[0], operands[1]);
  
    /* Sequentially consistent stores need a subsequent MF.  See
       http://www.decadent.org.uk/pipermail/cpp-threads/2008-December/001952.html
       for a discussion of why a MF is needed here, but not for atomic_load.  */
!   if (is_mm_seq_cst (model))
      emit_insn (gen_memory_barrier ());
    DONE;
  })
***************
*** 101,107 ****
     (match_operand:SI 7 "const_int_operand" "")]			;; fail model
    ""
  {
!   enum memmodel model = (enum memmodel) INTVAL (operands[6]);
    rtx ccv = gen_rtx_REG (DImode, AR_CCV_REGNUM);
    rtx dval, eval;
  
--- 101,108 ----
     (match_operand:SI 7 "const_int_operand" "")]			;; fail model
    ""
  {
!   /* No need to distinguish __sync from __atomic, so get base value.  */
!   enum memmodel model = memmodel_base (INTVAL (operands[6]));
    rtx ccv = gen_rtx_REG (DImode, AR_CCV_REGNUM);
    rtx dval, eval;
  
***************
*** 200,206 ****
     (match_operand:SI 3 "const_int_operand" "")]			;; succ model
    ""
  {
!   enum memmodel model = (enum memmodel) INTVAL (operands[3]);
  
    switch (model)
      {
--- 201,208 ----
     (match_operand:SI 3 "const_int_operand" "")]			;; succ model
    ""
  {
!   /* No need to distinguish __sync from __atomic, so get base value.  */
!   enum memmodel model = memmodel_base (INTVAL (operands[3]));
  
    switch (model)
      {
Index: config/mips/mips.c
===================================================================
*** config/mips/mips.c	(revision 222579)
--- config/mips/mips.c	(working copy)
*************** mips_process_sync_loop (rtx_insn *insn,
*** 13111,13117 ****
        model = MEMMODEL_ACQUIRE;
        break;
      default:
!       model = (enum memmodel) INTVAL (operands[memmodel_attr]);
      }
  
    mips_multi_start ();
--- 13111,13117 ----
        model = MEMMODEL_ACQUIRE;
        break;
      default:
!       model = memmodel_from_int (INTVAL (operands[memmodel_attr]));
      }
  
    mips_multi_start ();
Index: config/pa/pa.md
===================================================================
*** config/pa/pa.md	(revision 222579)
--- config/pa/pa.md	(working copy)
***************
*** 707,718 ****
     (match_operand:SI 2 "const_int_operand")]            ;; model
    "!TARGET_64BIT && !TARGET_SOFT_FLOAT"
  {
!   enum memmodel model = (enum memmodel) INTVAL (operands[2]);
    operands[1] = force_reg (SImode, XEXP (operands[1], 0));
    operands[2] = gen_reg_rtx (DImode);
    expand_mem_thread_fence (model);
    emit_insn (gen_atomic_loaddi_1 (operands[0], operands[1], operands[2]));
!   if ((model & MEMMODEL_MASK) == MEMMODEL_SEQ_CST)
      expand_mem_thread_fence (model);
    DONE;
  })
--- 707,718 ----
     (match_operand:SI 2 "const_int_operand")]            ;; model
    "!TARGET_64BIT && !TARGET_SOFT_FLOAT"
  {
!   enum memmodel model = memmodel_from_int (INTVAL (operands[2]));
    operands[1] = force_reg (SImode, XEXP (operands[1], 0));
    operands[2] = gen_reg_rtx (DImode);
    expand_mem_thread_fence (model);
    emit_insn (gen_atomic_loaddi_1 (operands[0], operands[1], operands[2]));
!   if (is_mm_seq_cst (model))
      expand_mem_thread_fence (model);
    DONE;
  })
***************
*** 734,745 ****
     (match_operand:SI 2 "const_int_operand")]            ;; model
    "!TARGET_64BIT && !TARGET_SOFT_FLOAT"
  {
!   enum memmodel model = (enum memmodel) INTVAL (operands[2]);
    operands[0] = force_reg (SImode, XEXP (operands[0], 0));
    operands[2] = gen_reg_rtx (DImode);
    expand_mem_thread_fence (model);
    emit_insn (gen_atomic_storedi_1 (operands[0], operands[1], operands[2]));
!   if ((model & MEMMODEL_MASK) == MEMMODEL_SEQ_CST)
      expand_mem_thread_fence (model);
    DONE;
  })
--- 734,745 ----
     (match_operand:SI 2 "const_int_operand")]            ;; model
    "!TARGET_64BIT && !TARGET_SOFT_FLOAT"
  {
!   enum memmodel model = memmodel_from_int (INTVAL (operands[2]));
    operands[0] = force_reg (SImode, XEXP (operands[0], 0));
    operands[2] = gen_reg_rtx (DImode);
    expand_mem_thread_fence (model);
    emit_insn (gen_atomic_storedi_1 (operands[0], operands[1], operands[2]));
!   if (is_mm_seq_cst (model))
      expand_mem_thread_fence (model);
    DONE;
  })
Index: config/rs6000/rs6000.c
===================================================================
*** config/rs6000/rs6000.c	(revision 222579)
--- config/rs6000/rs6000.c	(working copy)
*************** rs6000_pre_atomic_barrier (rtx mem, enum
*** 20528,20539 ****
--- 20528,20542 ----
      case MEMMODEL_RELAXED:
      case MEMMODEL_CONSUME:
      case MEMMODEL_ACQUIRE:
+     case MEMMODEL_SYNC_ACQUIRE:
        break;
      case MEMMODEL_RELEASE:
+     case MEMMODEL_SYNC_RELEASE:
      case MEMMODEL_ACQ_REL:
        emit_insn (gen_lwsync ());
        break;
      case MEMMODEL_SEQ_CST:
+     case MEMMODEL_SYNC_SEQ_CST:
        emit_insn (gen_hwsync ());
        break;
      default:
*************** rs6000_post_atomic_barrier (enum memmode
*** 20550,20559 ****
--- 20553,20565 ----
      case MEMMODEL_RELAXED:
      case MEMMODEL_CONSUME:
      case MEMMODEL_RELEASE:
+     case MEMMODEL_SYNC_RELEASE:
        break;
      case MEMMODEL_ACQUIRE:
+     case MEMMODEL_SYNC_ACQUIRE:
      case MEMMODEL_ACQ_REL:
      case MEMMODEL_SEQ_CST:
+     case MEMMODEL_SYNC_SEQ_CST:
        emit_insn (gen_isync ());
        break;
      default:
*************** rs6000_expand_atomic_compare_and_swap (r
*** 20653,20660 ****
    oldval = operands[3];
    newval = operands[4];
    is_weak = (INTVAL (operands[5]) != 0);
!   mod_s = (enum memmodel) INTVAL (operands[6]);
!   mod_f = (enum memmodel) INTVAL (operands[7]);
    orig_mode = mode = GET_MODE (mem);
  
    mask = shift = NULL_RTX;
--- 20659,20666 ----
    oldval = operands[3];
    newval = operands[4];
    is_weak = (INTVAL (operands[5]) != 0);
!   mod_s = memmodel_from_int (INTVAL (operands[6]));
!   mod_f = memmodel_from_int (INTVAL (operands[7]));
    orig_mode = mode = GET_MODE (mem);
  
    mask = shift = NULL_RTX;
*************** rs6000_expand_atomic_compare_and_swap (r
*** 20742,20753 ****
        emit_unlikely_jump (x, label1);
      }
  
!   if (mod_f != MEMMODEL_RELAXED)
      emit_label (XEXP (label2, 0));
  
    rs6000_post_atomic_barrier (mod_s);
  
!   if (mod_f == MEMMODEL_RELAXED)
      emit_label (XEXP (label2, 0));
  
    if (shift)
--- 20748,20759 ----
        emit_unlikely_jump (x, label1);
      }
  
!   if (!is_mm_relaxed (mod_f))
      emit_label (XEXP (label2, 0));
  
    rs6000_post_atomic_barrier (mod_s);
  
!   if (is_mm_relaxed (mod_f))
      emit_label (XEXP (label2, 0));
  
    if (shift)
Index: config/rs6000/sync.md
===================================================================
*** config/rs6000/sync.md	(revision 222579)
--- config/rs6000/sync.md	(working copy)
***************
*** 41,58 ****
    [(match_operand:SI 0 "const_int_operand" "")]		;; model
    ""
  {
!   enum memmodel model = (enum memmodel) INTVAL (operands[0]);
    switch (model)
      {
      case MEMMODEL_RELAXED:
        break;
      case MEMMODEL_CONSUME:
      case MEMMODEL_ACQUIRE:
      case MEMMODEL_RELEASE:
      case MEMMODEL_ACQ_REL:
        emit_insn (gen_lwsync ());
        break;
      case MEMMODEL_SEQ_CST:
        emit_insn (gen_hwsync ());
        break;
      default:
--- 41,61 ----
    [(match_operand:SI 0 "const_int_operand" "")]		;; model
    ""
  {
!   enum memmodel model = memmodel_from_int (INTVAL (operands[0]));
    switch (model)
      {
      case MEMMODEL_RELAXED:
        break;
      case MEMMODEL_CONSUME:
      case MEMMODEL_ACQUIRE:
+     case MEMMODEL_SYNC_ACQUIRE:
      case MEMMODEL_RELEASE:
+     case MEMMODEL_SYNC_RELEASE:
      case MEMMODEL_ACQ_REL:
        emit_insn (gen_lwsync ());
        break;
      case MEMMODEL_SEQ_CST:
+     case MEMMODEL_SYNC_SEQ_CST:
        emit_insn (gen_hwsync ());
        break;
      default:
***************
*** 144,152 ****
    if (<MODE>mode == TImode && !TARGET_SYNC_TI)
      FAIL;
  
!   enum memmodel model = (enum memmodel) INTVAL (operands[2]);
  
!   if (model == MEMMODEL_SEQ_CST)
      emit_insn (gen_hwsync ());
  
    if (<MODE>mode != TImode)
--- 147,155 ----
    if (<MODE>mode == TImode && !TARGET_SYNC_TI)
      FAIL;
  
!   enum memmodel model = memmodel_from_int (INTVAL (operands[2]));
  
!   if (is_mm_seq_cst (model))
      emit_insn (gen_hwsync ());
  
    if (<MODE>mode != TImode)
***************
*** 182,188 ****
--- 185,193 ----
        break;
      case MEMMODEL_CONSUME:
      case MEMMODEL_ACQUIRE:
+     case MEMMODEL_SYNC_ACQUIRE:
      case MEMMODEL_SEQ_CST:
+     case MEMMODEL_SYNC_SEQ_CST:
        emit_insn (gen_loadsync_<mode> (operands[0]));
        break;
      default:
***************
*** 209,223 ****
    if (<MODE>mode == TImode && !TARGET_SYNC_TI)
      FAIL;
  
!   enum memmodel model = (enum memmodel) INTVAL (operands[2]);
    switch (model)
      {
      case MEMMODEL_RELAXED:
        break;
      case MEMMODEL_RELEASE:
        emit_insn (gen_lwsync ());
        break;
      case MEMMODEL_SEQ_CST:
        emit_insn (gen_hwsync ());
        break;
      default:
--- 214,230 ----
    if (<MODE>mode == TImode && !TARGET_SYNC_TI)
      FAIL;
  
!   enum memmodel model = memmodel_from_int (INTVAL (operands[2]));
    switch (model)
      {
      case MEMMODEL_RELAXED:
        break;
      case MEMMODEL_RELEASE:
+     case MEMMODEL_SYNC_RELEASE:
        emit_insn (gen_lwsync ());
        break;
      case MEMMODEL_SEQ_CST:
+     case MEMMODEL_SYNC_SEQ_CST:
        emit_insn (gen_hwsync ());
        break;
      default:
Index: config/s390/s390.md
===================================================================
*** config/s390/s390.md	(revision 222579)
--- config/s390/s390.md	(working copy)
***************
*** 9226,9232 ****
  {
    /* Unless this is a SEQ_CST fence, the s390 memory model is strong
       enough not to require barriers of any kind.  */
!   if (INTVAL (operands[0]) == MEMMODEL_SEQ_CST)
      {
        rtx mem = gen_rtx_MEM (BLKmode, gen_rtx_SCRATCH (Pmode));
        MEM_VOLATILE_P (mem) = 1;
--- 9226,9232 ----
  {
    /* Unless this is a SEQ_CST fence, the s390 memory model is strong
       enough not to require barriers of any kind.  */
!   if (is_mm_seq_cst (memmodel_from_int (INTVAL (operands[0]))))
      {
        rtx mem = gen_rtx_MEM (BLKmode, gen_rtx_SCRATCH (Pmode));
        MEM_VOLATILE_P (mem) = 1;
***************
*** 9307,9313 ****
     (match_operand:SI 2 "const_int_operand")]	;; model
    ""
  {
!   enum memmodel model = (enum memmodel) INTVAL (operands[2]);
  
    if (MEM_ALIGN (operands[0]) < GET_MODE_BITSIZE (GET_MODE (operands[0])))
      FAIL;
--- 9307,9313 ----
     (match_operand:SI 2 "const_int_operand")]	;; model
    ""
  {
!   enum memmodel model = memmodel_from_int (INTVAL (operands[2]));
  
    if (MEM_ALIGN (operands[0]) < GET_MODE_BITSIZE (GET_MODE (operands[0])))
      FAIL;
***************
*** 9318,9324 ****
      emit_insn (gen_atomic_storedi_1 (operands[0], operands[1]));
    else
      emit_move_insn (operands[0], operands[1]);
!   if (model == MEMMODEL_SEQ_CST)
      emit_insn (gen_mem_thread_fence (operands[2]));
    DONE;
  })
--- 9318,9324 ----
      emit_insn (gen_atomic_storedi_1 (operands[0], operands[1]));
    else
      emit_move_insn (operands[0], operands[1]);
!   if (is_mm_seq_cst (model))
      emit_insn (gen_mem_thread_fence (operands[2]));
    DONE;
  })
Index: config/sparc/sparc.c
===================================================================
*** config/sparc/sparc.c	(revision 222579)
--- config/sparc/sparc.c	(working copy)
*************** sparc_emit_membar_for_model (enum memmod
*** 11674,11682 ****
  
    if (before_after & 1)
      {
!       if (model == MEMMODEL_RELEASE
! 	  || model == MEMMODEL_ACQ_REL
! 	  || model == MEMMODEL_SEQ_CST)
  	{
  	  if (load_store & 1)
  	    mm |= LoadLoad | StoreLoad;
--- 11674,11681 ----
  
    if (before_after & 1)
      {
!       if (is_mm_release (model) || is_mm_acq_rel (model)
! 	  || is_mm_seq_cst (model))
  	{
  	  if (load_store & 1)
  	    mm |= LoadLoad | StoreLoad;
*************** sparc_emit_membar_for_model (enum memmod
*** 11686,11694 ****
      }
    if (before_after & 2)
      {
!       if (model == MEMMODEL_ACQUIRE
! 	  || model == MEMMODEL_ACQ_REL
! 	  || model == MEMMODEL_SEQ_CST)
  	{
  	  if (load_store & 1)
  	    mm |= LoadLoad | LoadStore;
--- 11685,11692 ----
      }
    if (before_after & 2)
      {
!       if (is_mm_acquire (model) || is_mm_acq_rel (model)
! 	  || is_mm_seq_cst (model))
  	{
  	  if (load_store & 1)
  	    mm |= LoadLoad | LoadStore;
Index: doc/extend.texi
===================================================================
*** doc/extend.texi	(revision 222813)
--- doc/extend.texi	(working copy)
*************** functions map any run-time value to @cod
*** 8946,8954 ****
  than invoke a runtime library call or inline a switch statement.  This is
  standard compliant, safe, and the simplest approach for now.
  
! The memory model parameter is a signed int, but only the lower 8 bits are
  reserved for the memory model.  The remainder of the signed int is reserved
! for future use and should be 0.  Use of the predefined atomic values
  ensures proper usage.
  
  @deftypefn {Built-in Function} @var{type} __atomic_load_n (@var{type} *ptr, int memmodel)
--- 8946,8954 ----
  than invoke a runtime library call or inline a switch statement.  This is
  standard compliant, safe, and the simplest approach for now.
  
! The memory model parameter is a signed int, but only the lower 16 bits are
  reserved for the memory model.  The remainder of the signed int is reserved
! for target use and should be 0.  Use of the predefined atomic values
  ensures proper usage.
  
  @deftypefn {Built-in Function} @var{type} __atomic_load_n (@var{type} *ptr, int memmodel)
