This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.



[C11-atomic] [patch] gimple atomic statements


Here is my first step in promoting the __atomic builtins into gimple statements. I originally planned this as tree codes, but when I prototyped it, it became obvious that a gimple statement was a far superior solution, since we also need to deal with LHS and memory address issues.

The motivations are many, but primarily it makes manipulating atomics easier and exposes more of their side effects to the optimizers. In particular, we can now expose both return values of the compare-and-swap when implementing compare_exchange and get more efficient code generation (soon :-). It is item number 3 on my 4.8 task list: http://gcc.gnu.org/wiki/Atomic/GCCMM/gcc4.8.
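
For reference, here is roughly the kind of source this is aimed at (a minimal sketch with hypothetical names).  With the 4.7 builtin form the old value is only returned by writing it back through the 'expected' pointer, so it must live in memory, whereas a GIMPLE_ATOMIC compare_exchange carries both the boolean result and the previous value as separate LHS operands:

#include <stdbool.h>

/* Hypothetical example: the builtin returns only the success flag; the
   previous value comes back through *expected, forcing a memory slot.  */
bool
update (int *p, int *expected, int desired)
{
  return __atomic_compare_exchange_n (p, expected, desired, false,
                                      __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
}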

This first step adds a GIMPLE_ATOMIC statement class which handles all the __atomic built-in calls. Right after the cfg is built, all built-in __atomic calls are converted to gimple_atomic statements. I considered doing the conversion right in the gimplifier, but elected to leave it as a pass for the time being. These statements then pass through all the optimizers to cfgexpand, where they are converted directly into RTL.
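
To make the lowering concrete, here is a rough sketch (not the patch's actual lower_atomic_call, with checking omitted and the helper name mine) of how a call to __atomic_load_n (ptr, order) could be rebuilt with the new constructors:

static gimple
build_atomic_load_from_call (gimple call)
{
  /* The base type of the operation is the type being loaded.  */
  tree type = TREE_TYPE (gimple_call_lhs (call));
  tree target = gimple_call_arg (call, 0);   /* Atomic memory location.  */
  tree order = gimple_call_arg (call, 1);    /* Memory model argument.  */
  gimple s = gimple_build_atomic_load (type, target, order);

  /* Reuse the call's result as the single LHS of the new statement.  */
  gimple_atomic_set_lhs (s, 0, gimple_call_lhs (call));
  return s;
}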

This currently produces the same code that the builtins do in 4.7. I expect that I missed a few places in the optimizers where they aren't properly treated as barriers yet, but I'll get to tracking those down in a bit.

I also have not implemented non-integral atomics yet, nor do I issue library calls when inline expansion cannot be done. That's next... I just want to get the basics checked into the branch.

I expect to be able to wrap the __sync routines into this as well, eliminating all the atomic and sync builtin expansion code, keeping everything in one easy statement class. Then I'll add the _Atomic type qualifier to the parser, and have that simply translate expressions involving those types into gimple_atomic statements at the same time calls are converted.

This bootstraps on x86_64-unknown-linux-gnu, and the only testsuite regressions are ones involving issuing library calls. There is a toolchain build problem with libjava, however... During the libjava build there end up being files which can't be created due to permission problems in .svn directories... Pretty darn weird, but I'll look into it later, once the atomic gimple support is complete, if the problem still exists then.

Anyone see anything obviously flawed about the approach?

Andrew


	* sync-builtins.def (BUILT_IN_ATOMIC_ALWAYS_LOCK_FREE,
	BUILT_IN_ATOMIC_IS_LOCK_FREE): Relocate to make identifying the atomic
	builtins that map to tree codes easier.
	* gsstruct.def (GSS_ATOMIC): New gimple garbage collection format.
	* gimple.def (GIMPLE_ATOMIC): New gimple statement type.
	* gimple.h (GF_ATOMIC_THREAD_FENCE, GF_ATOMIC_WEAK): New flags.
	(enum gimple_atomic_kind): New.  Kind of atomic operations.
	(struct gimple_statement_atomic): New. Gimple atomic statement.
	(is_gimple_atomic, gimple_atomic_kind, gimple_atomic_set_kind,
	gimple_atomic_type, gimple_atomic_set_type, gimple_atomic_num_lhs,
	gimple_atomic_num_rhs, gimple_atomic_has_lhs,
	gimple_atomic_lhs, gimple_atomic_lhs_ptr, gimple_atomic_set_lhs,
	gimple_atomic_order, gimple_atomic_order_ptr, gimple_atomic_set_order,
	gimple_atomic_has_target, gimple_atomic_target,
	gimple_atomic_target_ptr, gimple_atomic_set_target,
	gimple_atomic_has_expr, gimple_atomic_expr, gimple_atomic_expr_ptr,
	gimple_atomic_set_expr, gimple_atomic_has_expected,
	gimple_atomic_expected, gimple_atomic_expected_ptr,
	gimple_atomic_set_expected, gimple_atomic_has_fail_order,
	gimple_atomic_fail_order, gimple_atomic_fail_order_ptr,
	gimple_atomic_set_fail_order, gimple_atomic_op_code,
	gimple_atomic_set_op_code, gimple_atomic_thread_fence,
	gimple_atomic_set_thread_fence, gimple_atomic_weak,
	gimple_atomic_set_weak): New. Helper functions for new atomic statement.
	* gimple.c (gimple_build_atomic_load, gimple_build_atomic_store,
	gimple_build_atomic_exchange, gimple_build_atomic_compare_exchange,
	gimple_build_atomic_fetch_op, gimple_build_atomic_op_fetch,
	gimple_build_atomic_test_and_set, gimple_build_atomic_clear,
	gimple_build_atomic_fence): New. Functions to construct atomic
	statements.
	(walk_gimple_op): Handle GIMPLE_ATOMIC case.
	(walk_stmt_load_store_addr_ops): Handle walking GIMPLE_ATOMIC.
	* cfgexpand.c (expand_atomic_stmt): New.  Expand a GIMPLE_ATOMIC stmt
	into RTL.
	(expand_gimple_assign_move, expand_gimple_assign): Split out from
	GIMPLE_ASSIGN case of expand_gimple_stmt_1.
	(expand_gimple_stmt_1): Handle GIMPLE_ATOMIC case.
	* Makefile.in (tree-atomic.o): Add new object file.
	* tree-atomic.c: New file.
	(get_atomic_type): New.  Return the type of an atomic operation.
	(get_memmodel): New.  Get memory model for an operation.
	(gimple_verify_memmodel): New.  Verify the validity of a memory model.
	(expand_atomic_target): New.  Expand atomic memory location to RTL.
	(expand_expr_force_mode): New.  Force expression to the correct mode.
	(get_atomic_lhs_rtx): New.  Expand RTL for a LHS expression.
	(expand_gimple_atomic_library_call): New.  Turn an atomic operation
	into a library call.
	(expand_gimple_atomic_load, expand_gimple_atomic_store,
	expand_gimple_atomic_exchange, expand_gimple_atomic_compare_exchange):
	New.  Expand atomic operations to RTL.
	(rtx_code_from_tree_code): New.  Tree code to rtx code.
	(expand_atomic_fetch, expand_gimple_atomic_fetch_op,
	expand_gimple_atomic_op_fetch, expand_gimple_atomic_test_and_set,
	expand_gimple_atomic_clear, expand_gimple_atomic_fence): New.  Expand
	atomic operations to RTL.
	(is_built_in_atomic): New.  Check for atomic builtin functions.
	(atomic_func_type): New.  Base type of atomic builtin function.
	(lower_atomic_call): New.  Convert an atomic builtin to gimple.
	(lower_atomics): New.  Entry point to lower all atomic operations.
	(gate_lower_atomics): New gate routine.
	(pass_lower_atomics): New pass structure.
	* tree-ssa-operands.c (parse_ssa_operands): Handle GIMPLE_ATOMIC.
	* gimple-pretty-print.c (dump_gimple_atomic_kind_op): New.  Print
	atomic statement kind.
	(dump_gimple_atomic_order): New.  Print atomic memory order.
	(dump_gimple_atomic_type_size): New.  Append size of atomic operation.
	(dump_gimple_atomic): New.  Dump an atomic statement.
	(dump_gimple_stmt): Handle GIMPLE_ATOMIC case.
	* tree-cfg.c (verify_gimple_atomic): New.  Verify gimple atomic stmt.
	(verify_gimple_stmt): Handle GIMPLE_ATOMIC case.
	* tree-pass.h (pass_lower_atomics): Declare.
	* passes.c (init_optimization_passes): Add pass_lower_atomics right
	after CFG construction.
	* gimple-low.c (lower_stmt): Handle GIMPLE_ATOMIC case.
	* tree-ssa-alias.c (ref_maybe_used_by_stmt_p): Handle GIMPLE_ATOMIC.
	(stmt_may_clobber_ref_p_1): Handle GIMPLE_ATOMIC.
	(stmt_kills_ref_p_1): Handle GIMPLE_ATOMIC.
	* tree-ssa-sink.c (is_hidden_global_store): GIMPLE_ATOMIC prevents
	optimization.
	* tree-ssa-dce.c (propagate_necessity): Handle GIMPLE_ATOMIC in
	reaching defs.
	* tree-inline.c (estimate_num_insns): Handle GIMPLE_ATOMIC.
	* ipa-pure-const.c (check_stmt): GIMPLE_ATOMIC affects pure/const.

Index: sync-builtins.def
===================================================================
*** sync-builtins.def	(revision 186098)
--- sync-builtins.def	(working copy)
*************** DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_
*** 583,597 ****
  		  "__atomic_fetch_or_16",
  		  BT_FN_I16_VPTR_I16_INT, ATTR_NOTHROW_LEAF_LIST)
  
- DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_ALWAYS_LOCK_FREE,
- 		  "__atomic_always_lock_free",
- 		  BT_FN_BOOL_SIZE_CONST_VPTR, ATTR_CONST_NOTHROW_LEAF_LIST)
- 
- DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_IS_LOCK_FREE,
- 		  "__atomic_is_lock_free",
- 		  BT_FN_BOOL_SIZE_CONST_VPTR, ATTR_CONST_NOTHROW_LEAF_LIST)
- 
- 
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_THREAD_FENCE,
  		  "__atomic_thread_fence",
  		  BT_FN_VOID_INT, ATTR_NOTHROW_LEAF_LIST)
--- 583,588 ----
*************** DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_SIGNAL
*** 600,602 ****
--- 591,602 ----
  		  "__atomic_signal_fence",
  		  BT_FN_VOID_INT, ATTR_NOTHROW_LEAF_LIST)
  
+ 
+ DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_ALWAYS_LOCK_FREE,
+ 		  "__atomic_always_lock_free",
+ 		  BT_FN_BOOL_SIZE_CONST_VPTR, ATTR_CONST_NOTHROW_LEAF_LIST)
+ 
+ DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_IS_LOCK_FREE,
+ 		  "__atomic_is_lock_free",
+ 		  BT_FN_BOOL_SIZE_CONST_VPTR, ATTR_CONST_NOTHROW_LEAF_LIST)
+ 
Index: gsstruct.def
===================================================================
*** gsstruct.def	(revision 186098)
--- gsstruct.def	(working copy)
*************** DEFGSSTRUCT(GSS_WITH_OPS, gimple_stateme
*** 30,35 ****
--- 30,36 ----
  DEFGSSTRUCT(GSS_WITH_MEM_OPS_BASE, gimple_statement_with_memory_ops_base, false)
  DEFGSSTRUCT(GSS_WITH_MEM_OPS, gimple_statement_with_memory_ops, true)
  DEFGSSTRUCT(GSS_CALL, gimple_statement_call, true)
+ DEFGSSTRUCT(GSS_ATOMIC, gimple_statement_atomic, true)
  DEFGSSTRUCT(GSS_ASM, gimple_statement_asm, true)
  DEFGSSTRUCT(GSS_BIND, gimple_statement_bind, false)
  DEFGSSTRUCT(GSS_PHI, gimple_statement_phi, false)
Index: gimple.def
===================================================================
*** gimple.def	(revision 186098)
--- gimple.def	(working copy)
*************** DEFGSCODE(GIMPLE_ASM, "gimple_asm", GSS_
*** 124,129 ****
--- 124,138 ----
      CHAIN is the optional static chain link for nested functions.  */
  DEFGSCODE(GIMPLE_CALL, "gimple_call", GSS_CALL)
  
+ /* GIMPLE_ATOMIC <KIND, TYPE, ARG1... ARGN> Represents an atomic
+    operation which maps to a builtin function call.
+ 
+    KIND is the kind of atomic operation.
+    TYPE is the base type of the atomic operation.
+ 
+    ARG1-ARGN are the other arguments required by the various operations.  */
+ DEFGSCODE(GIMPLE_ATOMIC, "gimple_atomic", GSS_ATOMIC)
+ 
  /* GIMPLE_TRANSACTION <BODY, LABEL> represents __transaction_atomic and
     __transaction_relaxed blocks.
     BODY is the sequence of statements inside the transaction.
Index: gimple.h
===================================================================
*** gimple.h	(revision 186098)
--- gimple.h	(working copy)
*************** enum gimple_rhs_class
*** 97,102 ****
--- 97,104 ----
  enum gf_mask {
      GF_ASM_INPUT		= 1 << 0,
      GF_ASM_VOLATILE		= 1 << 1,
+     GF_ATOMIC_THREAD_FENCE	= 1 << 0,
+     GF_ATOMIC_WEAK		= 1 << 0,
      GF_CALL_FROM_THUNK		= 1 << 0,
      GF_CALL_RETURN_SLOT_OPT	= 1 << 1,
      GF_CALL_TAILCALL		= 1 << 2,
*************** struct GTY(()) gimple_statement_call
*** 424,429 ****
--- 426,466 ----
    tree GTY((length ("%h.membase.opbase.gsbase.num_ops"))) op[1];
  };
  
+ /* Kind of GIMPLE_ATOMIC statements.  */
+ enum gimple_atomic_kind
+ {
+   GIMPLE_ATOMIC_LOAD,
+   GIMPLE_ATOMIC_STORE,
+   GIMPLE_ATOMIC_EXCHANGE,
+   GIMPLE_ATOMIC_COMPARE_EXCHANGE,
+   GIMPLE_ATOMIC_FETCH_OP,
+   GIMPLE_ATOMIC_OP_FETCH,
+   GIMPLE_ATOMIC_TEST_AND_SET,
+   GIMPLE_ATOMIC_CLEAR,
+   GIMPLE_ATOMIC_FENCE 
+ };
+ 
+ /* GIMPLE_ATOMIC statement.  */
+ 
+ struct GTY(()) gimple_statement_atomic
+ {
+   /* [ WORD 1-8 ]  */
+   struct gimple_statement_with_memory_ops_base membase;
+ 
+   /* [ WORD 9 ] */
+   enum gimple_atomic_kind kind;
+ 
+   /* [ WORD 10 ] */
+   tree fntype;
+ 
+   /* [ WORD 11 ]
+      Operand vector.  NOTE!  This must always be the last field
+      of this structure.  In particular, this means that this
+      structure cannot be embedded inside another one.  */
+   tree GTY((length ("%h.membase.opbase.gsbase.num_ops"))) op[1];
+ };
+ 
+ 
  
  /* OpenMP statements (#pragma omp).  */
  
*************** union GTY ((desc ("gimple_statement_stru
*** 821,826 ****
--- 858,864 ----
    struct gimple_statement_with_memory_ops_base GTY ((tag ("GSS_WITH_MEM_OPS_BASE"))) gsmembase;
    struct gimple_statement_with_memory_ops GTY ((tag ("GSS_WITH_MEM_OPS"))) gsmem;
    struct gimple_statement_call GTY ((tag ("GSS_CALL"))) gimple_call;
+   struct gimple_statement_atomic GTY ((tag("GSS_ATOMIC"))) gimple_atomic;
    struct gimple_statement_omp GTY ((tag ("GSS_OMP"))) omp;
    struct gimple_statement_bind GTY ((tag ("GSS_BIND"))) gimple_bind;
    struct gimple_statement_catch GTY ((tag ("GSS_CATCH"))) gimple_catch;
*************** gimple gimple_build_debug_source_bind_st
*** 878,883 ****
--- 916,932 ----
  #define gimple_build_debug_source_bind(var,val,stmt)			\
    gimple_build_debug_source_bind_stat ((var), (val), (stmt) MEM_STAT_INFO)
  
+ gimple gimple_build_atomic_load (tree, tree, tree);
+ gimple gimple_build_atomic_store (tree, tree, tree, tree);
+ gimple gimple_build_atomic_exchange (tree, tree, tree, tree);
+ gimple gimple_build_atomic_compare_exchange (tree, tree, tree, tree, tree,
+ 					     tree, bool);
+ gimple gimple_build_atomic_fetch_op (tree, tree, tree, enum tree_code, tree);
+ gimple gimple_build_atomic_op_fetch (tree, tree, tree, enum tree_code, tree);
+ gimple gimple_build_atomic_test_and_set (tree, tree);
+ gimple gimple_build_atomic_clear (tree, tree);
+ gimple gimple_build_atomic_fence (tree, bool);
+ 
  gimple gimple_build_call_vec (tree, VEC(tree, heap) *);
  gimple gimple_build_call (tree, unsigned, ...);
  gimple gimple_build_call_valist (tree, unsigned, va_list);
*************** extern void gimplify_function_tree (tree
*** 1133,1138 ****
--- 1182,1201 ----
  
  /* In cfgexpand.c.  */
  extern tree gimple_assign_rhs_to_tree (gimple);
+ extern void expand_gimple_assign_move (tree, rtx, rtx, bool);
+ 
+ /* In tree-atomic.c.  */
+ extern bool expand_gimple_atomic_load (gimple);
+ extern bool expand_gimple_atomic_store (gimple);
+ extern bool expand_gimple_atomic_exchange (gimple);
+ extern bool expand_gimple_atomic_compare_exchange (gimple);
+ extern bool expand_gimple_atomic_fetch_op (gimple);
+ extern bool expand_gimple_atomic_op_fetch (gimple);
+ extern void expand_gimple_atomic_test_and_set (gimple);
+ extern void expand_gimple_atomic_clear (gimple);
+ extern void expand_gimple_atomic_fence (gimple);
+ extern void expand_gimple_atomic_library_call (gimple);
+ extern void gimple_verify_memmodel (gimple);
  
  /* In builtins.c  */
  extern bool validate_gimple_arglist (const_gimple, ...);
*************** gimple_set_op (gimple gs, unsigned i, tr
*** 1800,1805 ****
--- 1863,2247 ----
    gimple_ops (gs)[i] = op;
  }
  
+ /* Return true if GS is a GIMPLE_ATOMIC.  */
+ 
+ static inline bool
+ is_gimple_atomic (const_gimple gs)
+ {
+   return gimple_code (gs) == GIMPLE_ATOMIC;
+ }
+ 
+ /* Return the kind of atomic operation GS.  */
+ 
+ static inline enum gimple_atomic_kind
+ gimple_atomic_kind (const_gimple gs)
+ {
+   GIMPLE_CHECK (gs, GIMPLE_ATOMIC);
+   return gs->gimple_atomic.kind;
+ }
+ 
+ /* Set the kind of atomic operation GS to K.  */
+ 
+ static inline void
+ gimple_atomic_set_kind (gimple gs, enum gimple_atomic_kind k)
+ {
+   GIMPLE_CHECK (gs, GIMPLE_ATOMIC);
+   gs->gimple_atomic.kind = k;
+ }
+ 
+ /* Return the base type of the atomic operation GS.  */
+ static inline tree
+ gimple_atomic_type (const_gimple gs)
+ {
+   GIMPLE_CHECK (gs, GIMPLE_ATOMIC);
+   return gs->gimple_atomic.fntype;
+ }
+ 
+ /* Set the base type of atomic operation GS to T.  */
+ 
+ static inline void
+ gimple_atomic_set_type (gimple gs, tree t)
+ {
+   GIMPLE_CHECK (gs, GIMPLE_ATOMIC);
+   gs->gimple_atomic.fntype = t;
+ }
+ 
+ /*  Return the number of possible results for atomic operation GS.  */
+ 
+ static inline unsigned
+ gimple_atomic_num_lhs (const_gimple gs)
+ {
+   switch (gimple_atomic_kind (gs))
+     {
+     case GIMPLE_ATOMIC_COMPARE_EXCHANGE:
+       return 2;
+ 
+     case GIMPLE_ATOMIC_STORE:
+     case GIMPLE_ATOMIC_CLEAR:
+     case GIMPLE_ATOMIC_FENCE:
+       return 0;
+ 
+     default:
+       break;
+     }
+   return 1;
+ }
+ 
+ /* Return the number of rhs operands for atomic operation GS.  */
+ 
+ static inline unsigned
+ gimple_atomic_num_rhs (const_gimple gs)
+ {
+   return (gimple_num_ops (gs) - gimple_atomic_num_lhs (gs));
+ }
+ 
+ /* Return true if atomic operation GS can have at least one result.  */
+ 
+ static inline bool
+ gimple_atomic_has_lhs (const_gimple gs)
+ {
+   return (gimple_atomic_num_lhs (gs) > 0);
+ }
+ 
+ /* Return the LHS number INDEX of atomic operation GS.  */
+ 
+ static inline tree
+ gimple_atomic_lhs (const_gimple gs, unsigned index)
+ {
+   unsigned n;
+ 
+   GIMPLE_CHECK (gs, GIMPLE_ATOMIC);
+   n = gimple_atomic_num_lhs (gs);
+   gcc_assert ((n > 0) && (index < n));
+   return gimple_op (gs, gimple_num_ops (gs) - index - 1);
+ }
+ 
+ /* Return the pointer to LHS number INDEX of atomic operation GS.  */
+ 
+ static inline tree *
+ gimple_atomic_lhs_ptr (const_gimple gs, unsigned index)
+ {
+   unsigned n;
+ 
+   GIMPLE_CHECK (gs, GIMPLE_ATOMIC);
+   n = gimple_atomic_num_lhs (gs);
+   gcc_assert ((n > 0) && (index < n));
+   return gimple_op_ptr (gs, gimple_num_ops (gs) - index - 1);
+ }
+ 
+ /* Set the LHS number INDEX of atomic operation GS to EXPR.  */
+ 
+ static inline void
+ gimple_atomic_set_lhs (gimple gs, unsigned index, tree expr)
+ {
+   unsigned n;
+ 
+   GIMPLE_CHECK (gs, GIMPLE_ATOMIC);
+   n = gimple_atomic_num_lhs (gs);
+   gcc_assert ((n > 0) && (index < n));
+   gimple_set_op (gs, gimple_num_ops (gs) - index - 1, expr);
+ }
+ 
+ /* Return the memory order for atomic operation GS.  */
+ 
+ static inline tree 
+ gimple_atomic_order (const_gimple gs)
+ {
+   GIMPLE_CHECK (gs, GIMPLE_ATOMIC);
+   return gimple_op (gs, 0);
+ }
+ 
+ /* Return a pointer to the memory order for atomic operation GS.  */
+ 
+ static inline tree *
+ gimple_atomic_order_ptr (const_gimple gs)
+ {
+   GIMPLE_CHECK (gs, GIMPLE_ATOMIC);
+   return gimple_op_ptr (gs, 0);
+ }
+ 
+ 
+ /* Set the memory order for atomic operation GS to T.  */
+ 
+ static inline void
+ gimple_atomic_set_order (gimple gs, tree t)
+ {
+   GIMPLE_CHECK (gs, GIMPLE_ATOMIC);
+   gimple_set_op (gs, 0, t);
+ }
+ 
+ /* Return true if atomic operation GS contains an atomic target location.  */
+ 
+ static inline bool
+ gimple_atomic_has_target (const_gimple gs)
+ {
+   return (gimple_atomic_kind (gs) != GIMPLE_ATOMIC_FENCE);
+ }
+ 
+ /* Return the target location of atomic operation GS.  */
+ 
+ static inline tree
+ gimple_atomic_target (const_gimple gs)
+ {
+   GIMPLE_CHECK (gs, GIMPLE_ATOMIC);
+   gcc_assert (gimple_atomic_has_target (gs));
+   return gimple_op (gs, 1);
+ }
+ 
+ /* Return a pointer to the target location of atomic operation GS.  */
+ 
+ static inline tree *
+ gimple_atomic_target_ptr (const_gimple gs)
+ {
+   GIMPLE_CHECK (gs, GIMPLE_ATOMIC);
+   gcc_assert (gimple_atomic_has_target (gs));
+   return gimple_op_ptr (gs, 1);
+ }
+ 
+ /* Set the target location of atomic operation GS to T.  */
+ 
+ static inline void
+ gimple_atomic_set_target (gimple gs, tree t)
+ {
+   GIMPLE_CHECK (gs, GIMPLE_ATOMIC);
+   gcc_assert (gimple_atomic_has_target (gs));
+   gimple_set_op (gs, 1, t);
+ }
+ 
+ /* Return true if atomic operation GS has an expression field.  */
+ 
+ static inline bool
+ gimple_atomic_has_expr (const_gimple gs)
+ {
+   switch (gimple_atomic_kind (gs))
+   {
+     case GIMPLE_ATOMIC_COMPARE_EXCHANGE:
+     case GIMPLE_ATOMIC_EXCHANGE:
+     case GIMPLE_ATOMIC_STORE:
+     case GIMPLE_ATOMIC_FETCH_OP:
+     case GIMPLE_ATOMIC_OP_FETCH:
+       return true;
+ 
+     default:
+       return false;
+   }
+ }
+ 
+ /* Return the expression field of atomic operation GS.  */
+ 
+ static inline tree
+ gimple_atomic_expr (const_gimple gs)
+ {
+   GIMPLE_CHECK (gs, GIMPLE_ATOMIC);
+   gcc_assert (gimple_atomic_has_expr (gs));
+   return gimple_op (gs, 2);
+ }
+ 
+ /* Return a pointer to the expression field of atomic operation GS.  */
+ 
+ static inline tree *
+ gimple_atomic_expr_ptr (const_gimple gs)
+ {
+   GIMPLE_CHECK (gs, GIMPLE_ATOMIC);
+   gcc_assert (gimple_atomic_has_expr (gs));
+   return gimple_op_ptr (gs, 2);
+ }
+ 
+ /* Set the expression field of atomic operation GS.  */
+ 
+ static inline void
+ gimple_atomic_set_expr (gimple gs, tree t)
+ {
+   GIMPLE_CHECK (gs, GIMPLE_ATOMIC);
+   gcc_assert (gimple_atomic_has_expr (gs));
+   gimple_set_op (gs, 2, t);
+ }
+ 
+ /* Return true if atomic operation GS has an expected field.  */
+ 
+ static inline bool
+ gimple_atomic_has_expected (const_gimple gs)
+ {
+   return gimple_atomic_kind (gs) == GIMPLE_ATOMIC_COMPARE_EXCHANGE;
+ }
+ 
+ /* Return the expected field of atomic operation GS.  */
+ 
+ static inline tree
+ gimple_atomic_expected (const_gimple gs)
+ {
+   GIMPLE_CHECK (gs, GIMPLE_ATOMIC);
+   gcc_assert (gimple_atomic_has_expected (gs));
+   return gimple_op (gs, 3);
+ }
+ 
+ /* Return a pointer to the expected field of atomic operation GS.  */
+ 
+ static inline tree *
+ gimple_atomic_expected_ptr (const_gimple gs)
+ {
+   GIMPLE_CHECK (gs, GIMPLE_ATOMIC);
+   gcc_assert (gimple_atomic_has_expected (gs));
+   return gimple_op_ptr (gs, 3);
+ }
+ 
+ /* Set the expected field of atomic operation GS.  */
+ 
+ static inline void
+ gimple_atomic_set_expected (gimple gs, tree t)
+ {
+   GIMPLE_CHECK (gs, GIMPLE_ATOMIC);
+   gcc_assert (gimple_atomic_has_expected (gs));
+   gimple_set_op (gs, 3, t);
+ }
+ 
+ /* Return true if atomic operation GS has a fail order field.  */
+ 
+ static inline bool
+ gimple_atomic_has_fail_order (const_gimple gs)
+ {
+   return gimple_atomic_kind (gs) == GIMPLE_ATOMIC_COMPARE_EXCHANGE;
+ }
+ 
+ /* Return the fail_order field of atomic operation GS.  */
+ 
+ static inline tree
+ gimple_atomic_fail_order (const_gimple gs)
+ {
+   GIMPLE_CHECK (gs, GIMPLE_ATOMIC);
+   gcc_assert (gimple_atomic_has_fail_order (gs));
+   return gimple_op (gs, 4);
+ }
+ 
+ /* Return a pointer to the fail_order field of atomic operation GS.  */
+ 
+ static inline tree *
+ gimple_atomic_fail_order_ptr (const_gimple gs)
+ {
+   GIMPLE_CHECK (gs, GIMPLE_ATOMIC);
+   gcc_assert (gimple_atomic_has_fail_order (gs));
+   return gimple_op_ptr (gs, 4);
+ }
+ 
+ 
+ /* Set the fail_order field of atomic operation GS.  */
+ 
+ static inline void
+ gimple_atomic_set_fail_order (gimple gs, tree t)
+ {
+   GIMPLE_CHECK (gs, GIMPLE_ATOMIC);
+   gcc_assert (gimple_atomic_has_fail_order (gs));
+   gimple_set_op (gs, 4, t);
+ }
+ 
+ /* Return the arithmetic operation tree code for atomic operation GS.  */
+ 
+ static inline enum tree_code
+ gimple_atomic_op_code (const_gimple gs)
+ {
+   GIMPLE_CHECK (gs, GIMPLE_ATOMIC);
+   gcc_assert (gimple_atomic_kind (gs) == GIMPLE_ATOMIC_FETCH_OP ||
+ 	      gimple_atomic_kind (gs) == GIMPLE_ATOMIC_OP_FETCH);
+   return (enum tree_code) gs->gsbase.subcode;
+ }
+ 
+ /* Set the arithmetic operation tree code for atomic operation GS.  */
+ 
+ static inline void
+ gimple_atomic_set_op_code (gimple gs, enum tree_code tc)
+ {
+   GIMPLE_CHECK (gs, GIMPLE_ATOMIC);
+   gcc_assert (gimple_atomic_kind (gs) == GIMPLE_ATOMIC_FETCH_OP ||
+ 	      gimple_atomic_kind (gs) == GIMPLE_ATOMIC_OP_FETCH);
+   gs->gsbase.subcode = tc;
+ }
+ 
+ /* Return true if atomic fence GS is a thread fence.  */
+ 
+ static inline bool
+ gimple_atomic_thread_fence (gimple gs)
+ {
+   GIMPLE_CHECK (gs, GIMPLE_ATOMIC);
+   gcc_assert (gimple_atomic_kind (gs) == GIMPLE_ATOMIC_FENCE);
+   return (gs->gsbase.subcode & GF_ATOMIC_THREAD_FENCE) != 0;
+ }
+ 
+ /* Set the thread fence field of atomic fence GS to THREAD.  */
+ 
+ static inline void 
+ gimple_atomic_set_thread_fence (gimple gs, bool thread)
+ {
+   GIMPLE_CHECK (gs, GIMPLE_ATOMIC);
+   gcc_assert (gimple_atomic_kind (gs) == GIMPLE_ATOMIC_FENCE);
+   if (thread)
+     gs->gsbase.subcode |= GF_ATOMIC_THREAD_FENCE;
+   else
+     gs->gsbase.subcode &= ~GF_ATOMIC_THREAD_FENCE;
+ }
+ 
+ /* Return the weak flag of atomic operation GS.  */
+ 
+ static inline bool
+ gimple_atomic_weak (gimple gs)
+ {
+   GIMPLE_CHECK (gs, GIMPLE_ATOMIC);
+   gcc_assert (gimple_atomic_kind (gs) == GIMPLE_ATOMIC_COMPARE_EXCHANGE);
+   return (gs->gsbase.subcode & GF_ATOMIC_WEAK) != 0;
+ }
+ 
+ /* Set the weak flag of atomic operation GS to WEAK.  */
+ 
+ static inline void
+ gimple_atomic_set_weak (gimple gs, bool weak)
+ {
+   GIMPLE_CHECK (gs, GIMPLE_ATOMIC);
+   gcc_assert (gimple_atomic_kind (gs) == GIMPLE_ATOMIC_COMPARE_EXCHANGE);
+   if (weak)
+     gs->gsbase.subcode |= GF_ATOMIC_WEAK;
+   else
+     gs->gsbase.subcode &= ~GF_ATOMIC_WEAK;
+ }
+ 
  /* Return true if GS is a GIMPLE_ASSIGN.  */
  
  static inline bool
Index: gimple.c
===================================================================
*** gimple.c	(revision 186098)
--- gimple.c	(working copy)
*************** gimple_build_return (tree retval)
*** 200,205 ****
--- 200,373 ----
    return s;
  }
  
+ /* Build a GIMPLE_ATOMIC statement of GIMPLE_ATOMIC_LOAD kind.
+    TYPE is the underlying type of the atomic operation.
+    TARGET is the atomic memory location being operated on.
+    ORDER is the memory model to be used.  */
+ 
+ gimple
+ gimple_build_atomic_load (tree type, tree target, tree order)
+ {
+   gimple s = gimple_build_with_ops (GIMPLE_ATOMIC, ERROR_MARK, 3);
+   gimple_atomic_set_kind (s, GIMPLE_ATOMIC_LOAD);
+   gimple_atomic_set_order (s, order);
+   gimple_atomic_set_target (s, target);
+   gimple_atomic_set_type (s, type);
+   gimple_set_has_volatile_ops (s, true);
+   return s;
+ }
+ 
+ /* Build a GIMPLE_ATOMIC statement of GIMPLE_ATOMIC_STORE kind.
+    TYPE is the underlying type of the atomic operation.
+    TARGET is the atomic memory location being operated on.
+    EXPR is the expression to be stored.
+    ORDER is the memory model to be used.  */
+ 
+ gimple
+ gimple_build_atomic_store (tree type, tree target, tree expr, tree order)
+ {
+   gimple s = gimple_build_with_ops (GIMPLE_ATOMIC, ERROR_MARK, 3);
+   gimple_atomic_set_kind (s, GIMPLE_ATOMIC_STORE);
+   gimple_atomic_set_order (s, order);
+   gimple_atomic_set_target (s, target);
+   gimple_atomic_set_expr (s, expr);
+   gimple_atomic_set_type (s, type);
+   gimple_set_has_volatile_ops (s, true);
+   return s;
+ }
+ 
+ /* Build a GIMPLE_ATOMIC statement of GIMPLE_ATOMIC_EXCHANGE kind.
+    TYPE is the underlying type of the atomic operation.
+    TARGET is the atomic memory location being operated on.
+    EXPR is the expression to be stored.
+    ORDER is the memory model to be used.  */
+ 
+ gimple
+ gimple_build_atomic_exchange (tree type, tree target, tree expr, tree order)
+ {
+   gimple s = gimple_build_with_ops (GIMPLE_ATOMIC, ERROR_MARK, 4);
+   gimple_atomic_set_kind (s, GIMPLE_ATOMIC_EXCHANGE);
+   gimple_atomic_set_order (s, order);
+   gimple_atomic_set_target (s, target);
+   gimple_atomic_set_expr (s, expr);
+   gimple_atomic_set_type (s, type);
+   gimple_set_has_volatile_ops (s, true);
+   return s;
+ }
+ 
+ /* Build a GIMPLE_ATOMIC statement of GIMPLE_ATOMIC_COMPARE_EXCHANGE kind.
+    TYPE is the underlying type of the atomic operation.
+    TARGET is the atomic memory location being operated on.
+    EXPECTED is the value thought to be in the atomic memory location.
+    EXPR is the expression to be stored if EXPECTED matches.
+    SUCCESS is the memory model to be used for a success operation.
+    FAIL is the memory model to be used for a failed operation.
+    WEAK is true if this is a weak compare_exchange, otherwise it is strong.  */
+ 
+ gimple
+ gimple_build_atomic_compare_exchange (tree type, tree target, tree expected,
+ 				      tree expr, tree success, tree fail,
+ 				      bool weak)
+ {
+   gimple s = gimple_build_with_ops (GIMPLE_ATOMIC, ERROR_MARK, 7);
+   gimple_atomic_set_kind (s, GIMPLE_ATOMIC_COMPARE_EXCHANGE);
+   gimple_atomic_set_order (s, success);
+   gimple_atomic_set_target (s, target);
+   gimple_atomic_set_expr (s, expr);
+   gimple_atomic_set_expected (s, expected);
+   gimple_atomic_set_fail_order (s, fail);
+   gimple_atomic_set_weak (s, weak);
+   gimple_atomic_set_type (s, type);
+   gimple_set_has_volatile_ops (s, true);
+   return s;
+ }
+ 
+ /* Build a GIMPLE_ATOMIC statement of GIMPLE_ATOMIC_FETCH_OP kind.
+    TYPE is the underlying type of the atomic operation.
+    TARGET is the atomic memory location being operated on.
+    EXPR is the expression to be stored.
+    OP is the tree code for the operation to be performed.
+    ORDER is the memory model to be used.  */
+ 
+ gimple
+ gimple_build_atomic_fetch_op (tree type, tree target, tree expr,
+ 			      enum tree_code op, tree order)
+ {
+   gimple s = gimple_build_with_ops (GIMPLE_ATOMIC, ERROR_MARK, 4);
+   gimple_atomic_set_kind (s, GIMPLE_ATOMIC_FETCH_OP);
+   gimple_atomic_set_order (s, order);
+   gimple_atomic_set_target (s, target);
+   gimple_atomic_set_expr (s, expr);
+   gimple_atomic_set_op_code (s, op);
+   gimple_atomic_set_type (s, type);
+   gimple_set_has_volatile_ops (s, true);
+   return s;
+ }
+ 
+ /* Build a GIMPLE_ATOMIC statement of GIMPLE_ATOMIC_OP_FETCH kind.
+    TYPE is the underlying type of the atomic operation.
+    TARGET is the atomic memory location being operated on.
+    EXPR is the expression to be stored.
+    OP is the tree code for the operation to be performed.
+    ORDER is the memory model to be used.  */
+ 
+ gimple
+ gimple_build_atomic_op_fetch (tree type, tree target, tree expr,
+ 			      enum tree_code op, tree order)
+ {
+   gimple s = gimple_build_atomic_fetch_op (type, target, expr, op, order);
+   gimple_atomic_set_kind (s, GIMPLE_ATOMIC_OP_FETCH);
+   return s;
+ }
+ 
+ /* Build a GIMPLE_ATOMIC statement of GIMPLE_ATOMIC_TEST_AND_SET kind.
+    TARGET is the atomic memory location being operated on.
+    ORDER is the memory model to be used.  */
+ 
+ gimple
+ gimple_build_atomic_test_and_set (tree target, tree order)
+ {
+   gimple s = gimple_build_with_ops (GIMPLE_ATOMIC, ERROR_MARK, 3);
+   gimple_atomic_set_kind (s, GIMPLE_ATOMIC_TEST_AND_SET);
+   gimple_atomic_set_order (s, order);
+   gimple_atomic_set_target (s, target);
+   gimple_atomic_set_type (s, boolean_type_node);
+   gimple_set_has_volatile_ops (s, true);
+   return s;
+ }
+ 
+ /* Build a GIMPLE_ATOMIC statement of GIMPLE_ATOMIC_CLEAR kind.
+    TARGET is the atomic memory location being operated on.
+    ORDER is the memory model to be used.  */
+ 
+ gimple
+ gimple_build_atomic_clear (tree target, tree order)
+ {
+   gimple s = gimple_build_with_ops (GIMPLE_ATOMIC, ERROR_MARK, 2);
+   gimple_atomic_set_kind (s, GIMPLE_ATOMIC_CLEAR);
+   gimple_atomic_set_order (s, order);
+   gimple_atomic_set_target (s, target);
+   gimple_atomic_set_type (s, boolean_type_node);
+   gimple_set_has_volatile_ops (s, true);
+   return s;
+ }
+ 
+ /* Build a GIMPLE_ATOMIC statement of GIMPLE_ATOMIC_FENCE kind.
+    ORDER is the memory model to be used.
+    THREAD is true if this is a thread barrier, otherwise it is a
+    signal barrier for just the local CPU.  */
+ 
+ gimple
+ gimple_build_atomic_fence (tree order, bool thread)
+ {
+   gimple s = gimple_build_with_ops (GIMPLE_ATOMIC, ERROR_MARK, 1);
+   gimple_atomic_set_kind (s, GIMPLE_ATOMIC_FENCE);
+   gimple_atomic_set_order (s, order);
+   gimple_atomic_set_thread_fence (s, thread);
+   gimple_set_has_volatile_ops (s, true);
+   return s;
+ }
+ 
  /* Reset alias information on call S.  */
  
  void
*************** walk_gimple_op (gimple stmt, walk_tree_f
*** 1519,1524 ****
--- 1687,1721 ----
  	}
        break;
  
+     case GIMPLE_ATOMIC:
+       if (wi)
+ 	wi->val_only = true;
+ 
+       /* Walk the RHS.  */
+       for (i = 0; i < gimple_atomic_num_rhs (stmt) ; i++)
+        {
+  	 ret = walk_tree (gimple_op_ptr (stmt, i), callback_op, wi,
+ 			  pset);
+ 	 if (ret)
+ 	   return ret;
+        }
+ 
+       if (wi)
+ 	wi->is_lhs = true;
+ 
+       for (i = 0; i < gimple_atomic_num_lhs (stmt) ; i++)
+        {
+  	 ret = walk_tree (gimple_atomic_lhs_ptr (stmt, i), callback_op, wi,
+ 			  pset);
+ 	 if (ret)
+ 	   return ret;
+        }
+ 
+       if (wi)
+ 	wi->is_lhs = false;
+ 
+       break;
+ 
      case GIMPLE_CALL:
        if (wi)
  	{
*************** walk_stmt_load_store_addr_ops (gimple st
*** 5281,5286 ****
--- 5478,5518 ----
  	    }
  	}
      }
+   else if (is_gimple_atomic (stmt))
+     {
+       tree t;
+       if (visit_store)
+         {
+ 	  for (i = 0; i < gimple_atomic_num_lhs (stmt); i++)
+ 	    {
+ 	      t = gimple_atomic_lhs (stmt, i);
+ 	      if (t)
+ 	        {
+ 		  t = get_base_loadstore (t);
+ 		  if (t)
+ 		    ret |= visit_store (stmt, t, data);
+ 		}
+ 	    }
+ 	}
+       if (visit_load || visit_addr)
+         {
+ 	  for (i = 0; i < gimple_atomic_num_rhs (stmt); i++)
+ 	    {
+ 	      t = gimple_op (stmt, i);
+ 	      if (t)
+ 	        {
+ 		  if (visit_addr && TREE_CODE (t) == ADDR_EXPR)
+ 		    ret |= visit_addr (stmt, TREE_OPERAND (t, 0), data);
+ 		  else if (visit_load)
+ 		    {
+ 		      t = get_base_loadstore (t);
+ 		      if (t)
+ 			ret |= visit_load (stmt, t, data);
+ 		    }
+ 		}
+ 	    }
+ 	}    
+       }
    else if (is_gimple_call (stmt))
      {
        if (visit_store)
Index: cfgexpand.c
===================================================================
*** cfgexpand.c	(revision 186098)
--- cfgexpand.c	(working copy)
*************** mark_transaction_restart_calls (gimple s
*** 1990,1995 ****
--- 1990,2076 ----
      }
  }
  
+ 
+ /* Expand a GIMPLE_ATOMIC statement STMT into RTL.  */
+ static void
+ expand_atomic_stmt (gimple stmt)
+ {
+   enum gimple_atomic_kind kind = gimple_atomic_kind (stmt);
+   bool emitted = false;
+   bool try_inline;
+ 
+   /* Fences, test_and_set, and clear operations are required to be inlined.  */
+   try_inline = flag_inline_atomics || (kind == GIMPLE_ATOMIC_FENCE) ||
+ 	       (kind == GIMPLE_ATOMIC_TEST_AND_SET) ||
+ 	       (kind == GIMPLE_ATOMIC_CLEAR);
+ 
+   /* Try emitting inline code if requested.  */
+   if (try_inline)
+     {
+       switch (kind)
+ 	{
+ 	case GIMPLE_ATOMIC_LOAD:
+ 	  emitted = expand_gimple_atomic_load (stmt);
+ 	  break;
+ 
+ 	case GIMPLE_ATOMIC_STORE:
+ 	  emitted = expand_gimple_atomic_store (stmt);
+ 	  break;
+ 
+ 	case GIMPLE_ATOMIC_EXCHANGE:
+ 	  emitted = expand_gimple_atomic_exchange (stmt);
+ 	  break;
+ 
+ 	case GIMPLE_ATOMIC_COMPARE_EXCHANGE:
+ 	  emitted = expand_gimple_atomic_compare_exchange (stmt);
+ 	  break;
+ 
+ 	case GIMPLE_ATOMIC_FETCH_OP:
+ 	  emitted = expand_gimple_atomic_fetch_op (stmt);
+ 	  break;
+ 
+ 	case GIMPLE_ATOMIC_OP_FETCH:
+ 	  emitted = expand_gimple_atomic_op_fetch (stmt);
+ 	  break;
+ 
+ 	case GIMPLE_ATOMIC_TEST_AND_SET:
+ 	  expand_gimple_atomic_test_and_set (stmt);
+ 	  return;
+ 
+ 	case GIMPLE_ATOMIC_CLEAR:
+ 	  expand_gimple_atomic_clear (stmt);
+ 	  return;
+ 
+ 	case GIMPLE_ATOMIC_FENCE:
+ 	  expand_gimple_atomic_fence (stmt);
+ 	  return;
+ 
+ 	default:
+ 	  gcc_unreachable ();
+ 	}
+     }
+ 
+   /* If no code was emitted, issue a library call.  */
+   if (!emitted)
+     {
+       switch (kind)
+         {
+ 	case GIMPLE_ATOMIC_LOAD:
+ 	case GIMPLE_ATOMIC_STORE:
+ 	case GIMPLE_ATOMIC_EXCHANGE:
+ 	case GIMPLE_ATOMIC_COMPARE_EXCHANGE:
+ 	case GIMPLE_ATOMIC_FETCH_OP:
+ 	case GIMPLE_ATOMIC_OP_FETCH:
+ 	  expand_gimple_atomic_library_call (stmt);
+ 	  return;
+ 
+ 	default:
+ 	  /* The remaining kinds must be inlined or unsupported.  */
+ 	  gcc_unreachable ();
+ 	}
+     }
+ }
+ 
  /* A subroutine of expand_gimple_stmt_1, expanding one GIMPLE_CALL
     statement STMT.  */
  
*************** expand_call_stmt (gimple stmt)
*** 2079,2084 ****
--- 2160,2277 ----
    mark_transaction_restart_calls (stmt);
  }
  
+ 
+ /* A subroutine of expand_gimple_assign.  Take care of moving the RHS of an
+    assignment into TARGET, which is of type TARGET_TREE_TYPE.  Moves can
+    be NONTEMPORAL.  */
+ 
+ void
+ expand_gimple_assign_move (tree target_tree_type, rtx target, rtx rhs,
+ 			   bool nontemporal) 
+ {
+   bool promoted = false;
+ 
+   if (GET_CODE (target) == SUBREG && SUBREG_PROMOTED_VAR_P (target))
+     promoted = true;
+ 
+   if (rhs == target)
+     ;
+   else if (promoted)
+     {
+       int unsignedp = SUBREG_PROMOTED_UNSIGNED_P (target);
+       /* If TEMP is a VOIDmode constant, use convert_modes to make
+ 	 sure that we properly convert it.  */
+       if (CONSTANT_P (rhs) && GET_MODE (rhs) == VOIDmode)
+ 	{
+ 	  rhs = convert_modes (GET_MODE (target),
+ 				TYPE_MODE (target_tree_type),
+ 				rhs, unsignedp);
+ 	  rhs = convert_modes (GET_MODE (SUBREG_REG (target)),
+ 				GET_MODE (target), rhs, unsignedp);
+ 	}
+ 
+       convert_move (SUBREG_REG (target), rhs, unsignedp);
+     }
+   else if (nontemporal && emit_storent_insn (target, rhs))
+     ;
+   else
+     {
+       rhs = force_operand (rhs, target);
+       if (rhs != target)
+ 	emit_move_insn (target, rhs);
+     }
+ }
+ 
+ /* A subroutine of expand_gimple_stmt_1, expanding one GIMPLE_ASSIGN
+    statement STMT.  */
+ 
+ static void
+ expand_gimple_assign (gimple stmt)
+ {
+   tree lhs = gimple_assign_lhs (stmt);
+ 
+   /* Tree expand used to fiddle with |= and &= of two bitfield
+      COMPONENT_REFs here.  This can't happen with gimple, the LHS
+      of binary assigns must be a gimple reg.  */
+ 
+   if (TREE_CODE (lhs) != SSA_NAME
+       || get_gimple_rhs_class (gimple_expr_code (stmt))
+ 	 == GIMPLE_SINGLE_RHS)
+     {
+       tree rhs = gimple_assign_rhs1 (stmt);
+       gcc_assert (get_gimple_rhs_class (gimple_expr_code (stmt))
+ 		  == GIMPLE_SINGLE_RHS);
+       if (gimple_has_location (stmt) && CAN_HAVE_LOCATION_P (rhs))
+ 	SET_EXPR_LOCATION (rhs, gimple_location (stmt));
+       if (TREE_CLOBBER_P (rhs))
+ 	/* This is a clobber to mark the going out of scope for
+ 	   this LHS.  */
+ 	;
+       else
+ 	expand_assignment (lhs, rhs,
+ 			   gimple_assign_nontemporal_move_p (stmt));
+     }
+   else
+     {
+       rtx target, temp;
+       bool nontemporal = gimple_assign_nontemporal_move_p (stmt);
+       struct separate_ops ops;
+       bool promoted = false;
+ 
+       target = expand_expr (lhs, NULL_RTX, VOIDmode, EXPAND_WRITE);
+       if (GET_CODE (target) == SUBREG && SUBREG_PROMOTED_VAR_P (target))
+ 	promoted = true;
+ 
+       ops.code = gimple_assign_rhs_code (stmt);
+       ops.type = TREE_TYPE (lhs);
+       switch (get_gimple_rhs_class (gimple_expr_code (stmt)))
+ 	{
+ 	  case GIMPLE_TERNARY_RHS:
+ 	    ops.op2 = gimple_assign_rhs3 (stmt);
+ 	    /* Fallthru */
+ 	  case GIMPLE_BINARY_RHS:
+ 	    ops.op1 = gimple_assign_rhs2 (stmt);
+ 	    /* Fallthru */
+ 	  case GIMPLE_UNARY_RHS:
+ 	    ops.op0 = gimple_assign_rhs1 (stmt);
+ 	    break;
+ 	  default:
+ 	    gcc_unreachable ();
+ 	}
+       ops.location = gimple_location (stmt);
+ 
+       /* If we want to use a nontemporal store, force the value to
+ 	 register first.  If we store into a promoted register,
+ 	 don't directly expand to target.  */
+       temp = nontemporal || promoted ? NULL_RTX : target;
+       temp = expand_expr_real_2 (&ops, temp, GET_MODE (target),
+ 				 EXPAND_NORMAL);
+ 
+       expand_gimple_assign_move (TREE_TYPE (lhs), target, temp, nontemporal);
+     }
+ }
+ 
+ 
  /* A subroutine of expand_gimple_stmt, expanding one gimple statement
     STMT that doesn't require special handling for outgoing edges.  That
     is no tailcalls and no GIMPLE_COND.  */
*************** expand_gimple_stmt_1 (gimple stmt)
*** 2115,2120 ****
--- 2308,2316 ----
      case GIMPLE_CALL:
        expand_call_stmt (stmt);
        break;
+     case GIMPLE_ATOMIC:
+       expand_atomic_stmt (stmt);
+       break;
  
      case GIMPLE_RETURN:
        op0 = gimple_return_retval (stmt);
*************** expand_gimple_stmt_1 (gimple stmt)
*** 2147,2240 ****
        break;
  
      case GIMPLE_ASSIGN:
!       {
! 	tree lhs = gimple_assign_lhs (stmt);
! 
! 	/* Tree expand used to fiddle with |= and &= of two bitfield
! 	   COMPONENT_REFs here.  This can't happen with gimple, the LHS
! 	   of binary assigns must be a gimple reg.  */
! 
! 	if (TREE_CODE (lhs) != SSA_NAME
! 	    || get_gimple_rhs_class (gimple_expr_code (stmt))
! 	       == GIMPLE_SINGLE_RHS)
! 	  {
! 	    tree rhs = gimple_assign_rhs1 (stmt);
! 	    gcc_assert (get_gimple_rhs_class (gimple_expr_code (stmt))
! 			== GIMPLE_SINGLE_RHS);
! 	    if (gimple_has_location (stmt) && CAN_HAVE_LOCATION_P (rhs))
! 	      SET_EXPR_LOCATION (rhs, gimple_location (stmt));
! 	    if (TREE_CLOBBER_P (rhs))
! 	      /* This is a clobber to mark the going out of scope for
! 		 this LHS.  */
! 	      ;
! 	    else
! 	      expand_assignment (lhs, rhs,
! 				 gimple_assign_nontemporal_move_p (stmt));
! 	  }
! 	else
! 	  {
! 	    rtx target, temp;
! 	    bool nontemporal = gimple_assign_nontemporal_move_p (stmt);
! 	    struct separate_ops ops;
! 	    bool promoted = false;
! 
! 	    target = expand_expr (lhs, NULL_RTX, VOIDmode, EXPAND_WRITE);
! 	    if (GET_CODE (target) == SUBREG && SUBREG_PROMOTED_VAR_P (target))
! 	      promoted = true;
! 
! 	    ops.code = gimple_assign_rhs_code (stmt);
! 	    ops.type = TREE_TYPE (lhs);
! 	    switch (get_gimple_rhs_class (gimple_expr_code (stmt)))
! 	      {
! 		case GIMPLE_TERNARY_RHS:
! 		  ops.op2 = gimple_assign_rhs3 (stmt);
! 		  /* Fallthru */
! 		case GIMPLE_BINARY_RHS:
! 		  ops.op1 = gimple_assign_rhs2 (stmt);
! 		  /* Fallthru */
! 		case GIMPLE_UNARY_RHS:
! 		  ops.op0 = gimple_assign_rhs1 (stmt);
! 		  break;
! 		default:
! 		  gcc_unreachable ();
! 	      }
! 	    ops.location = gimple_location (stmt);
! 
! 	    /* If we want to use a nontemporal store, force the value to
! 	       register first.  If we store into a promoted register,
! 	       don't directly expand to target.  */
! 	    temp = nontemporal || promoted ? NULL_RTX : target;
! 	    temp = expand_expr_real_2 (&ops, temp, GET_MODE (target),
! 				       EXPAND_NORMAL);
! 
! 	    if (temp == target)
! 	      ;
! 	    else if (promoted)
! 	      {
! 		int unsignedp = SUBREG_PROMOTED_UNSIGNED_P (target);
! 		/* If TEMP is a VOIDmode constant, use convert_modes to make
! 		   sure that we properly convert it.  */
! 		if (CONSTANT_P (temp) && GET_MODE (temp) == VOIDmode)
! 		  {
! 		    temp = convert_modes (GET_MODE (target),
! 					  TYPE_MODE (ops.type),
! 					  temp, unsignedp);
! 		    temp = convert_modes (GET_MODE (SUBREG_REG (target)),
! 					  GET_MODE (target), temp, unsignedp);
! 		  }
! 
! 		convert_move (SUBREG_REG (target), temp, unsignedp);
! 	      }
! 	    else if (nontemporal && emit_storent_insn (target, temp))
! 	      ;
! 	    else
! 	      {
! 		temp = force_operand (temp, target);
! 		if (temp != target)
! 		  emit_move_insn (target, temp);
! 	      }
! 	  }
!       }
        break;
  
      default:
--- 2343,2349 ----
        break;
  
      case GIMPLE_ASSIGN:
!       expand_gimple_assign (stmt);
        break;
  
      default:
Index: Makefile.in
===================================================================
*** Makefile.in	(revision 186098)
--- Makefile.in	(working copy)
*************** OBJS = \
*** 1356,1361 ****
--- 1356,1362 ----
  	tracer.o \
  	trans-mem.o \
  	tree-affine.o \
+ 	tree-atomic.o \
  	tree-call-cdce.o \
  	tree-cfg.o \
  	tree-cfgcleanup.o \
*************** omp-low.o : omp-low.c $(CONFIG_H) $(SYST
*** 2585,2590 ****
--- 2586,2596 ----
     $(TREE_FLOW_H) $(TIMEVAR_H) $(FLAGS_H) $(EXPR_H) $(DIAGNOSTIC_CORE_H) \
     $(TREE_PASS_H) $(GGC_H) $(EXCEPT_H) $(SPLAY_TREE_H) $(OPTABS_H) \
     $(CFGLOOP_H) tree-iterator.h gt-omp-low.h
+ tree-atomic.o : tree-atomic.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) \
+    $(TREE_H) $(RTL_H) $(GIMPLE_H) $(TREE_INLINE_H) langhooks.h \
+    $(DIAGNOSTIC_CORE_H) $(TREE_FLOW_H) $(TIMEVAR_H) $(FLAGS_H) $(EXPR_H) \
+    $(DIAGNOSTIC_CORE_H) $(TREE_PASS_H) $(GGC_H) $(EXCEPT_H) $(SPLAY_TREE_H) \
+    $(OPTABS_H) $(CFGLOOP_H) tree-iterator.h
  tree-browser.o : tree-browser.c tree-browser.def $(CONFIG_H) $(SYSTEM_H) \
     coretypes.h $(TREE_H) tree-pretty-print.h
  omega.o : omega.c omega.h $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TREE_H) \
Index: tree-atomic.c
===================================================================
*** tree-atomic.c	(revision 0)
--- tree-atomic.c	(revision 0)
***************
*** 0 ****
--- 1,967 ----
+ /* Pass for lowering and manipulating atomic tree codes.
+    Various __builtin_atomic function calls are turned into atomic tree
+    expressions. 
+    Any memory references of type atomic are also translated into
+    the appropriate atomic expression.
+    Contributed by Andrew MacLeod <amacleod@redhat.com>
+ 
+    Copyright (C) 2012
+    Free Software Foundation, Inc.
+ 
+ This file is part of GCC.
+ 
+ GCC is free software; you can redistribute it and/or modify it under
+ the terms of the GNU General Public License as published by the Free
+ Software Foundation; either version 3, or (at your option) any later
+ version.
+ 
+ GCC is distributed in the hope that it will be useful, but WITHOUT ANY
+ WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ for more details.
+ 
+ You should have received a copy of the GNU General Public License
+ along with GCC; see the file COPYING3.  If not see
+ <http://www.gnu.org/licenses/>.  */
+ 
+ #include "config.h"
+ #include "system.h"
+ #include "coretypes.h"
+ #include "tm.h"
+ #include "tree.h"
+ #include "rtl.h"
+ #include "gimple.h"
+ #include "tree-iterator.h"
+ #include "tree-inline.h"
+ #include "langhooks.h"
+ #include "diagnostic-core.h"
+ #include "tree-flow.h"
+ #include "timevar.h"
+ #include "flags.h"
+ #include "function.h"
+ #include "expr.h"
+ #include "tree-pass.h"
+ #include "ggc.h"
+ #include "except.h"
+ #include "splay-tree.h"
+ #include "optabs.h"
+ #include "cfgloop.h"
+ #include "tree-pretty-print.h"
+ #include "gimple-pretty-print.h"
+ 
+ 
+ /* Return the type of atomic operation STMT.  */
+ 
+ static tree
+ get_atomic_type (gimple stmt)
+ {
+   tree t;
+ 
+   t = gimple_atomic_type (stmt);
+   gcc_assert (INTEGRAL_TYPE_P (t));
+ 
+   return t;
+ }
+ 
+ 
+ /* Extract memory model from tree EXP, and verify it is valid for KIND.  */
+ 
+ static enum memmodel
+ get_memmodel (enum gimple_atomic_kind kind, tree exp)
+ {
+   rtx op;
+   enum memmodel model;
+ 
+   /* If the parameter is not a constant, it's a run time value so we'll just
+      convert it to MEMMODEL_SEQ_CST to avoid annoying runtime checking.  */
+   if (TREE_CODE (exp) != INTEGER_CST)
+     return MEMMODEL_SEQ_CST;
+ 
+   op = expand_normal (exp);
+   if (INTVAL (op) < 0 || INTVAL (op) >= MEMMODEL_LAST)
+     {
+       warning (OPT_Winvalid_memory_model,
+ 	       "invalid memory model argument for atomic operation");
+       return MEMMODEL_SEQ_CST;
+     }
+   model = (enum memmodel) INTVAL (op);
+ 
+   switch (kind)
+     {
+     case GIMPLE_ATOMIC_LOAD:
+       if (model != MEMMODEL_RELEASE && model != MEMMODEL_ACQ_REL)
+         return model;
+       break;
+ 
+     case GIMPLE_ATOMIC_STORE:
+       if (model == MEMMODEL_RELAXED || model == MEMMODEL_SEQ_CST ||
+ 	  model == MEMMODEL_RELEASE)
+ 	 return model;
+       break;
+ 
+     case GIMPLE_ATOMIC_EXCHANGE:
+       if (model != MEMMODEL_CONSUME)
+         return model;
+       break;
+ 
+     case GIMPLE_ATOMIC_CLEAR:
+       if (model != MEMMODEL_ACQUIRE && model != MEMMODEL_ACQ_REL)
+         return model;
+       break;
+ 
+     default:
+       return model;
+     }
+ 
+   error ("invalid memory model for atomic operation");
+   return MEMMODEL_SEQ_CST;
+ }
+ 
+ /* Verify that all the memory models are valid for STMT.  */
+ 
+ void
+ gimple_verify_memmodel (gimple stmt)
+ {
+   enum memmodel a,b;
+ 
+   a = get_memmodel (gimple_atomic_kind (stmt), gimple_atomic_order (stmt));
+ 
+   if (gimple_atomic_kind (stmt) != GIMPLE_ATOMIC_COMPARE_EXCHANGE)
+     return;
+ 
+   b = get_memmodel (gimple_atomic_kind (stmt), gimple_atomic_fail_order (stmt));
+   if (b == MEMMODEL_RELEASE || b == MEMMODEL_ACQ_REL)
+     error ("invalid failure memory model for %<__atomic_compare_exchange%>");
+   if (b > a)
+     error ("failure memory model cannot be stronger than success "
+ 	   "memory model for %<__atomic_compare_exchange%>");
+ }
+ 
+ /* Generate RTL for accessing the atomic location LOC in MODE.  */
+ 
+ static rtx
+ expand_atomic_target (tree loc, enum machine_mode mode)
+ {
+   rtx addr, mem;
+ 
+   addr = expand_expr (loc, NULL_RTX, ptr_mode, EXPAND_SUM);
+   addr = convert_memory_address (Pmode, addr);
+ 
+   /* Note that we explicitly do not want any alias information for this
+      memory, so that we kill all other live memories.  Otherwise we don't
+      satisfy the full barrier semantics of the intrinsic.  */
+   mem = validize_mem (gen_rtx_MEM (mode, addr));
+ 
+   /* The alignment needs to be at least according to that of the mode.  */
+   set_mem_align (mem, MAX (GET_MODE_ALIGNMENT (mode),
+ 			   get_pointer_alignment (loc)));
+   set_mem_alias_set (mem, ALIAS_SET_MEMORY_BARRIER);
+   MEM_VOLATILE_P (mem) = 1;
+ 
+   return mem;
+ }
+ 
+ 
+ /* Make sure an argument is in the right mode.
+    EXP is the tree argument. 
+    MODE is the mode it should be in.  */
+ 
+ static rtx
+ expand_expr_force_mode (tree exp, enum machine_mode mode)
+ {
+   rtx val;
+   enum machine_mode old_mode;
+ 
+   val = expand_expr (exp, NULL_RTX, mode, EXPAND_NORMAL);
+   /* If VAL is promoted to a wider mode, convert it back to MODE.  Take care
+      of CONST_INTs, where we know the old_mode only from the call argument.  */
+ 
+   old_mode = GET_MODE (val);
+   if (old_mode == VOIDmode)
+     old_mode = TYPE_MODE (TREE_TYPE (exp));
+   val = convert_modes (mode, old_mode, val, 1);
+   return val;
+ }
+ 
+ /* Get the RTL for lhs #INDEX of STMT.  */
+ 
+ static rtx
+ get_atomic_lhs_rtx (gimple stmt, unsigned index)
+ {
+   tree tree_lhs;
+   rtx rtl_lhs;
+   
+   tree_lhs = gimple_atomic_lhs (stmt, index);
+   if (!tree_lhs)
+     return NULL_RTX;
+ 
+   gcc_assert (TREE_CODE (tree_lhs) == SSA_NAME);
+ 
+   rtl_lhs = expand_expr (tree_lhs, NULL_RTX, VOIDmode, EXPAND_WRITE);
+   return rtl_lhs;
+ }
+ 
+ /* Expand STMT into a library call.  */
+ 
+ void
+ expand_gimple_atomic_library_call (gimple stmt) 
+ {
+   /* Verify the models if inlining hasn't been attempted.  */
+   if (!flag_inline_atomics)
+     get_memmodel (gimple_atomic_kind (stmt), gimple_atomic_order (stmt));
+ 
+   /* Trigger error to come and look so we can complete writing this.  */
+   gcc_assert (stmt == NULL);
+ }
+ 
+ /* Expand atomic load STMT into RTL.  Return true if successful.  */
+ 
+ bool
+ expand_gimple_atomic_load (gimple stmt)
+ {
+   enum machine_mode mode;
+   enum memmodel model;
+   tree type;
+   rtx mem, rtl_rhs, rtl_lhs;
+ 
+   gcc_assert (gimple_atomic_kind (stmt) == GIMPLE_ATOMIC_LOAD);
+ 
+   type = get_atomic_type (stmt);
+   mode = mode_for_size (tree_low_cst (TYPE_SIZE (type), 1), MODE_INT, 0);
+   gcc_assert (mode != BLKmode);
+ 
+   model = get_memmodel (gimple_atomic_kind (stmt), gimple_atomic_order (stmt));
+ 
+   mem = expand_atomic_target (gimple_atomic_target (stmt), mode);
+ 
+   rtl_lhs = get_atomic_lhs_rtx (stmt, 0);
+   rtl_rhs = expand_atomic_load (rtl_lhs, mem, model);
+ 
+   /* If no rtl is generated, indicate the code was not inlined.  */
+   if (!rtl_rhs)
+     return false;
+ 
+   if (rtl_lhs)
+     expand_gimple_assign_move (TREE_TYPE (type), rtl_lhs, rtl_rhs, false);
+   return true;
+ }
+ 
+ 
+ /* Expand atomic store STMT into RTL.  Return true if successful.  */
+ 
+ bool
+ expand_gimple_atomic_store (gimple stmt)
+ {
+   rtx mem, val, rtl_rhs;
+   enum memmodel model;
+   enum machine_mode mode;
+   tree type;
+ 
+   gcc_assert (gimple_atomic_kind (stmt) == GIMPLE_ATOMIC_STORE);
+ 
+   type = get_atomic_type (stmt);
+   mode = mode_for_size (tree_low_cst (TYPE_SIZE (type), 1), MODE_INT, 0);
+   gcc_assert (mode != BLKmode);
+ 
+   model = get_memmodel (gimple_atomic_kind (stmt), gimple_atomic_order (stmt));
+ 
+   /* Expand the operands.  */
+   mem = expand_atomic_target (gimple_atomic_target (stmt), mode);
+   val = expand_expr_force_mode (gimple_atomic_expr (stmt), mode);
+ 
+   rtl_rhs = expand_atomic_store (mem, val, model, false);
+ 
+   /* If no rtl is generated, indicate the code was not inlined.  */
+   if (!rtl_rhs)
+     return false;
+ 
+   return true;
+ }
+ 
+ /* Expand atomic exchange STMT into RTL.  Return true if successful.  */
+ 
+ bool
+ expand_gimple_atomic_exchange (gimple stmt)
+ {
+   rtx mem, val, rtl_rhs, rtl_lhs;
+   enum memmodel model;
+   enum machine_mode mode;
+   tree type;
+ 
+   gcc_assert (gimple_atomic_kind (stmt) == GIMPLE_ATOMIC_EXCHANGE);
+ 
+   type = get_atomic_type (stmt);
+   mode = mode_for_size (tree_low_cst (TYPE_SIZE (type), 1), MODE_INT, 0);
+   gcc_assert (mode != BLKmode);
+ 
+   model = get_memmodel (gimple_atomic_kind (stmt), gimple_atomic_order (stmt));
+ 
+   /* Expand the operands.  */
+   mem = expand_atomic_target (gimple_atomic_target (stmt), mode);
+   val = expand_expr_force_mode (gimple_atomic_expr (stmt), mode);
+ 
+   rtl_lhs = get_atomic_lhs_rtx (stmt, 0);
+   rtl_rhs = expand_atomic_exchange (rtl_lhs, mem, val, model);
+ 
+   /* If no rtl is generated, indicate the code was not inlined.  */
+   if (!rtl_rhs)
+     return false;
+ 
+   if (rtl_lhs)
+     expand_gimple_assign_move (TREE_TYPE (type), rtl_lhs, rtl_rhs, false);
+   return true;
+ }
+ 
+ /* Expand atomic compare_exchange STMT into RTL.  Return true if successful.  */
+ 
+ bool
+ expand_gimple_atomic_compare_exchange (gimple stmt)
+ {
+   rtx mem, val, rtl_lhs1, rtl_lhs2, expect;
+   rtx real_lhs1, real_lhs2;
+   enum memmodel success, failure;
+   enum machine_mode mode;
+   tree type;
+   bool is_weak, emitted;
+ 
+   gcc_assert (gimple_atomic_kind (stmt) == GIMPLE_ATOMIC_COMPARE_EXCHANGE);
+ 
+   type = get_atomic_type (stmt);
+   mode = mode_for_size (tree_low_cst (TYPE_SIZE (type), 1), MODE_INT, 0);
+   gcc_assert (mode != BLKmode);
+ 
+   success = get_memmodel (gimple_atomic_kind (stmt),  gimple_atomic_order (stmt));
+   failure = get_memmodel (gimple_atomic_kind (stmt), gimple_atomic_fail_order (stmt));
+ 
+   /* compare_exchange has additional restrictions on the failure order.  */
+   if (failure == MEMMODEL_RELEASE || failure == MEMMODEL_ACQ_REL)
+     error ("invalid failure memory model for %<__atomic_compare_exchange%>");
+ 
+   if (failure > success)
+     {
+       error ("failure memory model cannot be stronger than success "
+ 	     "memory model for %<__atomic_compare_exchange%>");
+     }
+   
+   /* Expand the operands.  */
+   mem = expand_atomic_target (gimple_atomic_target (stmt), mode);
+   val = expand_expr_force_mode (gimple_atomic_expr (stmt), mode);
+   expect = expand_expr_force_mode (gimple_atomic_expected (stmt), mode);
+   is_weak = gimple_atomic_weak (stmt);
+ 
+   rtl_lhs1 = get_atomic_lhs_rtx (stmt, 0);
+   rtl_lhs2 = get_atomic_lhs_rtx (stmt, 1);
+   real_lhs1 = rtl_lhs1;
+   real_lhs2 = rtl_lhs2;
+   emitted = expand_atomic_compare_and_swap (&real_lhs1, &real_lhs2, mem, expect,
+ 					    val, is_weak, success, failure);
+ 
+   /* If no rtl is generated, indicate the code was not inlined.  */
+   if (!emitted)
+     return false;
+ 
+   if (rtl_lhs1)
+     expand_gimple_assign_move (TREE_TYPE (type), rtl_lhs1, real_lhs1, false);
+   /* The second result is not optional.  */
+   expand_gimple_assign_move (TREE_TYPE (type), rtl_lhs2, real_lhs2, false);
+   return true;
+ }
+ 
+ /* Return the RTL code for the tree operation TCODE.  */
+ 
+ static enum rtx_code
+ rtx_code_from_tree_code (enum tree_code tcode)
+ {
+   switch (tcode)
+     {
+     case PLUS_EXPR:
+       return PLUS;
+     case MINUS_EXPR:
+       return MINUS;
+     case BIT_AND_EXPR:
+       return AND;
+     case BIT_IOR_EXPR:
+       return IOR;
+     case BIT_XOR_EXPR:
+       return XOR;
+     case BIT_NOT_EXPR:
+       return NOT;
+     default :
+       error ("invalid operation type in atomic fetch operation");
+     }
+   return PLUS;
+ }
+ 
+ 
+ /* Expand atomic fetch operation STMT into RTL.  FETCH_AFTER is true if the
+    value returned is the post-operation value.  Return true if successful.  */
+ 
+ static bool
+ expand_atomic_fetch (gimple stmt, bool fetch_after)
+ {
+   rtx mem, val, rtl_rhs, rtl_lhs;
+   enum memmodel model;
+   enum machine_mode mode;
+   tree type;
+   enum rtx_code rcode;
+ 
+ 
+   type = get_atomic_type (stmt);
+   mode = mode_for_size (tree_low_cst (TYPE_SIZE (type), 1), MODE_INT, 0);
+   gcc_assert (mode != BLKmode);
+ 
+   model = get_memmodel (gimple_atomic_kind (stmt), gimple_atomic_order (stmt));
+ 
+   /* Expand the operands.  */
+   mem = expand_atomic_target (gimple_atomic_target (stmt), mode);
+   val = expand_expr_force_mode (gimple_atomic_expr (stmt), mode);
+   rcode = rtx_code_from_tree_code (gimple_atomic_op_code (stmt));
+ 
+   rtl_lhs = get_atomic_lhs_rtx (stmt, 0);
+   rtl_rhs = expand_atomic_fetch_op (rtl_lhs, mem, val, rcode, model,
+ 				    fetch_after);
+ 
+   /* If no rtl is generated, indicate the code was not inlined.  */
+   if (!rtl_rhs)
+     return false;
+ 
+   /* If the result is used, make sure it's in the correct LHS.  */
+   if (rtl_lhs)
+     expand_gimple_assign_move (TREE_TYPE (type), rtl_lhs, rtl_rhs, false);
+   return true;
+ }
+ 
+ 
+ /* Expand atomic fetch_op operation STMT into RTL.  Return true if successful.  */
+ 
+ bool
+ expand_gimple_atomic_fetch_op (gimple stmt)
+ {
+   gcc_assert (gimple_atomic_kind (stmt) == GIMPLE_ATOMIC_FETCH_OP);
+   return expand_atomic_fetch (stmt, false);
+ }
+ 
+ /* Expand atomic op_fetch operation STMT into RTL.  Return true if successful.  */
+ 
+ bool
+ expand_gimple_atomic_op_fetch (gimple stmt)
+ {
+   gcc_assert (gimple_atomic_kind (stmt) == GIMPLE_ATOMIC_OP_FETCH);
+   return expand_atomic_fetch (stmt, true);
+ }
+ 
+ /* Expand atomic test_and_set STMT into RTL.  */
+ 
+ void
+ expand_gimple_atomic_test_and_set (gimple stmt)
+ {
+   rtx mem, rtl_rhs, rtl_lhs;
+   enum memmodel model;
+   enum machine_mode mode;
+   tree type;
+ 
+   gcc_assert (gimple_atomic_kind (stmt) == GIMPLE_ATOMIC_TEST_AND_SET);
+ 
+   type = get_atomic_type (stmt);
+   mode = mode_for_size (tree_low_cst (TYPE_SIZE (type), 1), MODE_INT, 0);
+   gcc_assert (mode != BLKmode);
+ 
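+   /* The memory location is always accessed in the mode of a bool.  */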
+   mode = mode_for_size (BOOL_TYPE_SIZE, MODE_INT, 0);
+   model = get_memmodel (gimple_atomic_kind (stmt), gimple_atomic_order (stmt));
+ 
+   /* Expand the operands.  */
+   mem = expand_atomic_target (gimple_atomic_target (stmt), mode);
+ 
+   rtl_lhs = get_atomic_lhs_rtx (stmt, 0);
+   rtl_rhs = expand_atomic_test_and_set (rtl_lhs, mem, model);
+ 
+   /* Test and set is not allowed to fail.  */
+   gcc_assert (rtl_rhs);
+ 
+   if (rtl_lhs)
+     expand_gimple_assign_move (TREE_TYPE (type), rtl_lhs, rtl_rhs, false);
+ }
+ 
+ #ifndef HAVE_atomic_clear
+ # define HAVE_atomic_clear 0
+ # define gen_atomic_clear(x,y) (gcc_unreachable (), NULL_RTX)
+ #endif
+ 
+ /* Expand atomic clear STMT into RTL.  */
+ 
+ void
+ expand_gimple_atomic_clear (gimple stmt)
+ {
+   rtx mem, ret;
+   enum memmodel model;
+   enum machine_mode mode;
+   tree type;
+ 
+   gcc_assert (gimple_atomic_kind (stmt) == GIMPLE_ATOMIC_CLEAR);
+ 
+   type = get_atomic_type (stmt);
+   mode = mode_for_size (tree_low_cst (TYPE_SIZE (type), 1), MODE_INT, 0);
+   gcc_assert (mode != BLKmode);
+ 
+   mode = mode_for_size (BOOL_TYPE_SIZE, MODE_INT, 0);
+   model = get_memmodel (gimple_atomic_kind (stmt), gimple_atomic_order (stmt));
+   mem = expand_atomic_target (gimple_atomic_target (stmt), mode);
+ 
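+   /* Use the target's atomic_clear pattern when one is available.  */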
+   if (HAVE_atomic_clear)
+     {
+       emit_insn (gen_atomic_clear (mem, model));
+       return;
+     }
+ 
+   /* Try issuing an __atomic_store, and allow fallback to a
+      __sync_lock_release.  Failing that, issue a plain store.  The only way
+      this can fail is if the bool type is larger than a word size.  Unlikely,
+      but handle it anyway for completeness.  Assume a single threaded model
+      since there is no atomic support in this case, and no barriers are
+      required.  */
+   ret = expand_atomic_store (mem, const0_rtx, model, true);
+   if (!ret)
+     emit_move_insn (mem, const0_rtx);
+ }
+ 
+ /* Expand atomic fence STMT into RTL.  */
+ 
+ void
+ expand_gimple_atomic_fence (gimple stmt)
+ {
+   enum memmodel model;
+   gcc_assert (gimple_atomic_kind (stmt) == GIMPLE_ATOMIC_FENCE);
+ 
+   model = get_memmodel (gimple_atomic_kind (stmt), gimple_atomic_order (stmt));
+ 
+   if (gimple_atomic_thread_fence (stmt))
+     expand_mem_thread_fence (model);
+   else
+     expand_mem_signal_fence (model);
+ }
+ 
+ 
+ /* Return true if FNDECL is an atomic builtin function that can be mapped to a
+    GIMPLE_ATOMIC statement.  */
+ 
+ static bool
+ is_built_in_atomic (tree fndecl)
+ {
+   enum built_in_function fcode;
+ 
+   if (!fndecl || !DECL_BUILT_IN (fndecl))
+     return false;
+ 
+   if (DECL_BUILT_IN_CLASS (fndecl) != BUILT_IN_NORMAL)
+     return false;
+ 
+   fcode = DECL_FUNCTION_CODE (fndecl);
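+   /* This range check relies on the __atomic builtins being declared
+      contiguously in sync-builtins.def.  */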
+   if (fcode >= BUILT_IN_ATOMIC_TEST_AND_SET
+       && fcode <= BUILT_IN_ATOMIC_SIGNAL_FENCE)
+     return true;
+ 
+   return false;
+ }
+ 
+ /* Return the base type for an atomic builtin function, where I is the
+    offset of the function code from the generic _N variant.  */
+ 
+ static tree
+ atomic_func_type (unsigned i)
+ {
+   gcc_assert (i <= 5);
+ 
+   switch (i)
+     {
+     case 0:
+       gcc_unreachable ();
+     case 1:
+       return unsigned_intQI_type_node;
+     case 2:
+       return unsigned_intHI_type_node;
+     case 3:
+       return unsigned_intSI_type_node;
+     case 4:
+       return unsigned_intDI_type_node;
+     case 5:
+       return unsigned_intTI_type_node;
+     default:
+       gcc_unreachable ();
+     }
+ }
+ 
+ /* Convert an atomic builtin call at GSI_P into a GIMPLE_ATOMIC statement.  */
+ 
+ static void
+ lower_atomic_call (gimple_stmt_iterator *gsi_p)
+ {
+   tree fndecl;
+   enum built_in_function fcode;
+   gimple s = NULL;
+   tree order;
+   tree target;
+   tree expr;
+   tree type;
+   enum tree_code op;
+   bool fetch_op;
+   gimple stmt = gsi_stmt (*gsi_p);
+ 
+   fndecl = gimple_call_fndecl (stmt);
+   gcc_assert (is_built_in_atomic (fndecl));
+ 
+   fcode = DECL_FUNCTION_CODE (fndecl);
+ 
+   switch (fcode)
+     {
+     case BUILT_IN_ATOMIC_COMPARE_EXCHANGE:
+     case BUILT_IN_ATOMIC_STORE:
+     case BUILT_IN_ATOMIC_LOAD:
+     case BUILT_IN_ATOMIC_EXCHANGE:
+       /* Do nothing for the generic functions at the moment.  */
+       return;
+ 
+     case BUILT_IN_ATOMIC_LOAD_N:
+     case BUILT_IN_ATOMIC_LOAD_1:
+     case BUILT_IN_ATOMIC_LOAD_2:
+     case BUILT_IN_ATOMIC_LOAD_4:
+     case BUILT_IN_ATOMIC_LOAD_8:
+     case BUILT_IN_ATOMIC_LOAD_16:
+       gcc_assert (gimple_call_num_args (stmt) == 2);
+       order = gimple_call_arg (stmt, 1);
+       target = gimple_call_arg (stmt, 0);
+       type = atomic_func_type (fcode - BUILT_IN_ATOMIC_LOAD_N);
+       s = gimple_build_atomic_load (type, target, order);
+       if (gimple_call_lhs (stmt))
+         gimple_atomic_set_lhs (s, 0, gimple_call_lhs (stmt));
+       break;
+ 
+ 
+     case BUILT_IN_ATOMIC_EXCHANGE_N:
+     case BUILT_IN_ATOMIC_EXCHANGE_1:
+     case BUILT_IN_ATOMIC_EXCHANGE_2:
+     case BUILT_IN_ATOMIC_EXCHANGE_4:
+     case BUILT_IN_ATOMIC_EXCHANGE_8:
+     case BUILT_IN_ATOMIC_EXCHANGE_16:
+       gcc_assert (gimple_call_num_args (stmt) == 3);
+       target = gimple_call_arg (stmt, 0);
+       expr = gimple_call_arg (stmt, 1);
+       order = gimple_call_arg (stmt, 2);
+       type = atomic_func_type (fcode - BUILT_IN_ATOMIC_EXCHANGE_N);
+       s = gimple_build_atomic_exchange (type, target, expr, order);
+       if (gimple_call_lhs (stmt))
+         gimple_atomic_set_lhs (s, 0, gimple_call_lhs (stmt));
+       break;
+ 
+     case BUILT_IN_ATOMIC_COMPARE_EXCHANGE_N:
+     case BUILT_IN_ATOMIC_COMPARE_EXCHANGE_1:
+     case BUILT_IN_ATOMIC_COMPARE_EXCHANGE_2:
+     case BUILT_IN_ATOMIC_COMPARE_EXCHANGE_4:
+     case BUILT_IN_ATOMIC_COMPARE_EXCHANGE_8:
+     case BUILT_IN_ATOMIC_COMPARE_EXCHANGE_16:
+       {
+         tree tmp1, tmp2, tmp1_type, tmp2_type, deref, deref_tmp;
+         tree expected, fail, weak;
+ 	bool is_weak = false;
+ 
+         gcc_assert (gimple_call_num_args (stmt) == 6);
+ 	target = gimple_call_arg (stmt, 0);
+ 	expected = gimple_call_arg (stmt, 1);
+ 	expr = gimple_call_arg (stmt, 2);
+ 	weak = gimple_call_arg (stmt, 3);
+ 	if (host_integerp (weak, 0) && tree_low_cst (weak, 0) != 0)
+ 	    is_weak = true;
+ 	order = gimple_call_arg (stmt, 4);
+ 	fail = gimple_call_arg (stmt, 5);
+ 
+ 	/* TODO : Translate the original
+ 	   bool = cmp_xch (t,expect,...)
+ 	      into
+ 	   tmp1 = expect;
+ 	   bool, tmp2 = cmp_xch (t,*tmp1,e)
+ 	   *tmp1 = tmp2;  */
+ 	/* tmp1 = expect */
+ 	tmp1_type = TREE_TYPE (expected);
+ 	/* TODO: Handle other tree codes if this assert ever fires.  */
+ 	gcc_assert (TREE_CODE (expected) == ADDR_EXPR);
+ 	tmp2_type = TREE_TYPE (TREE_OPERAND (expected, 0));
+ 
+ 	tmp1 = create_tmp_var (tmp1_type, "cmpxchg_p");
+ 	s = gimple_build_assign (tmp1, expected);
+ 	gimple_set_location (s, gimple_location (stmt));
+ 	gsi_insert_before (gsi_p, s, GSI_SAME_STMT);
+ 
+ 	/* deref_tmp = *tmp1 */
+ 	deref = build2 (MEM_REF, tmp2_type, tmp1, 
+ 			build_int_cst_wide (tmp1_type, 0, 0));
+ 	deref_tmp = create_tmp_var (tmp2_type, "cmpxchg_d");
+ 	s = gimple_build_assign (deref_tmp, deref);
+ 	gimple_set_location (s, gimple_location (stmt));
+ 	gsi_insert_before (gsi_p, s, GSI_SAME_STMT);
+ 
+         /* bool, tmp2 = cmp_exchange (t, deref_tmp, ...) */
+ 	type = atomic_func_type (fcode - BUILT_IN_ATOMIC_COMPARE_EXCHANGE_N);
+ 	s = gimple_build_atomic_compare_exchange (type, target, deref_tmp, expr,
+ 						  order, fail, is_weak);
+ 	gimple_atomic_set_lhs (s, 0, gimple_call_lhs (stmt));
+ 
+ 	tmp2 = create_tmp_var (tmp2_type, "cmpxchg");
+ 	gimple_atomic_set_lhs (s, 1, tmp2);
+ 	gimple_set_location (s, gimple_location (stmt));
+ 
+ 	gsi_insert_before (gsi_p, s, GSI_SAME_STMT);
+ 
+ 	/* *tmp1 = tmp2  */
+ 	deref = build2 (MEM_REF, tmp2_type, tmp1, 
+ 			build_int_cst_wide (tmp1_type, 0, 0));
+ 	s = gimple_build_assign (deref, tmp2);
+ 	break;
+       }
+ 
+     case BUILT_IN_ATOMIC_STORE_N:
+     case BUILT_IN_ATOMIC_STORE_1:
+     case BUILT_IN_ATOMIC_STORE_2:
+     case BUILT_IN_ATOMIC_STORE_4:
+     case BUILT_IN_ATOMIC_STORE_8:
+     case BUILT_IN_ATOMIC_STORE_16:
+       gcc_assert (gimple_call_num_args (stmt) == 3);
+       target = gimple_call_arg (stmt, 0);
+       expr = gimple_call_arg (stmt, 1);
+       order = gimple_call_arg (stmt, 2);
+       type = atomic_func_type (fcode - BUILT_IN_ATOMIC_STORE_N);
+       s = gimple_build_atomic_store (type, target, expr, order);
+       break;
+ 
+     case BUILT_IN_ATOMIC_ADD_FETCH_N:
+     case BUILT_IN_ATOMIC_ADD_FETCH_1:
+     case BUILT_IN_ATOMIC_ADD_FETCH_2:
+     case BUILT_IN_ATOMIC_ADD_FETCH_4:
+     case BUILT_IN_ATOMIC_ADD_FETCH_8:
+     case BUILT_IN_ATOMIC_ADD_FETCH_16:
+       type = atomic_func_type (fcode - BUILT_IN_ATOMIC_ADD_FETCH_N);
+       op = PLUS_EXPR;
+       fetch_op = false;
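+       /* All other fetch_op/op_fetch builtins jump to this shared body.  */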
+ fetch_body:
+       gcc_assert (gimple_call_num_args (stmt) == 3);
+       target = gimple_call_arg (stmt, 0);
+       expr = gimple_call_arg (stmt, 1);
+       order = gimple_call_arg (stmt, 2);
+       if (fetch_op)
+ 	s = gimple_build_atomic_fetch_op (type, target, expr, op, order);
+       else
+ 	s = gimple_build_atomic_op_fetch (type, target, expr, op, order);
+       if (gimple_call_lhs (stmt))
+         gimple_atomic_set_lhs (s, 0, gimple_call_lhs (stmt));
+       break;
+ 
+     case BUILT_IN_ATOMIC_FETCH_ADD_N:
+     case BUILT_IN_ATOMIC_FETCH_ADD_1:
+     case BUILT_IN_ATOMIC_FETCH_ADD_2:
+     case BUILT_IN_ATOMIC_FETCH_ADD_4:
+     case BUILT_IN_ATOMIC_FETCH_ADD_8:
+     case BUILT_IN_ATOMIC_FETCH_ADD_16:
+       type = atomic_func_type (fcode - BUILT_IN_ATOMIC_FETCH_ADD_N);
+       op = PLUS_EXPR;
+       fetch_op = true;
+       goto fetch_body;
+ 
+     case BUILT_IN_ATOMIC_SUB_FETCH_N:
+     case BUILT_IN_ATOMIC_SUB_FETCH_1:
+     case BUILT_IN_ATOMIC_SUB_FETCH_2:
+     case BUILT_IN_ATOMIC_SUB_FETCH_4:
+     case BUILT_IN_ATOMIC_SUB_FETCH_8:
+     case BUILT_IN_ATOMIC_SUB_FETCH_16:
+       type = atomic_func_type (fcode - BUILT_IN_ATOMIC_SUB_FETCH_N);
+       op = MINUS_EXPR;
+       fetch_op = false;
+       goto fetch_body;
+ 
+     case BUILT_IN_ATOMIC_FETCH_SUB_N:
+     case BUILT_IN_ATOMIC_FETCH_SUB_1:
+     case BUILT_IN_ATOMIC_FETCH_SUB_2:
+     case BUILT_IN_ATOMIC_FETCH_SUB_4:
+     case BUILT_IN_ATOMIC_FETCH_SUB_8:
+     case BUILT_IN_ATOMIC_FETCH_SUB_16:
+       type = atomic_func_type (fcode - BUILT_IN_ATOMIC_FETCH_SUB_N);
+       op = MINUS_EXPR;
+       fetch_op = true;
+       goto fetch_body;
+ 
+     case BUILT_IN_ATOMIC_AND_FETCH_N:
+     case BUILT_IN_ATOMIC_AND_FETCH_1:
+     case BUILT_IN_ATOMIC_AND_FETCH_2:
+     case BUILT_IN_ATOMIC_AND_FETCH_4:
+     case BUILT_IN_ATOMIC_AND_FETCH_8:
+     case BUILT_IN_ATOMIC_AND_FETCH_16:
+       type = atomic_func_type (fcode - BUILT_IN_ATOMIC_AND_FETCH_N);
+       op = BIT_AND_EXPR;
+       fetch_op = false;
+       goto fetch_body;
+ 
+     case BUILT_IN_ATOMIC_FETCH_AND_N:
+     case BUILT_IN_ATOMIC_FETCH_AND_1:
+     case BUILT_IN_ATOMIC_FETCH_AND_2:
+     case BUILT_IN_ATOMIC_FETCH_AND_4:
+     case BUILT_IN_ATOMIC_FETCH_AND_8:
+     case BUILT_IN_ATOMIC_FETCH_AND_16:
+       type = atomic_func_type (fcode - BUILT_IN_ATOMIC_FETCH_AND_N);
+       op = BIT_AND_EXPR;
+       fetch_op = true;
+       goto fetch_body;
+ 
+     case BUILT_IN_ATOMIC_XOR_FETCH_N:
+     case BUILT_IN_ATOMIC_XOR_FETCH_1:
+     case BUILT_IN_ATOMIC_XOR_FETCH_2:
+     case BUILT_IN_ATOMIC_XOR_FETCH_4:
+     case BUILT_IN_ATOMIC_XOR_FETCH_8:
+     case BUILT_IN_ATOMIC_XOR_FETCH_16:
+       type = atomic_func_type (fcode - BUILT_IN_ATOMIC_XOR_FETCH_N);
+       op = BIT_XOR_EXPR;
+       fetch_op = false;
+       goto fetch_body;
+ 
+     case BUILT_IN_ATOMIC_FETCH_XOR_N:
+     case BUILT_IN_ATOMIC_FETCH_XOR_1:
+     case BUILT_IN_ATOMIC_FETCH_XOR_2:
+     case BUILT_IN_ATOMIC_FETCH_XOR_4:
+     case BUILT_IN_ATOMIC_FETCH_XOR_8:
+     case BUILT_IN_ATOMIC_FETCH_XOR_16:
+       type = atomic_func_type (fcode - BUILT_IN_ATOMIC_FETCH_XOR_N);
+       op = BIT_XOR_EXPR;
+       fetch_op = true;
+       goto fetch_body;
+ 
+     case BUILT_IN_ATOMIC_OR_FETCH_N:
+     case BUILT_IN_ATOMIC_OR_FETCH_1:
+     case BUILT_IN_ATOMIC_OR_FETCH_2:
+     case BUILT_IN_ATOMIC_OR_FETCH_4:
+     case BUILT_IN_ATOMIC_OR_FETCH_8:
+     case BUILT_IN_ATOMIC_OR_FETCH_16:
+       type = atomic_func_type (fcode - BUILT_IN_ATOMIC_OR_FETCH_N);
+       op = BIT_IOR_EXPR;
+       fetch_op = false;
+       goto fetch_body;
+ 
+     case BUILT_IN_ATOMIC_FETCH_OR_N:
+     case BUILT_IN_ATOMIC_FETCH_OR_1:
+     case BUILT_IN_ATOMIC_FETCH_OR_2:
+     case BUILT_IN_ATOMIC_FETCH_OR_4:
+     case BUILT_IN_ATOMIC_FETCH_OR_8:
+     case BUILT_IN_ATOMIC_FETCH_OR_16:
+       type = atomic_func_type (fcode - BUILT_IN_ATOMIC_FETCH_OR_N);
+       op = BIT_IOR_EXPR;
+       fetch_op = true;
+       goto fetch_body;
+ 
+     case BUILT_IN_ATOMIC_NAND_FETCH_N:
+     case BUILT_IN_ATOMIC_NAND_FETCH_1:
+     case BUILT_IN_ATOMIC_NAND_FETCH_2:
+     case BUILT_IN_ATOMIC_NAND_FETCH_4:
+     case BUILT_IN_ATOMIC_NAND_FETCH_8:
+     case BUILT_IN_ATOMIC_NAND_FETCH_16:
+       type = atomic_func_type (fcode - BUILT_IN_ATOMIC_NAND_FETCH_N);
+       op = BIT_NOT_EXPR;
+       fetch_op = false;
+       goto fetch_body;
+ 
+     case BUILT_IN_ATOMIC_FETCH_NAND_N:
+     case BUILT_IN_ATOMIC_FETCH_NAND_1:
+     case BUILT_IN_ATOMIC_FETCH_NAND_2:
+     case BUILT_IN_ATOMIC_FETCH_NAND_4:
+     case BUILT_IN_ATOMIC_FETCH_NAND_8:
+     case BUILT_IN_ATOMIC_FETCH_NAND_16:
+       type = atomic_func_type (fcode - BUILT_IN_ATOMIC_FETCH_NAND_N);
+       op = BIT_NOT_EXPR;
+       fetch_op = true;
+       goto fetch_body;
+ 
+     case BUILT_IN_ATOMIC_TEST_AND_SET:
+       gcc_assert (gimple_call_num_args (stmt) == 2);
+       target = gimple_call_arg (stmt, 0);
+       order = gimple_call_arg (stmt, 1);
+       s = gimple_build_atomic_test_and_set (target, order);
+       if (gimple_call_lhs (stmt))
+         gimple_atomic_set_lhs (s, 0, gimple_call_lhs (stmt));
+       break;
+ 
+     case BUILT_IN_ATOMIC_CLEAR:
+       gcc_assert (gimple_call_num_args (stmt) == 2);
+       target = gimple_call_arg (stmt, 0);
+       order = gimple_call_arg (stmt, 1);
+       s = gimple_build_atomic_clear (target, order);
+       break;
+ 
+     case BUILT_IN_ATOMIC_THREAD_FENCE:
+       gcc_assert (gimple_call_num_args (stmt) == 1);
+       order = gimple_call_arg (stmt, 0);
+       s = gimple_build_atomic_fence (order, true);
+       break;
+ 
+     case BUILT_IN_ATOMIC_SIGNAL_FENCE:
+       gcc_assert (gimple_call_num_args (stmt) == 1);
+       order = gimple_call_arg (stmt, 0);
+       s = gimple_build_atomic_fence (order, false);
+       break;
+ 
+     default:
+       gcc_unreachable ();
+     }
+  
+   gcc_assert (s != NULL);
+ 
+   gimple_set_location (s, gimple_location (stmt));
+   gsi_insert_after (gsi_p, s, GSI_SAME_STMT);
+   gsi_remove (gsi_p, true);
+ 
+ }
+ 
+ /* Convert atomic builtin function calls into GIMPLE_ATOMIC statements.  Scan
+    the function looking for BUILT_IN_ATOMIC_* calls and replace them with the
+    equivalent GIMPLE_ATOMIC statements.  */
+ 
+ static unsigned int
+ lower_atomics (void)
+ {
+   basic_block bb;
+   gimple_stmt_iterator gsi;
+ 
+   FOR_EACH_BB (bb)
+     {
+       for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
+       	{
+ 	  if (gimple_code (gsi_stmt (gsi)) == GIMPLE_CALL)
+ 	    {
+ 	      if (is_built_in_atomic (gimple_call_fndecl (gsi_stmt (gsi))))
+ 		lower_atomic_call (&gsi);
+ 	    }
+ 	}
+     }
+   return 0;
+ }
+ 
+ 
+ /* Gate to enable lowering of atomic operations.  As this will replace the 
+    built-in support, always do it.  */
+ 
+ static bool
+ gate_lower_atomics (void)
+ {
+   return 1;
+ }
+ 
+ struct gimple_opt_pass pass_lower_atomics =
+ {
+   {
+     GIMPLE_PASS,
+     "lower_atomics",			/* name */
+     gate_lower_atomics,			/* gate */
+     lower_atomics,			/* execute */
+     NULL,				/* sub */
+     NULL,				/* next */
+     0,					/* static_pass_number */
+     TV_NONE,				/* tv_id */
+     PROP_cfg,				/* properties_required */
+     0,					/* properties_provided */
+     0,					/* properties_destroyed */
+     0,					/* todo_flags_start */
+     0,					/* todo_flags_finish */
+   }
+ };
+ 
Index: tree-ssa-operands.c
===================================================================
*** tree-ssa-operands.c	(revision 186098)
--- tree-ssa-operands.c	(working copy)
*************** parse_ssa_operands (gimple stmt)
*** 1063,1068 ****
--- 1063,1078 ----
  			   opf_use | opf_no_vops);
        break;
  
+     case GIMPLE_ATOMIC:
+       /* Atomic operations are memory barriers in both directions for now.  */
+       add_virtual_operand (stmt, opf_def | opf_use);
+       
+       for (n = 0; n < gimple_atomic_num_lhs (stmt); n++)
+ 	get_expr_operands (stmt, gimple_atomic_lhs_ptr (stmt, n), opf_def);
+       for (n = 0; n < gimple_atomic_num_rhs (stmt); n++)
+ 	get_expr_operands (stmt, gimple_op_ptr (stmt, n), opf_use);
+       break;
+       
      case GIMPLE_RETURN:
        append_vuse (gimple_vop (cfun));
        goto do_default;
Index: gimple-pretty-print.c
===================================================================
*** gimple-pretty-print.c	(revision 186098)
--- gimple-pretty-print.c	(working copy)
*************** dump_gimple_call (pretty_printer *buffer
*** 749,754 ****
--- 749,1016 ----
      }
  }
  
+ /* Dump the tree opcode for an atomic_fetch stmt GS into BUFFER.  */
+ 
+ static void
+ dump_gimple_atomic_kind_op (pretty_printer *buffer, const_gimple gs)
+ {
+   switch (gimple_atomic_op_code (gs))
+     {
+     case PLUS_EXPR:
+       pp_string (buffer, "ADD");
+       break;
+ 
+     case MINUS_EXPR:
+       pp_string (buffer, "SUB");
+       break;
+ 
+     case BIT_AND_EXPR:
+       pp_string (buffer, "AND");
+       break;
+ 
+     case BIT_IOR_EXPR:
+       pp_string (buffer, "OR");
+       break;
+ 
+     case BIT_XOR_EXPR:
+       pp_string (buffer, "XOR");
+       break;
+ 
+     case BIT_NOT_EXPR:	/* This is used for NAND in the builtins.  */
+       pp_string (buffer, "NAND");
+       break;
+ 
+     default:
+       gcc_unreachable ();
+     }
+ }
+ 
+ /* Dump the memory order node T into BUFFER.  SPC and FLAGS are as in
+    dump_generic_node.  */
+ 
+ static void
+ dump_gimple_atomic_order (pretty_printer *buffer, tree t, int spc, int flags)
+ {
+   enum memmodel order;
+ 
+   if (TREE_CODE (t) != INTEGER_CST)
+     {
+       dump_generic_node (buffer, t, spc, flags, false);
+       return;
+     }
+ 
+   order = (enum memmodel) TREE_INT_CST_LOW (t);
+   switch (order)
+     {
+     case MEMMODEL_RELAXED:
+       pp_string (buffer, "RELAXED");
+       break;
+ 
+     case MEMMODEL_CONSUME:
+       pp_string (buffer, "CONSUME");
+       break;
+ 
+     case MEMMODEL_ACQUIRE:
+       pp_string (buffer, "ACQUIRE");
+       break;
+ 
+     case MEMMODEL_RELEASE:
+       pp_string (buffer, "RELEASE");
+       break;
+ 
+     case MEMMODEL_ACQ_REL:
+       pp_string (buffer, "ACQ_REL");
+       break;
+ 
+     case MEMMODEL_SEQ_CST:
+       pp_string (buffer, "SEQ_CST");
+       break;
+ 
+     default:
+       gcc_unreachable ();
+       break;
+     }
+ }
+ 
+ /* Dump the appropriate suffix size for an atomic statement GS into BUFFER.  */
+ 
+ static void
+ dump_gimple_atomic_type_size (pretty_printer *buffer, const_gimple gs)
+ {
+   tree t = gimple_atomic_type (gs);
+   unsigned n = TREE_INT_CST_LOW (TYPE_SIZE (t));
+   switch (n)
+     {
+     case 8:
+       pp_string (buffer, "_1 <");
+       break;
+ 
+     case 16:
+       pp_string (buffer, "_2 <");
+       break;
+ 
+     case 32:
+       pp_string (buffer, "_4 <");
+       break;
+ 
+     case 64:
+       pp_string (buffer, "_8 <");
+       break;
+ 
+     case 128:
+       pp_string (buffer, "_16 <");
+       break;
+ 
+     default:
+       pp_string (buffer, " <");
+       break;
+     }
+ }
+ 
+ /* Dump the atomic statement GS.  BUFFER, SPC and FLAGS are as in
+    dump_gimple_stmt.  */
+ 
+ static void
+ dump_gimple_atomic (pretty_printer *buffer, gimple gs, int spc, int flags)
+ {
+   if (gimple_atomic_num_lhs (gs) == 1)
+     {
+       dump_generic_node (buffer, gimple_atomic_lhs (gs, 0), spc, flags, false);
+       pp_string (buffer, " = ");
+     }
+   else if (gimple_atomic_num_lhs (gs) > 1)
+     {
+       /* The first LHS is still optional, so print both results only if the
+          first one is present.  */
+       if (gimple_atomic_lhs (gs, 0))
+         {
+ 	  pp_string (buffer, "(");
+ 
+ 	  dump_generic_node (buffer, gimple_atomic_lhs (gs, 0), spc, flags,
+ 			     false);
+ 	  pp_string (buffer, ", ");
+ 	  dump_generic_node (buffer, gimple_atomic_lhs (gs, 1), spc, flags,
+ 			     false);
+ 	  pp_string (buffer, ") = ");
+ 	}
+       else
+        {
+ 	  /* Otherwise just print the result that has to be there.  */
+ 	  dump_generic_node (buffer, gimple_atomic_lhs (gs, 1), spc, flags,
+ 			     false);
+ 	  pp_string (buffer, " = ");
+        }
+     }
+    
+   switch (gimple_atomic_kind (gs))
+     {
+     case GIMPLE_ATOMIC_LOAD:
+       pp_string (buffer, "ATOMIC_LOAD");
+       dump_gimple_atomic_type_size (buffer, gs);
+       dump_generic_node (buffer, gimple_atomic_target (gs), spc, flags, false);
+       pp_string (buffer, ", ");
+       dump_gimple_atomic_order (buffer, gimple_atomic_order (gs), spc, flags);
+       pp_string (buffer, "> ");
+       break;
+ 
+     case GIMPLE_ATOMIC_STORE:
+       pp_string (buffer, "ATOMIC_STORE");
+       dump_gimple_atomic_type_size (buffer, gs);
+       dump_generic_node (buffer, gimple_atomic_target (gs), spc, flags, false);
+       pp_string (buffer, ", ");
+       dump_generic_node (buffer, gimple_atomic_expr (gs), spc, flags, false);
+       pp_string (buffer, ", ");
+       dump_gimple_atomic_order (buffer, gimple_atomic_order (gs), spc, flags);
+       pp_string (buffer, "> ");
+       break;
+ 
+     case GIMPLE_ATOMIC_EXCHANGE:
+       pp_string (buffer, "ATOMIC_EXCHANGE");
+       dump_gimple_atomic_type_size (buffer, gs);
+       dump_generic_node (buffer, gimple_atomic_target (gs), spc, flags, false);
+       pp_string (buffer, ", ");
+       dump_generic_node (buffer, gimple_atomic_expr (gs), spc, flags, false);
+       pp_string (buffer, ", ");
+       dump_gimple_atomic_order (buffer, gimple_atomic_order (gs), spc, flags);
+       pp_string (buffer, "> ");
+       break;
+ 
+     case GIMPLE_ATOMIC_COMPARE_EXCHANGE:
+       pp_string (buffer, "ATOMIC_COMPARE_EXCHANGE_");
+       if (gimple_atomic_weak (gs))
+ 	pp_string (buffer, "WEAK");
+       else
+ 	pp_string (buffer, "STRONG");
+       dump_gimple_atomic_type_size (buffer, gs);
+       dump_generic_node (buffer, gimple_atomic_target (gs), spc, flags, false);
+       pp_string (buffer, ", ");
+       dump_generic_node (buffer, gimple_atomic_expected (gs), spc, flags,
+ 			false);
+       pp_string (buffer, ", ");
+       dump_generic_node (buffer, gimple_atomic_expr (gs), spc, flags, false);
+       pp_string (buffer, ", ");
+       dump_gimple_atomic_order (buffer, gimple_atomic_order (gs), spc, flags);
+       pp_string (buffer, ", ");
+       dump_gimple_atomic_order (buffer, gimple_atomic_fail_order (gs), spc,
+ 				flags);
+       pp_string (buffer, "> ");
+       break;
+ 
+     case GIMPLE_ATOMIC_FETCH_OP:
+       pp_string (buffer, "ATOMIC_FETCH_");
+       dump_gimple_atomic_kind_op (buffer, gs);
+       dump_gimple_atomic_type_size (buffer, gs);
+       dump_generic_node (buffer, gimple_atomic_target (gs), spc, flags, false);
+       pp_string (buffer, ", ");
+       dump_generic_node (buffer, gimple_atomic_expr (gs), spc, flags, false);
+       pp_string (buffer, ", ");
+       dump_gimple_atomic_order (buffer, gimple_atomic_order (gs), spc, flags);
+       pp_string (buffer, "> ");
+       break;
+ 
+     case GIMPLE_ATOMIC_OP_FETCH:
+       pp_string (buffer, "ATOMIC_");
+       dump_gimple_atomic_kind_op (buffer, gs);
+       pp_string (buffer, "_FETCH");
+       dump_gimple_atomic_type_size (buffer, gs);
+       dump_generic_node (buffer, gimple_atomic_target (gs), spc, flags, false);
+       pp_string (buffer, ", ");
+       dump_generic_node (buffer, gimple_atomic_expr (gs), spc, flags, false);
+       pp_string (buffer, ", ");
+       dump_gimple_atomic_order (buffer, gimple_atomic_order (gs), spc, flags);
+       pp_string (buffer, "> ");
+       break;
+ 
+     case GIMPLE_ATOMIC_TEST_AND_SET:
+       pp_string (buffer, "ATOMIC_TEST_AND_SET <");
+       dump_generic_node (buffer, gimple_atomic_target (gs), spc, flags, false);
+       pp_string (buffer, ", ");
+       dump_gimple_atomic_order (buffer, gimple_atomic_order (gs), spc, flags);
+       pp_string (buffer, "> ");
+       break;
+ 
+     case GIMPLE_ATOMIC_CLEAR:
+       pp_string (buffer, "ATOMIC_CLEAR <");
+       dump_generic_node (buffer, gimple_atomic_target (gs), spc, flags, false);
+       pp_string (buffer, ", ");
+       dump_gimple_atomic_order (buffer, gimple_atomic_order (gs), spc, flags);
+       pp_string (buffer, "> ");
+       break;
+ 
+     case GIMPLE_ATOMIC_FENCE:
+       if (gimple_atomic_thread_fence (gs))
+ 	pp_string (buffer, "ATOMIC_THREAD_FENCE <");
+       else
+ 	pp_string (buffer, "ATOMIC_SIGNAL_FENCE <");
+       dump_gimple_atomic_order (buffer, gimple_atomic_order (gs), spc, flags);
+       pp_string (buffer, "> ");
+       break;
+ 
+     default:
+       gcc_unreachable ();
+     }
+ }
+ 
  
  /* Dump the switch statement GS.  BUFFER, SPC and FLAGS are as in
     dump_gimple_stmt.  */
*************** dump_gimple_stmt (pretty_printer *buffer
*** 1920,1925 ****
--- 2182,2191 ----
        dump_gimple_call (buffer, gs, spc, flags);
        break;
  
+     case GIMPLE_ATOMIC:
+       dump_gimple_atomic (buffer, gs, spc, flags);
+       break;
+ 
      case GIMPLE_COND:
        dump_gimple_cond (buffer, gs, spc, flags);
        break;
Index: tree-cfg.c
===================================================================
*** tree-cfg.c	(revision 186098)
--- tree-cfg.c	(working copy)
*************** verify_gimple_return (gimple stmt)
*** 4073,4078 ****
--- 4073,4113 ----
    return false;
  }
  
+ /* Verify GIMPLE_ATOMIC STMT.  Return true if there is a problem.  */
+ 
+ static bool
+ verify_gimple_atomic (gimple stmt)
+ {
+   enum gimple_atomic_kind kind = gimple_atomic_kind (stmt);
+ 
+   switch (kind)
+     {
+     case GIMPLE_ATOMIC_LOAD:
+       break;
+ 
+     case GIMPLE_ATOMIC_STORE:
+     case GIMPLE_ATOMIC_EXCHANGE:
+       break;
+ 
+     case GIMPLE_ATOMIC_COMPARE_EXCHANGE:
+       break;
+ 
+     case GIMPLE_ATOMIC_FETCH_OP:
+     case GIMPLE_ATOMIC_OP_FETCH:
+       break;
+ 
+     case GIMPLE_ATOMIC_TEST_AND_SET:
+     case GIMPLE_ATOMIC_CLEAR:
+       break;
+ 
+     case GIMPLE_ATOMIC_FENCE:
+       break;
+ 
+     default:
+       gcc_unreachable ();
+     }
+   return false;
+ }
  
  /* Verify the contents of a GIMPLE_GOTO STMT.  Returns true when there
     is a problem, otherwise false.  */
*************** verify_gimple_stmt (gimple stmt)
*** 4174,4179 ****
--- 4209,4217 ----
      case GIMPLE_ASSIGN:
        return verify_gimple_assign (stmt);
  
+     case GIMPLE_ATOMIC:
+       return verify_gimple_atomic (stmt);
+ 
      case GIMPLE_LABEL:
        return verify_gimple_label (stmt);
  
Index: tree-pass.h
===================================================================
*** tree-pass.h	(revision 186098)
--- tree-pass.h	(working copy)
*************** extern struct gimple_opt_pass pass_tm_me
*** 455,460 ****
--- 455,461 ----
  extern struct gimple_opt_pass pass_tm_edges;
  extern struct gimple_opt_pass pass_split_functions;
  extern struct gimple_opt_pass pass_feedback_split_functions;
+ extern struct gimple_opt_pass pass_lower_atomics;
  
  /* IPA Passes */
  extern struct simple_ipa_opt_pass pass_ipa_lower_emutls;
Index: passes.c
===================================================================
*** passes.c	(revision 186098)
--- passes.c	(working copy)
*************** init_optimization_passes (void)
*** 1188,1193 ****
--- 1188,1194 ----
    NEXT_PASS (pass_refactor_eh);
    NEXT_PASS (pass_lower_eh);
    NEXT_PASS (pass_build_cfg);
+   NEXT_PASS (pass_lower_atomics);
    NEXT_PASS (pass_warn_function_return);
    NEXT_PASS (pass_build_cgraph_edges);
    *p = NULL;
Index: gimple-low.c
===================================================================
*** gimple-low.c	(revision 186098)
--- gimple-low.c	(working copy)
*************** lower_stmt (gimple_stmt_iterator *gsi, s
*** 404,409 ****
--- 404,410 ----
      case GIMPLE_NOP:
      case GIMPLE_ASM:
      case GIMPLE_ASSIGN:
+     case GIMPLE_ATOMIC:
      case GIMPLE_PREDICT:
      case GIMPLE_LABEL:
      case GIMPLE_EH_MUST_NOT_THROW:
Index: tree-ssa-alias.c
===================================================================
*** tree-ssa-alias.c	(revision 186098)
--- tree-ssa-alias.c	(working copy)
*************** ref_maybe_used_by_stmt_p (gimple stmt, t
*** 1440,1445 ****
--- 1440,1447 ----
      }
    else if (is_gimple_call (stmt))
      return ref_maybe_used_by_call_p (stmt, ref);
+   else if (is_gimple_atomic (stmt))
+     return true;
    else if (gimple_code (stmt) == GIMPLE_RETURN)
      {
        tree retval = gimple_return_retval (stmt);
*************** stmt_may_clobber_ref_p_1 (gimple stmt, a
*** 1762,1767 ****
--- 1764,1771 ----
      }
    else if (gimple_code (stmt) == GIMPLE_ASM)
      return true;
+   else if (is_gimple_atomic (stmt))
+     return true;
  
    return false;
  }
*************** stmt_kills_ref_p_1 (gimple stmt, ao_ref 
*** 1814,1819 ****
--- 1818,1825 ----
  	}
      }
  
+   if (is_gimple_atomic (stmt))
+     return true;
    if (is_gimple_call (stmt))
      {
        tree callee = gimple_call_fndecl (stmt);
Index: tree-ssa-sink.c
===================================================================
*** tree-ssa-sink.c	(revision 186098)
--- tree-ssa-sink.c	(working copy)
*************** is_hidden_global_store (gimple stmt)
*** 145,150 ****
--- 145,154 ----
      {
        tree lhs;
  
+       /* Don't optimize across an atomic operation.  */
+       if (is_gimple_atomic (stmt))
+         return true;
+ 
        gcc_assert (is_gimple_assign (stmt) || is_gimple_call (stmt));
  
        /* Note that we must not check the individual virtual operands
Index: tree-ssa-dce.c
===================================================================
*** tree-ssa-dce.c	(revision 186098)
--- tree-ssa-dce.c	(working copy)
*************** propagate_necessity (struct edge_list *e
*** 920,925 ****
--- 920,943 ----
  		    mark_aliased_reaching_defs_necessary (stmt, arg);
  		}
  	    }
+ 	  else if (is_gimple_atomic (stmt))
+ 	    {
+ 	      unsigned n;
+ 
+ 	      /* We may be able to lessen this with more relaxed memory
+ 	         models, but for now, it's a full barrier.  */
+ 	      mark_all_reaching_defs_necessary (stmt);
+ 
+ 	      for (n = 0; n < gimple_atomic_num_rhs (stmt); n++)
+ 	        {
+ 		  tree t = gimple_op (stmt, n);
+ 		  if (TREE_CODE (t) != SSA_NAME
+ 		      && TREE_CODE (t) != INTEGER_CST
+ 		      && !is_gimple_min_invariant (t)
+ 		      && !ref_may_be_aliased (t))
+ 		    mark_aliased_reaching_defs_necessary (stmt, t);
+ 		}
+ 	    }
  	  else if (gimple_assign_single_p (stmt))
  	    {
  	      tree rhs;
Index: tree-inline.c
===================================================================
*** tree-inline.c	(revision 186098)
--- tree-inline.c	(working copy)
*************** estimate_num_insns (gimple stmt, eni_wei
*** 3565,3570 ****
--- 3565,3579 ----
  	break;
        }
  
+     case GIMPLE_ATOMIC:
+       /* Treat this like a call for now; it may expand into a call.  */
+       if (gimple_atomic_kind (stmt) != GIMPLE_ATOMIC_FENCE)
+ 	cost = gimple_num_ops (stmt) *
+ 	       estimate_move_cost (TREE_TYPE (gimple_atomic_target (stmt)));
+       else
+         cost = 1;
+       break;
+ 
      case GIMPLE_RETURN:
        return weights->return_cost;
  
Index: ipa-pure-const.c
===================================================================
*** ipa-pure-const.c	(revision 186098)
--- ipa-pure-const.c	(working copy)
*************** check_stmt (gimple_stmt_iterator *gsip, 
*** 712,717 ****
--- 712,722 ----
            local->looping = true;
  	}
        return;
+     case GIMPLE_ATOMIC:
+       if (dump_file)
+ 	fprintf (dump_file, "    atomic is not const/pure\n");
+       local->pure_const_state = IPA_NEITHER;
+       return;
      default:
        break;
      }
