[PATCH 6/19][GCC-8] aarch64: Remove early clobber from ATOMIC_LDOP scratch

Andre Vieira (lists) <andre.simoesdiasvieira@arm.com>
Thu Apr 16 12:25:52 GMT 2020


2020-04-16  Andre Vieira <andre.simoesdiasvieira@arm.com>

     Backport from mainline.
     2018-10-31  Richard Henderson <richard.henderson@linaro.org>

     * config/aarch64/atomics.md (aarch64_atomic_<ATOMIC_LDOP><ALLI>_lse):
     The scratch register need not be early-clobber.  Document the reason
     why we cannot use ST<OP>.

diff --git a/gcc/config/aarch64/atomics.md b/gcc/config/aarch64/atomics.md
index 47a8a40c5b82e349b2caf4e48f9f81577f4c3ed3..d740f4a100b1b624eafdb279f38ac1ce9db587dd 100644
--- a/gcc/config/aarch64/atomics.md
+++ b/gcc/config/aarch64/atomics.md
@@ -263,6 +263,18 @@
   }
 )
 
+;; It is tempting to want to use ST<OP> for relaxed and release
+;; memory models here.  However, that is incompatible with the
+;; C++ memory model for the following case:
+;;
+;;	atomic_fetch_add(ptr, 1, memory_order_relaxed);
+;;	atomic_thread_fence(memory_order_acquire);
+;;
+;; The problem is that the architecture says that ST<OP> (and LD<OP>
+;; insns where the destination is XZR) are not regarded as a read.
+;; However we also implement the acquire memory barrier with DMB LD,
+;; and so the ST<OP> is not blocked by the barrier.
+
 (define_insn "aarch64_atomic_<atomic_ldoptab><mode>_lse"
   [(set (match_operand:ALLI 0 "aarch64_sync_memory_operand" "+Q")
 	(unspec_volatile:ALLI
@@ -270,7 +282,7 @@
 	   (match_operand:ALLI 1 "register_operand" "r")
 	   (match_operand:SI 2 "const_int_operand")]
       ATOMIC_LDOP))
-   (clobber (match_scratch:ALLI 3 "=&r"))]
+   (clobber (match_scratch:ALLI 3 "=r"))]
   "TARGET_LSE"
   {
    enum memmodel model = memmodel_from_int (INTVAL (operands[2]));
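
For the record, the problem case in the new comment can be spelled out as the
following two-thread C++ sketch (a minimal illustration; the names x, y,
writer and reader are mine, not from the patch):

#include <atomic>

std::atomic<int> x {0}, y {0};

void
writer ()	/* Thread 1.  */
{
  x.store (1, std::memory_order_relaxed);
  std::atomic_thread_fence (std::memory_order_release);
  y.store (1, std::memory_order_relaxed);
}

int
reader ()	/* Thread 2.  */
{
  /* With LSE enabled (e.g. -march=armv8.1-a) this is expected to
     emit an LD<OP> instruction such as LDADD.  */
  y.fetch_add (1, std::memory_order_relaxed);
  std::atomic_thread_fence (std::memory_order_acquire);
  return x.load (std::memory_order_relaxed);
}

If reader's fetch_add reads the value stored by writer, the release fence
before that store and the acquire fence after the read-modify-write
synchronize, so the final load of x must return 1.  The acquire fence is
implemented as DMB LD, which orders earlier reads against later accesses;
since the architecture does not regard ST<OP> (or an LD<OP> writing XZR) as
a read, an STADD here would not be ordered by the barrier and reader could
return 0, breaking the guarantee.  LD<OP> with a real destination register
does count as a read, which is why the pattern keeps the scratch operand.

The constraint change follows from the shape of the pattern: it always emits
a single LD<OP> instruction, and a single instruction reads all of its inputs
before writing its outputs, so the scratch cannot interfere with operand 1
and plain "=r" is sufficient.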

