This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.



Re: [PATCH 1/2] x86,s390: add compiler memory barriers when expanding atomic_thread_fence (PR 80640)


While fixing the fences issue of PR80640, I noticed that a similar oversight
is present in the expansion of atomic loads on x86: they become volatile loads,
but that is insufficient; a compiler memory barrier is still needed.  Volatility
prevents tearing the load (it preserves the indivisibility of the atomic
access), but it does not prevent propagation and code motion of non-volatile
accesses across the load.  The testcase below demonstrates that even SEQ_CST
loads are mishandled.

It's less clear how to fix this one.  AFAICS there are two possible approaches:
either emit explicit compiler barriers on either side of the load (a barrier
before the load is needed only for SEQ_CST loads), or change how atomic loads
are expressed in RTL; atomic stores already use ATOMIC_STA UNSPECs, so they
don't suffer from this issue.  The asymmetry seems odd to me.

The former approach is easier and exposes non-seq-cst loads upwards for
optimization, so that's what the proposed patch implements.

The s390 backend suffers from the same issue, except there the atomic stores are
affected too.

Alexander

	* config/i386/sync.md (atomic_load<mode>): Emit a compiler barrier
	before (only for SEQ_CST order) and after the load.
testsuite/
	* gcc.target/i386/pr80640-2.c: New testcase.

---
 gcc/config/i386/sync.md                   |  7 +++++++
 gcc/testsuite/gcc.target/i386/pr80640-2.c | 11 +++++++++++
 2 files changed, 18 insertions(+)
 create mode 100644 gcc/testsuite/gcc.target/i386/pr80640-2.c

diff --git a/gcc/config/i386/sync.md b/gcc/config/i386/sync.md
index 619d53b..35a7b38 100644
--- a/gcc/config/i386/sync.md
+++ b/gcc/config/i386/sync.md
@@ -155,6 +155,11 @@ (define_expand "atomic_load<mode>"
 		       UNSPEC_LDA))]
   ""
 {
+  enum memmodel model = memmodel_from_int (INTVAL (operands[2]));
+
+  if (is_mm_seq_cst (model))
+    expand_asm_memory_barrier ();
+
   /* For DImode on 32-bit, we can use the FPU to perform the load.  */
   if (<MODE>mode == DImode && !TARGET_64BIT)
     emit_insn (gen_atomic_loaddi_fpu
@@ -173,6 +178,8 @@ (define_expand "atomic_load<mode>"
       if (dst != operands[0])
 	emit_move_insn (operands[0], dst);
     }
+  if (!is_mm_relaxed (model))
+    expand_asm_memory_barrier ();
   DONE;
 })
 
diff --git a/gcc/testsuite/gcc.target/i386/pr80640-2.c b/gcc/testsuite/gcc.target/i386/pr80640-2.c
new file mode 100644
index 0000000..f0a050c
--- /dev/null
+++ b/gcc/testsuite/gcc.target/i386/pr80640-2.c
@@ -0,0 +1,11 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -fno-tree-ter" } */
+/* { dg-final { scan-assembler-not "xorl\[\\t \]*\\\%eax,\[\\t \]*%eax" } } */
+
+int f(int *p, volatile _Bool *p1, _Atomic _Bool *p2)
+{
+  int v = *p;
+  *p1 = !v;
+  while (__atomic_load_n (p2, __ATOMIC_SEQ_CST));
+  return v - *p;
+}
-- 
1.8.3.1

