[committed] Add locking semaphore to sysdep/pa/locks.h

John David Anglin dave@hiauly1.hia.nrc.ca
Wed Dec 28 19:19:00 GMT 2005


The enclosed change adds a locking semaphore around the previously
non-atomic compare_and_swap implementation.  The implementation is
derived from the one used in libstdc++.
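The idea of the patch can be sketched portably: serialize the otherwise non-atomic compare-and-swap behind a lock. This is only an illustrative model, not the libgcj code; a POSIX mutex stands in for the ldcw semaphore used in the actual patch, and the names here are made up for the sketch.

```c
/* Hedged sketch: lock-protected compare_and_swap.  A pthread mutex
   models the patch's ldcw-based semaphore; names are illustrative. */
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

typedef size_t obj_addr_t;   /* integer type big enough for an object address */

static pthread_mutex_t cas_lock = PTHREAD_MUTEX_INITIALIZER;

static bool
compare_and_swap (volatile obj_addr_t *addr,
                  obj_addr_t old_val,
                  obj_addr_t new_val)
{
  bool result;

  pthread_mutex_lock (&cas_lock);    /* acquire: models the ldcw spin loop */
  if (*addr != old_val)
    result = false;
  else
    {
      *addr = new_val;               /* the swap itself is plain stores/loads */
      result = true;
    }
  pthread_mutex_unlock (&cas_lock);  /* release: models the ordered store */

  return result;
}
```

The tradeoff is the same one the patch comment notes: the operation is now atomic with respect to other callers taking the same lock, but a thread that dies while holding the lock deadlocks everyone else.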

I've had this patch under test on and off for several months on
hppa-unknown-linux-gnu.  It fixes various process-related testsuite
failures.

There was a competing implementation using a kernel light-weight
syscall.  I finally decided to go with this implementation because

1) it is usable with HP-UX,
2) the compare_and_swap light-weight syscall isn't available in
   all Linux kernel versions, and
3) I wasn't fully convinced that the light-weight syscall was
   completely reliable under heavy contention.
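For reference, the patch below takes the lock with PA-RISC's ldcw (load-and-clear-word) instruction, which atomically reads the 16-byte-aligned lock word and stores zero: a nonzero result means the lock (initialized to 1) was free and is now held. That acquire/release protocol can be modeled with C11 atomics; this is a portable sketch of the protocol, not the PA-RISC code itself, and the function names are invented here.

```c
/* Hedged model of the ldcw semaphore protocol using C11 atomics. */
#include <stdatomic.h>

/* Lock word: 1 = free, 0 = held.  The patch initializes it to 1. */
static _Atomic int cas_lock = 1;

static void
lock_acquire (void)
{
  /* ldcw atomically returns the old word and stores 0; a nonzero
     return means we took a free lock.  On 0, spin reading until the
     word goes nonzero, then retry (the ldw/cmpib loop in the asm). */
  while (atomic_exchange_explicit (&cas_lock, 0,
                                   memory_order_acquire) == 0)
    while (atomic_load_explicit (&cas_lock, memory_order_relaxed) == 0)
      ;  /* spin without writing */
}

static void
lock_release (void)
{
  /* Models the PA 2.0 ordered store ("stw,ma") that puts the
     nonzero value back into the lock word. */
  atomic_store_explicit (&cas_lock, 1, memory_order_release);
}
```

Spinning with plain reads between exchange attempts keeps the lock cache line shared while waiting, which is the usual reason for the read-then-retry shape of the loop.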

Committed to 4.0, 4.1 and 4.2.

Dave
-- 
J. David Anglin                                  dave.anglin@nrc-cnrc.gc.ca
National Research Council of Canada              (613) 990-0752 (FAX: 952-6602)

2005-12-28  John David Anglin  <dave.anglin@nrc-cnrc.gc.ca>

	* sysdep/pa/locks.h (compare_and_swap): Add ldcw semaphore to make
	operation atomic.

Index: sysdep/pa/locks.h
===================================================================
--- sysdep/pa/locks.h	(revision 109064)
+++ sysdep/pa/locks.h	(working copy)
@@ -1,6 +1,6 @@
-// locks.h - Thread synchronization primitives. PARISC implementation.
+// locks.h - Thread synchronization primitives. PA-RISC implementation.
 
-/* Copyright (C) 2002  Free Software Foundation
+/* Copyright (C) 2002, 2005  Free Software Foundation
 
    This file is part of libgcj.
 
@@ -11,30 +11,62 @@
 #ifndef __SYSDEP_LOCKS_H__
 #define __SYSDEP_LOCKS_H__
 
-typedef size_t obj_addr_t;	/* Integer type big enough for object	*/
-				/* address.				*/
+// Integer type big enough for object address.
+typedef size_t obj_addr_t;
 
-// Atomically replace *addr by new_val if it was initially equal to old.
-// Return true if the comparison succeeded.
+template<int _Inst>
+  struct _pa_jv_cas_lock
+  {
+    static volatile int _S_pa_jv_cas_lock;
+  };
+
+template<int _Inst>
+volatile int
+_pa_jv_cas_lock<_Inst>::_S_pa_jv_cas_lock __attribute__ ((aligned (16))) = 1;
+
+// Because of the lack of weak support when using the hpux som
+// linker, we explicitly instantiate the atomicity lock.
+template volatile int _pa_jv_cas_lock<0>::_S_pa_jv_cas_lock;
+
+// Atomically replace *addr by new_val if it was initially equal to old_val.
+// Return true if the comparison is successful.
 // Assumed to have acquire semantics, i.e. later memory operations
 // cannot execute before the compare_and_swap finishes.
+// The following implementation is atomic but it can deadlock
+// (e.g., if a thread dies holding the lock).
 inline static bool
+__attribute__ ((__unused__))
 compare_and_swap(volatile obj_addr_t *addr,
-	 	 obj_addr_t old,
+	 	 obj_addr_t old_val,
 		 obj_addr_t new_val) 
 {
-  /* FIXME: not atomic */
-  obj_addr_t prev;
+  bool result;
+  int tmp;
+  volatile int& lock = _pa_jv_cas_lock<0>::_S_pa_jv_cas_lock;
+
+  __asm__ __volatile__ ("ldcw 0(%1),%0\n\t"
+			"cmpib,<>,n 0,%0,.+20\n\t"
+			"ldw 0(%1),%0\n\t"
+			"cmpib,= 0,%0,.-4\n\t"
+			"nop\n\t"
+			"b,n .-20"
+			: "=&r" (tmp)
+			: "r" (&lock)
+			: "memory");
 
-  if ((prev = *addr) == old)
-    {
-      *addr = new_val;
-      return true;
-    }
+  if (*addr != old_val)
+    result = false;
   else
     {
-      return false;
+      *addr = new_val;
+      result = true;
     }
+
+  /* Reset lock with PA 2.0 "ordered" store.  */
+  __asm__ __volatile__ ("stw,ma %1,0(%0)"
+			: : "r" (&lock), "r" (tmp) : "memory");
+
+  return result;
 }
 
 // Set *addr to new_val with release semantics, i.e. making sure


