This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.



[gccgo] Initialize mutexes


This patch from Vinu Rajashekhar ensures that all mutexes in libgo are
initialized.  Committed to gccgo branch.

Ian

diff -r 960b7647b1da libgo/runtime/mfinal.c
--- a/libgo/runtime/mfinal.c	Fri Jul 02 09:53:03 2010 -0700
+++ b/libgo/runtime/mfinal.c	Fri Jul 02 12:46:46 2010 -0700
@@ -5,7 +5,7 @@
 #include "runtime.h"
 #include "malloc.h"
 
-Lock finlock;
+Lock finlock = LOCK_INITIALIZER;
 
 // Finalizer hash table.  Direct hash, linear scan, at most 3/4 full.
 // Table size is power of 3 so that hash can be key % max.
diff -r 960b7647b1da libgo/runtime/mprof.goc
--- a/libgo/runtime/mprof.goc	Fri Jul 02 09:53:03 2010 -0700
+++ b/libgo/runtime/mprof.goc	Fri Jul 02 12:46:46 2010 -0700
@@ -14,7 +14,7 @@
 typedef struct __go_open_array Slice;
 
 // NOTE(rsc): Everything here could use cas if contention became an issue.
-static Lock proflock;
+static Lock proflock = LOCK_INITIALIZER;
 
 // Per-call-stack allocation information.
 // Lookup by hashing call stack into a linked-list hash table.
diff -r 960b7647b1da libgo/runtime/runtime.h
--- a/libgo/runtime/runtime.h	Fri Jul 02 09:53:03 2010 -0700
+++ b/libgo/runtime/runtime.h	Fri Jul 02 12:46:46 2010 -0700
@@ -111,8 +111,8 @@
  * mutual exclusion locks.  in the uncontended case,
  * as fast as spin locks (just a few user-level instructions),
  * but on the contention path they sleep in the kernel.
- * a zeroed Lock is unlocked (no need to initialize each lock).
  */
+#define	LOCK_INITIALIZER	{ PTHREAD_MUTEX_INITIALIZER }
 void	initlock(Lock*);
 void	lock(Lock*);
 void	unlock(Lock*);
