The concurrence.h mutex lock/unlock changes of 13 Sept have caused failure of all C++ testcases using std::cout on mingw32, iff -mthreads (enabling __gthread_active_p) is added to RUNTESTFLAGS.

Originally, the failures were of the form:

terminate called after throwing an instance of 'std::runtime_error'
  what():  __mutex::lock

and resulted from the target __gthread_mutex_lock being handed an invalid mutex. It's a Good Thing the error checking had been turned on by the same patchset :) It's a Bad Thing that I haven't been testing with -mthreads turned on more frequently :(

This patch

2006-10-09  Benjamin Kosnik  <bkoz@redhat.com>

	PR libstdc++/29118
	* src/locale_init.cc (__get_locale_mutex): New.
	(locale::locale): Use it.
	(locale::global): Use it.

got rid of those errors, but they were replaced by:

terminate called after throwing an instance of 'std::runtime_error'
  what():  __recursive_mutex::lock
           ^^^^^^^^^^^^^^^^^

(mingw32 uses __GTHREAD_MUTEX_INIT_FUNCTION)

These can be fixed by reverting this part of the 13 Sept patch

	* libsupc++/guard.cc (static_mutex): Subsume into and fixup for...
	* include/ext/concurrence.h (__recursive_mutex): ...this.

like so:

Index: guard.cc
===================================================================
--- guard.cc	(revision 117613)
+++ guard.cc	(working copy)
@@ -42,8 +42,49 @@
 #ifdef __GTHREADS
 namespace
 {
-  // A single mutex controlling all static initializations.
-  __gnu_cxx::__recursive_mutex static_mutex;
+  // static_mutex is a single mutex controlling all static initializations.
+  // This is a static class--the need for a static initialization function
+  // to pass to __gthread_once precludes creating multiple instances, though
+  // I suppose you could achieve the same effect with a template.
+  class static_mutex
+  {
+    static __gthread_recursive_mutex_t mutex;
+
+#ifdef __GTHREAD_RECURSIVE_MUTEX_INIT_FUNCTION
+    static void init();
+#endif
+
+  public:
+    static void lock();
+    static void unlock();
+  };
+
+  __gthread_recursive_mutex_t static_mutex::mutex
+#ifdef __GTHREAD_RECURSIVE_MUTEX_INIT
+  = __GTHREAD_RECURSIVE_MUTEX_INIT
+#endif
+  ;
+
+#ifdef __GTHREAD_RECURSIVE_MUTEX_INIT_FUNCTION
+  void static_mutex::init()
+  {
+    __GTHREAD_RECURSIVE_MUTEX_INIT_FUNCTION (&mutex);
+  }
+#endif
+
+  void static_mutex::lock()
+  {
+#ifdef __GTHREAD_RECURSIVE_MUTEX_INIT_FUNCTION
+    static __gthread_once_t once = __GTHREAD_ONCE_INIT;
+    __gthread_once (&once, init);
+#endif
+    __gthread_recursive_mutex_lock (&mutex);
+  }
+
+  void static_mutex::unlock ()
+  {
+    __gthread_recursive_mutex_unlock (&mutex);
+  }
 }

 #ifndef _GLIBCXX_GUARD_TEST_AND_ACQUIRE
@@ -144,12 +185,12 @@
       bool unlock;
       mutex_wrapper (): unlock(true)
       {
-	static_mutex.lock();
+	static_mutex::lock ();
       }
       ~mutex_wrapper ()
       {
	if (unlock)
-	  static_mutex.unlock();
+	  static_mutex::unlock ();
       }
     } mw;
@@ -172,7 +213,7 @@
     recursion_pop (g);
 #ifdef __GTHREADS
     if (__gthread_active_p ())
-      static_mutex.unlock();
+      static_mutex::unlock ();
 #endif
   }
@@ -183,7 +224,7 @@
     _GLIBCXX_GUARD_SET_AND_RELEASE (g);
 #ifdef __GTHREADS
     if (__gthread_active_p ())
-      static_mutex.unlock();
+      static_mutex::unlock ();
 #endif
   }
Mine.
Note that we don't actually know whether this is a regression: without the stricter error checking that is now present, this may not have been failing before.
This is probably just another ordering issue. I'm on it. -benjamin
This is a regression, oh well. Can you confirm for me that mingw32 is a target w/o __cxa_atexit? I don't suppose it will make any difference, but can you please try:

-  __gnu_cxx::__recursive_mutex static_mutex;
+  static __gnu_cxx::__recursive_mutex static_mutex;

best,
benjamin
Created attachment 12407 [details] simple failure testcase
Created attachment 12408 [details] patch Please try this and see if it works. If so, let me know. -benjamin
I'm testing it now.
OK. Seems to be working (i.e. build succeeded and testing isn't blowing up).
Hmm. Eric, are you testing this on mingw32, or on darwin? If darwin, is this the cause of the recent massive failures? If so, I'll put this in immediately. If you can let me know in the next 2-3 hours I can get it in today. -benjamin
Testing on darwin, the patch seems to get rid of the massive failures I was seeing. Thanks Ben.
Subject: Bug 29426

Author: bkoz
Date: Wed Oct 11 20:18:36 2006
New Revision: 117643

URL: http://gcc.gnu.org/viewcvs?root=gcc&view=rev&rev=117643
Log:
2006-10-11  Benjamin Kosnik  <bkoz@redhat.com>

	PR libstdc++/29426
	* libsupc++/guard.cc (get_static_mutex): New.
	(mutex_wrapper::mutex_wrapper): Use it to get properly initialized
	recursive mutex without ordering issues.
	* src/locale_init.cc (__get_locale_mutex): No need to
	uglify. Change to get_locale_mutex.

Modified:
    trunk/libstdc++-v3/ChangeLog
    trunk/libstdc++-v3/libsupc++/guard.cc
    trunk/libstdc++-v3/src/locale_init.cc
(In reply to comment #4)
> -  __gnu_cxx::__recursive_mutex static_mutex;
> +  static __gnu_cxx::__recursive_mutex static_mutex;

I tried that before I submitted the bug report. No dice.

(In reply to comment #11)
> 2006-10-11  Benjamin Kosnik  <bkoz@redhat.com>
>	PR libstdc++/29426
>	* libsupc++/guard.cc (get_static_mutex): New.
>	(mutex_wrapper::mutex_wrapper): Use it to get properly initialized
>	recursive mutex without ordering issues.
>	* src/locale_init.cc (__get_locale_mutex): No need to
>	uglify. Change to get_locale_mutex.

Thanks! I submitted a bug report just before I went to bed and it was fixed in the morning. Wow!

This fixes testsuite failures on mingw32, too. Tested with and without -mthreads. Also tested with a pending (stage 1) patch to enable __cxa_atexit-like destruction on mingw32.

Danny
Benjamin's patch totally broke the C++ compiler on Solaris 2.6 and probably damaged it on Solaris 7, 8 and 9 too. This is again PR target/24071. At least I now have a strong incentive to tackle the problem. :-)
Eric, it looks like you've got this fixed now: great news. Solaris test results for 2.10, 2.9, and 2.8 looked fine for the last month and a half, so I'd assumed this patch was not problematic.

As a side note, it's hard to keep this bug report paged in and out of attention over two months. If we're supposed to care about Solaris 2.5-7, then please post test results on an (at least) weekly basis.

-benjamin
> Eric, it looks like you've got this fixed now: great news. Solaris test
> results for 2.10, 2.9, and 2.8 looked fine for the last month and a half,
> so I'd assumed this patch was not problematic.

I think it is, up to Solaris 9, but the failure mode is not so blatant.

> As a side note, it's hard to deal with paging in and out WRT this bug report
> over two months. If we're supposed to care about Solaris-2.5-7, then please,
> post test results on a (at least) weekly basis.

I'd say that we (the GCC project) have to care about Solaris 7 and up only, at this point. I'm personally interested in Solaris 2.5.1 and 2.6 for some reasons, but I will certainly not bug anyone about them.

And, yes, I try to post results on a weekly basis for all Solaris versions:

http://gcc.gnu.org/ml/gcc-testresults/2006-10/msg01176.html
http://gcc.gnu.org/ml/gcc-testresults/2006-10/msg01177.html
http://gcc.gnu.org/ml/gcc-testresults/2006-10/msg01178.html
http://gcc.gnu.org/ml/gcc-testresults/2006-10/msg01179.html
http://gcc.gnu.org/ml/gcc-testresults/2006-10/msg01180.html
http://gcc.gnu.org/ml/gcc-testresults/2006-10/msg01181.html

I don't have any for recent mainline though, but you probably have already guessed why.