Created attachment 28182 [details]
std::timed_mutex::try_lock_for test

Hello,

1. When using std::timed_mutex::try_lock_for() with high-precision timers enabled on Linux, the method returns immediately rather than after the given time duration has elapsed. This is because it uses the CLOCK_MONOTONIC clock (if available on the platform) to calculate the absolute time at which it needs to return, which is incorrect: the POSIX pthread_mutex_timedlock() call uses the CLOCK_REALTIME clock, and on my platform the monotonic clock is way behind the real-time clock. Please see the attached cpptest.cpp file to reproduce the problem:

$ g++-4.7.1 -std=gnu++11 cpptest.cpp -o cpptest
$ ./cpptest
mutex locked
delay: 0

It should have displayed "delay: 3" instead.

2. std::condition_variable uses std::chrono::system_clock for timed waits; I suggest changing that to std::chrono::high_resolution_clock, which has the same precision as the POSIX pthread_cond_timedwait() function.

My platform:

# uname -a
Linux epcau-srv-dev 2.6.33.7-rt29-0.5-rt #1 SMP PREEMPT RT 2010-08-25 19:40:23 +0200 x86_64 x86_64 x86_64 GNU/Linux

# g++-4.7.1 -v
Reading specs from /opt/epoch/lib64/gcc/x86_64-suse-linux/4.7.1/specs
COLLECT_GCC=g++-4.7.1
COLLECT_LTO_WRAPPER=/opt/epoch/lib64/gcc/x86_64-suse-linux/4.7.1/lto-wrapper
Target: x86_64-suse-linux
Configured with: ../gcc-4.7.1-mutex/configure --build=x86_64-suse-linux --prefix=/opt/epoch --libdir=/opt/epoch/lib64 --libexecdir=/opt/epoch/lib64 --with-slibdir=/opt/epoch/lib64/gcc/x86_64-suse-linux/4.7.1/libgcc/lib64 --with-gxx-include-dir=/opt/epoch/include/c++/4.7.1 --program-suffix=-4.7.1 --enable-version-specific-runtime-libs --enable-languages=c++ --disable-nls --disable-libssp --enable-__cxa_atexit --with-system-zlib --enable-gold --with-plugin-ld=/usr/bin/gold --enable-lto --enable-libstdcxx-time=rt --enable-linux-futex --enable-libstdcxx-threads --enable-libstdcxx-debug --enable-libstdcxx-allocator=new
Thread model: posix
gcc version 4.7.1 (GCC)
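To illustrate the mismatch I believe is happening, here is a minimal sketch (the helper name is hypothetical; this is not the actual libstdc++ code): the relative timeout is turned into an absolute deadline on CLOCK_MONOTONIC, but pthread_mutex_timedlock() interprets its timespec argument against CLOCK_REALTIME, so when the monotonic clock is behind the real-time clock the deadline is already in the past and the call times out immediately.

#include <pthread.h>
#include <time.h>

// Hypothetical sketch of the suspected behaviour, not the libstdc++ source.
bool try_lock_for_seconds(pthread_mutex_t* m, long secs)
{
    timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts); // deadline computed on the wrong clock
    ts.tv_sec += secs;
    // pthread_mutex_timedlock() expects an absolute CLOCK_REALTIME time, so if
    // CLOCK_MONOTONIC is behind CLOCK_REALTIME this deadline has already passed
    // and the call returns ETIMEDOUT immediately.
    return pthread_mutex_timedlock(m, &ts) == 0;
}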
Created attachment 28183 [details] suggested patch
Thanks for the report, Zoltan, it's on my list to look at asap.
Thanks Jon, how weird to communicate with you on the gcc mailing lists... I hope you are doing well.

Cheers,
Zoltan
Created attachment 30320 [details]
proposed patch to use clocks correctly

The <condition_variable> change should be unnecessary, as high_resolution_clock is a typedef for system_clock in our implementation.

The timed_mutex change is an improvement but not ideal, because the standard says implementations "should" use a steady clock for try_lock_for() but use the system_clock for try_lock_until(), and pthread_mutex_timedlock() uses CLOCK_REALTIME, which corresponds to our system_clock.

I'm going to test this patch (against the current svn trunk, after gcc.gnu.org/r200180) to fix the timed_mutex issue. The patch attempts to use the steady clock for relative timeouts, converting to the high_resolution_clock (aka system_clock) for the pthread call, and re-checks on timeout to handle cases where a clock is adjusted during the wait.

N.B. your test technically has undefined behaviour because it attempts to relock the mutex in the same thread, so I'm using this to reproduce the problem instead:

#include <iostream>
#include <chrono>
#include <mutex>
#include <thread>

typedef std::chrono::high_resolution_clock rt_clock;
typedef std::chrono::time_point<rt_clock> rt_time_point;

std::timed_mutex mutex;

void f()
{
    mutex.try_lock_for(std::chrono::seconds(3));
}

int main()
{
    mutex.lock();
    std::cout << "mutex locked" << std::endl;

    rt_time_point t1 = rt_clock::now();
    std::thread(f).join();
    rt_time_point t2 = rt_clock::now();

    std::cout << "delay: "
              << std::chrono::duration_cast<std::chrono::seconds>(t2 - t1).count()
              << std::endl;
    return 0;
}
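For clarity, a rough sketch of the idea (hypothetical helper, not the patch itself): measure the relative timeout against steady_clock, express the deadline on system_clock for the pthread_mutex_timedlock call underneath, and on timeout re-check the steady clock in case the realtime clock was adjusted during the wait.

#include <chrono>
#include <mutex>

// Hypothetical sketch of the approach described above, not the actual patch.
template<typename Rep, typename Period>
bool try_lock_for_sketch(std::timed_mutex& m,
                         const std::chrono::duration<Rep, Period>& rtime)
{
    using namespace std::chrono;
    const auto deadline = steady_clock::now() + rtime; // relative timeout on the steady clock
    do {
        const auto remaining = deadline - steady_clock::now();
        if (remaining <= remaining.zero())
            return m.try_lock();
        // try_lock_until with a system_clock time point maps onto the
        // CLOCK_REALTIME-based pthread_mutex_timedlock call underneath.
        if (m.try_lock_until(system_clock::now() + remaining))
            return true;
        // Timed out according to the realtime clock; loop and re-check the
        // steady clock in case the realtime clock was adjusted meanwhile.
    } while (steady_clock::now() < deadline);
    return false;
}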
Author: redi
Date: Mon Nov 11 13:33:48 2013
New Revision: 204672

URL: http://gcc.gnu.org/viewcvs?rev=204672&root=gcc&view=rev
Log:
	PR libstdc++/54562
	* include/std/mutex (__timed_mutex_impl::__clock_t): Use
	high_resolution_clock for absolute timeouts, because
	pthread_mutex_timedlock uses CLOCK_REALTIME not CLOCK_MONOTONIC.
	(__timed_mutex_impl::_M_try_lock_for): Use steady_clock for relative
	timeouts as per [thread.req.timing].
	(__timed_mutex_impl::_M_try_lock_until<Clock,Duration>): Convert to
	__clock_t time point instead of using _M_try_lock_for.

Modified:
    trunk/libstdc++-v3/ChangeLog
    trunk/libstdc++-v3/include/std/mutex
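To spell out the _M_try_lock_until change in the log above: the conversion amounts to re-expressing the caller's deadline on __clock_t, i.e. our system_clock. Roughly (an illustrative sketch only, not the committed code; the function name is made up):

#include <chrono>
#include <mutex>

// Illustrative only: re-expresses a deadline given on an arbitrary Clock as a
// system_clock (CLOCK_REALTIME) deadline, which is what the underlying
// pthread_mutex_timedlock call understands.
template<typename Clock, typename Duration>
bool try_lock_until_sketch(std::timed_mutex& m,
                           const std::chrono::time_point<Clock, Duration>& atime)
{
    // Measure how far away the deadline is on the caller's clock, then apply
    // that offset to system_clock.
    auto rtime = atime - Clock::now();
    return m.try_lock_until(std::chrono::system_clock::now() + rtime);
}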
Fixed for 4.9.0. We still don't handle clock adjustments; doing so requires looping, which adds overhead that might not be desirable. I'll revisit that topic another time.