With valid but large steady_clock time_points, condition_variable::wait_until does not sleep at all, but instead continues as if the time had already passed. Perhaps related to http://gcc.gnu.org/bugzilla/show_bug.cgi?id=54562

Example:

  #include <chrono>
  #include <mutex>
  #include <condition_variable>

  int main()
  {
    std::mutex m;
    std::condition_variable cv;
    std::unique_lock<std::mutex> lk(m);

    // does not sleep at all:
    cv.wait_until(lk, std::chrono::time_point<std::chrono::steady_clock>::max());

    // sleeps fine:
    //cv.wait_until(lk,
    //    std::chrono::steady_clock::now() + 10000*std::chrono::hours{24*365});
  }

cheers / Johan - thanks for a great compiler!

PS.
* I compiled gcc with --enable-libstdcxx-time=yes, using 64-bit linux 3.5.0.
* The bug does not occur with system_clock.
* I used time_point::max() to let a worker thread wait while a queue of delayed events was empty.
I have a fix for PR 54562 so I'll see if it solves this. N.B. --enable-libstdcxx-time=yes should not be necessary for 4.8 if you have glibc 2.17 or later.
Fixing PR 54562 doesn't help. This can be reduced to

  #include <chrono>
  #include <cassert>

  int main()
  {
    using StClock = std::chrono::steady_clock;
    using SysClock = std::chrono::system_clock;

    auto st_atime = std::chrono::time_point<StClock>::max();
    const StClock::time_point st_now = StClock::now();
    const SysClock::time_point sys_now = SysClock::now();

    const auto delta = st_atime - st_now;
    const auto sys_atime = sys_now + delta;

    assert( sys_atime > sys_now );
  }
This is still a problem in current gcc trunk. The bug is in the condition_variable::wait_until clock conversion: it doesn't check for overflow in that math. Since the steady_clock and system_clock epochs can be very different, it's likely to overflow with values much less than max().

  template<typename _Clock, typename _Duration>
    cv_status
    wait_until(unique_lock<mutex>& __lock,
               const chrono::time_point<_Clock, _Duration>& __atime)
    {
      // DR 887 - Sync unknown clock to known clock.
      const typename _Clock::time_point __c_entry = _Clock::now();
      const __clock_t::time_point __s_entry = __clock_t::now();
      const auto __delta = __atime - __c_entry;
      const auto __s_atime = __s_entry + __delta;
      return __wait_until_impl(__lock, __s_atime);
    }

I modified my version of gcc to use steady_clock as condition_variable's "known clock" (__clock_t). This is more correct according to the C++ standard and, most importantly, it makes condition_variable resilient to clock changes when used in conjunction with steady_clock. Because of this, in my case, it works fine with steady_clock::time_point::max(), but fails with system_clock::time_point::max(). Because I made that change, and since I don't do timed waits on system_clock (which is unsafe anyway), the overflow hasn't been a problem for me and I haven't fixed it.
See bug 41861 for discussion of steady_clock wrt condition_variable.
Created attachment 43261 [details] Patch to check for overflow
This problem isn't as serious now that waiting on std::chrono::steady_clock doesn't use the generic wait_until implementation, but it's worth fixing regardless. I have a reproduction case, and Aaron's fix in attachment 43261 [details] (with some minor tweaks to accommodate the recent addition of __detail::ceil) still appears to work.