This is the mail archive of the gcc-bugs@gcc.gnu.org mailing list for the GCC project.



[Bug libstdc++/40297] [C++0x] debug mode vs atomics



------- Comment #2 from jwakely dot gcc at gmail dot com  2009-06-16 12:35 -------
I think all the assertions are simply backwards; the load() operation requires:
"The order argument shall not be memory_order_release nor
memory_order_acq_rel."

so the assertions should be:
      __glibcxx_assert(__m != memory_order_release);
      __glibcxx_assert(__m != memory_order_acq_rel);
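
In context the corrected load() would then look something like this (only
the assertions change; the rest of the function stays as it is now):

    __int_type
    load(memory_order __m = memory_order_seq_cst) volatile
    {
      __glibcxx_assert(__m != memory_order_release);
      __glibcxx_assert(__m != memory_order_acq_rel);
      // ... existing load implementation unchanged ...
    }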

I can prepare a patch to fix that everywhere if Benjamin agrees.

Debug mode apart, I think there are more problems with the memory ordering.

__atomic2::atomic_flag::test_and_set() issues a full barrier when m ==
memory_order_relaxed (too much synchronization) but is not a release
operation when m == memory_order_acq_rel (not enough synchronization).
__sync_lock_test_and_set is always an acquire barrier, so the full barrier
is only needed when the ordering says the operation should also be a
release (the __m >= memory_order_release test below works because release,
acq_rel and seq_cst are the last three memory_order enumerators), i.e.:

    bool
    test_and_set(memory_order __m = memory_order_seq_cst) volatile
    {
      // Redundant synchronize if built-in for lock is a full barrier.
      if (__m >= memory_order_release)
        __sync_synchronize();
      return __sync_lock_test_and_set(&_M_i, 1);
    }


__atomic2::atomic_flag::clear() also issues an unwanted full memory barrier
when m == memory_order_relaxed, and in debug mode doesn't enforce the
requirement that m != acquire and m != acq_rel.

It seems strange to me that clear() allows memory_order_consume but not
acquire. I'll ask on the lib reflector whether that's an oversight; assuming
it is, I would write it as:

    void
    clear(memory_order __m = memory_order_seq_cst) volatile
    {
      __glibcxx_assert(__m != memory_order_acquire);
      __glibcxx_assert(__m != memory_order_acq_rel);
      __sync_lock_release(&_M_i);
      if (__m == memory_order_seq_cst)
        __sync_synchronize();
    }
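
For what it's worth, the usage these orderings have to support is the
classic spin lock idiom - acquire on test_and_set, release on clear.
Purely illustrative, standalone toy code (not proposed for the library):

    // Toy flag built directly on the __sync builtins, showing why the
    // acquire/release pairing matters.
    struct toy_flag
    {
      volatile int _M_i;

      bool
      test_and_set() volatile
      { return __sync_lock_test_and_set(&_M_i, 1); }  // acquire barrier

      void
      clear() volatile
      { __sync_lock_release(&_M_i); }                 // release barrier
    };

    toy_flag lock_flag = { 0 };

    void
    with_lock()
    {
      while (lock_flag.test_and_set())
        ;                      // spin: acquire makes the owner's writes visible
      // ... critical section ...
      lock_flag.clear();       // release: publish our writes before unlocking
    }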

atomic_*::store() has no memory barrier when m == memory_order_release.
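
Something like this, perhaps (untested sketch, assuming a plain store plus
__sync_synchronize is an acceptable mapping for release and seq_cst stores
on the targets we support):

    void
    store(__int_type __i, memory_order __m = memory_order_seq_cst) volatile
    {
      if (__m >= memory_order_release)
        __sync_synchronize();   // release/seq_cst: flush earlier writes first
      _M_i = __i;
      if (__m == memory_order_seq_cst)
        __sync_synchronize();   // keep seq_cst stores totally ordered
    }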

atomic_*::fetch() would benefit from an atomic load operation.
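
One way to get that out of the existing builtins (illustrative only - it is
heavier than a real atomic load, but at least it isn't a racy plain read) is
a no-op read-modify-write:

      // Returns the current value atomically; adding 0 leaves it unchanged.
      __int_type __ret = __sync_fetch_and_add(&_M_i, 0);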

atomic_*::exchange() needs a release barrier when m >= memory_order_release.
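
That would be the same treatment as test_and_set() above, e.g. (untested
sketch, assuming exchange() is implemented with __sync_lock_test_and_set):

    __int_type
    exchange(__int_type __i, memory_order __m = memory_order_seq_cst) volatile
    {
      // __sync_lock_test_and_set is only an acquire barrier; add the
      // release half when the ordering asks for it.
      if (__m >= memory_order_release)
        __sync_synchronize();
      return __sync_lock_test_and_set(&_M_i, __i);
    }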

N.B. I have not fully grokked the atomic operations library, so everything I've
said above might be wrong!


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=40297

