This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.
Re: Use atomics in libgomp mutex
On 11/29/2011 04:27 AM, Alan Modra wrote:
> Did you see my comment about the OpenMP spec requirement that critical,
> ordered and parallel regions have a flush on entry and on exit? So
> should I commit a version with
No, I missed that. Too many branches to this thread. ;-)
> /* FIXME: This should just be MEMMODEL_ACQUIRE, MEMMODEL_RELAXED
> but GOMP_critical_start and other functions rely on the lock
> acquisition doing a flush. See OpenMP API version 3.1 section
> 2.8.6 flush Construct. */
> if (__builtin_expect (!__atomic_compare_exchange_n (mutex, &oldval, 1, false,
> MEMMODEL_ACQ_REL,
> MEMMODEL_ACQUIRE),
> 0))
No, I think we should simply put other barriers elsewhere.
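To make the alternative concrete, here is a minimal sketch of an acquire-only lock acquisition, written with the user-level `__ATOMIC_*` constants that correspond to libgomp's internal `MEMMODEL_*` names. The `gomp_mutex_t` typedef and `try_lock` helper are illustrative reductions, not the actual libgomp code; any flush the OpenMP spec requires would then come from explicit fences at the call sites rather than from the lock itself.

```c
#include <stdbool.h>

typedef int gomp_mutex_t;  /* illustrative reduction of libgomp's type */

/* Acquire-only CAS: on success we get acquire semantics, on failure a
   relaxed load of the current value.  No release ordering here; callers
   that need a flush on entry supply their own fence.  */
static int
try_lock (gomp_mutex_t *mutex)
{
  int oldval = 0;
  return __atomic_compare_exchange_n (mutex, &oldval, 1, false,
                                      __ATOMIC_ACQUIRE, __ATOMIC_RELAXED);
}
```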
> + /* There is an implicit flush on entry to an ordered region. */
> + __atomic_thread_fence (MEMMODEL_RELEASE);
> +
> /* ??? I believe it to be safe to access this data without taking the
> ws->lock. The only presumed race condition is with the previous
> thread on the queue incrementing ordered_cur such that it points
How about MEMMODEL_ACQ_REL, per the comment about the lock, which would have done the flush? Given that we have no lock here, and no ACQUIRE barrier, perhaps the full barrier makes more sense?
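A sketch of that alternative for the ordered-region entry, assuming a reduced `work_share_sketch` struct standing in for libgomp's `ws` (hypothetical names): since there is no lock and thus no acquire side, a single acq-rel fence covers both directions before the unlocked access.

```c
struct work_share_sketch { int ordered_cur; };  /* hypothetical reduction */

/* Entry to an ordered region: one acq-rel fence in place of the lock's
   implicit flush, then the field is read without taking ws->lock, as
   the comment in the quoted patch argues is safe.  */
static int
ordered_sync_sketch (struct work_share_sketch *ws)
{
  __atomic_thread_fence (__ATOMIC_ACQ_REL);
  return ws->ordered_cur;
}
```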
> + /* There is an implicit flush on entry to a critical region. */
> + __atomic_thread_fence (MEMMODEL_RELEASE);
> gomp_mutex_lock (&default_lock);
And, yes, this together with the ACQUIRE barrier inside the lock makes a full barrier. (Which also reminds me that for gcc 4.8 we should expose these barriers at both the gimple and rtl levels and optimize them; otherwise we'll get two sequential lwsync-like insns here on any target where the barrier isn't included in the lock insn directly.)
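The composition being described can be sketched as follows: a release fence immediately before an acquire-ordered lock acquisition adds up to a full barrier at the critical-region entry. `default_lock` matches the name in the quoted hunk; the rest is an illustrative reduction, with the contended path collapsed to a plain spin.

```c
#include <stdbool.h>

typedef int gomp_mutex_t;
static gomp_mutex_t default_lock;

static void
critical_start_sketch (void)
{
  /* Implicit flush on entry to a critical region (release half)...  */
  __atomic_thread_fence (__ATOMIC_RELEASE);

  /* ...plus the acquire half from the lock acquisition itself.
     Together these give full-barrier semantics at region entry.  */
  int oldval = 0;
  while (!__atomic_compare_exchange_n (&default_lock, &oldval, 1, false,
                                       __ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
    oldval = 0;
}
```

On targets whose lock instruction already implies a barrier, this is exactly the redundant back-to-back-fence case the gcc 4.8 optimization remark is about.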
As for parallel regions, I agree the barrier implementation should flush all data.
And I've found the menu for next week: http://www.masterchef.com.au/fried-brains-with-bacon-crumble-and-apple-vinaigrette.htm
r~