cxx-mem-model merge [3 of 9] doc

Joseph S. Myers joseph@codesourcery.com
Fri Nov 4 20:56:00 GMT 2011


On Fri, 4 Nov 2011, Andrew MacLeod wrote:

> + GCC will allow any integral scalar or pointer type that is 1, 2, 4, or 8
> + bytes in length. 16 bytes integral types are also allowed if

"16 bytes" appears to be used as an adjective; "16-byte" would be better.

> + @samp{__int128_t} is supported by the architecture.

The preferred name is __int128.  You probably also want a cross-reference 
to the __int128 section here.

> + to the same names in the C++11 standard.  Refer there or to the GCC wiki
> + on atomics for more detailed definitions.  These memory models integrate

"GCC wiki on atomics" should use a link (@uref).

> + restrictive __ATOMIC_SEQ_CST model.  Any of the other memory models will

@code{__ATOMIC_SEQ_CST}.

> + functions will map any runtime value to __ATOMIC_SEQ_CST rather than invoke

Likewise.

> + The valid memory model variants are
> + __ATOMIC_RELAXED, __ATOMIC_SEQ_CST, __ATOMIC_ACQUIRE, and
> + __ATOMIC_CONSUME.

Likewise.

> + __sync_lock_release on such hardware.

Likewise.

> + The valid memory model variants are
> + __ATOMIC_RELAXED, __ATOMIC_SEQ_CST, and __ATOMIC_RELEASE.

Likewise.

> + written.  This mimics the behaviour of __sync_lock_test_and_set on such
> + hardware.

Likewise.  (Also, "behaviour" should be "behavior"; GCC documentation 
uses American spellings.)

> + The valid memory model variants are
> + __ATOMIC_RELAXED, __ATOMIC_SEQ_CST, __ATOMIC_ACQUIRE,
> + __ATOMIC_RELEASE, and __ATOMIC_ACQ_REL.

Likewise.

> + False is returned otherwise, and the execution is considered to conform
> + to @var{failure_memmodel}. This memory model cannot be __ATOMIC_RELEASE
> + nor __ATOMIC_ACQ_REL.  It also cannot be a stronger model than that
> + specified by @var{success_memmodel}.

Likewise.

> + __atomic_is_lock_free.

Likewise.

> + If this pattern is not provided, the __atomic_compare_exchange built-in
> + functions will utilize the legacy sync_compare_and_swap pattern with a
> + seq-cst memory model.

Likewise.

> + If not present, the __atomic_load built-in function will either resort to
> + a normal load with memory barriers, or a compare_and_swap operation if
> + a normal load would not be atomic.

Likewise.

> + If not present, the __atomic_store built-in function will attempt to
> + perform a normal store and surround it with any required memory fences.  If
> + the store would not be atomic, then an __atomic_exchange is attempted with
> + the result being ignored.

Likewise.

> + If this pattern is not present, the built-in function __atomic_exchange
> + will attempt to preform the operation with a compare and swap loop.

Likewise.  Also, "preform" is a typo for "perform".

> + If these patterns are not defined, attempts will be made to use legacy
> + sync_op patterns.  If none of these are available a compare_and_swap loop
> + will be used.

Likewise.

> + If these patterns are not defined, attempts will be made to use legacy
> + sync_op patterns, or equivilent patterns which return the result before

Likewise.  Also, "equivilent" should be "equivalent".

> + If this pattern is not specified, all memory models except RELAXED will
> + result in issuing a sync_synchronize barrier pattern.

Likewise.

> + This pattern should impact the compiler optimizers the same way that
> + mem_signal_fence does, but it does not need to issue any barrier
> + instructions.

Likewise.

> + If this pattern is not specified, all memory models except RELAXED will
> + result in issuing a sync_synchronize barrier pattern.

Likewise.

-- 
Joseph S. Myers
joseph@codesourcery.com


