This is the mail archive of the
libstdc++@gcc.gnu.org
mailing list for the libstdc++ project.
Re: [v3] api docs
On 10/12/2007, Benjamin Kosnik <bkoz@redhat.com> wrote:
>
> > > Configure defines _GLIBCXX_ATOMIC_BUILTINS if __sync_fetch_and_add
> > > exists.
> >
> > IIUC it's defined if __sync_fetch_and_add is a builtin - it might
> > exist, but be implemented in a library.
>
> Exactly right.
>
> > > There is no use of CAS (via __sync_bool_compare_and_swap or
> > > __sync_val_compare_and_swap) or other atomic builtins in
> > > libstdc++.
> >
> > Huh?
>
> I see I was wrong on this....
>
> > _S_atomic exists for shared_ptr, which requires
> > a builtin CAS. It would be nice if __default_lock_policy was a general
> > concurrence utility, but IMHO currently it's not. It's specific to
> > shared_ptr's needs and any changes to it had better not break
> > shared_ptr :)
>
> Got it.
>
> However, we need _S_atomic or other to exist for more than
> shared_ptr. As it stands now, I'm expecting the interface
> to be ext/concurrence.h's _Lock_policy, not _GLIBCXX_ATOMIC_BUILTINS.
>
> ie, the documentation for extensions for threads/atomics vs. macros.
Gotcha, I see what you mean, and I've looked at PR34106 now.
Here's how I see things, in pseudo-C++:

switch (__default_lock_policy)
  {
  case _S_atomic:
    // GCC provides a builtin CAS on this platform; it will be used to
    // implement lock-free algorithms.  Might be more accurately named
    // _S_lockfree or _S_builtin_cas.

  case _S_mutex:
    // No builtin CAS, but there might be a builtin fetch-and-add.  Some
    // critical sections may still need a mutex because they cannot be
    // implemented with f&a alone (e.g. _Sp_counted_base::_M_add_ref_lock).
    // This policy could theoretically be split into _S_no_builtin_cas
    // and _S_no_builtin_atomics.  That would make shared_ptr a different
    // type when the default is no_builtin_cas than when it is
    // no_builtin_atomics.  Is that desirable?

  case _S_single:
    // No thread support.
  }
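For concreteness, here's a standalone sketch of the policy selection. The enum mirrors ext/concurrence.h's _Lock_policy, but the macro tests below are illustrative only - the real configure-time logic is more involved:

```cpp
#include <cassert>

// Standalone sketch, NOT the real <ext/concurrence.h>: a lock-policy
// enum and a compile-time default.  __GCC_HAVE_SYNC_COMPARE_AND_SWAP_4
// is a GCC predefined macro; _REENTRANT here is just a stand-in for
// "threads enabled".
enum _Lock_policy { _S_single, _S_mutex, _S_atomic };

#ifdef __GCC_HAVE_SYNC_COMPARE_AND_SWAP_4
static const _Lock_policy __default_lock_policy = _S_atomic;  // builtin CAS
#elif defined(_REENTRANT)
static const _Lock_policy __default_lock_policy = _S_mutex;   // threads, no CAS
#else
static const _Lock_policy __default_lock_policy = _S_single;  // no threads
#endif
```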
compatibility.h really could be simpler, couldn't it :)
fetch_and_add<T> can be implemented directly with
__exchange_and_add_dispatch, eradicating all the faa* functions:

  template<typename T>
    inline T
    fetch_and_add(volatile T* ptr, T addend)
    { return __exchange_and_add_dispatch(ptr, addend); }
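As a sanity check, here's a self-contained version of the same shape, built on the raw __sync_fetch_and_add builtin instead of the library-internal __exchange_and_add_dispatch (which isn't usable outside libstdc++):

```cpp
#include <cassert>

// Sketch: same interface as the fetch_and_add above, but forwarding to
// the raw GCC builtin so it compiles standalone.  The real function
// would call __exchange_and_add_dispatch instead.
template<typename T>
  inline T
  fetch_and_add(volatile T* ptr, T addend)
  { return __sync_fetch_and_add(ptr, addend); }
```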
compare_and_swap would currently be

  template<typename T>
    inline bool
    compare_and_swap(volatile T* ptr, T comparand, T replacement)
    {
#if (defined(__GCC_HAVE_SYNC_COMPARE_AND_SWAP_2) \
     && defined(__GCC_HAVE_SYNC_COMPARE_AND_SWAP_4))
      return __sync_bool_compare_and_swap(ptr, comparand, replacement);
#else
#pragma message("slow compare_and_swap")
      bool res = false;
#pragma omp critical
      {
        if (*ptr == comparand)
          {
            *ptr = replacement;
            res = true;
          }
      }
      return res;
#endif
    }
but now I see where you're coming from - it would be nicer to test if
(__default_lock_policy == _S_atomic) or something like that.
Hmm ...
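Something along these lines, say - again with a local stand-in for __default_lock_policy, since the point is the dispatch, not the real header:

```cpp
#include <cassert>

// Sketch: dispatch on the lock policy instead of testing macros at each
// call site.  _Lock_policy and __default_lock_policy are local stand-ins
// for the ext/concurrence.h names.
enum _Lock_policy { _S_single, _S_mutex, _S_atomic };

static const _Lock_policy __default_lock_policy =
#ifdef __GCC_HAVE_SYNC_COMPARE_AND_SWAP_4
  _S_atomic;
#else
  _S_mutex;
#endif

template<typename T>
  inline bool
  compare_and_swap(volatile T* ptr, T comparand, T replacement)
  {
    if (__default_lock_policy == _S_atomic)
      return __sync_bool_compare_and_swap(ptr, comparand, replacement);
    // _S_mutex / _S_single fallback; a real version would take a mutex
    // here rather than run unsynchronized.
    if (*ptr == comparand)
      {
        *ptr = replacement;
        return true;
      }
    return false;
  }
```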
Jon