Re: [tsan] Instrument atomics


On Fri, Nov 23, 2012 at 6:05 PM, Jakub Jelinek <jakub@redhat.com> wrote:
> Hi!
>
> This patch attempts to instrument the __atomic_* and __sync_* builtins.
> Unfortunately, none of the builtins map 1:1 to their tsan replacements:
> tsan uses a weird memory model encoding (apparently 1 << 0 to 1 << 5
> instead of the values 0 to 5, as if one could have more than one
> memory model at the same time), so even the easy cases have to be
> transformed by GCC.  More importantly, there is no way to pass through
> __ATOMIC_HLE_ACQUIRE/__ATOMIC_HLE_RELEASE.  Right now the patch just
> gives up and doesn't replace anything if e.g.
> __atomic_store_n (p, 0, __ATOMIC_HLE_RELEASE | __ATOMIC_RELEASE);
> is seen.  Perhaps the HLE bits, which are just an optimization, could
> be ignored, but the generic code has no way to tell whether such extra
> bits are merely an optimization or change the behavior of the builtin.
>
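
(For reference, the memory model translation GCC has to do amounts to
roughly the following.  This is only an illustrative sketch, not code
from the patch, and the give-up test for extra bits is simplified:)

  /* GCC's __ATOMIC_RELAXED .. __ATOMIC_SEQ_CST are the values 0 .. 5,
     while the tsan entry points expect 1 << 0 .. 1 << 5.  Anything
     above __ATOMIC_SEQ_CST (e.g. a model with the HLE bits or'ed in)
     makes the instrumentation give up, as described above.  */
  static int
  tsan_memmodel (int model)
  {
    if (model > __ATOMIC_SEQ_CST)
      return -1;   /* leave the builtin untouched */
    return 1 << model;
  }
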
> Another issue is that libtsan apparently uses just the deprecated
> __sync_* builtins internally (or no builtins at all for load/store).
> That works in some cases (because __sync_* is usually equivalent to
> __atomic_* with the __ATOMIC_SEQ_CST memmodel), but not for the
> load/store cases or for atomic exchange (the former provide no barrier
> at all, and the latter, __sync_lock_test_and_set, is only an
> __ATOMIC_ACQUIRE barrier).  Could libtsan at least conditionally, when
> built with GCC 4.7 or later, use the __atomic_* builtins instead of
> __sync_*?  One probably still has to use the __ATOMIC_SEQ_CST model
> anyway, otherwise there would need to be a switch based on the memory
> model, as only constant memory model arguments to the __atomic_*
> builtins result in anything other than sequential consistency.
>
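
(The conditional use of __atomic_* being asked for could look roughly
like the sketch below inside libtsan; the helper name is made up and
the __sync_* fallback only approximates a sequentially consistent
load:)

  static unsigned
  atomic_load_seq_cst (const volatile unsigned *a)
  {
  #if defined(__GNUC__) && (__GNUC__ * 100 + __GNUC_MINOR__ >= 407)
    return __atomic_load_n (a, __ATOMIC_SEQ_CST);
  #else
    unsigned v = *a;        /* plain load...                 */
    __sync_synchronize ();  /* ...followed by a full barrier */
    return v;
  #endif
  }
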
> Another issue is that there are no fetch-and-nand (or nand-and-fetch)
> tsan entry points.  I could transform those into a CAS loop, but that
> looks like overkill; it would be better to change libtsan instead.
>
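
(The CAS loop alluded to above, for the 32-bit case; purely
illustrative, this is not what the patch emits:)

  static unsigned
  fetch_nand_4 (volatile unsigned *p, unsigned val)
  {
    unsigned old = __atomic_load_n (p, __ATOMIC_RELAXED);
    /* fetch_nand stores ~(old & val); on CAS failure old is refreshed
       with the current value and the loop retries.  */
    while (!__atomic_compare_exchange_n (p, &old, ~(old & val),
                                         /*weak=*/1, __ATOMIC_SEQ_CST,
                                         __ATOMIC_RELAXED))
      ;
    return old;
  }
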
> The most important problem is that __tsan_atomic*_compare_exchange has
> just a single memory model argument, while the __atomic_* builtins have
> two (one for success, another one for failure).  Right now the patch
> just ignores the failure memory model, but that might actually lead to
> not detecting a race in a program when it should have been detected
> (note that the failure memory model can't be stronger than the success
> memory model).  Would it be possible to add another argument to those
> entry points, and use the failure model instead of the success model if
> the atomic operation fails?
>
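
(A sketch of the information loss; tsan_cas_val below is just a
stand-in declaration for the real __tsan_atomicN_compare_exchange
entry point, whose exact signature isn't quoted here:)

  extern unsigned tsan_cas_val (volatile unsigned *a, unsigned cmp,
                                unsigned xchg, int mo /* one model */);

  static int
  instrumented_cas (volatile unsigned *p, unsigned *expected,
                    unsigned desired)
  {
    /* The original call was
         __atomic_compare_exchange_n (p, expected, desired, 0,
                                      __ATOMIC_ACQ_REL, __ATOMIC_ACQUIRE);
       and the __ATOMIC_ACQUIRE failure model is simply dropped.  */
    unsigned old = tsan_cas_val (p, *expected, desired,
                                 1 << __ATOMIC_ACQ_REL);
    if (old == *expected)
      return 1;
    *expected = old;
    return 0;
  }
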
> Apparently there are also no entry points for op-and-fetch (as opposed
> to fetch-and-op), but the patch translates those into the corresponding
> fetch-and-op libcalls and applies + val (etc.) to the result.
>
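
(For addition, for instance, the rewrite is the one sketched below;
tsan_fetch_add is again a stand-in for the real __tsan_atomicN_fetch_add
entry point.  sub/and/or/xor are analogous, and for nand the result
would be ~(old & val):)

  extern unsigned tsan_fetch_add (volatile unsigned *a, unsigned val,
                                  int mo);

  static unsigned
  instrumented_add_fetch (volatile unsigned *p, unsigned val)
  {
    /* __atomic_add_fetch (p, val, __ATOMIC_SEQ_CST) becomes a
       fetch_add libcall plus the operation applied to its result.  */
    return tsan_fetch_add (p, val, 1 << __ATOMIC_SEQ_CST) + val;
  }
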
> Oh, and there are no 16-byte atomics provided by libtsan; the library
> would need to be built with -mcx16 on x86_64 for that and would need
> additional entry points for unsigned __int128.  The patch doesn't touch
> the 16-byte atomics and leaves them as is.
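
(What a 16-byte access looks like on the GCC builtin side, as a sketch;
with -mcx16 on x86_64 GCC can expand this inline via cmpxchg16b,
otherwise it remains an out-of-line call:)

  static unsigned __int128
  load_16 (volatile unsigned __int128 *p)
  {
    return __atomic_load_n (p, __ATOMIC_SEQ_CST);
  }
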


Hi,

I've added 128-bit atomic ops:
http://llvm.org/viewvc/llvm-project?view=rev&revision=168683

Refactored the atomic ops so that the atomic operation itself is done
under the mutex:
http://llvm.org/viewvc/llvm-project?view=rev&revision=168682

And added an atomic nand operation:
http://llvm.org/viewvc/llvm-project?view=rev&revision=168584

