This is the mail archive of the gcc-bugs@gcc.gnu.org mailing list for the GCC project.
[Bug c/84522] New: GCC does not generate cmpxchg16b when mcx16 is used
- From: "nruslan_devel at yahoo dot com" <gcc-bugzilla at gcc dot gnu dot org>
- To: gcc-bugs at gcc dot gnu dot org
- Date: Thu, 22 Feb 2018 20:43:03 +0000
- Subject: [Bug c/84522] New: GCC does not generate cmpxchg16b when mcx16 is used
- Auto-submitted: auto-generated
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=84522
Bug ID: 84522
Summary: GCC does not generate cmpxchg16b when mcx16 is used
Product: gcc
Version: unknown
Status: UNCONFIRMED
Severity: normal
Priority: P3
Component: c
Assignee: unassigned at gcc dot gnu.org
Reporter: nruslan_devel at yahoo dot com
Target Milestone: ---
I looked up similar bugs, but I could not quite understand why GCC redirects to
libatomic for 128-bit cmpxchg on x86-64 even when the '-mcx16' flag is
specified, especially since the analogous cmpxchg8b on x86 (32-bit) is still
emitted inline without redirecting to libatomic.
Bug 80878 mentioned something about read-only memory, but that should only apply
to atomic_load, not atomic_compare_and_exchange. Right?
It is especially annoying because libatomic does not guarantee lock-freedom;
these functions therefore become useless in many cases.
This compiler behavior is inconsistent with clang.
For instance, for the following code:
#include <stdatomic.h>

__uint128_t cmpxhg_weak(_Atomic(__uint128_t) *obj, __uint128_t *expected,
                        __uint128_t desired)
{
    return atomic_compare_exchange_weak(obj, expected, desired);
}
GCC generates:
(gcc -std=c11 -mcx16 -Wall -O2 -S test.c)
cmpxhg_weak:
        subq    $8, %rsp
        movl    $5, %r9d
        movl    $5, %r8d
        call    __atomic_compare_exchange_16@PLT
        xorl    %edx, %edx
        movzbl  %al, %eax
        addq    $8, %rsp
        ret
while clang/llvm generates code that is obviously lock-free:
cmpxhg_weak:                            # @cmpxhg_weak
        pushq   %rbx
        movq    %rdx, %r8
        movq    (%rsi), %rax
        movq    8(%rsi), %rdx
        xorl    %r9d, %r9d
        movq    %r8, %rbx
        lock cmpxchg16b (%rdi)
        sete    %cl
        je      .LBB0_2
        movq    %rax, (%rsi)
        movq    %rdx, 8(%rsi)
.LBB0_2:
        movb    %cl, %r9b
        xorl    %edx, %edx
        movq    %r9, %rax
        popq    %rbx
        retq
However, for 32-bit code GCC still generates cmpxchg8b inline:
#include <stdatomic.h>
#include <inttypes.h>

uint64_t cmpxhg_weak(_Atomic(uint64_t) *obj, uint64_t *expected,
                     uint64_t desired)
{
    return atomic_compare_exchange_weak(obj, expected, desired);
}
(gcc -std=c11 -m32 -Wall -O2 -S test.c)
cmpxhg_weak:
        pushl   %edi
        pushl   %esi
        pushl   %ebx
        movl    20(%esp), %esi
        movl    24(%esp), %ebx
        movl    28(%esp), %ecx
        movl    16(%esp), %edi
        movl    (%esi), %eax
        movl    4(%esi), %edx
        lock cmpxchg8b (%edi)
        movl    %edx, %ecx
        movl    %eax, %edx
        sete    %al
        je      .L2
        movl    %edx, (%esi)
        movl    %ecx, 4(%esi)
.L2:
        popl    %ebx
        movzbl  %al, %eax
        xorl    %edx, %edx
        popl    %esi
        popl    %edi
        ret