
Re: [PATCH][AArch64] Allow const0_rtx operand for atomic compare-exchange patterns


Hi Andrew,

On 20/06/17 06:06, Andrew Pinski wrote:
On Tue, Feb 28, 2017 at 4:29 AM, Kyrill Tkachov
<kyrylo.tkachov@foss.arm.com> wrote:
Hi all,

For the testcase in this patch we currently generate:
foo:
         mov     w1, 0
         ldaxr   w2, [x0]
         cmp     w2, 3
         bne     .L2
         stxr    w3, w1, [x0]
         cmp     w3, 0
.L2:
         cset    w0, eq
         ret

Note that the STXR could have stored the WZR register directly instead of
first moving zero into w1.  This is due to overly strict predicates and
constraints in the store-exclusive pattern and in the atomic compare-exchange
expanders and splitters.  This simple patch relaxes them in the patterns
concerned, and with it we generate:
foo:
         ldaxr   w1, [x0]
         cmp     w1, 3
         bne     .L2
         stxr    w2, wzr, [x0]
         cmp     w2, 0
.L2:
         cset    w0, eq
         ret
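
A testcase along these lines produces the sequences above (a sketch only:
the weak flag and the acquire memory orders are inferred from the generated
code and may differ from the exact test added by the patch):

int
foo (int *a)
{
  int expected = 3;

  /* Weak compare-and-swap: if *a == 3, store 0.  With the patch the
     compiler can store the zero via wzr instead of first moving it
     into a scratch register.  */
  return __atomic_compare_exchange_n (a, &expected, 0, /* weak */ 1,
                                      __ATOMIC_ACQUIRE, __ATOMIC_ACQUIRE);
}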


Bootstrapped and tested on aarch64-none-linux-gnu.
Ok for GCC 8?

This patch broke compilation with -march=+lse:

./home/apinski/src/local5/gcc/gcc/testsuite/gcc.target/aarch64/atomic_cmp_exchange_zero_reg_1.c:9:1:
error: unrecognizable insn:
  }
  ^
(insn 6 3 7 2 (parallel [
             (set (reg:CC 66 cc)
                 (unspec_volatile:CC [
                         (const_int 0 [0])
                     ] UNSPECV_ATOMIC_CMPSW))
             (set (reg:SI 78)
                 (mem/v:SI (reg/v/f:DI 77 [ a ]) [-1  S4 A32]))
             (set (mem/v:SI (reg/v/f:DI 77 [ a ]) [-1  S4 A32])
                 (unspec_volatile:SI [
                         (const_int 3 [0x3])
                         (const_int 0 [0])
                         (const_int 1 [0x1])
                         (const_int 2 [0x2])
                         (const_int 2 [0x2])
                     ] UNSPECV_ATOMIC_CMPSW))
         ]) "/home/apinski/src/local5/gcc/gcc/testsuite/gcc.target/aarch64/atomic_cmp_exchange_zero_reg_1.c":8
-1
      (nil))
during RTL pass: vregs

Note also that your new testcase is broken even when defaulting to +lse,
as it is not going to match stxr.  I might be the only person who tests
with +lse by default :).

I reproduced the ICE, sorry for the trouble.
I believe the fix is as simple as relaxing the register_operand predicate
on the "value" operand of the LSE cas* patterns to aarch64_reg_or_zero
(and extending the constraint as well). This fixes the ICE for me.
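
For illustration, the shape of that change in config/aarch64/atomics.md
would be along these lines (a sketch only; the operand number and the GPI
mode iterator here are placeholders, not the exact pattern text):

  ;; Before: the "value" operand only accepts a general register.
  (match_operand:GPI 3 "register_operand" "r")
  ;; After: (const_int 0) is also accepted, so the zero register can be
  ;; used directly, mirroring what the original patch did for the LL/SC
  ;; compare-and-swap and store-exclusive patterns.
  (match_operand:GPI 3 "aarch64_reg_or_zero" "rZ")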

I'll test a patch and submit ASAP.

Kyrill

Thanks,
Andrew Pinski

Thanks,
Kyrill

2017-02-28  Kyrylo Tkachov  <kyrylo.tkachov@arm.com>

     * config/aarch64/atomics.md (atomic_compare_and_swap<mode> expander):
     Use aarch64_reg_or_zero predicate for operand 4.
     (aarch64_compare_and_swap<mode> define_insn_and_split):
     Use aarch64_reg_or_zero predicate for operand 3.  Add 'Z' constraint.
     (aarch64_store_exclusive<mode>): Likewise for operand 2.

2017-02-28  Kyrylo Tkachov  <kyrylo.tkachov@arm.com>

     * gcc.target/aarch64/atomic_cmp_exchange_zero_reg_1.c: New test.

