From: Richard Sandiford
To: Wilco Dijkstra
Cc: GCC Patches, Kyrylo Tkachov
Subject: Re: [PATCH] libatomic: Enable lock-free 128-bit atomics on AArch64 [PR110061]
Date: Wed, 29 Nov 2023 23:46:23 +0000
In-Reply-To: (Wilco Dijkstra's message of "Mon, 6 Nov 2023 12:13:22 +0000")

Not my specialist subject, but here goes anyway:

Wilco Dijkstra writes:
> ping
>
> From: Wilco Dijkstra
> Sent: 02 June 2023 18:28
> To: GCC Patches
> Cc: Richard Sandiford; Kyrylo Tkachov
> Subject: [PATCH] libatomic: Enable lock-free 128-bit atomics on AArch64 [PR110061]
>
>
> Enable lock-free 128-bit atomics on AArch64.  This is backwards compatible with
> existing binaries, gives better performance than locking atomics and is what
> most users expect.

Please add a justification for why it's backwards compatible, rather
than just stating that it's so.

> Note 128-bit atomic loads use a load/store exclusive loop if LSE2 is not supported.
> This results in an implicit store which is invisible to software as long as the given
> address is writeable (which will be true when using atomics in actual code).

Thanks for adding this.  https://gcc.gnu.org/bugzilla/show_bug.cgi?id=95722
suggests that it's still an open question whether this is a correct thing
to do: it sounds from Joseph's comment there that he isn't sure whether
atomic loads from read-only data are valid.
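To make the concern concrete, the kind of case I have in mind is something
like the following (purely illustrative, not taken from the patch or the PR):
a 16-byte object that a program only ever loads, through a pointer that might
refer to read-only memory, e.g. a file mmap'ed with PROT_READ:

  /* Illustrative only.  Whether this lowers to a libatomic call depends on
     the compile options; without LSE2 it ends up calling __atomic_load_16.  */
  __int128
  load_shared_value (const __int128 *p)
  {
    /* With the LDXP/STXP fallback this load stores back the value it has
       just read, so it would fault if *p lives in a read-only mapping,
       whereas an LSE2 LDP or a locking implementation would not.  */
    return __atomic_load_n (p, __ATOMIC_ACQUIRE);
  }

Whether we need to support that is exactly what PR95722 seems to leave open.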
Linus's comment in https://gcc.gnu.org/bugzilla/show_bug.cgi?id=70490
suggests that a reasonable compromise might be to use a storing
implementation but not advertise that it is lock-free.

Also, the comment above libat_is_lock_free says:

/* Note that this can return that a size/alignment is not lock-free even if
   all the operations that we use to implement the respective accesses provide
   lock-free forward progress as specified in C++14:  Users likely expect
   "lock-free" to also mean "fast", which is why we do not return true if, for
   example, we implement loads with this size/alignment using a CAS.  */

We don't use a CAS for the fallbacks, but like you say, we do use a
load/store exclusive loop.  So did you consider not doing this:

> +/* State we have lock-free 128-bit atomics.  */
> +#undef FAST_ATOMIC_LDST_16
> +#define FAST_ATOMIC_LDST_16 1

?  Otherwise it looks reasonable to me, for whatever that's worth, but:

> A simple test on an old Cortex-A72 showed 2.7x speedup of 128-bit atomics.
>
> Passes regress, OK for commit?
>
> libatomic/
> 	PR target/110061
> 	config/linux/aarch64/atomic_16.S: Implement lock-free ARMv8.0 atomics.
> 	config/linux/aarch64/host-config.h: Use atomic_16.S for baseline v8.0.
> 	State we have lock-free atomics.
>
> ---
>
> diff --git a/libatomic/config/linux/aarch64/atomic_16.S b/libatomic/config/linux/aarch64/atomic_16.S
> index 05439ce394b9653c9bcb582761ff7aaa7c8f9643..0485c284117edf54f41959d2fab9341a9567b1cf 100644
> --- a/libatomic/config/linux/aarch64/atomic_16.S
> +++ b/libatomic/config/linux/aarch64/atomic_16.S
> @@ -22,6 +22,21 @@
>  .  */
>
>
> +/* AArch64 128-bit lock-free atomic implementation.
> +
> +   128-bit atomics are now lock-free for all AArch64 architecture versions.
> +   This is backwards compatible with existing binaries and gives better
> +   performance than locking atomics.
> +
> +   128-bit atomic loads use a exclusive loop if LSE2 is not supported.
> +   This results in an implicit store which is invisible to software as long
> +   as the given address is writeable.  Since all other atomics have explicit
> +   writes, this will be true when using atomics in actual code.
> +
> +   The libat__16 entry points are ARMv8.0.
> +   The libat__16_i1 entry points are used when LSE2 is available.  */
> +
> +
>  	.arch	armv8-a+lse
>
>  #define ENTRY(name)	\
> @@ -37,6 +52,10 @@ name: \
>  	.cfi_endproc;	\
>  	.size name, .-name;
>
> +#define ALIAS(alias,name)	\
> +	.global alias;	\
> +	.set alias, name;
> +
>  #define res0 x0
>  #define res1 x1
>  #define in0 x2
> @@ -70,6 +89,24 @@ name: \
>  #define SEQ_CST 5
>
>
> +ENTRY (libat_load_16)
> +	mov	x5, x0
> +	cbnz	w1, 2f
> +
> +	/* RELAXED.  */
> +1:	ldxp	res0, res1, [x5]
> +	stxp	w4, res0, res1, [x5]
> +	cbnz	w4, 1b
> +	ret
> +
> +	/* ACQUIRE/CONSUME/SEQ_CST.  */
> +2:	ldaxp	res0, res1, [x5]
> +	stxp	w4, res0, res1, [x5]
> +	cbnz	w4, 2b
> +	ret
> +END (libat_load_16)
> +
> +
>  ENTRY (libat_load_16_i1)
>  	cbnz	w1, 1f
>
> @@ -93,6 +130,23 @@ ENTRY (libat_load_16_i1)
>  END (libat_load_16_i1)
>
>
> +ENTRY (libat_store_16)
> +	cbnz	w4, 2f
> +
> +	/* RELAXED.  */
> +1:	ldxp	xzr, tmp0, [x0]
> +	stxp	w4, in0, in1, [x0]
> +	cbnz	w4, 1b
> +	ret
> +
> +	/* RELEASE/SEQ_CST.  */
> +2:	ldxp	xzr, tmp0, [x0]
> +	stlxp	w4, in0, in1, [x0]
> +	cbnz	w4, 2b
> +	ret
> +END (libat_store_16)
> +
> +
>  ENTRY (libat_store_16_i1)
>  	cbnz	w4, 1f
>
> @@ -101,14 +155,14 @@ ENTRY (libat_store_16_i1)
>  	ret
>
>  	/* RELEASE/SEQ_CST.  */
> -1:	ldaxp	xzr, tmp0, [x0]
> +1:	ldxp	xzr, tmp0, [x0]
>  	stlxp	w4, in0, in1, [x0]
>  	cbnz	w4, 1b
>  	ret
>  END (libat_store_16_i1)
>
>
> -ENTRY (libat_exchange_16_i1)
> +ENTRY (libat_exchange_16)
>  	mov	x5, x0
>  	cbnz	w4, 2f
>
> @@ -126,22 +180,55 @@ ENTRY (libat_exchange_16_i1)
>  	stxp	w4, in0, in1, [x5]
>  	cbnz	w4, 3b
>  	ret
> -4:
> -	cmp	w4, RELEASE
> -	b.ne	6f
>
> -	/* RELEASE.  */
> -5:	ldxp	res0, res1, [x5]
> +	/* RELEASE/ACQ_REL/SEQ_CST.  */
> +4:	ldaxp	res0, res1, [x5]
>  	stlxp	w4, in0, in1, [x5]
> -	cbnz	w4, 5b
> +	cbnz	w4, 4b
>  	ret
> +END (libat_exchange_16)

Please explain (here and in the commit message) why you're adding
acquire semantics to the RELEASE case.

Thanks,
Richard

>
> -	/* ACQ_REL/SEQ_CST.  */
> -6:	ldaxp	res0, res1, [x5]
> -	stlxp	w4, in0, in1, [x5]
> -	cbnz	w4, 6b
> +
> +ENTRY (libat_compare_exchange_16)
> +	ldp	exp0, exp1, [x1]
> +	cbz	w4, 3f
> +	cmp	w4, RELEASE
> +	b.hs	4f
> +
> +	/* ACQUIRE/CONSUME.  */
> +1:	ldaxp	tmp0, tmp1, [x0]
> +	cmp	tmp0, exp0
> +	ccmp	tmp1, exp1, 0, eq
> +	bne	2f
> +	stxp	w4, in0, in1, [x0]
> +	cbnz	w4, 1b
> +	mov	x0, 1
>  	ret
> -END (libat_exchange_16_i1)
> +
> +2:	stp	tmp0, tmp1, [x1]
> +	mov	x0, 0
> +	ret
> +
> +	/* RELAXED.  */
> +3:	ldxp	tmp0, tmp1, [x0]
> +	cmp	tmp0, exp0
> +	ccmp	tmp1, exp1, 0, eq
> +	bne	2b
> +	stxp	w4, in0, in1, [x0]
> +	cbnz	w4, 3b
> +	mov	x0, 1
> +	ret
> +
> +	/* RELEASE/ACQ_REL/SEQ_CST.  */
> +4:	ldaxp	tmp0, tmp1, [x0]
> +	cmp	tmp0, exp0
> +	ccmp	tmp1, exp1, 0, eq
> +	bne	2b
> +	stlxp	w4, in0, in1, [x0]
> +	cbnz	w4, 4b
> +	mov	x0, 1
> +	ret
> +END (libat_compare_exchange_16)
>
>
>  ENTRY (libat_compare_exchange_16_i1)
> @@ -180,7 +267,7 @@ ENTRY (libat_compare_exchange_16_i1)
>  END (libat_compare_exchange_16_i1)
>
>
> -ENTRY (libat_fetch_add_16_i1)
> +ENTRY (libat_fetch_add_16)
>  	mov	x5, x0
>  	cbnz	w4, 2f
>
> @@ -199,10 +286,10 @@ ENTRY (libat_fetch_add_16_i1)
>  	stlxp	w4, tmp0, tmp1, [x5]
>  	cbnz	w4, 2b
>  	ret
> -END (libat_fetch_add_16_i1)
> +END (libat_fetch_add_16)
>
>
> -ENTRY (libat_add_fetch_16_i1)
> +ENTRY (libat_add_fetch_16)
>  	mov	x5, x0
>  	cbnz	w4, 2f
>
> @@ -221,10 +308,10 @@ ENTRY (libat_add_fetch_16_i1)
>  	stlxp	w4, res0, res1, [x5]
>  	cbnz	w4, 2b
>  	ret
> -END (libat_add_fetch_16_i1)
> +END (libat_add_fetch_16)
>
>
> -ENTRY (libat_fetch_sub_16_i1)
> +ENTRY (libat_fetch_sub_16)
>  	mov	x5, x0
>  	cbnz	w4, 2f
>
> @@ -243,10 +330,10 @@ ENTRY (libat_fetch_sub_16_i1)
>  	stlxp	w4, tmp0, tmp1, [x5]
>  	cbnz	w4, 2b
>  	ret
> -END (libat_fetch_sub_16_i1)
> +END (libat_fetch_sub_16)
>
>
> -ENTRY (libat_sub_fetch_16_i1)
> +ENTRY (libat_sub_fetch_16)
>  	mov	x5, x0
>  	cbnz	w4, 2f
>
> @@ -265,10 +352,10 @@ ENTRY (libat_sub_fetch_16_i1)
>  	stlxp	w4, res0, res1, [x5]
>  	cbnz	w4, 2b
>  	ret
> -END (libat_sub_fetch_16_i1)
> +END (libat_sub_fetch_16)
>
>
> -ENTRY (libat_fetch_or_16_i1)
> +ENTRY (libat_fetch_or_16)
>  	mov	x5, x0
>  	cbnz	w4, 2f
>
> @@ -287,10 +374,10 @@ ENTRY (libat_fetch_or_16_i1)
>  	stlxp	w4, tmp0, tmp1, [x5]
>  	cbnz	w4, 2b
>  	ret
> -END (libat_fetch_or_16_i1)
> +END (libat_fetch_or_16)
>
>
> -ENTRY (libat_or_fetch_16_i1)
> +ENTRY (libat_or_fetch_16)
>  	mov	x5, x0
>  	cbnz	w4, 2f
>
> @@ -309,10 +396,10 @@ ENTRY (libat_or_fetch_16_i1)
>  	stlxp	w4, res0, res1, [x5]
>  	cbnz	w4, 2b
>  	ret
> -END (libat_or_fetch_16_i1)
> +END (libat_or_fetch_16)
>
>
> -ENTRY (libat_fetch_and_16_i1)
> +ENTRY (libat_fetch_and_16)
>  	mov	x5, x0
>  	cbnz	w4, 2f
>
> @@ -331,10 +418,10 @@ ENTRY (libat_fetch_and_16_i1)
>  	stlxp	w4, tmp0, tmp1, [x5]
>  	cbnz	w4, 2b
>  	ret
> -END (libat_fetch_and_16_i1)
> +END (libat_fetch_and_16)
>
>
> -ENTRY (libat_and_fetch_16_i1)
> +ENTRY (libat_and_fetch_16)
>  	mov	x5, x0
>  	cbnz	w4, 2f
>
> @@ -353,10 +440,10 @@ ENTRY (libat_and_fetch_16_i1)
>  	stlxp	w4, res0, res1, [x5]
>  	cbnz	w4, 2b
>  	ret
> -END (libat_and_fetch_16_i1)
> +END (libat_and_fetch_16)
>
>
> -ENTRY (libat_fetch_xor_16_i1)
> +ENTRY (libat_fetch_xor_16)
>  	mov	x5, x0
>  	cbnz	w4, 2f
>
> @@ -375,10 +462,10 @@ ENTRY (libat_fetch_xor_16_i1)
>  	stlxp	w4, tmp0, tmp1, [x5]
>  	cbnz	w4, 2b
>  	ret
> -END (libat_fetch_xor_16_i1)
> +END (libat_fetch_xor_16)
>
>
> -ENTRY (libat_xor_fetch_16_i1)
> +ENTRY (libat_xor_fetch_16)
>  	mov	x5, x0
>  	cbnz	w4, 2f
>
> @@ -397,10 +484,10 @@ ENTRY (libat_xor_fetch_16_i1)
>  	stlxp	w4, res0, res1, [x5]
>  	cbnz	w4, 2b
>  	ret
> -END (libat_xor_fetch_16_i1)
> +END (libat_xor_fetch_16)
>
>
> -ENTRY (libat_fetch_nand_16_i1)
> +ENTRY (libat_fetch_nand_16)
>  	mov	x5, x0
>  	mvn	in0, in0
>  	mvn	in1, in1
> @@ -421,10 +508,10 @@ ENTRY (libat_fetch_nand_16_i1)
>  	stlxp	w4, tmp0, tmp1, [x5]
>  	cbnz	w4, 2b
>  	ret
> -END (libat_fetch_nand_16_i1)
> +END (libat_fetch_nand_16)
>
>
> -ENTRY (libat_nand_fetch_16_i1)
> +ENTRY (libat_nand_fetch_16)
>  	mov	x5, x0
>  	mvn	in0, in0
>  	mvn	in1, in1
> @@ -445,21 +532,38 @@ ENTRY (libat_nand_fetch_16_i1)
>  	stlxp	w4, res0, res1, [x5]
>  	cbnz	w4, 2b
>  	ret
> -END (libat_nand_fetch_16_i1)
> +END (libat_nand_fetch_16)
>
>
> -ENTRY (libat_test_and_set_16_i1)
> -	mov	w2, 1
> -	cbnz	w1, 2f
> -
> -	/* RELAXED.  */
> -	swpb	w0, w2, [x0]
> -	ret
> +
> +/* __atomic_test_and_set is always inlined, so this entry is unused and
> +   only required for completeness.  */
> +ENTRY (libat_test_and_set_16)
>
> -	/* ACQUIRE/CONSUME/RELEASE/ACQ_REL/SEQ_CST.  */
> -2:	swpalb	w0, w2, [x0]
> +	/* RELAXED/ACQUIRE/CONSUME/RELEASE/ACQ_REL/SEQ_CST.  */
> +	mov	x5, x0
> +1:	ldaxrb	w0, [x5]
> +	stlxrb	w4, w2, [x5]
> +	cbnz	w4, 1b
>  	ret
> -END (libat_test_and_set_16_i1)
> +END (libat_test_and_set_16)
> +
> +
> +/* Alias entry points which are the same in baseline and LSE2.  */
> +
> +ALIAS (libat_exchange_16_i1, libat_exchange_16)
> +ALIAS (libat_fetch_add_16_i1, libat_fetch_add_16)
> +ALIAS (libat_add_fetch_16_i1, libat_add_fetch_16)
> +ALIAS (libat_fetch_sub_16_i1, libat_fetch_sub_16)
> +ALIAS (libat_sub_fetch_16_i1, libat_sub_fetch_16)
> +ALIAS (libat_fetch_or_16_i1, libat_fetch_or_16)
> +ALIAS (libat_or_fetch_16_i1, libat_or_fetch_16)
> +ALIAS (libat_fetch_and_16_i1, libat_fetch_and_16)
> +ALIAS (libat_and_fetch_16_i1, libat_and_fetch_16)
> +ALIAS (libat_fetch_xor_16_i1, libat_fetch_xor_16)
> +ALIAS (libat_xor_fetch_16_i1, libat_xor_fetch_16)
> +ALIAS (libat_fetch_nand_16_i1, libat_fetch_nand_16)
> +ALIAS (libat_nand_fetch_16_i1, libat_nand_fetch_16)
> +ALIAS (libat_test_and_set_16_i1, libat_test_and_set_16)
>
>
>  /* GNU_PROPERTY_AARCH64_* macros from elf.h for use in asm code.  */
> diff --git a/libatomic/config/linux/aarch64/host-config.h b/libatomic/config/linux/aarch64/host-config.h
> index bea26825b4f75bb8ff348ab4b5fc45f4a5bd561e..851c78c01cd643318aaa52929ce4550266238b79 100644
> --- a/libatomic/config/linux/aarch64/host-config.h
> +++ b/libatomic/config/linux/aarch64/host-config.h
> @@ -35,10 +35,19 @@
>  #endif
>  #define IFUNC_NCOND(N) (1)
>
> -#if N == 16 && IFUNC_ALT != 0
> +#endif /* HAVE_IFUNC */
> +
> +/* All 128-bit atomic functions are defined in aarch64/atomic_16.S.  */
> +#if N == 16
>  # define DONE 1
>  #endif
>
> -#endif /* HAVE_IFUNC */
> +/* State we have lock-free 128-bit atomics.  */
> +#undef FAST_ATOMIC_LDST_16
> +#define FAST_ATOMIC_LDST_16 1
> +#undef MAYBE_HAVE_ATOMIC_CAS_16
> +#define MAYBE_HAVE_ATOMIC_CAS_16 1
> +#undef MAYBE_HAVE_ATOMIC_EXCHANGE_16
> +#define MAYBE_HAVE_ATOMIC_EXCHANGE_16 1
>
>  #include_next
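To put the FAST_ATOMIC_LDST_16 question above in user terms: as I understand
it, the define only changes what libatomic reports for queries like the one
in this snippet (purely illustrative, not from the patch), not which
implementation of the atomics gets used:

  #include <stdatomic.h>
  #include <stdio.h>

  int
  main (void)
  {
    _Atomic __int128 x;
    /* With FAST_ATOMIC_LDST_16 defined this should print 1 even on pre-LSE2
       cores; without it, the same LDXP/STXP code would still be used but the
       query would report 0, in line with the libat_is_lock_free comment
       quoted earlier.  */
    printf ("%d\n", atomic_is_lock_free (&x));
    return 0;
  }

Linus's suggested compromise would then amount to keeping the new code but
dropping just that define.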