From mboxrd@z Thu Jan  1 00:00:00 1970
From: "cvs-commit at gcc dot gnu.org"
To: gcc-bugs@gcc.gnu.org
Subject: [Bug target/104688] gcc and libatomic can use SSE for 128-bit atomic loads on Intel CPUs with AVX
Date: Thu, 17 Mar 2022 17:50:32 +0000

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=104688

--- Comment #6 from CVS Commits ---
The master branch has been updated by Jakub Jelinek:

https://gcc.gnu.org/g:1d47c0512a265d4bb3ab9e56259fd1e4f4d42c75

commit r12-7689-g1d47c0512a265d4bb3ab9e56259fd1e4f4d42c75
Author: Jakub Jelinek
Date:   Thu Mar 17 18:49:00 2022 +0100

    libatomic: Improve 16-byte atomics on Intel AVX [PR104688]

    As mentioned in the PR, the latest Intel SDM has added:
    "Processors that enumerate support for Intel® AVX (by setting the feature
    flag CPUID.01H:ECX.AVX[bit 28]) guarantee that the 16-byte memory
    operations performed by the following
    instructions will always be carried out atomically:
    • MOVAPD, MOVAPS, and MOVDQA.
    • VMOVAPD, VMOVAPS, and VMOVDQA when encoded with VEX.128.
    • VMOVAPD, VMOVAPS, VMOVDQA32, and VMOVDQA64 when encoded with EVEX.128
      and k0 (masking disabled).
    (Note that these instructions require the linear addresses of their memory
    operands to be 16-byte aligned.)"

    The following patch deals with it just on the libatomic library side so
    far; currently (since ~2017) we emit all the __atomic_* 16-byte builtins
    as library calls, and this is something that we can hopefully backport.

    The patch simply introduces yet another ifunc variant that takes priority
    over the pure CMPXCHG16B one, one that checks the AVX and CMPXCHG16B bits
    and on non-Intel clears the AVX bit during detection for now (if AMD comes
    with the same guarantee, we could revert the config/x86/init.c hunk),
    which implements 16-byte atomic load as vmovdqa and 16-byte atomic store
    as vmovdqa followed by mfence.

    2022-03-17  Jakub Jelinek

            PR target/104688
            * Makefile.am (IFUNC_OPTIONS): Change on x86_64 to -mcx16 -mcx16.
            (libatomic_la_LIBADD): Add $(addsuffix _16_2_.lo,$(SIZEOBJS))
            for x86_64.
            * Makefile.in: Regenerated.
            * config/x86/host-config.h (IFUNC_COND_1): For x86_64 define to
            both AVX and CMPXCHG16B bits.
            (IFUNC_COND_2): Define.
            (IFUNC_NCOND): For x86_64 define to 2 * (N == 16).
            (MAYBE_HAVE_ATOMIC_CAS_16, MAYBE_HAVE_ATOMIC_EXCHANGE_16,
            MAYBE_HAVE_ATOMIC_LDST_16): Define to IFUNC_COND_2 rather than
            IFUNC_COND_1.
            (HAVE_ATOMIC_CAS_16): Redefine to 1 whenever IFUNC_ALT != 0.
            (HAVE_ATOMIC_LDST_16): Redefine to 1 whenever IFUNC_ALT == 1.
            (atomic_compare_exchange_n): Define whenever IFUNC_ALT != 0
            on x86_64 for N == 16.
            (__atomic_load_n, __atomic_store_n): Redefine whenever
            IFUNC_ALT == 1 on x86_64 for N == 16.
            (atomic_load_n, atomic_store_n): New functions.
            * config/x86/init.c (__libat_feat1_init): On x86_64 clear bit_AVX
            if CPU vendor is not Intel.