From: Richard Earnshaw <rearnsha@arm.com>
To: libc-alpha@sourceware.org
Cc: Richard Earnshaw <rearnsha@arm.com>, dj@redhat.com, szabolcs.nagy@arm.com
Subject: [PATCH v3 7/8] aarch64: Add sysv specific enabling code for memory tagging
Date: Mon, 23 Nov 2020 15:42:35 +0000
Message-Id: <20201123154236.25809-8-rearnsha@arm.com>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201123154236.25809-1-rearnsha@arm.com>
References: <20201123154236.25809-1-rearnsha@arm.com>

Add various defines and stubs for enabling MTE on AArch64 sysv-like
systems such as Linux.  The HWCAP feature bit is copied over in the
same way as other feature bits.  Similarly, we add a new wrapper header
for mman.h to define the PROT_MTE flag that can be used with mmap and
related functions.

We add a new field to struct cpu_features that can be used, for
example, to check whether or not certain ifunc'd routines should be
bound to MTE-safe versions.

Finally, if we detect that MTE should be enabled (i.e. via the glibc
tunable), we enable MTE during startup as required.
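For reviewers who want to see the moving parts outside of the startup
code, here is a stand-alone sketch (illustrative only, not part of the
patch) that performs the same detection-and-enable sequence from an
ordinary application: it checks HWCAP2_MTE in the auxiliary vector and
then opts in to tagged addresses with synchronous tag-check faults via
prctl, mirroring what init_cpu_features does below.  The fallback
defines are only there in case the installed kernel/libc headers
predate MTE; the values are taken from <linux/prctl.h> and from the
hwcap.h hunk in this patch.

/* Illustrative only -- not part of this patch.  Check for MTE and enable
   tagged addresses with synchronous tag-check faults, much as
   init_cpu_features does at process startup.  */
#include <stdio.h>
#include <sys/auxv.h>
#include <sys/prctl.h>

#ifndef HWCAP2_MTE
# define HWCAP2_MTE (1 << 18)            /* Matches the hwcap.h hunk below.  */
#endif
#ifndef PR_SET_TAGGED_ADDR_CTRL          /* Fallbacks for old kernel headers.  */
# define PR_SET_TAGGED_ADDR_CTRL 55
# define PR_TAGGED_ADDR_ENABLE (1UL << 0)
#endif
#ifndef PR_MTE_TCF_SYNC
# define PR_MTE_TCF_SYNC (1UL << 1)
# define PR_MTE_TAG_SHIFT 3
#endif

int
main (void)
{
  if (!(getauxval (AT_HWCAP2) & HWCAP2_MTE))
    {
      puts ("kernel/CPU do not report MTE");
      return 1;
    }

  /* Enable the tagged address ABI, synchronous tag-check faults, and
     allow all tags except 0 to be generated by IRG, mirroring the
     startup code in this patch.  */
  if (prctl (PR_SET_TAGGED_ADDR_CTRL,
             PR_TAGGED_ADDR_ENABLE | PR_MTE_TCF_SYNC
             | (0xfffe << PR_MTE_TAG_SHIFT),
             0, 0, 0) != 0)
    {
      perror ("prctl (PR_SET_TAGGED_ADDR_CTRL)");
      return 1;
    }

  puts ("MTE enabled in synchronous mode");
  return 0;
}

With the patch applied, the equivalent behaviour is obtained
transparently at startup through the glibc.memtag.enable tunable: going
by the mte_state bits used below, 1 selects asynchronous checking and
2 or 3 select synchronous checking.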
---
 sysdeps/unix/sysv/linux/aarch64/bits/hwcap.h |  1 +
 sysdeps/unix/sysv/linux/aarch64/bits/mman.h  |  7 +++++
 .../unix/sysv/linux/aarch64/cpu-features.c   | 28 +++++++++++++++++++
 .../unix/sysv/linux/aarch64/cpu-features.h   |  1 +
 4 files changed, 37 insertions(+)

diff --git a/sysdeps/unix/sysv/linux/aarch64/bits/hwcap.h b/sysdeps/unix/sysv/linux/aarch64/bits/hwcap.h
index af90d8a626..389852f1d9 100644
--- a/sysdeps/unix/sysv/linux/aarch64/bits/hwcap.h
+++ b/sysdeps/unix/sysv/linux/aarch64/bits/hwcap.h
@@ -73,3 +73,4 @@
 #define HWCAP2_DGH (1 << 15)
 #define HWCAP2_RNG (1 << 16)
 #define HWCAP2_BTI (1 << 17)
+#define HWCAP2_MTE (1 << 18)
diff --git a/sysdeps/unix/sysv/linux/aarch64/bits/mman.h b/sysdeps/unix/sysv/linux/aarch64/bits/mman.h
index ecae046344..3658b958b5 100644
--- a/sysdeps/unix/sysv/linux/aarch64/bits/mman.h
+++ b/sysdeps/unix/sysv/linux/aarch64/bits/mman.h
@@ -1,4 +1,5 @@
 /* Definitions for POSIX memory map interface.  Linux/AArch64 version.
+   Copyright (C) 2020 Free Software Foundation, Inc.
    This file is part of the GNU C Library.
 
    The GNU C Library is free software; you can redistribute it and/or
@@ -25,6 +26,12 @@
 
 #define PROT_BTI 0x10
 
+/* The following definitions basically come from the kernel headers.
+   But the kernel header is not namespace clean.  */
+
+/* Other flags.  */
+#define PROT_MTE 0x20  /* Normal Tagged mapping.  */
+
 #include <bits/mman-linux.h>  /* Include generic Linux declarations.  */
diff --git a/sysdeps/unix/sysv/linux/aarch64/cpu-features.c b/sysdeps/unix/sysv/linux/aarch64/cpu-features.c
index b9ab827aca..aa4a82c6e8 100644
--- a/sysdeps/unix/sysv/linux/aarch64/cpu-features.c
+++ b/sysdeps/unix/sysv/linux/aarch64/cpu-features.c
@@ -19,6 +19,7 @@
 #include <cpu-features.h>
 #include <sys/auxv.h>
 #include <elf/dl-tunables.h>
+#include <sys/prctl.h>
 
 #define DCZID_DZP_MASK (1 << 4)
 #define DCZID_BS_MASK (0xf)
@@ -86,4 +87,31 @@ init_cpu_features (struct cpu_features *cpu_features)
 
   /* Check if BTI is supported.  */
   cpu_features->bti = GLRO (dl_hwcap2) & HWCAP2_BTI;
+
+  /* Setup memory tagging support if the HW and kernel support it, and if
+     the user has requested it.  */
+  cpu_features->mte_state = 0;
+
+#ifdef _LIBC_MTAG
+# if HAVE_TUNABLES
+  int mte_state = TUNABLE_GET (glibc, memtag, enable, unsigned, 0);
+  cpu_features->mte_state = (GLRO (dl_hwcap2) & HWCAP2_MTE) ? mte_state : 0;
+  /* If we lack the MTE feature, disable the tunable, since it will
+     otherwise cause instructions that won't run on this CPU to be used.  */
+  TUNABLE_SET (glibc, memtag, enable, unsigned, cpu_features->mte_state);
+# endif
+
+  /* For now, disallow tag 0, so that we can clearly see when tagged
+     addresses are being allocated.  */
+  if (cpu_features->mte_state & 2)
+    __prctl (PR_SET_TAGGED_ADDR_CTRL,
+             (PR_TAGGED_ADDR_ENABLE | PR_MTE_TCF_SYNC
+              | (0xfffe << PR_MTE_TAG_SHIFT)),
+             0, 0, 0);
+  else if (cpu_features->mte_state)
+    __prctl (PR_SET_TAGGED_ADDR_CTRL,
+             (PR_TAGGED_ADDR_ENABLE | PR_MTE_TCF_ASYNC
+              | (0xfffe << PR_MTE_TAG_SHIFT)),
+             0, 0, 0);
+#endif
 }
diff --git a/sysdeps/unix/sysv/linux/aarch64/cpu-features.h b/sysdeps/unix/sysv/linux/aarch64/cpu-features.h
index 00a4d0c8e7..838d5c9aba 100644
--- a/sysdeps/unix/sysv/linux/aarch64/cpu-features.h
+++ b/sysdeps/unix/sysv/linux/aarch64/cpu-features.h
@@ -70,6 +70,7 @@ struct cpu_features
   uint64_t midr_el1;
   unsigned zva_size;
   bool bti;
+  unsigned mte_state;
 };
 
 #endif /* _CPU_FEATURES_AARCH64_H */
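A final usage note, again illustrative rather than part of the change:
once the process has enabled MTE (via the tunable at startup, or via
prctl as sketched earlier), a mapping created with the new PROT_MTE bit
becomes subject to tag checking.  A minimal sketch, assuming
<sys/mman.h> exposes PROT_MTE once the bits/mman.h hunk above is
installed (with a fallback define otherwise):

/* Illustrative only -- not part of this patch.  Create an MTE-enabled
   anonymous mapping with the new PROT_MTE flag.  */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#ifndef PROT_MTE
# define PROT_MTE 0x20          /* Matches the bits/mman.h hunk above.  */
#endif

int
main (void)
{
  size_t len = 65536;
  void *p = mmap (NULL, len, PROT_READ | PROT_WRITE | PROT_MTE,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  if (p == MAP_FAILED)
    {
      /* Expected to fail with EINVAL when the kernel or CPU lacks MTE.  */
      perror ("mmap (PROT_MTE)");
      return 1;
    }

  /* A fresh mapping carries allocation tag 0, which matches an untagged
     pointer, so this plain store passes the tag checks.  */
  memset (p, 0, len);

  munmap (p, len);
  return 0;
}

Requesting PROT_MTE on a kernel or CPU without MTE support is expected
to fail with EINVAL, which makes it a convenient runtime probe in
addition to the HWCAP2_MTE check.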