From mboxrd@z Thu Jan 1 00:00:00 1970
From: Richard Sandiford 
To: gcc-patches@gcc.gnu.org
Mail-Followup-To: gcc-patches@gcc.gnu.org, richard.sandiford@arm.com
Subject: [PATCH 14/21] aarch64: Add a VNx1TI mode
References: 
Date: Fri, 17 Nov 2023 17:27:07 +0000
In-Reply-To: (Richard Sandiford's message of "Fri, 17 Nov 2023 17:23:28 +0000")
Message-ID: 
MIME-Version: 1.0
Content-Type: text/plain

Although TI isn't really a native SVE element mode, it's convenient for
SME if we define VNx1TI anyway, so that it can be used to distinguish
.Q ZA operations from others.  It's purely an RTL convenience and isn't
(yet) a valid storage mode.

gcc/
	* config/aarch64/aarch64-modes.def: Add VNx1TI.
---
 gcc/config/aarch64/aarch64-modes.def | 21 ++++++++++++++-------
 1 file changed, 14 insertions(+), 7 deletions(-)

diff --git a/gcc/config/aarch64/aarch64-modes.def b/gcc/config/aarch64/aarch64-modes.def
index 6b4f4e17dd5..a3efc5b8484 100644
--- a/gcc/config/aarch64/aarch64-modes.def
+++ b/gcc/config/aarch64/aarch64-modes.def
@@ -156,7 +156,7 @@ ADV_SIMD_Q_REG_STRUCT_MODES (4, V4x16, V4x8, V4x4, V4x2)
    for 8-bit, 16-bit, 32-bit and 64-bit elements respectively.  It
    isn't strictly necessary to set the alignment here, since the default
    would be clamped to BIGGEST_ALIGNMENT anyhow, but it seems clearer.  */
-#define SVE_MODES(NVECS, VB, VH, VS, VD) \
+#define SVE_MODES(NVECS, VB, VH, VS, VD, VT) \
   VECTOR_MODES_WITH_PREFIX (VNx, INT, 16 * NVECS, NVECS == 1 ? 1 : 4); \
   VECTOR_MODES_WITH_PREFIX (VNx, FLOAT, 16 * NVECS, NVECS == 1 ? 1 : 4); \
 \
@@ -164,6 +164,7 @@ ADV_SIMD_Q_REG_STRUCT_MODES (4, V4x16, V4x8, V4x4, V4x2)
   ADJUST_NUNITS (VH##HI, aarch64_sve_vg * NVECS * 4); \
   ADJUST_NUNITS (VS##SI, aarch64_sve_vg * NVECS * 2); \
   ADJUST_NUNITS (VD##DI, aarch64_sve_vg * NVECS); \
+  ADJUST_NUNITS (VT##TI, exact_div (aarch64_sve_vg * NVECS, 2)); \
   ADJUST_NUNITS (VH##BF, aarch64_sve_vg * NVECS * 4); \
   ADJUST_NUNITS (VH##HF, aarch64_sve_vg * NVECS * 4); \
   ADJUST_NUNITS (VS##SF, aarch64_sve_vg * NVECS * 2); \
@@ -173,17 +174,23 @@ ADV_SIMD_Q_REG_STRUCT_MODES (4, V4x16, V4x8, V4x4, V4x2)
   ADJUST_ALIGNMENT (VH##HI, 16); \
   ADJUST_ALIGNMENT (VS##SI, 16); \
   ADJUST_ALIGNMENT (VD##DI, 16); \
+  ADJUST_ALIGNMENT (VT##TI, 16); \
   ADJUST_ALIGNMENT (VH##BF, 16); \
   ADJUST_ALIGNMENT (VH##HF, 16); \
   ADJUST_ALIGNMENT (VS##SF, 16); \
   ADJUST_ALIGNMENT (VD##DF, 16);
 
-/* Give SVE vectors the names normally used for 256-bit vectors.
-   The actual number depends on command-line flags.  */
-SVE_MODES (1, VNx16, VNx8, VNx4, VNx2)
-SVE_MODES (2, VNx32, VNx16, VNx8, VNx4)
-SVE_MODES (3, VNx48, VNx24, VNx12, VNx6)
-SVE_MODES (4, VNx64, VNx32, VNx16, VNx8)
+/* Give SVE vectors names of the form VNxX, where X describes what is
+   stored in each 128-bit unit.  The actual size of the mode depends
+   on command-line flags.
+
+   VNx1TI isn't really a native SVE mode, but it can be useful in some
+   limited situations.  */
+VECTOR_MODE_WITH_PREFIX (VNx, INT, TI, 1, 1);
+SVE_MODES (1, VNx16, VNx8, VNx4, VNx2, VNx1)
+SVE_MODES (2, VNx32, VNx16, VNx8, VNx4, VNx2)
+SVE_MODES (3, VNx48, VNx24, VNx12, VNx6, VNx3)
+SVE_MODES (4, VNx64, VNx32, VNx16, VNx8, VNx4)
 
 /* Partial SVE vectors:
-- 
2.25.1