From: Kyrylo Tkachov
To: Richard Sandiford, Evandro Menezes via Gcc-patches
CC: evandro+gcc@gcc.gnu.org, Evandro Menezes, Tamar Christina
Subject: RE: [PATCH] aarch64: Add SVE instruction types
Date: Mon, 15 May 2023 09:49:10 +0000
References: <1D567E08-9EBB-4EF7-9626-BA95D8E0EB36@icloud.com>
> -----Original Message-----
> From: Richard Sandiford
> Sent: Monday, May 15, 2023 10:01 AM
> To: Evandro Menezes via Gcc-patches
> Cc: evandro+gcc@gcc.gnu.org; Evandro Menezes; Kyrylo Tkachov; Tamar Christina
> Subject: Re: [PATCH] aarch64: Add SVE instruction types
>
> Evandro Menezes via Gcc-patches writes:
> > This patch adds the attribute `type` to most SVE1 instructions, as in the
> > other instructions.
>
> Thanks for doing this.
>
> Could you say what criteria you used for picking the granularity?  Other
> maintainers might disagree, but personally I'd prefer to distinguish two
> instructions only if:
>
> (a) a scheduling description really needs to distinguish them or
> (b) grouping them together would be very artificial (because they're
>     logically unrelated)
>
> It's always possible to split types later if new scheduling descriptions
> require it.  Because of that, I don't think we should try to predict ahead
> of time what future scheduling descriptions will need.
>
> Of course, this depends on having results that show that scheduling
> makes a significant difference on an SVE core.  I think one of the
> problems here is that, when a different scheduling model changes the
> performance of a particular test, it's difficult to tell whether
> the gain/loss is caused by the model being more/less accurate than
> the previous one, or if it's due to important "secondary" effects
> on register live ranges.
> Instinctively, I'd have expected these
> secondary effects to dominate on OoO cores.

I agree with Richard on these points. The key here is getting the granularity right without having to maintain too many types that aren't useful in the models.

FWIW I had posted https://gcc.gnu.org/pipermail/gcc-patches/2022-November/607101.html in November. It adds annotations to SVE2 patterns as well as for base SVE. Feel free to reuse it if you'd like.

I see you had posted a Neoverse V1 scheduling model. Does that give an improvement on SVE code when combined with the scheduling attributes somehow?

Thanks,
Kyrill

>
> Richard
>
> >
> > --
> > Evandro Menezes
> >
> >
> >
> > From be61df66d1a86bc7ec415eb23504002831c67c51 Mon Sep 17 00:00:00 2001
> > From: Evandro Menezes
> > Date: Mon, 8 May 2023 17:39:10 -0500
> > Subject: [PATCH 2/3] aarch64: Add SVE instruction types
> >
> > gcc/ChangeLog:
> >
> > 	* config/aarch64/aarch64-sve.md: Use the instruction types.
> > 	* config/arm/types.md (sve_loop_p, sve_loop_ps, sve_loop_gs,
> > 	sve_loop_end, sve_logic_p, sve_logic_ps, sve_cnt_p,
> > 	sve_cnt_pv, sve_cnt_pvx, sve_rev_p, sve_sel_p, sve_set_p,
> > 	sve_set_ps, sve_trn_p, sve_upk_p, sve_zip_p, sve_arith,
> > 	sve_arith_r, sve_arith_sat, sve_arith_sat_x, sve_arith_x,
> > 	sve_logic, sve_logic_r, sve_logic_x, sve_shift, sve_shift_d,
> > 	sve_shift_dx, sve_shift_x, sve_compare_s, sve_cnt, sve_cnt_x,
> > 	sve_copy, sve_copy_g, sve_move, sve_move_x, sve_move_g,
> > 	sve_permute, sve_splat, sve_splat_m, sve_splat_g, sve_cext,
> > 	sve_cext_x, sve_cext_g, sve_ext, sve_ext_x, sve_sext,
> > 	sve_sext_x, sve_uext, sve_uext_x, sve_index, sve_index_g,
> > 	sve_ins, sve_ins_x, sve_ins_g, sve_ins_gx, sve_rev, sve_rev_x,
> > 	sve_tbl, sve_trn, sve_upk, sve_zip, sve_int_to_fp,
> > 	sve_int_to_fp_x, sve_fp_to_int, sve_fp_to_int_x, sve_fp_to_fp,
> > 	sve_fp_to_fp_x, sve_fp_round, sve_fp_round_x, sve_bf_to_fp,
> > 	sve_bf_to_fp_x, sve_div, sve_div_x, sve_dot, sve_dot_x,
> > 	sve_mla, sve_mla_x,
> > 	sve_mmla, sve_mmla_x, sve_mul, sve_mul_x,
> > 	sve_prfx, sve_fp_arith, sve_fp_arith_a, sve_fp_arith_c,
> > 	sve_fp_arith_cx, sve_fp_arith_r, sve_fp_arith_x,
> > 	sve_fp_compare, sve_fp_copy, sve_fp_move, sve_fp_move_x,
> > 	sve_fp_div_d, sve_fp_div_dx, sve_fp_div_s, sve_fp_div_sx,
> > 	sve_fp_dot, sve_fp_mla, sve_fp_mla_x, sve_fp_mla_c,
> > 	sve_fp_mla_cx, sve_fp_mla_t, sve_fp_mla_tx, sve_fp_mmla,
> > 	sve_fp_mmla_x, sve_fp_mul, sve_fp_mul_x, sve_fp_sqrt_d,
> > 	sve_fp_sqrt_dx, sve_fp_sqrt_s, sve_fp_sqrt_sx, sve_fp_trig,
> > 	sve_fp_trig_x, sve_fp_estimate, sve_fp_step, sve_bf_dot,
> > 	sve_bf_dot_x, sve_bf_mla, sve_bf_mla_x, sve_bf_mmla,
> > 	sve_bf_mmla_x, sve_ldr, sve_ldr_p, sve_load1,
> > 	sve_load1_gather_d, sve_load1_gather_dl, sve_load1_gather_du,
> > 	sve_load1_gather_s, sve_load1_gather_sl, sve_load1_gather_su,
> > 	sve_load2, sve_load3, sve_load4, sve_str, sve_str_p,
> > 	sve_store1, sve_store1_scatter, sve_store2, sve_store3,
> > 	sve_store4, sve_rd_ffr, sve_rd_ffr_p, sve_rd_ffr_ps,
> > 	sve_wr_ffr): New types.
> >
> > Signed-off-by: Evandro Menezes
> > ---
> >  gcc/config/aarch64/aarch64-sve.md | 632 ++++++++++++++++++++++--------
> >  gcc/config/arm/types.md           | 342 ++++++++++++++++
> >  2 files changed, 819 insertions(+), 155 deletions(-)
> >
> > diff --git a/gcc/config/aarch64/aarch64-sve.md b/gcc/config/aarch64/aarch64-sve.md
> > index 2898b85376b..58c5cb2ddbc 100644
> > --- a/gcc/config/aarch64/aarch64-sve.md
> > +++ b/gcc/config/aarch64/aarch64-sve.md
> > @@ -699,6 +699,7 @@
> >     str\t%1, %0
> >     mov\t%0.d, %1.d
> >     * return aarch64_output_sve_mov_immediate (operands[1]);"
> > +  [(set_attr "type" "sve_ldr, sve_str, sve_move, *")]
> >  )
> >
> >  ;; Unpredicated moves that cannot use LDR and STR, i.e. partial vectors
> > @@ -714,6 +715,7 @@
> >    "@
> >     mov\t%0.d, %1.d
> >     * return aarch64_output_sve_mov_immediate (operands[1]);"
> > +  [(set_attr "type" "sve_move, sve_move_x")]
> >  )
> >
> >  ;; Handle memory reloads for modes that can't use LDR and STR.  We use
> > @@ -758,6 +760,8 @@
> >    "&& register_operand (operands[0], mode)
> >     && register_operand (operands[2], mode)"
> >    [(set (match_dup 0) (match_dup 2))]
> > +  ""
> > +  [(set_attr "type" "sve_load1, sve_store1, *")]
> >  )
> >
> >  ;; A pattern for optimizing SUBREGs that have a reinterpreting effect
> > @@ -778,6 +782,7 @@
> >      aarch64_split_sve_subreg_move (operands[0], operands[1], operands[2]);
> >      DONE;
> >    }
> > +  [(set_attr "type" "sve_rev")]
> >  )
> >
> >  ;; Reinterpret operand 1 in operand 0's mode, without changing its contents.
> > @@ -959,6 +964,7 @@
> >     str\t%1, %0
> >     ldr\t%0, %1
> >     * return aarch64_output_sve_mov_immediate (operands[1]);"
> > +  [(set_attr "type" "sve_logic_p, sve_str_p, sve_ldr_p, *")]
> >  )
> >
> >  ;; Match PTRUES Pn.B when both the predicate and flags are useful.
> > @@ -984,6 +990,7 @@
> >    {
> >      operands[2] = operands[3] = CONSTM1_RTX (VNx16BImode);
> >    }
> > +  [(set_attr "type" "sve_set_ps")]
> >  )
> >
> >  ;; Match PTRUES Pn.[HSD] when both the predicate and flags are useful.
> > @@ -1011,6 +1018,7 @@
> >      operands[2] = CONSTM1_RTX (VNx16BImode);
> >      operands[3] = CONSTM1_RTX (mode);
> >    }
> > +  [(set_attr "type" "sve_set_ps")]
> >  )
> >
> >  ;; Match PTRUES Pn.B when only the flags result is useful (which is
> > @@ -1036,6 +1044,7 @@
> >    {
> >      operands[2] = operands[3] = CONSTM1_RTX (VNx16BImode);
> >    }
> > +  [(set_attr "type" "sve_set_ps")]
> >  )
> >
> >  ;; Match PTRUES Pn.[HWD] when only the flags result is useful (which is
> > @@ -1063,6 +1072,7 @@
> >      operands[2] = CONSTM1_RTX (VNx16BImode);
> >      operands[3] = CONSTM1_RTX (mode);
> >    }
> > +  [(set_attr "type" "sve_set_ps")]
> >  )
> >
> >  ;; --------------------------------------------------------------------------
> > @@ -1086,6 +1096,7 @@
> >    "@
> >     setffr
> >     wrffr\t%0.b"
> > +  [(set_attr "type" "sve_wr_ffr, sve_wr_ffr")]
> >  )
> >
> >  ;; [L2 in the block comment above about FFR handling]
> > @@ -1125,6 +1136,7 @@
> > 	(reg:VNx16BI FFRT_REGNUM))]
> >    "TARGET_SVE"
> >    "rdffr\t%0.b"
> > +  [(set_attr "type" "sve_rd_ffr")]
> >  )
> >
> >  ;; Likewise with zero predication.
> > @@ -1135,6 +1147,7 @@
> > 	  (match_operand:VNx16BI 1 "register_operand" "Upa")))]
> >    "TARGET_SVE"
> >    "rdffr\t%0.b, %1/z"
> > +  [(set_attr "type" "sve_rd_ffr_p")]
> >  )
> >
> >  ;; Read the FFR to test for a fault, without using the predicate result.
> > @@ -1151,6 +1164,7 @@
> >     (clobber (match_scratch:VNx16BI 0 "=Upa"))]
> >    "TARGET_SVE"
> >    "rdffrs\t%0.b, %1/z"
> > +  [(set_attr "type" "sve_rd_ffr_ps")]
> >  )
> >
> >  ;; Same for unpredicated RDFFR when tested with a known PTRUE.
> > @@ -1165,6 +1179,7 @@
> >     (clobber (match_scratch:VNx16BI 0 "=Upa"))]
> >    "TARGET_SVE"
> >    "rdffrs\t%0.b, %1/z"
> > +  [(set_attr "type" "sve_rd_ffr_ps")]
> >  )
> >
> >  ;; Read the FFR with zero predication and test the result.
> > @@ -1184,6 +1199,7 @@
> > 	   (match_dup 1)))]
> >    "TARGET_SVE"
> >    "rdffrs\t%0.b, %1/z"
> > +  [(set_attr "type" "sve_rd_ffr_ps")]
> >  )
> >
> >  ;; Same for unpredicated RDFFR when tested with a known PTRUE.
> > @@ -1199,6 +1215,7 @@
> > 	(reg:VNx16BI FFRT_REGNUM))]
> >    "TARGET_SVE"
> >    "rdffrs\t%0.b, %1/z"
> > +  [(set_attr "type" "sve_rd_ffr_ps")]
> >  )
> >
> >  ;; [R3 in the block comment above about FFR handling]
> > @@ -1248,6 +1265,7 @@
> > 	  UNSPEC_LD1_SVE))]
> >    "TARGET_SVE"
> >    "ld1\t%0., %2/z, %1"
> > +  [(set_attr "type" "sve_load1")]
> >  )
> >
> >  ;; Unpredicated LD[234].
> > @@ -1272,6 +1290,7 @@
> > 	  UNSPEC_LDN))]
> >    "TARGET_SVE"
> >    "ld\t%0, %2/z, %1"
> > +  [(set_attr "type" "sve_load")]
> >  )
> >
> >  ;; --------------------------------------------------------------------------
> > @@ -1303,6 +1322,7 @@
> >    {
> >      operands[3] = CONSTM1_RTX (mode);
> >    }
> > +  [(set_attr "type" "sve_load1")]
> >  )
> >
> >  ;; --------------------------------------------------------------------------
> > @@ -1329,6 +1349,7 @@
> > 	  SVE_LDFF1_LDNF1))]
> >    "TARGET_SVE"
> >    "ldf1\t%0., %2/z, %1"
> > +  [(set_attr "type" "sve_load1")]
> >  )
> >
> >  ;; --------------------------------------------------------------------------
> > @@ -1367,6 +1388,7 @@
> >    {
> >      operands[3] = CONSTM1_RTX (mode);
> >    }
> > +  [(set_attr "type" "sve_load1")]
> >  )
> >
> >  ;; --------------------------------------------------------------------------
> > @@ -1388,6 +1410,7 @@
> > 	  UNSPEC_LDNT1_SVE))]
> >    "TARGET_SVE"
> >    "ldnt1\t%0., %2/z, %1"
> > +  [(set_attr "type" "sve_load1")]
> >  )
> >
> >  ;; --------------------------------------------------------------------------
> > @@ -1435,6 +1458,8 @@
> >     ld1\t%0.s, %5/z, [%1, %2.s, uxtw]
> >     ld1\t%0.s, %5/z, [%1, %2.s, sxtw %p4]
> >     ld1\t%0.s, %5/z, [%1, %2.s, uxtw %p4]"
> > +  [(set_attr "type" "sve_load1_gather_s, sve_load1_gather_s, sve_load1_gather_su,
> > +		     sve_load1_gather_su, sve_load1_gather_sl, sve_load1_gather_sl")]
> >  )
> >
> >  ;; Predicated gather loads for 64-bit elements.  The value of operand 3
> > @@ -1455,6 +1480,8 @@
> >     ld1\t%0.d, %5/z, [%2.d, #%1]
> >     ld1\t%0.d, %5/z, [%1, %2.d]
> >     ld1\t%0.d, %5/z, [%1, %2.d, lsl %p4]"
> > +  [(set_attr "type" "sve_load1_gather_d, sve_load1_gather_d,
> > +		     sve_load1_gather_du, sve_load1_gather_dl")]
> >  )
> >
> >  ;; Likewise, but with the offset being extended from 32 bits.
> > @@ -1480,6 +1507,7 @@
> >    {
> >      operands[6] = CONSTM1_RTX (VNx2BImode);
> >    }
> > +  [(set_attr "type" "sve_load1_gather_su, sve_load1_gather_sl")]
> >  )
> >
> >  ;; Likewise, but with the offset being truncated to 32 bits and then
> > @@ -1507,6 +1535,7 @@
> >    {
> >      operands[6] = CONSTM1_RTX (VNx2BImode);
> >    }
> > +  [(set_attr "type" "sve_load1_gather_su, sve_load1_gather_sl")]
> >  )
> >
> >  ;; Likewise, but with the offset being truncated to 32 bits and then
> > @@ -1527,6 +1556,7 @@
> >    "@
> >     ld1\t%0.d, %5/z, [%1, %2.d, uxtw]
> >     ld1\t%0.d, %5/z, [%1, %2.d, uxtw %p4]"
> > +  [(set_attr "type" "sve_load1_gather_su, sve_load1_gather_sl")]
> >  )
> >
> >  ;; --------------------------------------------------------------------------
> > @@ -1569,6 +1599,8 @@
> >    {
> >      operands[6] = CONSTM1_RTX (VNx4BImode);
> >    }
> > +  [(set_attr "type" "sve_load1_gather_s, sve_load1_gather_s, sve_load1_gather_su,
> > +		     sve_load1_gather_su, sve_load1_gather_sl, sve_load1_gather_sl")]
> >  )
> >
> >  ;; Predicated extending gather loads for 64-bit elements.  The value of
> > @@ -1597,6 +1629,8 @@
> >    {
> >      operands[6] = CONSTM1_RTX (VNx2BImode);
> >    }
> > +  [(set_attr "type" "sve_load1_gather_d, sve_load1_gather_d,
> > +		     sve_load1_gather_du, sve_load1_gather_dl")]
> >  )
> >
> >  ;; Likewise, but with the offset being extended from 32 bits.
> > @@ -1627,6 +1661,7 @@
> >    {
> >      operands[6] = CONSTM1_RTX (VNx2BImode);
> >      operands[7] = CONSTM1_RTX (VNx2BImode);
> >    }
> > +  [(set_attr "type" "sve_load1_gather_su, sve_load1_gather_sl")]
> >  )
> >
> >  ;; Likewise, but with the offset being truncated to 32 bits and then
> > @@ -1659,6 +1694,7 @@
> >    {
> >      operands[6] = CONSTM1_RTX (VNx2BImode);
> >      operands[7] = CONSTM1_RTX (VNx2BImode);
> >    }
> > +  [(set_attr "type" "sve_load1_gather_su, sve_load1_gather_sl")]
> >  )
> >
> >  ;; Likewise, but with the offset being truncated to 32 bits and then
> > @@ -1687,6 +1723,7 @@
> >    {
> >      operands[7] = CONSTM1_RTX (VNx2BImode);
> >    }
> > +  [(set_attr "type" "sve_load1_gather_su, sve_load1_gather_sl")]
> >  )
> >
> >  ;; --------------------------------------------------------------------------
> > @@ -1718,6 +1755,8 @@
> >     ldff1w\t%0.s, %5/z, [%1, %2.s, uxtw]
> >     ldff1w\t%0.s, %5/z, [%1, %2.s, sxtw %p4]
> >     ldff1w\t%0.s, %5/z, [%1, %2.s, uxtw %p4]"
> > +  [(set_attr "type" "sve_load1_gather_s, sve_load1_gather_s, sve_load1_gather_su,
> > +		     sve_load1_gather_su, sve_load1_gather_sl, sve_load1_gather_sl")]
> >  )
> >
> >  ;; Predicated first-faulting gather loads for 64-bit elements.  The value
> > @@ -1739,6 +1778,8 @@
> >     ldff1d\t%0.d, %5/z, [%2.d, #%1]
> >     ldff1d\t%0.d, %5/z, [%1, %2.d]
> >     ldff1d\t%0.d, %5/z, [%1, %2.d, lsl %p4]"
> > +  [(set_attr "type" "sve_load1_gather_d, sve_load1_gather_d,
> > +		     sve_load1_gather_du, sve_load1_gather_dl")]
> >  )
> >
> >  ;; Likewise, but with the offset being sign-extended from 32 bits.
> > @@ -1766,6 +1807,7 @@
> >    {
> >      operands[6] = CONSTM1_RTX (VNx2BImode);
> >    }
> > +  [(set_attr "type" "sve_load1_gather_su, sve_load1_gather_sl")]
> >  )
> >
> >  ;; Likewise, but with the offset being zero-extended from 32 bits.
> > @@ -1786,6 +1828,7 @@
> >    "@
> >     ldff1d\t%0.d, %5/z, [%1, %2.d, uxtw]
> >     ldff1d\t%0.d, %5/z, [%1, %2.d, uxtw %p4]"
> > +  [(set_attr "type" "sve_load1_gather_su, sve_load1_gather_sl")]
> >  )
> >
> >  ;; --------------------------------------------------------------------------
> > @@ -1829,6 +1872,8 @@
> >    {
> >      operands[6] = CONSTM1_RTX (VNx4BImode);
> >    }
> > +  [(set_attr "type" "sve_load1_gather_s, sve_load1_gather_s, sve_load1_gather_su,
> > +		     sve_load1_gather_su, sve_load1_gather_sl, sve_load1_gather_sl")]
> >  )
> >
> >  ;; Predicated extending first-faulting gather loads for 64-bit elements.
> > @@ -1858,6 +1903,8 @@
> >    {
> >      operands[6] = CONSTM1_RTX (VNx2BImode);
> >    }
> > +  [(set_attr "type" "sve_load1_gather_d, sve_load1_gather_d,
> > +		     sve_load1_gather_du, sve_load1_gather_dl")]
> >  )
> >
> >  ;; Likewise, but with the offset being sign-extended from 32 bits.
> > @@ -1890,6 +1937,7 @@
> >    {
> >      operands[6] = CONSTM1_RTX (VNx2BImode);
> >      operands[7] = CONSTM1_RTX (VNx2BImode);
> >    }
> > +  [(set_attr "type" "sve_load1_gather_su, sve_load1_gather_sl")]
> >  )
> >
> >  ;; Likewise, but with the offset being zero-extended from 32 bits.
> > @@ -1918,6 +1966,7 @@
> >    {
> >      operands[7] = CONSTM1_RTX (VNx2BImode);
> >    }
> > +  [(set_attr "type" "sve_load1_gather_su, sve_load1_gather_sl")]
> >  )
> >
> >  ;; ==========================================================================
> > @@ -1950,6 +1999,7 @@
> >      operands[1] = gen_rtx_MEM (mode, operands[1]);
> >      return aarch64_output_sve_prefetch ("prf", operands[2], "%0, %1");
> >    }
> > +  [(set_attr "type" "sve_ldr")]
> >  )
> >
> >  ;; --------------------------------------------------------------------------
> > @@ -1998,6 +2048,8 @@
> >      const char *const *parts = insns[which_alternative];
> >      return aarch64_output_sve_prefetch (parts[0], operands[6], parts[1]);
> >    }
> > +  [(set_attr "type" "sve_load1_gather_s, sve_load1_gather_s, sve_load1_gather_su,
> > +		     sve_load1_gather_su, sve_load1_gather_sl, sve_load1_gather_sl")]
> >  )
> >
> >  ;; Predicated gather prefetches for 64-bit elements.  The value of operand 3
> > @@ -2025,6 +2077,8 @@
> >      const char *const *parts = insns[which_alternative];
> >      return aarch64_output_sve_prefetch (parts[0], operands[6], parts[1]);
> >    }
> > +  [(set_attr "type" "sve_load1_gather_d, sve_load1_gather_d,
> > +		     sve_load1_gather_du, sve_load1_gather_dl")]
> >  )
> >
> >  ;; Likewise, but with the offset being sign-extended from 32 bits.
> > @@ -2058,6 +2112,7 @@
> >    {
> >      operands[9] = copy_rtx (operands[0]);
> >    }
> > +  [(set_attr "type" "sve_load1_gather_su, sve_load1_gather_sl")]
> >  )
> >
> >  ;; Likewise, but with the offset being zero-extended from 32 bits.
> > @@ -2084,6 +2139,7 @@
> >      const char *const *parts = insns[which_alternative];
> >      return aarch64_output_sve_prefetch (parts[0], operands[6], parts[1]);
> >    }
> > +  [(set_attr "type" "sve_load1_gather_su, sve_load1_gather_sl")]
> >  )
> >
> >  ;; ==========================================================================
> > @@ -2122,6 +2178,7 @@
> > 	  UNSPEC_ST1_SVE))]
> >    "TARGET_SVE"
> >    "st1\t%1., %2, %0"
> > +  [(set_attr "type" "sve_store1")]
> >  )
> >
> >  ;; Unpredicated ST[234].  This is always a full update, so the dependence
> > @@ -2152,6 +2209,7 @@
> > 	  UNSPEC_STN))]
> >    "TARGET_SVE"
> >    "st\t%1, %2, %0"
> > +  [(set_attr "type" "sve_store")]
> >  )
> >
> >  ;; --------------------------------------------------------------------------
> > @@ -2174,6 +2232,7 @@
> > 	  UNSPEC_ST1_SVE))]
> >    "TARGET_SVE"
> >    "st1\t%1., %2, %0"
> > +  [(set_attr "type" "sve_store1")]
> >  )
> >
> >  ;; Predicated truncate and store, with 4 elements per 128-bit block.
> > @@ -2187,6 +2246,7 @@
> > 	  UNSPEC_ST1_SVE))]
> >    "TARGET_SVE"
> >    "st1\t%1., %2, %0"
> > +  [(set_attr "type" "sve_store1")]
> >  )
> >
> >  ;; Predicated truncate and store, with 2 elements per 128-bit block.
> > @@ -2200,6 +2260,7 @@
> > 	  UNSPEC_ST1_SVE))]
> >    "TARGET_SVE"
> >    "st1\t%1., %2, %0"
> > +  [(set_attr "type" "sve_store1")]
> >  )
> >
> >  ;; --------------------------------------------------------------------------
> > @@ -2221,6 +2282,7 @@
> > 	  UNSPEC_STNT1_SVE))]
> >    "TARGET_SVE"
> >    "stnt1\t%1., %2, %0"
> > +  [(set_attr "type" "sve_store1")]
> >  )
> >
> >  ;; --------------------------------------------------------------------------
> > @@ -2268,6 +2330,8 @@
> >     st1\t%4.s, %5, [%0, %1.s, uxtw]
> >     st1\t%4.s, %5, [%0, %1.s, sxtw %p3]
> >     st1\t%4.s, %5, [%0, %1.s, uxtw %p3]"
> > +  [(set_attr "type" "sve_store1_scatter, sve_store1_scatter, sve_store1_scatter,
> > +		     sve_store1_scatter, sve_store1_scatter, sve_store1_scatter")]
> >  )
> >
> >  ;; Predicated scatter stores for 64-bit elements.  The value of operand 2
> > @@ -2288,6 +2352,8 @@
> >     st1\t%4.d, %5, [%1.d, #%0]
> >     st1\t%4.d, %5, [%0, %1.d]
> >     st1\t%4.d, %5, [%0, %1.d, lsl %p3]"
> > +  [(set_attr "type" "sve_store1_scatter, sve_store1_scatter,
> > +		     sve_store1_scatter, sve_store1_scatter")]
> >  )
> >
> >  ;; Likewise, but with the offset being extended from 32 bits.
> > @@ -2313,6 +2379,7 @@
> >    {
> >      operands[6] = CONSTM1_RTX (mode);
> >    }
> > +  [(set_attr "type" "sve_store1_scatter, sve_store1_scatter")]
> >  )
> >
> >  ;; Likewise, but with the offset being truncated to 32 bits and then
> > @@ -2340,6 +2407,7 @@
> >    {
> >      operands[6] = CONSTM1_RTX (mode);
> >    }
> > +  [(set_attr "type" "sve_store1_scatter, sve_store1_scatter")]
> >  )
> >
> >  ;; Likewise, but with the offset being truncated to 32 bits and then
> > @@ -2360,6 +2428,7 @@
> >    "@
> >     st1\t%4.d, %5, [%0, %1.d, uxtw]
> >     st1\t%4.d, %5, [%0, %1.d, uxtw %p3]"
> > +  [(set_attr "type" "sve_store1_scatter, sve_store1_scatter")]
> >  )
> >
> >  ;; --------------------------------------------------------------------------
> > @@ -2392,6 +2461,8 @@
> >     st1\t%4.s, %5, [%0, %1.s, uxtw]
> >     st1\t%4.s, %5, [%0, %1.s, sxtw %p3]
> >     st1\t%4.s, %5, [%0, %1.s, uxtw %p3]"
> > +  [(set_attr "type" "sve_store1_scatter, sve_store1_scatter, sve_store1_scatter,
> > +		     sve_store1_scatter, sve_store1_scatter, sve_store1_scatter")]
> >  )
> >
> >  ;; Predicated truncating scatter stores for 64-bit elements.  The value of
> > @@ -2413,6 +2484,8 @@
> >     st1\t%4.d, %5, [%1.d, #%0]
> >     st1\t%4.d, %5, [%0, %1.d]
> >     st1\t%4.d, %5, [%0, %1.d, lsl %p3]"
> > +  [(set_attr "type" "sve_store1_scatter, sve_store1_scatter,
> > +		     sve_store1_scatter, sve_store1_scatter")]
> >  )
> >
> >  ;; Likewise, but with the offset being sign-extended from 32 bits.
> > @@ -2440,6 +2513,7 @@
> >    {
> >      operands[6] = copy_rtx (operands[5]);
> >    }
> > +  [(set_attr "type" "sve_store1_scatter, sve_store1_scatter")]
> >  )
> >
> >  ;; Likewise, but with the offset being zero-extended from 32 bits.
> > @@ -2460,6 +2534,7 @@
> >    "@
> >     st1\t%4.d, %5, [%0, %1.d, uxtw]
> >     st1\t%4.d, %5, [%0, %1.d, uxtw %p3]"
> > +  [(set_attr "type" "sve_store1_scatter, sve_store1_scatter")]
> >  )
> >
> >  ;; ==========================================================================
> > @@ -2529,7 +2604,8 @@
> > 			   CONST0_RTX (mode)));
> >      DONE;
> >    }
> > -  [(set_attr "length" "4,4,8")]
> > +  [(set_attr "length" "4,4,8")
> > +   (set_attr "type" "sve_move, sve_move, sve_load1")]
> >  )
> >
> >  ;; Duplicate an Advanced SIMD vector to fill an SVE vector (LE version).
> > @@ -2562,6 +2638,7 @@
> >      emit_insn (gen_aarch64_sve_ld1rq (operands[0], operands[1], gp));
> >      DONE;
> >    }
> > +  [(set_attr "type" "sve_splat, sve_load1")]
> >  )
> >
> >  ;; Duplicate an Advanced SIMD vector to fill an SVE vector (BE version).
> > @@ -2583,6 +2660,7 @@
> >      operands[1] = gen_rtx_REG (mode, REGNO (operands[1]));
> >      return "dup\t%0.q, %1.q[0]";
> >    }
> > +  [(set_attr "type" "sve_splat")]
> >  )
> >
> >  ;; This is used for vec_duplicates from memory, but can also
> > @@ -2598,6 +2676,7 @@
> > 	  UNSPEC_SEL))]
> >    "TARGET_SVE"
> >    "ld1r\t%0., %1/z, %2"
> > +  [(set_attr "type" "sve_load1")]
> >  )
> >
> >  ;; Load 128 bits from memory under predicate control and duplicate to
> > @@ -2613,6 +2692,7 @@
> >      operands[1] = gen_rtx_MEM (mode, XEXP (operands[1], 0));
> >      return "ld1rq\t%0., %2/z, %1";
> >    }
> > +  [(set_attr "type" "sve_load1")]
> >  )
> >
> >  (define_insn "@aarch64_sve_ld1ro"
> > @@ -2627,6 +2707,7 @@
> >      operands[1] = gen_rtx_MEM (mode, XEXP (operands[1], 0));
> >      return "ld1ro\t%0., %2/z, %1";
> >    }
> > +  [(set_attr "type" "sve_load1")]
> >  )
> >
> >  ;; --------------------------------------------------------------------------
> > @@ -2659,7 +2740,8 @@
> >     insr\t%0., %2
> >     movprfx\t%0, %1\;insr\t%0., %2
> >     movprfx\t%0, %1\;insr\t%0., %2"
> > -  [(set_attr "movprfx" "*,*,yes,yes")]
> > +  [(set_attr "movprfx" "*,*,yes,yes")
> > +   (set_attr "type" "sve_ins_g, sve_ins, sve_ins_gx, sve_ins_x")]
> >  )
> >
> >  ;; --------------------------------------------------------------------------
> > @@ -2679,6 +2761,7 @@
> >     index\t%0., #%1, %2
> >     index\t%0., %1, #%2
> >     index\t%0., %1, %2"
> > +  [(set_attr "type" "sve_index_g, sve_index_g, sve_index_g")]
> >  )
> >
> >  ;; Optimize {x, x, x, x, ...} + {0, n, 2*n, 3*n, ...} if n is in range
> > @@ -2694,6 +2777,7 @@
> >      operands[2] = aarch64_check_zero_based_sve_index_immediate (operands[2]);
> >      return "index\t%0., %1, #%2";
> >    }
> > +  [(set_attr "type" "sve_index_g")]
> >  )
> >
> >  ;; --------------------------------------------------------------------------
> > @@ -2846,6 +2930,7 @@
> >      operands[0] = gen_rtx_REG (mode, REGNO (operands[0]));
> >      return "dup\t%0., %1.[%2]";
> >    }
> > +  [(set_attr "type" "sve_splat")]
> >  )
> >
> >  ;; Extract an element outside the range of DUP.  This pattern requires the
> > @@ -2863,7 +2948,8 @@
> > 	    ? "ext\t%0.b, %0.b, %0.b, #%2"
> > 	    : "movprfx\t%0, %1\;ext\t%0.b, %0.b, %1.b, #%2");
> >    }
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_ext, sve_ext_x")]
> >  )
> >
> >  ;; --------------------------------------------------------------------------
> > @@ -2886,6 +2972,7 @@
> >    "@
> >     last\t%0, %1, %2.
> >     last\t%0, %1, %2."
> > +  [(set_attr "type" "sve_ins_g, sve_ins")]
> >  )
> >
> >  ;; --------------------------------------------------------------------------
> > @@ -2955,7 +3042,8 @@
> >    "@
> >     \t%0., %1/m, %2.
> >     movprfx\t%0, %2\;\t%0., %1/m, %2."
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_arith, sve_arith_x")]
> >  )
> >
> >  ;; Predicated integer unary arithmetic with merging.
> > @@ -2983,7 +3071,8 @@
> >    "@
> >     \t%0., %1/m, %0.
> >     movprfx\t%0, %2\;\t%0., %1/m, %2."
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_arith, sve_arith_x")]
> >  )
> >
> >  ;; Predicated integer unary arithmetic, merging with an independent value.
> > @@ -3006,7 +3095,8 @@
> >     \t%0., %1/m, %2.
> >     movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %2.
> >     movprfx\t%0, %3\;\t%0., %1/m, %2."
> > -  [(set_attr "movprfx" "*,yes,yes")]
> > +  [(set_attr "movprfx" "*,yes,yes")
> > +   (set_attr "type" "sve_arith, sve_arith_x, sve_arith_x")]
> >  )
> >
> >  ;; --------------------------------------------------------------------------
> > @@ -3032,7 +3122,8 @@
> >    "@
> >     \t%0., %1/m, %2.
> >     movprfx\t%0, %2\;\t%0., %1/m, %2."
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_rev, sve_rev_x")]
> >  )
> >
> >  ;; Another way of expressing the REVB, REVH and REVW patterns, with this
> > @@ -3051,7 +3142,8 @@
> >    "@
> >     rev\t%0., %1/m, %2.
> >     movprfx\t%0, %2\;rev\t%0., %1/m, %2."
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_rev, sve_rev")]
> >  )
> >
> >  ;; Predicated integer unary operations with merging.
> > @@ -3069,7 +3161,8 @@
> >     \t%0., %1/m, %2.
> >     movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %2.
> >     movprfx\t%0, %3\;\t%0., %1/m, %2."
> > -  [(set_attr "movprfx" "*,yes,yes")]
> > +  [(set_attr "movprfx" "*,yes,yes")
> > +   (set_attr "type" "sve_rev, sve_rev_x, sve_rev_x")]
> >  )
> >
> >  ;; --------------------------------------------------------------------------
> > @@ -3110,7 +3203,8 @@
> >    "@
> >     xt\t%0., %1/m, %2.
> >     movprfx\t%0, %2\;xt\t%0., %1/m, %2."
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_ext, sve_ext_x")]
> >  )
> >
> >  ;; Predicated truncate-and-sign-extend operations.
> > @@ -3127,7 +3221,8 @@
> >    "@
> >     sxt\t%0., %1/m, %2.
> >     movprfx\t%0, %2\;sxt\t%0., %1/m, %2."
> > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_sext, sve_sext_x")] > > ) > > > > ;; Predicated truncate-and-sign-extend operations with merging. > > @@ -3146,7 +3241,8 @@ > > sxt\t%0., %1/m, > %2. > > movprfx\t%0., %1/z, > %2.\;sxt\t%0. DI:Vetype>, %1/m, %2. > > movprfx\t%0, > %3\;sxt\t%0., %1/m, > %2." > > - [(set_attr "movprfx" "*,yes,yes")] > > + [(set_attr "movprfx" "*,yes,yes") > > + (set_attr "type" "sve_sext, sve_sext_x, sve_sext_x")] > > ) > > > > ;; Predicated truncate-and-zero-extend operations, merging with the > > @@ -3167,7 +3263,8 @@ > > "@ > > uxt%e3\t%0., %1/m, %0. > > movprfx\t%0, %2\;uxt%e3\t%0., %1/m, %2." > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_uext, sve_sext_x")] > > ) > > > > ;; Predicated truncate-and-zero-extend operations, merging with an > > @@ -3192,7 +3289,8 @@ > > uxt%e3\t%0., %1/m, %2. > > movprfx\t%0., %1/z, %2.\;uxt%e3\t%0., > %1/m, %2. > > movprfx\t%0, %4\;uxt%e3\t%0., %1/m, %2." > > - [(set_attr "movprfx" "*,yes,yes")] > > + [(set_attr "movprfx" "*,yes,yes") > > + (set_attr "type" "sve_uext, sve_sext_x, sve_sext_x")] > > ) > > > > ;; -------------------------------------------------------------------= ------ > > @@ -3263,7 +3361,8 @@ > > "@ > > cnot\t%0., %1/m, %2. > > movprfx\t%0, %2\;cnot\t%0., %1/m, %2." > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_logic, sve_logic_x")] > > ) > > > > ;; Predicated logical inverse with merging. > > @@ -3319,7 +3418,8 @@ > > { > > operands[5] =3D CONSTM1_RTX (mode); > > } > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_logic, sve_logic_x")] > > ) > > > > ;; Predicated logical inverse, merging with an independent value. 
> > @@ -3356,7 +3456,8 @@ > > { > > operands[5] =3D CONSTM1_RTX (mode); > > } > > - [(set_attr "movprfx" "*,yes,yes")] > > + [(set_attr "movprfx" "*,yes,yes") > > + (set_attr "type" "sve_logic, sve_logic_x, sve_logic_x")] > > ) > > > > ;; -------------------------------------------------------------------= ------ > > @@ -3374,6 +3475,7 @@ > > SVE_FP_UNARY_INT))] > > "TARGET_SVE" > > "\t%0., %1." > > + [(set_attr "type" "sve_fp_trig")] > > ) > > > > ;; -------------------------------------------------------------------= ------ > > @@ -3617,6 +3719,7 @@ > > (match_operand:PRED_ALL 1 "register_operand" "Upa")))] > > "TARGET_SVE" > > "not\t%0.b, %1/z, %2.b" > > + [(set_attr "type" "sve_logic_p")] > > ) > > > > ;; > =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D > =3D=3D=3D=3D=3D=3D=3D=3D=3D > > @@ -3820,7 +3923,9 @@ > > movprfx\t%0, %1\;add\t%0., %0., #%D2 > > movprfx\t%0, %1\;sub\t%0., %0., #%N2 > > add\t%0., %1., %2." > > - [(set_attr "movprfx" "*,*,*,yes,yes,*")] > > + [(set_attr "movprfx" "*,*,*,yes,yes,*") > > + (set_attr "type" "sve_arith, sve_arith, sve_cnt_p, > > + sve_arith_x, sve_arith_x, sve_arith")] > > ) > > > > ;; Merging forms are handled through SVE_INT_BINARY. > > @@ -3843,7 +3948,8 @@ > > sub\t%0., %1., %2. > > subr\t%0., %0., #%D1 > > movprfx\t%0, %2\;subr\t%0., %0., #%D1" > > - [(set_attr "movprfx" "*,*,yes")] > > + [(set_attr "movprfx" "*,*,yes") > > + (set_attr "type" "sve_arith, sve_arith, sve_arith_x")] > > ) > > > > ;; Merging forms are handled through SVE_INT_BINARY. > > @@ -3865,6 +3971,7 @@ > > UNSPEC_ADR))] > > "TARGET_SVE" > > "adr\t%0., [%1., %2.]" > > + [(set_attr "type" "sve_arith")] > > ) > > > > ;; Same, but with the offset being sign-extended from the low 32 bits. 
> > @@ -3885,6 +3992,7 @@ > > { > > operands[3] =3D CONSTM1_RTX (VNx2BImode); > > } > > + [(set_attr "type" "sve_arith")] > > ) > > > > ;; Same, but with the offset being zero-extended from the low 32 bits. > > @@ -3898,6 +4006,7 @@ > > UNSPEC_ADR))] > > "TARGET_SVE" > > "adr\t%0.d, [%1.d, %2.d, uxtw]" > > + [(set_attr "type" "sve_arith")] > > ) > > > > ;; Same, matching as a PLUS rather than unspec. > > @@ -3910,6 +4019,7 @@ > > (match_operand:VNx2DI 1 "register_operand" "w")))] > > "TARGET_SVE" > > "adr\t%0.d, [%1.d, %2.d, uxtw]" > > + [(set_attr "type" "sve_arith")] > > ) > > > > ;; ADR with a nonzero shift. > > @@ -3945,6 +4055,7 @@ > > { > > operands[4] =3D CONSTM1_RTX (mode); > > } > > + [(set_attr "type" "sve_arith")] > > ) > > > > ;; Same, but with the index being sign-extended from the low 32 bits. > > @@ -3969,6 +4080,7 @@ > > { > > operands[5] =3D operands[4] =3D CONSTM1_RTX (VNx2BImode); > > } > > + [(set_attr "type" "sve_arith")] > > ) > > > > ;; Same, but with the index being zero-extended from the low 32 bits. > > @@ -3990,6 +4102,7 @@ > > { > > operands[5] =3D CONSTM1_RTX (VNx2BImode); > > } > > + [(set_attr "type" "sve_arith")] > > ) > > > > ;; -------------------------------------------------------------------= ------ > > @@ -4035,7 +4148,8 @@ > > "@ > > abd\t%0., %1/m, %0., %3. > > movprfx\t%0, %2\;abd\t%0., %1/m, %0., > %3." > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_arith, sve_arith_x")] > > ) > > > > (define_expand "@aarch64_cond_abd" > > @@ -4091,7 +4205,8 @@ > > { > > operands[4] =3D operands[5] =3D CONSTM1_RTX (mode); > > } > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_arith, sve_arith_x")] > > ) > > > > ;; Predicated integer absolute difference, merging with the second inp= ut. 
> > @@ -4122,7 +4237,8 @@ > > { > > operands[4] =3D operands[5] =3D CONSTM1_RTX (mode); > > } > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_arith, sve_arith_x")] > > ) > > > > ;; Predicated integer absolute difference, merging with an independent > value. > > @@ -4169,7 +4285,8 @@ > > else > > FAIL; > > } > > - [(set_attr "movprfx" "yes")] > > + [(set_attr "movprfx" "yes") > > + (set_attr "type" "sve_arith_x")] > > ) > > > > ;; -------------------------------------------------------------------= ------ > > @@ -4194,7 +4311,8 @@ > > movprfx\t%0, %1\;\t%0., %0., #%D2 > > movprfx\t%0, %1\;\t%0., %0., #%N2 > > \t%0., %1., %2." > > - [(set_attr "movprfx" "*,*,yes,yes,*")] > > + [(set_attr "movprfx" "*,*,yes,yes,*") > > + (set_attr "type" "sve_arith_sat, sve_arith_sat, sve_arith_sat_x, > sve_arith_sat_x, sve_arith_sat")] > > ) > > > > ;; Unpredicated saturating unsigned addition and subtraction. > > @@ -4208,7 +4326,8 @@ > > \t%0., %0., #%D2 > > movprfx\t%0, %1\;\t%0., %0., #%D2 > > \t%0., %1., %2." > > - [(set_attr "movprfx" "*,yes,*")] > > + [(set_attr "movprfx" "*,yes,*") > > + (set_attr "type" "sve_arith_sat, sve_arith_sat_x, sve_arith_sat")] > > ) > > > > ;; -------------------------------------------------------------------= ------ > > @@ -4249,7 +4368,8 @@ > > "@ > > mulh\t%0., %1/m, %0., %3. > > movprfx\t%0, %2\;mulh\t%0., %1/m, %0., > %3." > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_mul, sve_mul_x")] > > ) > > > > ;; Predicated highpart multiplications with merging. > > @@ -4286,7 +4406,9 @@ > > "@ > > \t%0., %1/m, %0., %3. > > movprfx\t%0, %2\;\t%0., %1/m, %0., > %3." > > - [(set_attr "movprfx" "*,yes")]) > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_mul, sve_mul_x")] > > +) > > > > ;; Predicated highpart multiplications, merging with zero. 
> > (define_insn "*cond__z" > > @@ -4303,7 +4425,9 @@ > > "@ > > movprfx\t%0., %1/z, %0.\;\t%0., > %1/m, %0., %3. > > movprfx\t%0., %1/z, %2.\;\t%0., > %1/m, %0., %3." > > - [(set_attr "movprfx" "yes")]) > > + [(set_attr "movprfx" "yes") > > + (set_attr "type" "sve_mul_x")] > > +) > > > > ;; -------------------------------------------------------------------= ------ > > ;; ---- [INT] Division > > @@ -4344,7 +4468,8 @@ > > \t%0., %1/m, %0., %3. > > r\t%0., %1/m, %0., %2. > > movprfx\t%0, %2\;\t%0., %1/m, %0., > %3." > > - [(set_attr "movprfx" "*,*,yes")] > > + [(set_attr "movprfx" "*,*,yes") > > + (set_attr "type" "sve_div, sve_div, sve_div_x")] > > ) > > > > ;; Predicated integer division with merging. > > @@ -4374,7 +4499,8 @@ > > "@ > > \t%0., %1/m, %0., %3. > > movprfx\t%0, %2\;\t%0., %1/m, %0., > %3." > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_div, sve_div_x")] > > ) > > > > ;; Predicated integer division, merging with the second input. > > @@ -4391,7 +4517,8 @@ > > "@ > > \t%0., %1/m, %0., %2. > > movprfx\t%0, %3\;\t%0., %1/m, %0., > %2." > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_div, sve_div_x")] > > ) > > > > ;; Predicated integer division, merging with an independent value. > > @@ -4421,7 +4548,8 @@ > > operands[4], operands[1])); > > operands[4] =3D operands[2] =3D operands[0]; > > } > > - [(set_attr "movprfx" "yes")] > > + [(set_attr "movprfx" "yes") > > + (set_attr "type" "sve_div_x")] > > ) > > > > ;; -------------------------------------------------------------------= ------ > > @@ -4444,7 +4572,8 @@ > > \t%0., %0., #%C2 > > movprfx\t%0, %1\;\t%0., %0., #%C2 > > \t%0.d, %1.d, %2.d" > > - [(set_attr "movprfx" "*,yes,*")] > > + [(set_attr "movprfx" "*,yes,*") > > + (set_attr "type" "sve_logic, sve_logic_x, sve_logic")] > > ) > > > > ;; Merging forms are handled through SVE_INT_BINARY. 
> > @@ -4487,6 +4616,7 @@ > > { > > operands[3] =3D CONSTM1_RTX (mode); > > } > > + [(set_attr "type" "sve_logic")] > > ) > > > > ;; Predicated BIC with merging. > > @@ -4517,7 +4647,8 @@ > > "@ > > bic\t%0., %1/m, %0., %3. > > movprfx\t%0, %2\;bic\t%0., %1/m, %0., %3." > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_logic, sve_logic_x")] > > ) > > > > ;; Predicated integer BIC, merging with an independent value. > > @@ -4545,7 +4676,8 @@ > > operands[4], operands[1])); > > operands[4] =3D operands[2] =3D operands[0]; > > } > > - [(set_attr "movprfx" "yes")] > > + [(set_attr "movprfx" "yes") > > + (set_attr "type" "sve_logic_x")] > > ) > > > > ;; -------------------------------------------------------------------= ------ > > @@ -4623,7 +4755,8 @@ > > && !register_operand (operands[3], mode)" > > [(set (match_dup 0) (ASHIFT:SVE_I (match_dup 2) (match_dup 3)))] > > "" > > - [(set_attr "movprfx" "*,*,*,yes")] > > + [(set_attr "movprfx" "*,*,*,yes") > > + (set_attr "type" "sve_shift, sve_shift, sve_shift, sve_shift_x")] > > ) > > > > ;; Unpredicated shift operations by a constant (post-RA only). > > @@ -4636,6 +4769,7 @@ > > (match_operand:SVE_I 2 "aarch64_simd_shift_imm")))] > > "TARGET_SVE && reload_completed" > > "\t%0., %1., #%2" > > + [(set_attr "type" "sve_shift")] > > ) > > > > ;; Predicated integer shift, merging with the first input. > > @@ -4652,7 +4786,8 @@ > > "@ > > \t%0., %1/m, %0., #%3 > > movprfx\t%0, %2\;\t%0., %1/m, %0., #%3" > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_shift, sve_shift_x")] > > ) > > > > ;; Predicated integer shift, merging with an independent value. 
> > @@ -4678,7 +4813,8 @@ > > operands[4], operands[1])); > > operands[4] =3D operands[2] =3D operands[0]; > > } > > - [(set_attr "movprfx" "yes")] > > + [(set_attr "movprfx" "yes") > > + (set_attr "type" "sve_shift_x")] > > ) > > > > ;; Unpredicated shifts of narrow elements by 64-bit amounts. > > @@ -4690,6 +4826,7 @@ > > SVE_SHIFT_WIDE))] > > "TARGET_SVE" > > "\t%0., %1., %2.d" > > + [(set_attr "type" "sve_shift")] > > ) > > > > ;; Merging predicated shifts of narrow elements by 64-bit amounts. > > @@ -4722,7 +4859,9 @@ > > "@ > > \t%0., %1/m, %0., %3.d > > movprfx\t%0, %2\;\t%0., %1/m, %0., > %3.d" > > - [(set_attr "movprfx" "*, yes")]) > > + [(set_attr "movprfx" "*, yes") > > + (set_attr "type" "sve_shift, sve_shift_x")] > > +) > > > > ;; Predicated shifts of narrow elements by 64-bit amounts, merging wit= h > zero. > > (define_insn "*cond__z" > > @@ -4739,7 +4878,9 @@ > > "@ > > movprfx\t%0., %1/z, %0.\;\t%0., > %1/m, %0., %3.d > > movprfx\t%0., %1/z, %2.\;\t%0., > %1/m, %0., %3.d" > > - [(set_attr "movprfx" "yes")]) > > + [(set_attr "movprfx" "yes") > > + (set_attr "type" "sve_shift_x")] > > +) > > > > ;; -------------------------------------------------------------------= ------ > > ;; ---- [INT] Shifts (rounding towards 0) > > @@ -4781,7 +4922,9 @@ > > "@ > > asrd\t%0., %1/m, %0., #%3 > > movprfx\t%0, %2\;asrd\t%0., %1/m, %0., #%3" > > - [(set_attr "movprfx" "*,yes")]) > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_shift_d, sve_shift_dx")] > > +) > > > > ;; Predicated shift with merging. > > (define_expand "@cond_" > > @@ -4825,7 +4968,9 @@ > > { > > operands[4] =3D CONSTM1_RTX (mode); > > } > > - [(set_attr "movprfx" "*,yes")]) > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_shift, sve_shift_x")] > > +) > > > > ;; Predicated shift, merging with an independent value. 
> > (define_insn_and_rewrite "*cond__any" > > @@ -4854,7 +4999,8 @@ > > operands[4], operands[1])); > > operands[4] =3D operands[2] =3D operands[0]; > > } > > - [(set_attr "movprfx" "yes")] > > + [(set_attr "movprfx" "yes") > > + (set_attr "type" "sve_shift_x")] > > ) > > > > ;; -------------------------------------------------------------------= ------ > > @@ -4876,6 +5022,7 @@ > > SVE_FP_BINARY_INT))] > > "TARGET_SVE" > > "\t%0., %1., %2." > > + [(set_attr "type" "sve_fp_trig")] > > ) > > > > ;; Predicated floating-point binary operations that take an integer > > @@ -4892,7 +5039,8 @@ > > "@ > > \t%0., %1/m, %0., %3. > > movprfx\t%0, %2\;\t%0., %1/m, %0., > %3." > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_fp_trig, sve_fp_trig_x")] > > ) > > > > ;; Predicated floating-point binary operations with merging, taking an > > @@ -4934,7 +5082,8 @@ > > { > > operands[4] =3D copy_rtx (operands[1]); > > } > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_fp_trig, sve_fp_trig_x")] > > ) > > > > (define_insn "*cond__2_strict" > > @@ -4953,7 +5102,8 @@ > > "@ > > \t%0., %1/m, %0., %3. > > movprfx\t%0, %2\;\t%0., %1/m, %0., > %3." 
> > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_fp_trig, sve_fp_trig_x")] > > ) > > > > ;; Predicated floating-point binary operations that take an integer as > > @@ -4992,7 +5142,8 @@ > > else > > FAIL; > > } > > - [(set_attr "movprfx" "yes")] > > + [(set_attr "movprfx" "yes") > > + (set_attr "type" "sve_fp_trig_x")] > > ) > > > > (define_insn_and_rewrite "*cond__any_strict" > > @@ -5021,7 +5172,8 @@ > > operands[4], operands[1])); > > operands[4] =3D operands[2] =3D operands[0]; > > } > > - [(set_attr "movprfx" "yes")] > > + [(set_attr "movprfx" "yes") > > + (set_attr "type" "sve_fp_trig_x")] > > ) > > > > ;; -------------------------------------------------------------------= ------ > > @@ -5042,7 +5194,8 @@ > > (match_operand:SVE_FULL_F 1 "register_operand" "w") > > (match_operand:SVE_FULL_F 2 "register_operand" "w")))] > > "TARGET_SVE && reload_completed" > > - "\t%0., %1., %2.") > > + "\t%0., %1., %2." > > +) > > > > ;; -------------------------------------------------------------------= ------ > > ;; ---- [FP] General binary arithmetic corresponding to unspecs > > @@ -5421,7 +5574,9 @@ > > && INTVAL (operands[4]) =3D=3D SVE_RELAXED_GP" > > [(set (match_dup 0) (plus:SVE_FULL_F (match_dup 2) (match_dup 3)))] > > "" > > - [(set_attr "movprfx" "*,*,*,*,yes,yes,yes")] > > + [(set_attr "movprfx" "*,*,*,*,yes,yes,yes") > > + (set_attr "type" "sve_fp_arith, sve_fp_arith, sve_fp_arith, sve_fp_= arith, > > + sve_fp_arith_x, sve_fp_arith_x, sve_fp_arith_x")] > > ) > > > > ;; Predicated floating-point addition of a constant, merging with the > > @@ -5448,7 +5603,8 @@ > > { > > operands[4] =3D copy_rtx (operands[1]); > > } > > - [(set_attr "movprfx" "*,*,yes,yes")] > > + [(set_attr "movprfx" "*,*,yes,yes") > > + (set_attr "type" "sve_fp_arith, sve_fp_arith, sve_fp_arith_x, > sve_fp_arith_x")] > > ) > > > > (define_insn "*cond_add_2_const_strict" > > @@ -5469,7 +5625,8 @@ > > fsub\t%0., %1/m, %0., #%N3 > > movprfx\t%0, 
%2\;fadd\t%0., %1/m, %0., #%3 > > movprfx\t%0, %2\;fsub\t%0., %1/m, %0., #%N3" > > - [(set_attr "movprfx" "*,*,yes,yes")] > > + [(set_attr "movprfx" "*,*,yes,yes") > > + (set_attr "type" "sve_fp_arith, sve_fp_arith, sve_fp_arith_x, > sve_fp_arith_x")] > > ) > > > > ;; Predicated floating-point addition of a constant, merging with an > > @@ -5509,7 +5666,8 @@ > > else > > FAIL; > > } > > - [(set_attr "movprfx" "yes")] > > + [(set_attr "movprfx" "yes") > > + (set_attr "type" "sve_fp_arith_x")] > > ) > > > > (define_insn_and_rewrite "*cond_add_any_const_strict" > > @@ -5540,7 +5698,8 @@ > > operands[4], operands[1])); > > operands[4] =3D operands[2] =3D operands[0]; > > } > > - [(set_attr "movprfx" "yes")] > > + [(set_attr "movprfx" "yes") > > + (set_attr "type" "sve_fp_arith_x")] > > ) > > > > ;; Register merging forms are handled through SVE_COND_FP_BINARY. > > @@ -5565,7 +5724,8 @@ > > "@ > > fcadd\t%0., %1/m, %0., %3., # > > movprfx\t%0, %2\;fcadd\t%0., %1/m, %0., > %3., #" > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_fp_arith_c, sve_fp_arith_cx")] > > ) > > > > ;; Predicated FCADD with merging. > > @@ -5619,7 +5779,8 @@ > > { > > operands[4] =3D copy_rtx (operands[1]); > > } > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_fp_arith_c, sve_fp_arith_cx")] > > ) > > > > (define_insn "*cond__2_strict" > > @@ -5638,7 +5799,8 @@ > > "@ > > fcadd\t%0., %1/m, %0., %3., # > > movprfx\t%0, %2\;fcadd\t%0., %1/m, %0., > %3., #" > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_fp_arith_c, sve_fp_arith_cx")] > > ) > > > > ;; Predicated FCADD, merging with an independent value. 
> > @@ -5675,7 +5837,8 @@ > > else > > FAIL; > > } > > - [(set_attr "movprfx" "yes")] > > + [(set_attr "movprfx" "yes") > > + (set_attr "type" "sve_fp_arith_cx")] > > ) > > > > (define_insn_and_rewrite "*cond__any_strict" > > @@ -5704,7 +5867,8 @@ > > operands[4], operands[1])); > > operands[4] =3D operands[2] =3D operands[0]; > > } > > - [(set_attr "movprfx" "yes")] > > + [(set_attr "movprfx" "yes") > > + (set_attr "type" "sve_fp_arith_cx")] > > ) > > > > ;; -------------------------------------------------------------------= ------ > > @@ -5739,7 +5903,9 @@ > > && INTVAL (operands[4]) =3D=3D SVE_RELAXED_GP" > > [(set (match_dup 0) (minus:SVE_FULL_F (match_dup 2) (match_dup 3)))] > > "" > > - [(set_attr "movprfx" "*,*,*,*,yes,yes")] > > + [(set_attr "movprfx" "*,*,*,*,yes,yes") > > + (set_attr "type" "sve_fp_arith, sve_fp_arith, sve_fp_arith, > > + sve_fp_arith, sve_fp_arith_x, sve_fp_arith_x")] > > ) > > > > ;; Predicated floating-point subtraction from a constant, merging with= the > > @@ -5764,7 +5930,8 @@ > > { > > operands[4] =3D copy_rtx (operands[1]); > > } > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_fp_arith, sve_fp_arith_x")] > > ) > > > > (define_insn "*cond_sub_3_const_strict" > > @@ -5783,7 +5950,8 @@ > > "@ > > fsubr\t%0., %1/m, %0., #%2 > > movprfx\t%0, %3\;fsubr\t%0., %1/m, %0., #%2" > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_fp_arith, sve_fp_arith_x")] > > ) > > > > ;; Predicated floating-point subtraction from a constant, merging with= an > > @@ -5820,7 +5988,8 @@ > > else > > FAIL; > > } > > - [(set_attr "movprfx" "yes")] > > + [(set_attr "movprfx" "yes") > > + (set_attr "type" "sve_fp_arith_x")] > > ) > > > > (define_insn_and_rewrite "*cond_sub_const_strict" > > @@ -5848,7 +6017,8 @@ > > operands[4], operands[1])= ); > > operands[4] =3D operands[3] =3D operands[0]; > > } > > - [(set_attr "movprfx" "yes")] > > + [(set_attr "movprfx" 
"yes") > > + (set_attr "type" "sve_fp_arith_x")] > > ) > > ;; Register merging forms are handled through SVE_COND_FP_BINARY. > > > > @@ -5896,7 +6066,8 @@ > > { > > operands[5] =3D copy_rtx (operands[1]); > > } > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_fp_arith, sve_fp_arith_x")] > > ) > > > > (define_insn "*aarch64_pred_abd_strict" > > @@ -5915,7 +6086,8 @@ > > "@ > > fabd\t%0., %1/m, %0., %3. > > movprfx\t%0, %2\;fabd\t%0., %1/m, %0., %3." > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_fp_arith, sve_fp_arith_x")] > > ) > > > > (define_expand "@aarch64_cond_abd" > > @@ -5968,7 +6140,8 @@ > > operands[4] =3D copy_rtx (operands[1]); > > operands[5] =3D copy_rtx (operands[1]); > > } > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_fp_arith, sve_fp_arith_x")] > > ) > > > > (define_insn "*aarch64_cond_abd_2_strict" > > @@ -5991,7 +6164,8 @@ > > "@ > > fabd\t%0., %1/m, %0., %3. > > movprfx\t%0, %2\;fabd\t%0., %1/m, %0., %3." > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_fp_arith, sve_fp_arith_x")] > > ) > > > > ;; Predicated floating-point absolute difference, merging with the sec= ond > > @@ -6022,7 +6196,8 @@ > > operands[4] =3D copy_rtx (operands[1]); > > operands[5] =3D copy_rtx (operands[1]); > > } > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_fp_arith, sve_fp_arith_x")] > > ) > > > > (define_insn "*aarch64_cond_abd_3_strict" > > @@ -6045,7 +6220,8 @@ > > "@ > > fabd\t%0., %1/m, %0., %2. > > movprfx\t%0, %3\;fabd\t%0., %1/m, %0., %2." 
> > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_fp_arith, sve_fp_arith_x")] > > ) > > > > ;; Predicated floating-point absolute difference, merging with an > > @@ -6094,7 +6270,8 @@ > > else > > FAIL; > > } > > - [(set_attr "movprfx" "yes")] > > + [(set_attr "movprfx" "yes") > > + (set_attr "type" "sve_fp_arith_x")] > > ) > > > > (define_insn_and_rewrite "*aarch64_cond_abd_any_strict" > > @@ -6130,7 +6307,8 @@ > > operands[4], operands[1])); > > operands[4] =3D operands[3] =3D operands[0]; > > } > > - [(set_attr "movprfx" "yes")] > > + [(set_attr "movprfx" "yes") > > + (set_attr "type" "sve_fp_arith_x")] > > ) > > > > ;; -------------------------------------------------------------------= ------ > > @@ -6163,7 +6341,9 @@ > > && INTVAL (operands[4]) =3D=3D SVE_RELAXED_GP" > > [(set (match_dup 0) (mult:SVE_FULL_F (match_dup 2) (match_dup 3)))] > > "" > > - [(set_attr "movprfx" "*,*,*,yes,yes")] > > + [(set_attr "movprfx" "*,*,*,yes,yes") > > + (set_attr "type" "sve_fp_mul, *, sve_fp_mul, > > + sve_fp_mul_x, sve_fp_mul_x")] > > ) > > > > ;; Merging forms are handled through SVE_COND_FP_BINARY and > > @@ -6180,6 +6360,7 @@ > > (match_operand:SVE_FULL_F 1 "register_operand" "w")))] > > "TARGET_SVE" > > "fmul\t%0., %1., %2.[%3]" > > + [(set_attr "type" "sve_fp_mul")] > > ) > > > > ;; -------------------------------------------------------------------= ------ > > @@ -6243,6 +6424,7 @@ > > LOGICALF))] > > "TARGET_SVE" > > "\t%0.d, %1.d, %2.d" > > + [(set_attr "type" "sve_logic")] > > ) > > > > ;; -------------------------------------------------------------------= ------ > > @@ -6363,7 +6545,9 @@ > > \t%0., %1/m, %0., %3. > > movprfx\t%0, %2\;\t%0., %1/m, %0., #%3 > > movprfx\t%0, %2\;\t%0., %1/m, %0., > %3." 
> > - [(set_attr "movprfx" "*,*,yes,yes")] > > + [(set_attr "movprfx" "*,*,yes,yes") > > + (set_attr "type" "sve_fp_arith, sve_fp_arith, > > + sve_fp_arith_x, sve_fp_arith_x")] > > ) > > > > ;; Merging forms are handled through SVE_COND_FP_BINARY and > > @@ -6390,6 +6574,7 @@ > > (match_operand:PRED_ALL 2 "register_operand" "Upa")))] > > "TARGET_SVE" > > "and\t%0.b, %1/z, %2.b, %2.b" > > + [(set_attr "type" "sve_logic_p")] > > ) > > > > ;; Unpredicated predicate EOR and ORR. > > @@ -6416,6 +6601,7 @@ > > (match_operand:PRED_ALL 1 "register_operand" "Upa")))] > > "TARGET_SVE" > > "\t%0.b, %1/z, %2.b, %3.b" > > + [(set_attr "type" "sve_logic_p")] > > ) > > > > ;; Perform a logical operation on operands 2 and 3, using operand 1 as > > @@ -6438,6 +6624,7 @@ > > (match_dup 4)))] > > "TARGET_SVE" > > "s\t%0.b, %1/z, %2.b, %3.b" > > + [(set_attr "type" "sve_logic_ps")] > > ) > > > > ;; Same with just the flags result. > > @@ -6456,6 +6643,7 @@ > > (clobber (match_scratch:VNx16BI 0 "=3DUpa"))] > > "TARGET_SVE" > > "s\t%0.b, %1/z, %2.b, %3.b" > > + [(set_attr "type" "sve_logic_ps")] > > ) > > > > ;; -------------------------------------------------------------------= ------ > > @@ -6476,6 +6664,7 @@ > > (match_operand:PRED_ALL 1 "register_operand" "Upa")))] > > "TARGET_SVE" > > "\t%0.b, %1/z, %2.b, %3.b" > > + [(set_attr "type" "sve_logic_p")] > > ) > > > > ;; Same, but set the flags as a side-effect. > > @@ -6499,6 +6688,7 @@ > > (match_dup 4)))] > > "TARGET_SVE" > > "s\t%0.b, %1/z, %2.b, %3.b" > > + [(set_attr "type" "sve_logic_ps")] > > ) > > > > ;; Same with just the flags result. 
> > @@ -6518,6 +6708,7 @@ > > (clobber (match_scratch:VNx16BI 0 "=3DUpa"))] > > "TARGET_SVE" > > "s\t%0.b, %1/z, %2.b, %3.b" > > + [(set_attr "type" "sve_logic_ps")] > > ) > > > > ;; -------------------------------------------------------------------= ------ > > @@ -6538,6 +6729,7 @@ > > (match_operand:PRED_ALL 1 "register_operand" "Upa")))] > > "TARGET_SVE" > > "\t%0.b, %1/z, %2.b, %3.b" > > + [(set_attr "type" "sve_logic")] > > ) > > > > ;; Same, but set the flags as a side-effect. > > @@ -6562,6 +6754,7 @@ > > (match_dup 4)))] > > "TARGET_SVE" > > "s\t%0.b, %1/z, %2.b, %3.b" > > + [(set_attr "type" "sve_logic_ps")] > > ) > > > > ;; Same with just the flags result. > > @@ -6582,6 +6775,7 @@ > > (clobber (match_scratch:VNx16BI 0 "=3DUpa"))] > > "TARGET_SVE" > > "s\t%0.b, %1/z, %2.b, %3.b" > > + [(set_attr "type" "sve_logic_ps")] > > ) > > > > ;; > =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D > =3D=3D=3D=3D=3D=3D=3D=3D=3D > > @@ -6631,7 +6825,8 @@ > > mad\t%0., %1/m, %3., %4. > > mla\t%0., %1/m, %2., %3. > > movprfx\t%0, %4\;mla\t%0., %1/m, %2., %3." > > - [(set_attr "movprfx" "*,*,yes")] > > + [(set_attr "movprfx" "*,*,yes") > > + (set_attr "type" "sve_mla, sve_mla, sve_mla_x")] > > ) > > > > ;; Predicated integer addition of product with merging. > > @@ -6673,7 +6868,8 @@ > > "@ > > mad\t%0., %1/m, %3., %4. > > movprfx\t%0, %2\;mad\t%0., %1/m, %3., %4." > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_mla, sve_mla_x")] > > ) > > > > ;; Predicated integer addition of product, merging with the third inpu= t. > > @@ -6692,7 +6888,8 @@ > > "@ > > mla\t%0., %1/m, %2., %3. > > movprfx\t%0, %4\;mla\t%0., %1/m, %2., %3." 
> > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_mla, sve_mla_x")] > > ) > > > > ;; Predicated integer addition of product, merging with an independent > value. > > @@ -6726,7 +6923,8 @@ > > operands[5], operands[1])); > > operands[5] =3D operands[4] =3D operands[0]; > > } > > - [(set_attr "movprfx" "yes")] > > + [(set_attr "movprfx" "yes") > > + (set_attr "type" "sve_mla_x")] > > ) > > > > ;; -------------------------------------------------------------------= ------ > > @@ -6772,7 +6970,8 @@ > > msb\t%0., %1/m, %3., %4. > > mls\t%0., %1/m, %2., %3. > > movprfx\t%0, %4\;mls\t%0., %1/m, %2., %3." > > - [(set_attr "movprfx" "*,*,yes")] > > + [(set_attr "movprfx" "*,*,yes") > > + (set_attr "type" "sve_mla, sve_mla, sve_mla_x")] > > ) > > > > ;; Predicated integer subtraction of product with merging. > > @@ -6814,7 +7013,8 @@ > > "@ > > msb\t%0., %1/m, %3., %4. > > movprfx\t%0, %2\;msb\t%0., %1/m, %3., %4." > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_mla, sve_mla_x")] > > ) > > > > ;; Predicated integer subtraction of product, merging with the third i= nput. > > @@ -6833,7 +7033,8 @@ > > "@ > > mls\t%0., %1/m, %2., %3. > > movprfx\t%0, %4\;mls\t%0., %1/m, %2., %3." > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_mla, sve_mla_x")] > > ) > > > > ;; Predicated integer subtraction of product, merging with an > > @@ -6868,7 +7069,8 @@ > > operands[5], operands[1])); > > operands[5] =3D operands[4] =3D operands[0]; > > } > > - [(set_attr "movprfx" "yes")] > > + [(set_attr "movprfx" "yes") > > + (set_attr "type" "sve_mla_x")] > > ) > > > > ;; -------------------------------------------------------------------= ------ > > @@ -6894,7 +7096,8 @@ > > "@ > > dot\\t%0., %1., %2. > > movprfx\t%0, %3\;dot\\t%0., %1., > %2." 
> > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_dot, sve_dot_x")] > > ) > > > > ;; Four-element integer dot-product by selected lanes with accumulatio= n. > > @@ -6913,7 +7116,8 @@ > > "@ > > dot\\t%0., %1., %2.[%3] > > movprfx\t%0, %4\;dot\\t%0., %1., > %2.[%3]" > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_dot, sve_dot_x")] > > ) > > > > (define_insn "@dot_prod" > > @@ -6928,7 +7132,8 @@ > > "@ > > dot\\t%0.s, %1.b, %2.b > > movprfx\t%0, %3\;dot\\t%0.s, %1.b, %2.b" > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_dot, sve_dot_x")] > > ) > > > > (define_insn "@aarch64_dot_prod_lane" > > @@ -6946,7 +7151,8 @@ > > "@ > > dot\\t%0.s, %1.b, %2.b[%3] > > movprfx\t%0, %4\;dot\\t%0.s, %1.b, %2.b[%3]" > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_dot, sve_dot_x")] > > ) > > > > ;; -------------------------------------------------------------------= ------ > > @@ -7000,7 +7206,8 @@ > > "@ > > mmla\\t%0.s, %2.b, %3.b > > movprfx\t%0, %1\;mmla\\t%0.s, %2.b, %3.b" > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_mmla, sve_mmla_x")] > > ) > > > > ;; -------------------------------------------------------------------= ------ > > @@ -7048,7 +7255,8 @@ > > \t%0., %1/m, %2., %3. > > \t%0., %1/m, %3., %4. > > movprfx\t%0, %4\;\t%0., %1/m, %2., > %3." > > - [(set_attr "movprfx" "*,*,yes")] > > + [(set_attr "movprfx" "*,*,yes") > > + (set_attr "type" "sve_fp_mla, sve_fp_mla, sve_fp_mla_x")] > > ) > > > > ;; Predicated floating-point ternary operations with merging. 
> > @@ -7096,7 +7304,8 @@ > > { > > operands[5] =3D copy_rtx (operands[1]); > > } > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_fp_mla, sve_fp_mla_x")] > > ) > > > > (define_insn "*cond__2_strict" > > @@ -7116,7 +7325,8 @@ > > "@ > > \t%0., %1/m, %3., %4. > > movprfx\t%0, %2\;\t%0., %1/m, %3., > %4." > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_fp_mla, sve_fp_mla_x")] > > ) > > > > ;; Predicated floating-point ternary operations, merging with the > > @@ -7142,7 +7352,8 @@ > > { > > operands[5] =3D copy_rtx (operands[1]); > > } > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_fp_mla, sve_fp_mla_x")] > > ) > > > > (define_insn "*cond__4_strict" > > @@ -7162,7 +7373,8 @@ > > "@ > > \t%0., %1/m, %2., %3. > > movprfx\t%0, %4\;\t%0., %1/m, %2., > %3." > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_fp_mla, sve_fp_mla_x")] > > ) > > > > ;; Predicated floating-point ternary operations, merging with an > > @@ -7206,7 +7418,8 @@ > > else > > FAIL; > > } > > - [(set_attr "movprfx" "yes")] > > + [(set_attr "movprfx" "yes") > > + (set_attr "type" "sve_fp_mla_x")] > > ) > > > > (define_insn_and_rewrite "*cond__any_strict" > > @@ -7241,7 +7454,8 @@ > > operands[5], operands[1])); > > operands[5] =3D operands[4] =3D operands[0]; > > } > > - [(set_attr "movprfx" "yes")] > > + [(set_attr "movprfx" "yes") > > + (set_attr "type" "sve_fp_mla_x")] > > ) > > > > ;; Unpredicated FMLA and FMLS by selected lanes. 
It doesn't seem worth > using > > @@ -7260,7 +7474,8 @@ > > "@ > > \t%0., %1., %2.[%3] > > movprfx\t%0, %4\;\t%0., %1., > %2.[%3]" > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_fp_mla, sve_fp_mla_x")] > > ) > > > > ;; ------------------------------------------------------------------------- > > @@ -7284,7 +7499,8 @@ > > "@ > > fcmla\t%0., %1/m, %2., %3., # > > movprfx\t%0, %4\;fcmla\t%0., %1/m, %2., > %3., #" > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_fp_mla_c, sve_fp_mla_cx")] > > ) > > > > ;; unpredicated optab pattern for auto-vectorizer > > @@ -7382,7 +7598,8 @@ > > { > > operands[5] = copy_rtx (operands[1]); > > } > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_fp_mla_c, sve_fp_mla_cx")] > > ) > > > > (define_insn "*cond__4_strict" > > @@ -7402,7 +7619,8 @@ > > "@ > > fcmla\t%0., %1/m, %2., %3., # > > movprfx\t%0, %4\;fcmla\t%0., %1/m, %2., > %3., #" > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_fp_mla_c, sve_fp_mla_cx")] > > ) > > > > ;; Predicated FCMLA, merging with an independent value. > > @@ -7440,7 +7658,8 @@ > > else > > FAIL; > > } > > - [(set_attr "movprfx" "yes")] > > + [(set_attr "movprfx" "yes") > > + (set_attr "type" "sve_fp_mla_cx")] > > ) > > > > (define_insn_and_rewrite "*cond__any_strict" > > @@ -7470,7 +7689,8 @@ > > operands[5], operands[1])); > > operands[5] = operands[4] = operands[0]; > > } > > - [(set_attr "movprfx" "yes")] > > + [(set_attr "movprfx" "yes") > > + (set_attr "type" "sve_fp_mla_cx")] > > ) > > > > ;; Unpredicated FCMLA with indexing.
> > @@ -7488,7 +7708,8 @@ > > "@ > > fcmla\t%0., %1., %2.[%3], # > > movprfx\t%0, %4\;fcmla\t%0., %1., %2.[%3], > #" > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_fp_mla_c, sve_fp_mla_cx")] > > ) > > > > ;; -------------------------------------------------------------------= ------ > > @@ -7509,7 +7730,8 @@ > > "@ > > ftmad\t%0., %0., %2., #%3 > > movprfx\t%0, %1\;ftmad\t%0., %0., %2., #%3" > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_fp_trig, sve_fp_trig_x")] > > ) > > > > ;; -------------------------------------------------------------------= ------ > > @@ -7571,7 +7793,8 @@ > > "@ > > \\t%0., %2., %3. > > movprfx\t%0, %1\;\\t%0., %2., > %3." > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_fp_mmla, sve_fp_mmla_x")] > > ) > > > > ;; > =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D > =3D=3D=3D=3D=3D=3D=3D=3D=3D > > @@ -7639,7 +7862,9 @@ > > movprfx\t%0., %3/z, %0.\;fmov\t%0., %3/m, > #%1 > > movprfx\t%0, %2\;mov\t%0., %3/m, #%I1 > > movprfx\t%0, %2\;fmov\t%0., %3/m, #%1" > > - [(set_attr "movprfx" "*,*,*,*,yes,yes,yes")] > > + [(set_attr "movprfx" "*,*,*,*,yes,yes,yes") > > + (set_attr "type" "sve_move, sve_move, sve_move, sve_fp_move, > > + sve_fp_move_x, sve_move_x, sve_move_x")] > > ) > > > > ;; Optimize selects between a duplicated scalar variable and another v= ector, > > @@ -7662,7 +7887,9 @@ > > movprfx\t%0., %3/z, %0.\;mov\t%0., %3/m, > %1 > > movprfx\t%0, %2\;mov\t%0., %3/m, %1 > > movprfx\t%0, %2\;mov\t%0., %3/m, %1" > > - [(set_attr "movprfx" "*,*,yes,yes,yes,yes")] > > + [(set_attr "movprfx" "*,*,yes,yes,yes,yes") > > + (set_attr "type" "sve_move, sve_move, > > + sve_move_x, sve_move_x, sve_move_x, sve_move_x")] > > ) > > > > ;; 
-------------------------------------------------------------------= ------ > > @@ -7813,6 +8040,7 @@ > > "@ > > cmp\t%0., %1/z, %3., #%4 > > cmp\t%0., %1/z, %3., %4." > > + [(set_attr "type" "sve_compare_s, sve_compare_s")] > > ) > > > > ;; Predicated integer comparisons in which both the flag and predicate > > @@ -7849,6 +8077,7 @@ > > operands[6] =3D copy_rtx (operands[4]); > > operands[7] =3D operands[5]; > > } > > + [(set_attr "type" "sve_compare_s, sve_compare_s")] > > ) > > > > ;; Predicated integer comparisons in which only the flags result is > > @@ -7878,6 +8107,7 @@ > > operands[6] =3D copy_rtx (operands[4]); > > operands[7] =3D operands[5]; > > } > > + [(set_attr "type" "sve_compare_s, sve_compare_s")] > > ) > > > > ;; Predicated integer comparisons, formed by combining a PTRUE- > predicated > > @@ -7925,6 +8155,7 @@ > > (clobber (reg:CC_NZC CC_REGNUM))] > > "TARGET_SVE" > > "cmp\t%0., %1/z, %3., %4.d" > > + [(set_attr "type" "sve_compare_s")] > > ) > > > > ;; Predicated integer wide comparisons in which both the flag and > > @@ -7956,6 +8187,7 @@ > > "TARGET_SVE > > && aarch64_sve_same_pred_for_ptest_p (&operands[4], &operands[6])" > > "cmp\t%0., %1/z, %2., %3.d" > > + [(set_attr "type" "sve_compare_s")] > > ) > > > > ;; Predicated integer wide comparisons in which only the flags result > > @@ -7979,6 +8211,7 @@ > > "TARGET_SVE > > && aarch64_sve_same_pred_for_ptest_p (&operands[4], &operands[6])" > > "cmp\t%0., %1/z, %2., %3.d" > > + [(set_attr "type" "sve_compare_s")] > > ) > > > > ;; -------------------------------------------------------------------= ------ > > @@ -8007,6 +8240,7 @@ > > (clobber (reg:CC_NZC CC_REGNUM))] > > "TARGET_SVE" > > "while\t%0., %1, %2" > > + [(set_attr "type" "sve_loop_gs")] > > ) > > > > ;; The WHILE instructions set the flags in the same way as a PTEST wit= h > > @@ -8036,6 +8270,7 @@ > > operands[3] =3D CONSTM1_RTX (VNx16BImode); > > operands[4] =3D CONSTM1_RTX (mode); > > } > > + [(set_attr "type" "sve_loop_gs")] > > ) 
> > > > ;; Same, but handle the case in which only the flags result is useful. > > @@ -8060,6 +8295,7 @@ > > operands[3] =3D CONSTM1_RTX (VNx16BImode); > > operands[4] =3D CONSTM1_RTX (mode); > > } > > + [(set_attr "type" "sve_loop_gs")] > > ) > > > > ;; -------------------------------------------------------------------= ------ > > @@ -8104,6 +8340,7 @@ > > "@ > > fcm\t%0., %1/z, %3., #0.0 > > fcm\t%0., %1/z, %3., %4." > > + [(set_attr "type" "sve_fp_compare, sve_fp_compare")] > > ) > > > > ;; Same for unordered comparisons. > > @@ -8117,6 +8354,7 @@ > > UNSPEC_COND_FCMUO))] > > "TARGET_SVE" > > "fcmuo\t%0., %1/z, %3., %4." > > + [(set_attr "type" "sve_fp_compare")] > > ) > > > > ;; Floating-point comparisons predicated on a PTRUE, with the results > ANDed > > @@ -8204,10 +8442,10 @@ > > (not: > > (match_dup 5)) > > (match_dup 4)))] > > -{ > > - if (can_create_pseudo_p ()) > > - operands[5] =3D gen_reg_rtx (mode); > > -} > > + { > > + if (can_create_pseudo_p ()) > > + operands[5] =3D gen_reg_rtx (mode); > > + } > > ) > > > > ;; Make sure that we expand to a nor when the operand 4 of > > @@ -8245,10 +8483,10 @@ > > (not: > > (match_dup 4))) > > (match_dup 1)))] > > -{ > > - if (can_create_pseudo_p ()) > > - operands[5] =3D gen_reg_rtx (mode); > > -} > > + { > > + if (can_create_pseudo_p ()) > > + operands[5] =3D gen_reg_rtx (mode); > > + } > > ) > > > > (define_insn_and_split "*fcmuo_bic_combine" > > @@ -8280,10 +8518,10 @@ > > (not: > > (match_dup 5)) > > (match_dup 4)))] > > -{ > > - if (can_create_pseudo_p ()) > > - operands[5] =3D gen_reg_rtx (mode); > > -} > > + { > > + if (can_create_pseudo_p ()) > > + operands[5] =3D gen_reg_rtx (mode); > > + } > > ) > > > > ;; Same for unordered comparisons. 
> > @@ -8320,10 +8558,10 @@ > > (not: > > (match_dup 4))) > > (match_dup 1)))] > > -{ > > - if (can_create_pseudo_p ()) > > - operands[5] =3D gen_reg_rtx (mode); > > -} > > + { > > + if (can_create_pseudo_p ()) > > + operands[5] =3D gen_reg_rtx (mode); > > + } > > ) > > > > ;; -------------------------------------------------------------------= ------ > > @@ -8380,6 +8618,7 @@ > > operands[5] =3D copy_rtx (operands[1]); > > operands[6] =3D copy_rtx (operands[1]); > > } > > + [(set_attr "type" "sve_fp_compare")] > > ) > > > > (define_insn "*aarch64_pred_fac_strict" > > @@ -8400,6 +8639,7 @@ > > SVE_COND_FP_ABS_CMP))] > > "TARGET_SVE" > > "fac\t%0., %1/z, %2., %3." > > + [(set_attr "type" "sve_fp_compare")] > > ) > > > > ;; -------------------------------------------------------------------= ------ > > @@ -8420,6 +8660,7 @@ > > (match_operand:PRED_ALL 2 "register_operand" "Upa"))))] > > "TARGET_SVE" > > "sel\t%0.b, %3, %1.b, %2.b" > > + [(set_attr "type" "sve_sel_p")] > > ) > > > > ;; -------------------------------------------------------------------= ------ > > @@ -8468,6 +8709,7 @@ > > UNSPEC_PTEST))] > > "TARGET_SVE" > > "ptest\t%0, %3.b" > > + [(set_attr "type" "sve_set_ps")] > > ) > > > > ;; > =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D > =3D=3D=3D=3D=3D=3D=3D=3D=3D > > @@ -8495,6 +8737,7 @@ > > "@ > > clast\t%0, %2, %0, %3. > > clast\t%0, %2, %0, %3." > > + [(set_attr "type" "sve_cext, sve_cext")] > > ) > > > > (define_insn "@aarch64_fold_extract_vector__" > > @@ -8508,6 +8751,8 @@ > > "@ > > clast\t%0., %2, %0., %3. > > movprfx\t%0, %1\;clast\t%0., %2, %0., > %3." 
> > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_cext, sve_cext_x")] > > ) > > > > ;; -------------------------------------------------------------------= ------ > > @@ -8548,6 +8793,7 @@ > > SVE_INT_ADDV))] > > "TARGET_SVE && >=3D " > > "addv\t%d0, %1, %2." > > + [(set_attr "type" "sve_arith_r")] > > ) > > > > ;; Unpredicated integer reductions. > > @@ -8570,6 +8816,7 @@ > > SVE_INT_REDUCTION))] > > "TARGET_SVE" > > "\t%0, %1, %2." > > + [(set_attr "type" "sve_arith_r")] > > ) > > > > ;; -------------------------------------------------------------------= ------ > > @@ -8614,6 +8861,7 @@ > > SVE_FP_REDUCTION))] > > "TARGET_SVE" > > "\t%0, %1, %2." > > + [(set_attr "type" "sve_fp_arith_r")] > > ) > > > > ;; -------------------------------------------------------------------= ------ > > @@ -8645,6 +8893,7 @@ > > UNSPEC_FADDA))] > > "TARGET_SVE" > > "fadda\t%0, %3, %0, %2." > > + [(set_attr "type" "sve_fp_arith_a")] > > ) > > > > ;; > =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D > =3D=3D=3D=3D=3D=3D=3D=3D=3D > > @@ -8679,6 +8928,7 @@ > > UNSPEC_TBL))] > > "TARGET_SVE" > > "tbl\t%0., %1., %2." > > + [(set_attr "type" "sve_tbl")] > > ) > > > > ;; -------------------------------------------------------------------= ------ > > @@ -8699,6 +8949,7 @@ > > UNSPEC_SVE_COMPACT))] > > "TARGET_SVE" > > "compact\t%0., %1, %2." > > + [(set_attr "type" "sve_cext")] > > ) > > > > ;; Duplicate one element of a vector. > > @@ -8711,6 +8962,7 @@ > > "TARGET_SVE > > && IN_RANGE (INTVAL (operands[2]) * / 8, 0, 63)" > > "dup\t%0., %1.[%2]" > > + [(set_attr "type" "sve_splat")] > > ) > > > > ;; Use DUP.Q to duplicate a 128-bit segment of a register. 
> > @@ -8747,6 +8999,7 @@ > > operands[2] =3D gen_int_mode (byte / 16, DImode); > > return "dup\t%0.q, %1.q[%2]"; > > } > > + [(set_attr "type" "sve_splat")] > > ) > > > > ;; Reverse the order of elements within a full vector. > > @@ -8756,7 +9009,9 @@ > > [(match_operand:SVE_ALL 1 "register_operand" "w")] > > UNSPEC_REV))] > > "TARGET_SVE" > > - "rev\t%0., %1.") > > + "rev\t%0., %1." > > + [(set_attr "type" "sve_rev")] > > +) > > > > ;; -------------------------------------------------------------------= ------ > > ;; ---- [INT,FP] Special-purpose binary permutes > > @@ -8784,7 +9039,8 @@ > > "@ > > splice\t%0., %1, %0., %3. > > movprfx\t%0, %2\;splice\t%0., %1, %0., %3." > > - [(set_attr "movprfx" "*, yes")] > > + [(set_attr "movprfx" "*, yes") > > + (set_attr "type" "sve_cext, sve_cext_x")] > > ) > > > > ;; Permutes that take half the elements from one vector and half the > > @@ -8797,6 +9053,7 @@ > > PERMUTE))] > > "TARGET_SVE" > > "\t%0., %1., %2." > > + [(set_attr "type" "sve_ext")] > > ) > > > > ;; Apply PERMUTE to 128-bit sequences. The behavior of these patterns > > @@ -8809,6 +9066,7 @@ > > PERMUTEQ))] > > "TARGET_SVE_F64MM" > > "\t%0.q, %1.q, %2.q" > > + [(set_attr "type" "sve_ext")] > > ) > > > > ;; Concatenate two vectors and extract a subvector. Note that the > > @@ -8828,7 +9086,8 @@ > > ? "ext\\t%0.b, %0.b, %2.b, #%3" > > : "movprfx\t%0, %1\;ext\\t%0.b, %0.b, %2.b, #%3"); > > } > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_ext, sve_ext_x")] > > ) > > > > ;; -------------------------------------------------------------------= ------ > > @@ -8843,7 +9102,9 @@ > > (unspec:PRED_ALL [(match_operand:PRED_ALL 1 "register_operand" > "Upa")] > > UNSPEC_REV))] > > "TARGET_SVE" > > - "rev\t%0., %1.") > > + "rev\t%0., %1." 
> > + [(set_attr "type" "sve_rev_p")] > > +) > > > > ;; -------------------------------------------------------------------= ------ > > ;; ---- [PRED] Special-purpose binary permutes > > @@ -8866,6 +9127,7 @@ > > PERMUTE))] > > "TARGET_SVE" > > "\t%0., %1., %2." > > + [(set_attr "type" "sve_trn_p")] > > ) > > > > ;; Special purpose permute used by the predicate generation instructio= ns. > > @@ -8880,6 +9142,7 @@ > > UNSPEC_TRN1_CONV))] > > "TARGET_SVE" > > "trn1\t%0., %1., > %2." > > + [(set_attr "type" "sve_trn_p")] > > ) > > > > ;; > =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D > =3D=3D=3D=3D=3D=3D=3D=3D=3D > > @@ -8903,6 +9166,7 @@ > > UNSPEC_PACK))] > > "TARGET_SVE" > > "uzp1\t%0., %1., %2." > > + [(set_attr "type" "sve_zip")] > > ) > > > > ;; -------------------------------------------------------------------= ------ > > @@ -8939,6 +9203,7 @@ > > UNPACK))] > > "TARGET_SVE" > > "unpk\t%0., %1." > > + [(set_attr "type" "sve_upk")] > > ) > > > > ;; -------------------------------------------------------------------= ------ > > @@ -8976,7 +9241,8 @@ > > "@ > > fcvtz\t%0., %1/m, %2. > > movprfx\t%0, %2\;fcvtz\t%0., %1/m, > %2." > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_fp_to_int, sve_fp_to_int_x")] > > ) > > > > ;; Predicated narrowing float-to-integer conversion. > > @@ -8991,7 +9257,8 @@ > > "@ > > fcvtz\t%0., %1/m, %2. > > movprfx\t%0, %2\;fcvtz\t%0., %1/m, > %2." 
> > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_fp_to_int, sve_fp_to_int_x")] > > ) > > > > ;; Predicated float-to-integer conversion with merging, either to the = same > > @@ -9035,7 +9302,8 @@ > > { > > operands[4] =3D copy_rtx (operands[1]); > > } > > - [(set_attr "movprfx" "*,yes,yes")] > > + [(set_attr "movprfx" "*,yes,yes") > > + (set_attr "type" "sve_fp_to_int, sve_fp_to_int_x, sve_fp_to_int_x")= ] > > ) > > > > (define_insn > "*cond__nontrunc_stric > t" > > @@ -9054,7 +9322,8 @@ > > fcvtz\t%0., %1/m, %2. > > movprfx\t%0., %1/z, > %2.\;fcvtz\t%0., > %1/m, %2. > > movprfx\t%0, %3\;fcvtz\t%0., %1/m, > %2." > > - [(set_attr "movprfx" "*,yes,yes")] > > + [(set_attr "movprfx" "*,yes,yes") > > + (set_attr "type" "sve_fp_to_int, sve_fp_to_int_x, sve_fp_to_int_x")= ] > > ) > > > > ;; Predicated narrowing float-to-integer conversion with merging. > > @@ -9088,7 +9357,8 @@ > > fcvtz\t%0., %1/m, %2. > > movprfx\t%0., %1/z, > %2.\;fcvtz\t%0., %1/m, > %2. > > movprfx\t%0, %3\;fcvtz\t%0., %1/m, > %2." > > - [(set_attr "movprfx" "*,yes,yes")] > > + [(set_attr "movprfx" "*,yes,yes") > > + (set_attr "type" "sve_fp_to_int, sve_fp_to_int_x, sve_fp_to_int_x")= ] > > ) > > > > ;; -------------------------------------------------------------------= ------ > > @@ -9163,7 +9433,8 @@ > > "@ > > cvtf\t%0., %1/m, %2. > > movprfx\t%0, %2\;cvtf\t%0., %1/m, > %2." > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_int_to_fp, sve_int_to_fp_x")] > > ) > > > > ;; Predicated widening integer-to-float conversion. > > @@ -9178,7 +9449,8 @@ > > "@ > > cvtf\t%0., %1/m, %2. > > movprfx\t%0, %2\;cvtf\t%0., %1/m, > %2." 
> > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_int_to_fp, sve_int_to_fp_x")] > > ) > > > > ;; Predicated integer-to-float conversion with merging, either to the = same > > @@ -9222,7 +9494,8 @@ > > { > > operands[4] =3D copy_rtx (operands[1]); > > } > > - [(set_attr "movprfx" "*,yes,yes")] > > + [(set_attr "movprfx" "*,yes,yes") > > + (set_attr "type" "sve_int_to_fp, sve_int_to_fp_x, sve_int_to_fp_x")= ] > > ) > > > > (define_insn > "*cond__nonextend_str > ict" > > @@ -9241,7 +9514,8 @@ > > cvtf\t%0., %1/m, %2. > > movprfx\t%0., %1/z, > %2.\;cvtf\t%0., %1/m, > %2. > > movprfx\t%0, %3\;cvtf\t%0., %1/m, > %2." > > - [(set_attr "movprfx" "*,yes,yes")] > > + [(set_attr "movprfx" "*,yes,yes") > > + (set_attr "type" "sve_int_to_fp, sve_int_to_fp_x, sve_int_to_fp_x")= ] > > ) > > > > ;; Predicated widening integer-to-float conversion with merging. > > @@ -9275,7 +9549,8 @@ > > cvtf\t%0., %1/m, %2. > > movprfx\t%0., %1/z, > %2.\;cvtf\t%0., %1/m, > %2. > > movprfx\t%0, %3\;cvtf\t%0., %1/m, > %2." > > - [(set_attr "movprfx" "*,yes,yes")] > > + [(set_attr "movprfx" "*,yes,yes") > > + (set_attr "type" "sve_int_to_fp, sve_int_to_fp_x, sve_int_to_fp_x")= ] > > ) > > > > ;; -------------------------------------------------------------------= ------ > > @@ -9361,7 +9636,8 @@ > > "@ > > fcvt\t%0., %1/m, %2. > > movprfx\t%0, %2\;fcvt\t%0., %1/m, > %2." > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_fp_to_fp, sve_fp_to_fp_x")] > > ) > > > > ;; Predicated float-to-float truncation with merging. > > @@ -9395,7 +9671,8 @@ > > fcvt\t%0., %1/m, %2. > > movprfx\t%0., %1/z, > %2.\;fcvt\t%0., %1/m, > %2. > > movprfx\t%0, %3\;fcvt\t%0., %1/m, > %2." 
> > - [(set_attr "movprfx" "*,yes,yes")] > > + [(set_attr "movprfx" "*,yes,yes") > > + (set_attr "type" "sve_fp_to_fp, sve_fp_to_fp_x, sve_fp_to_fp_x")] > > ) > > > > ;; -------------------------------------------------------------------= ------ > > @@ -9418,7 +9695,8 @@ > > "@ > > bfcvt\t%0.h, %1/m, %2.s > > movprfx\t%0, %2\;bfcvt\t%0.h, %1/m, %2.s" > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_fp_to_fp, sve_fp_to_fp_x")] > > ) > > > > ;; Predicated BFCVT with merging. > > @@ -9452,7 +9730,8 @@ > > bfcvt\t%0.h, %1/m, %2.s > > movprfx\t%0.s, %1/z, %2.s\;bfcvt\t%0.h, %1/m, %2.s > > movprfx\t%0, %3\;bfcvt\t%0.h, %1/m, %2.s" > > - [(set_attr "movprfx" "*,yes,yes")] > > + [(set_attr "movprfx" "*,yes,yes") > > + (set_attr "type" "sve_fp_to_fp, sve_fp_to_fp_x, sve_fp_to_fp_x")] > > ) > > > > ;; Predicated BFCVTNT. This doesn't give a natural aarch64_pred_*/con= d_* > > @@ -9470,6 +9749,7 @@ > > UNSPEC_COND_FCVTNT))] > > "TARGET_SVE_BF16" > > "bfcvtnt\t%0.h, %2/m, %3.s" > > + [(set_attr "type" "sve_fp_to_fp")] > > ) > > > > ;; -------------------------------------------------------------------= ------ > > @@ -9518,7 +9798,8 @@ > > "@ > > fcvt\t%0., %1/m, %2. > > movprfx\t%0, %2\;fcvt\t%0., %1/m, > %2." > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_fp_to_fp, sve_fp_to_fp_x")] > > ) > > > > ;; Predicated float-to-float extension with merging. > > @@ -9552,7 +9833,8 @@ > > fcvt\t%0., %1/m, %2. > > movprfx\t%0., %1/z, > %2.\;fcvt\t%0., %1/m, > %2. > > movprfx\t%0, %3\;fcvt\t%0., %1/m, > %2." > > - [(set_attr "movprfx" "*,yes,yes")] > > + [(set_attr "movprfx" "*,yes,yes") > > + (set_attr "type" "sve_fp_to_fp, sve_fp_to_fp_x, sve_fp_to_fp_x")] > > ) > > > > ;; -------------------------------------------------------------------= ------ > > @@ -9572,6 +9854,7 @@ > > UNSPEC_PACK))] > > "TARGET_SVE" > > "uzp1\t%0., %1., %2." 
> > + [(set_attr "type" "sve_zip_p")] > > ) > > > > ;; -------------------------------------------------------------------= ------ > > @@ -9605,6 +9888,7 @@ > > UNPACK_UNSIGNED))] > > "TARGET_SVE" > > "punpk\t%0.h, %1.b" > > + [(set_attr "type" "sve_upk_p")] > > ) > > > > ;; > =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D > =3D=3D=3D=3D=3D=3D=3D=3D=3D > > @@ -9635,6 +9919,7 @@ > > "@ > > brk\t%0.b, %1/z, %2.b > > brk\t%0.b, %1/m, %2.b" > > + [(set_attr "type" "sve_loop_p, sve_loop_p")] > > ) > > > > ;; Same, but also producing a flags result. > > @@ -9658,6 +9943,7 @@ > > SVE_BRK_UNARY))] > > "TARGET_SVE" > > "brks\t%0.b, %1/z, %2.b" > > + [(set_attr "type" "sve_loop_ps")] > > ) > > > > ;; Same, but with only the flags result being interesting. > > @@ -9676,6 +9962,7 @@ > > (clobber (match_scratch:VNx16BI 0 "=3DUpa"))] > > "TARGET_SVE" > > "brks\t%0.b, %1/z, %2.b" > > + [(set_attr "type" "sve_loop_ps")] > > ) > > > > ;; -------------------------------------------------------------------= ------ > > @@ -9700,6 +9987,7 @@ > > SVE_BRK_BINARY))] > > "TARGET_SVE" > > "brk\t%0.b, %1/z, %2.b, %.b" > > + [(set_attr "type" "sve_loop_p")] > > ) > > > > ;; BRKN, producing both a predicate and a flags result. Unlike other > > @@ -9730,6 +10018,7 @@ > > operands[4] =3D CONST0_RTX (VNx16BImode); > > operands[5] =3D CONST0_RTX (VNx16BImode); > > } > > + [(set_attr "type" "sve_loop_ps")] > > ) > > > > ;; Same, but with only the flags result being interesting. > > @@ -9754,6 +10043,7 @@ > > operands[4] =3D CONST0_RTX (VNx16BImode); > > operands[5] =3D CONST0_RTX (VNx16BImode); > > } > > + [(set_attr "type" "sve_loop_ps")] > > ) > > > > ;; BRKPA and BRKPB, producing both a predicate and a flags result. 
> > @@ -9777,6 +10067,7 @@ > > SVE_BRKP))] > > "TARGET_SVE" > > "brks\t%0.b, %1/z, %2.b, %3.b" > > + [(set_attr "type" "sve_loop_ps")] > > ) > > > > ;; Same, but with only the flags result being interesting. > > @@ -9795,6 +10086,7 @@ > > (clobber (match_scratch:VNx16BI 0 "=3DUpa"))] > > "TARGET_SVE" > > "brks\t%0.b, %1/z, %2.b, %3.b" > > + [(set_attr "type" "sve_loop_ps")] > > ) > > > > ;; -------------------------------------------------------------------= ------ > > @@ -9815,6 +10107,7 @@ > > (clobber (reg:CC_NZC CC_REGNUM))] > > "TARGET_SVE && >=3D " > > "\t%0., %1, %0." > > + [(set_attr "type" "sve_set_ps")] > > ) > > > > ;; Same, but also producing a flags result. > > @@ -9845,6 +10138,7 @@ > > operands[4] =3D operands[2]; > > operands[5] =3D operands[3]; > > } > > + [(set_attr "type" "sve_set_ps")] > > ) > > > > ;; Same, but with only the flags result being interesting. > > @@ -9870,6 +10164,7 @@ > > operands[4] =3D operands[2]; > > operands[5] =3D operands[3]; > > } > > + [(set_attr "type" "sve_set_ps")] > > ) > > > > ;; > =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D > =3D=3D=3D=3D=3D=3D=3D=3D=3D > > @@ -9902,6 +10197,7 @@ > > { > > return aarch64_output_sve_cnt_pat_immediate ("cnt", "%x0", operand= s > + 1); > > } > > + [(set_attr "type" "sve_cnt")] > > ) > > > > ;; -------------------------------------------------------------------= ------ > > @@ -9928,6 +10224,7 @@ > > return aarch64_output_sve_cnt_pat_immediate ("", "%x0", > > operands + 2); > > } > > + [(set_attr "type" "sve_cnt")] > > ) > > > > ;; Increment an SImode register by the number of elements in an svpatt= ern > > @@ -9944,6 +10241,7 @@ > > { > > return aarch64_output_sve_cnt_pat_immediate ("inc", "%x0", operand= s > + 2); > > } > > + [(set_attr "type" "sve_cnt")] > > ) > > > > ;; Increment an SImode register by the number of elements in an 
svpattern > > @@ -9965,6 +10263,7 @@ > > return aarch64_output_sve_cnt_pat_immediate ("", registers, > > operands + 2); > > } > > + [(set_attr "type" "sve_cnt")] > > ) > > > > ;; ------------------------------------------------------------------------- > > @@ -9995,7 +10294,8 @@ > > return aarch64_output_sve_cnt_pat_immediate ("", > "%0.", > > operands + 2); > > } > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_cnt, sve_cnt_x")] > > ) > > > > ;; Increment a vector of SIs by the number of elements in an svpattern. > > @@ -10016,7 +10316,8 @@ > > return aarch64_output_sve_cnt_pat_immediate ("", > "%0.", > > operands + 2); > > } > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_cnt, sve_cnt_x")] > > ) > > > > ;; Increment a vector of HIs by the number of elements in an svpattern. > > @@ -10051,7 +10352,8 @@ > > return aarch64_output_sve_cnt_pat_immediate ("", > "%0.", > > operands + 2); > > } > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_cnt, sve_cnt_x")] > > ) > > > > ;; ------------------------------------------------------------------------- > > @@ -10078,6 +10380,7 @@ > > return aarch64_output_sve_cnt_pat_immediate ("", "%x0", > > operands + 2); > > } > > + [(set_attr "type" "sve_cnt")] > > ) > > > > ;; Decrement an SImode register by the number of elements in an svpattern > > @@ -10094,6 +10397,7 @@ > > { > > return aarch64_output_sve_cnt_pat_immediate ("dec", "%x0", operands > + 2); > > } > > + [(set_attr "type" "sve_cnt")] > > ) > > > > ;; Decrement an SImode register by the number of elements in an svpattern > > @@ -10115,6 +10419,7 @@ > > return aarch64_output_sve_cnt_pat_immediate ("", registers, > > operands + 2); > > } > > + [(set_attr "type" "sve_cnt")] > > ) > > > > ;; ------------------------------------------------------------------------- > > @@ -10145,7 +10450,8 @@ > > 
return aarch64_output_sve_cnt_pat_immediate ("", > "%0.", > > operands + 2); > > } > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_cnt, sve_cnt_x")] > > ) > > > > ;; Decrement a vector of SIs by the number of elements in an svpattern= . > > @@ -10166,7 +10472,8 @@ > > return aarch64_output_sve_cnt_pat_immediate ("", > "%0.", > > operands + 2); > > } > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_cnt, sve_cnt_x")] > > ) > > > > ;; Decrement a vector of HIs by the number of elements in an svpattern= . > > @@ -10201,7 +10508,8 @@ > > return aarch64_output_sve_cnt_pat_immediate ("", > "%0.", > > operands + 2); > > } > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_cnt, sve_cnt_x")] > > ) > > > > ;; -------------------------------------------------------------------= ------ > > @@ -10221,7 +10529,9 @@ > > (match_operand:PRED_ALL 3 "register_operand" "Upa")] > > UNSPEC_CNTP)))] > > "TARGET_SVE" > > - "cntp\t%x0, %1, %3.") > > + "cntp\t%x0, %1, %3." 
> > + [(set_attr "type" "sve_cnt_p")] > > +) > > > > ;; -------------------------------------------------------------------= ------ > > ;; ---- [INT] Increment by the number of elements in a predicate (scal= ar) > > @@ -10264,6 +10574,7 @@ > > { > > operands[3] =3D CONSTM1_RTX (mode); > > } > > + [(set_attr "type" "sve_cnt_p")] > > ) > > > > ;; Increment an SImode register by the number of set bits in a predica= te > > @@ -10283,6 +10594,7 @@ > > { > > operands[3] =3D CONSTM1_RTX (mode); > > } > > + [(set_attr "type" "sve_cnt_p")] > > ) > > > > ;; Increment an SImode register by the number of set bits in a predica= te > > @@ -10324,6 +10636,7 @@ > > { > > operands[3] =3D CONSTM1_RTX (mode); > > } > > + [(set_attr "type" "sve_cnt_p")] > > ) > > > > ;; -------------------------------------------------------------------= ------ > > @@ -10373,7 +10686,8 @@ > > { > > operands[3] =3D CONSTM1_RTX (mode); > > } > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_cnt_pv, sve_cnt_pvx")] > > ) > > > > ;; Increment a vector of SIs by the number of set bits in a predicate. > > @@ -10412,7 +10726,8 @@ > > { > > operands[3] =3D CONSTM1_RTX (mode); > > } > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_cnt_pv, sve_cnt_pvx")] > > ) > > > > ;; Increment a vector of HIs by the number of set bits in a predicate. 
> > @@ -10453,7 +10768,8 @@ > > { > > operands[4] =3D CONSTM1_RTX (mode); > > } > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_cnt_pv, sve_cnt_pvx")] > > ) > > > > ;; -------------------------------------------------------------------= ------ > > @@ -10497,6 +10813,7 @@ > > { > > operands[3] =3D CONSTM1_RTX (mode); > > } > > + [(set_attr "type" "sve_cnt_p")] > > ) > > > > ;; Decrement an SImode register by the number of set bits in a predica= te > > @@ -10516,6 +10833,7 @@ > > { > > operands[3] =3D CONSTM1_RTX (mode); > > } > > + [(set_attr "type" "sve_cnt_p")] > > ) > > > > ;; Decrement an SImode register by the number of set bits in a predica= te > > @@ -10557,6 +10875,7 @@ > > { > > operands[3] =3D CONSTM1_RTX (mode); > > } > > + [(set_attr "type" "sve_cnt_p")] > > ) > > > > ;; -------------------------------------------------------------------= ------ > > @@ -10606,7 +10925,8 @@ > > { > > operands[3] =3D CONSTM1_RTX (mode); > > } > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_cnt_pv, sve_cnt_pvx")] > > ) > > > > ;; Decrement a vector of SIs by the number of set bits in a predicate. > > @@ -10645,7 +10965,8 @@ > > { > > operands[3] =3D CONSTM1_RTX (mode); > > } > > - [(set_attr "movprfx" "*,yes")] > > + [(set_attr "movprfx" "*,yes") > > + (set_attr "type" "sve_cnt_pv, sve_cnt_pvx")] > > ) > > > > ;; Decrement a vector of HIs by the number of set bits in a predicate. 
> > @@ -10686,5 +11007,6 @@
> >    {
> >      operands[4] = CONSTM1_RTX (mode);
> >    }
> > -  [(set_attr "movprfx" "*,yes")]
> > -)
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_cnt_pv, sve_cnt_pvx")]
> > +)
> > \ No newline at end of file
> > diff --git a/gcc/config/arm/types.md b/gcc/config/arm/types.md
> > index 83e29563c8e..baccdd02860 100644
> > --- a/gcc/config/arm/types.md
> > +++ b/gcc/config/arm/types.md
> > @@ -562,6 +562,183 @@
> >  ; crypto_sha256_slow
> >  ; crypto_pmull
> >  ;
> > +; The classification below is for SVE instructions.
> > +;
> > +; SVE Suffixes:
> > +; a: accumulated result
> > +; c: complex math
> > +; d: double precision
> > +; g: GPR operand
> > +; l: scaled
> > +; p: predicate operand
> > +; r: reduced result
> > +; s: change flags
> > +; s: single precision
> > +; t: trigonometry math
> > +; u: unscaled
> > +; v: vector operand
> > +; x: prefixed
> > +;
> > +; sve_loop_p
> > +; sve_loop_ps
> > +; sve_loop_gs
> > +; sve_loop_end
> > +;
> > +; sve_logic_p
> > +; sve_logic_ps
> > +;
> > +; sve_cnt_p
> > +; sve_cnt_pv
> > +; sve_cnt_pvx
> > +; sve_rev_p
> > +; sve_sel_p
> > +; sve_set_p
> > +; sve_set_ps
> > +; sve_trn_p
> > +; sve_upk_p
> > +; sve_zip_p
> > +;
> > +; sve_arith
> > +; sve_arith_sat
> > +; sve_arith_sat_x
> > +; sve_arith_r
> > +; sve_arith_x
> > +; sve_logic
> > +; sve_logic_r
> > +; sve_logic_x
> > +;
> > +; sve_shift
> > +; sve_shift_d
> > +; sve_shift_dx
> > +; sve_shift_x
> > +;
> > +; sve_compare_s
> > +;
> > +; sve_cnt
> > +; sve_cnt_x
> > +; sve_copy
> > +; sve_copy_g
> > +; sve_move
> > +; sve_move_x
> > +; sve_move_g
> > +; sve_permute
> > +; sve_splat
> > +; sve_splat_m
> > +; sve_splat_g
> > +; sve_cext
> > +; sve_cext_x
> > +; sve_cext_g
> > +; sve_ext
> > +; sve_ext_x
> > +; sve_sext
> > +; sve_sext_x
> > +; sve_uext
> > +; sve_uext_x
> > +; sve_index
> > +; sve_index_g
> > +; sve_ins
> > +; sve_ins_x
> > +; sve_ins_g
> > +; sve_ins_gx
> > +; sve_rev
> > +; sve_rev_x
> > +; sve_tbl
> > +; sve_trn
> > +; sve_upk
> > +; sve_zip
> > +;
> > +; sve_int_to_fp
> > +; sve_int_to_fp_x
> > +; sve_fp_round
> > +; sve_fp_round_x
> > +; sve_fp_to_int
> > +; sve_fp_to_int_x
> > +; sve_fp_to_fp
> > +; sve_fp_to_fp_x
> > +; sve_bf_to_fp
> > +; sve_bf_to_fp_x
> > +;
> > +; sve_div
> > +; sve_div_x
> > +; sve_dot
> > +; sve_dot_x
> > +; sve_mla
> > +; sve_mla_x
> > +; sve_mmla
> > +; sve_mmla_x
> > +; sve_mul
> > +; sve_mul_x
> > +;
> > +; sve_prfx
> > +;
> > +; sve_fp_arith
> > +; sve_fp_arith_a
> > +; sve_fp_arith_c
> > +; sve_fp_arith_cx
> > +; sve_fp_arith_r
> > +; sve_fp_arith_x
> > +;
> > +; sve_fp_compare
> > +; sve_fp_copy
> > +; sve_fp_move
> > +; sve_fp_move_x
> > +;
> > +; sve_fp_div_d
> > +; sve_fp_div_dx
> > +; sve_fp_div_s
> > +; sve_fp_div_sx
> > +; sve_fp_dot
> > +; sve_fp_mla
> > +; sve_fp_mla_x
> > +; sve_fp_mla_c
> > +; sve_fp_mla_cx
> > +; sve_fp_mla_t
> > +; sve_fp_mla_tx
> > +; sve_fp_mmla
> > +; sve_fp_mmla_x
> > +; sve_fp_mul
> > +; sve_fp_mul_x
> > +; sve_fp_sqrt_d
> > +; sve_fp_sqrt_s
> > +; sve_fp_trig
> > +; sve_fp_trig_x
> > +;
> > +; sve_fp_estimate
> > +; sve_fp_step
> > +;
> > +; sve_bf_dot
> > +; sve_bf_dot_x
> > +; sve_bf_mla
> > +; sve_bf_mla_x
> > +; sve_bf_mmla
> > +; sve_bf_mmla_x
> > +;
> > +; sve_ldr
> > +; sve_ldr_p
> > +; sve_load1
> > +; sve_load1_gather_d
> > +; sve_load1_gather_dl
> > +; sve_load1_gather_du
> > +; sve_load1_gather_s
> > +; sve_load1_gather_sl
> > +; sve_load1_gather_su
> > +; sve_load2
> > +; sve_load3
> > +; sve_load4
> > +;
> > +; sve_str
> > +; sve_str_p
> > +; sve_store1
> > +; sve_store1_scatter
> > +; sve_store2
> > +; sve_store3
> > +; sve_store4
> > +;
> > +; sve_rd_ffr
> > +; sve_rd_ffr_p
> > +; sve_rd_ffr_ps
> > +; sve_wr_ffr
> > +;
> >  ; The classification below is for coprocessor instructions
> >  ;
> >  ; coproc
> > @@ -1120,6 +1297,171 @@
> >    crypto_sha3,\
> >    crypto_sm3,\
> >    crypto_sm4,\
> > +\
> > +  sve_loop_p,\
> > +  sve_loop_ps,\
> > +  sve_loop_gs,\
> > +  sve_loop_end,\
> > +\
> > +  sve_logic_p,\
> > +  sve_logic_ps,\
> > +\
> > +  sve_cnt_p,\
> > +  sve_cnt_pv,\
> > +  sve_cnt_pvx,\
> > +  sve_rev_p,\
> > +  sve_sel_p,\
> > +  sve_set_p,\
> > +  sve_set_ps,\
> > +  sve_trn_p,\
> > +  sve_upk_p,\
> > +  sve_zip_p,\
> > +\
> > +  sve_arith,\
> > +  sve_arith_sat,\
> > +  sve_arith_sat_x,\
> > +  sve_arith_r,\
> > +  sve_arith_x,\
> > +  sve_logic,\
> > +  sve_logic_r,\
> > +  sve_logic_x,\
> > +\
> > +  sve_shift,\
> > +  sve_shift_d,\
> > +  sve_shift_dx,\
> > +  sve_shift_x,\
> > +\
> > +  sve_compare_s,\
> > +\
> > +  sve_cnt,\
> > +  sve_cnt_x,\
> > +  sve_copy,\
> > +  sve_copy_g,\
> > +  sve_move,\
> > +  sve_move_x,\
> > +  sve_move_g,\
> > +  sve_permute,\
> > +  sve_splat,\
> > +  sve_splat_m,\
> > +  sve_splat_g,\
> > +  sve_cext,\
> > +  sve_cext_x,\
> > +  sve_cext_g,\
> > +  sve_ext,\
> > +  sve_ext_x,\
> > +  sve_sext,\
> > +  sve_sext_x,\
> > +  sve_uext,\
> > +  sve_uext_x,\
> > +  sve_index,\
> > +  sve_index_g,\
> > +  sve_ins,\
> > +  sve_ins_x,\
> > +  sve_ins_g,\
> > +  sve_ins_gx,\
> > +  sve_rev,\
> > +  sve_rev_x,\
> > +  sve_tbl,\
> > +  sve_trn,\
> > +  sve_upk,\
> > +  sve_zip,\
> > +\
> > +  sve_int_to_fp,\
> > +  sve_int_to_fp_x,\
> > +  sve_fp_round,\
> > +  sve_fp_round_x,\
> > +  sve_fp_to_int,\
> > +  sve_fp_to_int_x,\
> > +  sve_fp_to_fp,\
> > +  sve_fp_to_fp_x,\
> > +  sve_bf_to_fp,\
> > +  sve_bf_to_fp_x,\
> > +\
> > +  sve_div,\
> > +  sve_div_x,\
> > +  sve_dot,\
> > +  sve_dot_x,\
> > +  sve_mla,\
> > +  sve_mla_x,\
> > +  sve_mmla,\
> > +  sve_mmla_x,\
> > +  sve_mul,\
> > +  sve_mul_x,\
> > +\
> > +  sve_prfx,\
> > +\
> > +  sve_fp_arith,\
> > +  sve_fp_arith_a,\
> > +  sve_fp_arith_c,\
> > +  sve_fp_arith_cx,\
> > +  sve_fp_arith_r,\
> > +  sve_fp_arith_x,\
> > +\
> > +  sve_fp_compare,\
> > +  sve_fp_copy,\
> > +  sve_fp_move,\
> > +  sve_fp_move_x,\
> > +\
> > +  sve_fp_div_d,\
> > +  sve_fp_div_dx,\
> > +  sve_fp_div_s,\
> > +  sve_fp_div_sx,\
> > +  sve_fp_dot,\
> > +  sve_fp_mla,\
> > +  sve_fp_mla_x,\
> > +  sve_fp_mla_c,\
> > +  sve_fp_mla_cx,\
> > +  sve_fp_mla_t,\
> > +  sve_fp_mla_tx,\
> > +  sve_fp_mmla,\
> > +  sve_fp_mmla_x,\
> > +  sve_fp_mul,\
> > +  sve_fp_mul_x,\
> > +  sve_fp_sqrt_d,\
> > +  sve_fp_sqrt_dx,\
> > +  sve_fp_sqrt_s,\
> > +  sve_fp_sqrt_sx,\
> > +  sve_fp_trig,\
> > +  sve_fp_trig_x,\
> > +\
> > +  sve_fp_estimate,\
> > +  sve_fp_estimate_x,\
> > +  sve_fp_step,\
> > +  sve_fp_step_x,\
> > +\
> > +  sve_bf_dot,\
> > +  sve_bf_dot_x,\
> > +  sve_bf_mla,\
> > +  sve_bf_mla_x,\
> > +  sve_bf_mmla,\
> > +  sve_bf_mmla_x,\
> > +\
> > +  sve_ldr,\
> > +  sve_ldr_p,\
> > +  sve_load1,\
> > +  sve_load1_gather_d,\
> > +  sve_load1_gather_dl,\
> > +  sve_load1_gather_du,\
> > +  sve_load1_gather_s,\
> > +  sve_load1_gather_sl,\
> > +  sve_load1_gather_su,\
> > +  sve_load2,\
> > +  sve_load3,\
> > +  sve_load4,\
> > +\
> > +  sve_str,\
> > +  sve_str_p,\
> > +  sve_store1,\
> > +  sve_store1_scatter,\
> > +  sve_store2,\
> > +  sve_store3,\
> > +  sve_store4,\
> > +\
> > +  sve_rd_ffr,\
> > +  sve_rd_ffr_p,\
> > +  sve_rd_ffr_ps,\
> > +  sve_wr_ffr,\
> > +\
> >    coproc,\
> >    tme,\
> >    memtag,\