From mboxrd@z Thu Jan 1 00:00:00 1970
From: Andrea Corallo
To:
CC: Andrea Corallo
Subject: [PATCH 3/3] aarch64: Convert aarch64 multi choice patterns to new syntax
Date: Fri, 22 Sep 2023 10:07:03 +0200
Message-ID: <20230922080703.93612-3-andrea.corallo@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230922080703.93612-1-andrea.corallo@arm.com>
References: <20230922080703.93612-1-andrea.corallo@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Hi all,

this patch converts a number of multi choice patterns within the
aarch64 backend to the new syntax.  The list of the converted patterns
is in the ChangeLog.

For completeness, here follows the list of multi choice patterns that
were rejected for conversion by my parser; they typically have some C
code as asm output and require some manual intervention:
aarch64_simd_vec_set, aarch64_get_lane, aarch64_cmdi, aarch64_cmdi,
aarch64_cmtstdi, *aarch64_movv8di, *aarch64_be_mov, *aarch64_be_movci,
*aarch64_be_mov, *aarch64_be_movxi, *aarch64_sve_mov_le,
*aarch64_sve_mov_be, @aarch64_pred_mov, @aarch64_sve_gather_prefetch,
@aarch64_sve_gather_prefetch, *aarch64_sve_gather_prefetch_sxtw,
*aarch64_sve_gather_prefetch_uxtw, @aarch64_vec_duplicate_vq_le,
*vec_extract_0, *vec_extract_v128, *cmp_and, *fcm_and_combine,
@aarch64_sve_ext, @aarch64_sve2_aba, *sibcall_insn,
*sibcall_value_insn, *xor_one_cmpl3, *insv_reg_, *aarch64_bfi_,
*aarch64_bfidi_subreg_, *aarch64_bfxil, *aarch64_bfxilsi_uxtw,
*aarch64_cvtf2_mult, atomic_store.

Bootstrapped and reg tested on aarch64-unknown-linux-gnu; I also
analysed tmp-mddump.md (from 'make mddump') and could not find any
effective differences.  Okay for trunk?

Bests

  Andrea

gcc/ChangeLog:

	* config/aarch64/aarch64.md (@ccmp)
	(@ccmp_rev, *call_insn, *call_value_insn)
	(*mov_aarch64, load_pair_sw_)
	(load_pair_dw_)
	(store_pair_sw_)
	(store_pair_dw_, *extendsidi2_aarch64)
	(*zero_extendsidi2_aarch64, *load_pair_zero_extendsidi2_aarch64)
	(*extend2_aarch64)
	(*zero_extend2_aarch64)
	(*extendqihi2_aarch64, *zero_extendqihi2_aarch64)
	(*add3_aarch64, *addsi3_aarch64_uxtw, *add3_poly_1)
	(add3_compare0, *addsi3_compare0_uxtw)
	(*add3_compareC_cconly, add3_compareC)
	(*add3_compareV_cconly_imm, add3_compareV_imm)
	(*add3nr_compare0, subdi3, subv_imm)
	(*cmpv_insn, sub3_compare1_imm, neg2)
	(cmp, fcmp, fcmpe, *cmov_insn)
	(*cmovsi_insn_uxtw, 3, *si3_uxtw)
	(*and3_compare0, *andsi3_compare0_uxtw, one_cmpl2)
	(*_one_cmpl3, *and3nr_compare0)
	(*aarch64_ashl_sisd_or_int_3)
	(*aarch64_lshr_sisd_or_int_3)
	(*aarch64_ashr_sisd_or_int_3, *ror3_insn)
	(*si3_insn_uxtw, _trunc2)
	(2)
	(3)
	(3)
	(*aarch64_3_cssc, copysign3_insn): Update to new syntax.
* config/aarch64/aarch64-sve2.md (@aarch64_scatter_stnt) (@aarch64_scatter_stnt_) (*aarch64_mul_unpredicated_) (@aarch64_pred_, *cond__2) (*cond__3, *cond__any) (*cond__z, @aarch64_pred_) (*cond__2, *cond__3) (*cond__any, @aarch64_sve_) (@aarch64_sve__lane_) (@aarch64_sve_add_mul_lane_) (@aarch64_sve_sub_mul_lane_, @aarch64_sve2_xar) (*aarch64_sve2_bcax, @aarch64_sve2_eor3) (*aarch64_sve2_nor, *aarch64_sve2_nand) (*aarch64_sve2_bsl, *aarch64_sve2_nbsl) (*aarch64_sve2_bsl1n, *aarch64_sve2_bsl2n) (*aarch64_sve2_sra, @aarch64_sve_add_) (*aarch64_sve2_aba, @aarch64_sve_add_) (@aarch64_sve_add__lane_) (@aarch64_sve_qadd_) (@aarch64_sve_qadd__lane_) (@aarch64_sve_sub_) (@aarch64_sve_sub__lane_) (@aarch64_sve_qsub_) (@aarch64_sve_qsub__lane_) (@aarch64_sve_, @aarch64__lane_) (@aarch64_pred_) (@aarch64_pred_, *cond__2) (*cond__z, @aarch64_sve_) (@aarch64__lane_, @aarch64_sve_) (@aarch64__lane_, @aarch64_pred_) (*cond__any_relaxed) (*cond__any_strict) (@aarch64_pred_, *cond_) (@aarch64_pred_, *cond_) (*cond__strict): Update to new syntax. * config/aarch64/aarch64-sve.md (*aarch64_sve_mov_ldr_str) (*aarch64_sve_mov_no_ldr_str, @aarch64_pred_mov) (*aarch64_sve_mov, aarch64_wrffr) (mask_scatter_store) (*mask_scatter_store_xtw_unpacked) (*mask_scatter_store_sxtw) (*mask_scatter_store_uxtw) (@aarch64_scatter_store_trunc) (@aarch64_scatter_store_trunc) (*aarch64_scatter_store_trunc_sxtw) (*aarch64_scatter_store_trunc_uxtw) (*vec_duplicate_reg, vec_shl_insert_) (vec_series, @extract__) (@aarch64_pred_, *cond__2) (*cond__any, @aarch64_pred_) (@aarch64_sve_revbhw_) (@cond_) (*2) (@aarch64_pred_sxt) (@aarch64_cond_sxt) (*cond_uxt_2, *cond_uxt_any, *cnot) (*cond_cnot_2, *cond_cnot_any) (@aarch64_pred_, *cond__2_relaxed) (*cond__2_strict, *cond__any_relaxed) (*cond__any_strict, @aarch64_pred_) (*cond__2, *cond__3) (*cond__any, add3, sub3) (@aarch64_pred_abd, *aarch64_cond_abd_2) (*aarch64_cond_abd_3, *aarch64_cond_abd_any) (@aarch64_sve_, @aarch64_pred_) (*cond__2, *cond__z) (@aarch64_pred_, *cond__2) (*cond__3, *cond__any, 3) (*cond_bic_2, *cond_bic_any) (@aarch64_pred_, *cond__2_const) (*cond__any_const, *cond__m) (*cond__z, *sdiv_pow23) (*cond__2, *cond__any) (@aarch64_pred_, *cond__2_relaxed) (*cond__2_strict, *cond__any_relaxed) (*cond__any_strict, @aarch64_pred_) (*cond__2_relaxed, *cond__2_strict) (*cond__2_const_relaxed) (*cond__2_const_strict) (*cond__3_relaxed, *cond__3_strict) (*cond__any_relaxed, *cond__any_strict) (*cond__any_const_relaxed) (*cond__any_const_strict) (@aarch64_pred_, *cond_add_2_const_relaxed) (*cond_add_2_const_strict) (*cond_add_any_const_relaxed) (*cond_add_any_const_strict, @aarch64_pred_) (*cond__2_relaxed, *cond__2_strict) (*cond__any_relaxed, *cond__any_strict) (@aarch64_pred_, *cond_sub_3_const_relaxed) (*cond_sub_3_const_strict, *cond_sub_const_relaxed) (*cond_sub_const_strict, *aarch64_pred_abd_relaxed) (*aarch64_pred_abd_strict) (*aarch64_cond_abd_2_relaxed) (*aarch64_cond_abd_2_strict) (*aarch64_cond_abd_3_relaxed) (*aarch64_cond_abd_3_strict) (*aarch64_cond_abd_any_relaxed) (*aarch64_cond_abd_any_strict, @aarch64_pred_) (@aarch64_pred_fma, *cond_fma_2, *cond_fma_4) (*cond_fma_any, @aarch64_pred_fnma) (*cond_fnma_2, *cond_fnma_4, *cond_fnma_any) (dot_prod, @aarch64_dot_prod_lane) (@dot_prod, @aarch64_dot_prod_lane) (@aarch64_sve_add_, @aarch64_pred_) (*cond__2_relaxed, *cond__2_strict) (*cond__4_relaxed, *cond__4_strict) (*cond__any_relaxed, *cond__any_strict) (@aarch64__lane_, @aarch64_pred_) (*cond__4_relaxed, *cond__4_strict) (*cond__any_relaxed, *cond__any_strict) 
(@aarch64__lane_, @aarch64_sve_tmad) (@aarch64_sve_vnx4sf) (@aarch64_sve__lanevnx4sf) (@aarch64_sve_, *vcond_mask_) (@aarch64_sel_dup, @aarch64_pred_cmp) (*cmp_cc, *cmp_ptest) (@aarch64_pred_fcm, @fold_extract__) (@aarch64_fold_extract_vector__) (@aarch64_sve_splice) (@aarch64_sve__nontrunc) (@aarch64_sve__trunc) (*cond__nontrunc_relaxed) (*cond__nontrunc_strict) (*cond__trunc) (@aarch64_sve__nonextend) (@aarch64_sve__extend) (*cond__nonextend_relaxed) (*cond__nonextend_strict) (*cond__extend) (@aarch64_sve__trunc) (*cond__trunc) (@aarch64_sve__trunc) (*cond__trunc) (@aarch64_sve__nontrunc) (*cond__nontrunc) (@aarch64_brk, *aarch64_sve__cntp): Update to new syntax. * config/aarch64/aarch64-simd.md (aarch64_simd_dup) (load_pair) (vec_store_pair, aarch64_simd_stp) (aarch64_simd_mov_from_low) (aarch64_simd_mov_from_high, and3) (ior3, aarch64_simd_ashr) (aarch64_simd_bsl_internal) (*aarch64_simd_bsl_alt) (aarch64_simd_bsldi_internal, aarch64_simd_bsldi_alt) (store_pair_lanes, *aarch64_combine_internal) (*aarch64_combine_internal_be, *aarch64_combinez) (*aarch64_combinez_be) (aarch64_cm, *aarch64_cmdi) (aarch64_cm, *aarch64_mov) (*aarch64_be_mov, *aarch64_be_movoi): Update to new syntax. --- gcc/config/aarch64/aarch64-simd.md | 429 ++-- gcc/config/aarch64/aarch64-sve.md | 2973 ++++++++++++++-------------- gcc/config/aarch64/aarch64-sve2.md | 922 ++++----- gcc/config/aarch64/aarch64.md | 959 +++++---- 4 files changed, 2655 insertions(+), 2628 deletions(-) diff --git a/gcc/config/aarch64/aarch64-simd.md b/gcc/config/aarch64/aarch64-simd.md index f67eb70577d..4c1658d4b73 100644 --- a/gcc/config/aarch64/aarch64-simd.md +++ b/gcc/config/aarch64/aarch64-simd.md @@ -91,25 +91,25 @@ (define_expand "movmisalign" }) (define_insn "aarch64_simd_dup" - [(set (match_operand:VDQ_I 0 "register_operand" "=w, w") + [(set (match_operand:VDQ_I 0 "register_operand") (vec_duplicate:VDQ_I - (match_operand: 1 "register_operand" "w,?r")))] + (match_operand: 1 "register_operand")))] "TARGET_SIMD" - "@ - dup\\t%0., %1.[0] - dup\\t%0., %1" - [(set_attr "type" "neon_dup, neon_from_gp")] + {@ [ cons: =0 , 1 ; attrs: type ] + [ w , w ; neon_dup ] dup\t%0., %1.[0] + [ w , ?r ; neon_from_gp ] dup\t%0., %1 + } ) (define_insn "aarch64_simd_dup" - [(set (match_operand:VDQF_F16 0 "register_operand" "=w,w") + [(set (match_operand:VDQF_F16 0 "register_operand") (vec_duplicate:VDQF_F16 - (match_operand: 1 "register_operand" "w,r")))] + (match_operand: 1 "register_operand")))] "TARGET_SIMD" - "@ - dup\\t%0., %1.[0] - dup\\t%0., %1" - [(set_attr "type" "neon_dup, neon_from_gp")] + {@ [ cons: =0 , 1 ; attrs: type ] + [ w , w ; neon_dup ] dup\t%0., %1.[0] + [ w , r ; neon_from_gp ] dup\t%0., %1 + } ) (define_insn "aarch64_dup_lane" @@ -207,45 +207,45 @@ (define_insn "aarch64_store_lane0" ) (define_insn "load_pair" - [(set (match_operand:DREG 0 "register_operand" "=w,r") - (match_operand:DREG 1 "aarch64_mem_pair_operand" "Ump,Ump")) - (set (match_operand:DREG2 2 "register_operand" "=w,r") - (match_operand:DREG2 3 "memory_operand" "m,m"))] + [(set (match_operand:DREG 0 "register_operand") + (match_operand:DREG 1 "aarch64_mem_pair_operand")) + (set (match_operand:DREG2 2 "register_operand") + (match_operand:DREG2 3 "memory_operand"))] "TARGET_FLOAT && rtx_equal_p (XEXP (operands[3], 0), plus_constant (Pmode, XEXP (operands[1], 0), GET_MODE_SIZE (mode)))" - "@ - ldp\t%d0, %d2, %z1 - ldp\t%x0, %x2, %z1" - [(set_attr "type" "neon_ldp,load_16")] + {@ [ cons: =0 , 1 , =2 , 3 ; attrs: type ] + [ w , Ump , w , m ; neon_ldp ] ldp\t%d0, %d2, %z1 + [ 
r , Ump , r , m ; load_16 ] ldp\t%x0, %x2, %z1 + } ) (define_insn "vec_store_pair" - [(set (match_operand:DREG 0 "aarch64_mem_pair_operand" "=Ump,Ump") - (match_operand:DREG 1 "register_operand" "w,r")) - (set (match_operand:DREG2 2 "memory_operand" "=m,m") - (match_operand:DREG2 3 "register_operand" "w,r"))] + [(set (match_operand:DREG 0 "aarch64_mem_pair_operand") + (match_operand:DREG 1 "register_operand")) + (set (match_operand:DREG2 2 "memory_operand") + (match_operand:DREG2 3 "register_operand"))] "TARGET_FLOAT && rtx_equal_p (XEXP (operands[2], 0), plus_constant (Pmode, XEXP (operands[0], 0), GET_MODE_SIZE (mode)))" - "@ - stp\t%d1, %d3, %z0 - stp\t%x1, %x3, %z0" - [(set_attr "type" "neon_stp,store_16")] + {@ [ cons: =0 , 1 , =2 , 3 ; attrs: type ] + [ Ump , w , m , w ; neon_stp ] stp\t%d1, %d3, %z0 + [ Ump , r , m , r ; store_16 ] stp\t%x1, %x3, %z0 + } ) (define_insn "aarch64_simd_stp" - [(set (match_operand:VP_2E 0 "aarch64_mem_pair_lanes_operand" "=Umn,Umn") - (vec_duplicate:VP_2E (match_operand: 1 "register_operand" "w,r")))] + [(set (match_operand:VP_2E 0 "aarch64_mem_pair_lanes_operand") + (vec_duplicate:VP_2E (match_operand: 1 "register_operand")))] "TARGET_SIMD" - "@ - stp\\t%1, %1, %y0 - stp\\t%1, %1, %y0" - [(set_attr "type" "neon_stp, store_")] + {@ [ cons: =0 , 1 ; attrs: type ] + [ Umn , w ; neon_stp ] stp\t%1, %1, %y0 + [ Umn , r ; store_ ] stp\t%1, %1, %y0 + } ) (define_insn "load_pair" @@ -372,35 +372,37 @@ (define_expand "aarch64_get_high" ) (define_insn_and_split "aarch64_simd_mov_from_low" - [(set (match_operand: 0 "register_operand" "=w,?r") + [(set (match_operand: 0 "register_operand") (vec_select: - (match_operand:VQMOV_NO2E 1 "register_operand" "w,w") - (match_operand:VQMOV_NO2E 2 "vect_par_cnst_lo_half" "")))] + (match_operand:VQMOV_NO2E 1 "register_operand") + (match_operand:VQMOV_NO2E 2 "vect_par_cnst_lo_half")))] "TARGET_SIMD" - "@ - # - umov\t%0, %1.d[0]" + {@ [ cons: =0 , 1 ; attrs: type ] + [ w , w ; mov_reg ] # + [ ?r , w ; neon_to_gp ] umov\t%0, %1.d[0] + } "&& reload_completed && aarch64_simd_register (operands[0], mode)" [(set (match_dup 0) (match_dup 1))] { operands[1] = aarch64_replace_reg_mode (operands[1], mode); } - [(set_attr "type" "mov_reg,neon_to_gp") + [ (set_attr "length" "4")] ) (define_insn "aarch64_simd_mov_from_high" - [(set (match_operand: 0 "register_operand" "=w,?r,?r") + [(set (match_operand: 0 "register_operand") (vec_select: - (match_operand:VQMOV_NO2E 1 "register_operand" "w,w,w") - (match_operand:VQMOV_NO2E 2 "vect_par_cnst_hi_half" "")))] + (match_operand:VQMOV_NO2E 1 "register_operand") + (match_operand:VQMOV_NO2E 2 "vect_par_cnst_hi_half")))] "TARGET_FLOAT" - "@ - dup\t%d0, %1.d[1] - umov\t%0, %1.d[1] - fmov\t%0, %1.d[1]" - [(set_attr "type" "neon_dup,neon_to_gp,f_mrc") - (set_attr "arch" "simd,simd,*") + {@ [ cons: =0 , 1 ; attrs: type , arch ] + [ w , w ; neon_dup , simd ] dup\t%d0, %1.d[1] + [ ?r , w ; neon_to_gp , simd ] umov\t%0, %1.d[1] + [ ?r , w ; f_mrc , * ] fmov\t%0, %1.d[1] + } + [ + (set_attr "length" "4")] ) @@ -1204,27 +1206,27 @@ (define_insn "fabd3" ;; For AND (vector, register) and BIC (vector, immediate) (define_insn "and3" - [(set (match_operand:VDQ_I 0 "register_operand" "=w,w") - (and:VDQ_I (match_operand:VDQ_I 1 "register_operand" "w,0") - (match_operand:VDQ_I 2 "aarch64_reg_or_bic_imm" "w,Db")))] + [(set (match_operand:VDQ_I 0 "register_operand") + (and:VDQ_I (match_operand:VDQ_I 1 "register_operand") + (match_operand:VDQ_I 2 "aarch64_reg_or_bic_imm")))] "TARGET_SIMD" - "@ - and\t%0., %1., %2. 
- * return aarch64_output_simd_mov_immediate (operands[2], ,\ - AARCH64_CHECK_BIC);" + {@ [ cons: =0 , 1 , 2 ] + [ w , w , w ] and\t%0., %1., %2. + [ w , 0 , Db ] << aarch64_output_simd_mov_immediate (operands[2], , AARCH64_CHECK_BIC); + } [(set_attr "type" "neon_logic")] ) ;; For ORR (vector, register) and ORR (vector, immediate) (define_insn "ior3" - [(set (match_operand:VDQ_I 0 "register_operand" "=w,w") - (ior:VDQ_I (match_operand:VDQ_I 1 "register_operand" "w,0") - (match_operand:VDQ_I 2 "aarch64_reg_or_orr_imm" "w,Do")))] + [(set (match_operand:VDQ_I 0 "register_operand") + (ior:VDQ_I (match_operand:VDQ_I 1 "register_operand") + (match_operand:VDQ_I 2 "aarch64_reg_or_orr_imm")))] "TARGET_SIMD" - "@ - orr\t%0., %1., %2. - * return aarch64_output_simd_mov_immediate (operands[2], ,\ - AARCH64_CHECK_ORR);" + {@ [ cons: =0 , 1 , 2 ] + [ w , w , w ] orr\t%0., %1., %2. + [ w , 0 , Do ] << aarch64_output_simd_mov_immediate (operands[2], , AARCH64_CHECK_ORR); + } [(set_attr "type" "neon_logic")] ) @@ -1353,14 +1355,14 @@ (define_insn "aarch64_simd_lshr" ) (define_insn "aarch64_simd_ashr" - [(set (match_operand:VDQ_I 0 "register_operand" "=w,w") - (ashiftrt:VDQ_I (match_operand:VDQ_I 1 "register_operand" "w,w") - (match_operand:VDQ_I 2 "aarch64_simd_rshift_imm" "D1,Dr")))] + [(set (match_operand:VDQ_I 0 "register_operand") + (ashiftrt:VDQ_I (match_operand:VDQ_I 1 "register_operand") + (match_operand:VDQ_I 2 "aarch64_simd_rshift_imm")))] "TARGET_SIMD" - "@ - cmlt\t%0., %1., #0 - sshr\t%0., %1., %2" - [(set_attr "type" "neon_compare,neon_shift_imm")] + {@ [ cons: =0 , 1 , 2 ; attrs: type ] + [ w , w , D1 ; neon_compare ] cmlt\t%0., %1., #0 + [ w , w , Dr ; neon_shift_imm ] sshr\t%0., %1., %2 + } ) (define_insn "aarch64_sra_n_insn" @@ -3701,20 +3703,21 @@ (define_insn "aarch64_reduc__internal" ;; in *aarch64_simd_bsl_alt. (define_insn "aarch64_simd_bsl_internal" - [(set (match_operand:VDQ_I 0 "register_operand" "=w,w,w") + [(set (match_operand:VDQ_I 0 "register_operand") (xor:VDQ_I (and:VDQ_I (xor:VDQ_I - (match_operand: 3 "register_operand" "w,0,w") - (match_operand:VDQ_I 2 "register_operand" "w,w,0")) - (match_operand:VDQ_I 1 "register_operand" "0,w,w")) + (match_operand: 3 "register_operand") + (match_operand:VDQ_I 2 "register_operand")) + (match_operand:VDQ_I 1 "register_operand")) (match_dup: 3) ))] "TARGET_SIMD" - "@ - bsl\\t%0., %2., %3. - bit\\t%0., %2., %1. - bif\\t%0., %3., %1." + {@ [ cons: =0 , 1 , 2 , 3 ] + [ w , 0 , w , w ] bsl\t%0., %2., %3. + [ w , w , w , 0 ] bit\t%0., %2., %1. + [ w , w , 0 , w ] bif\t%0., %3., %1. + } [(set_attr "type" "neon_bsl")] ) @@ -3725,19 +3728,20 @@ (define_insn "aarch64_simd_bsl_internal" ;; permutations of commutative operations, we have to have a separate pattern. (define_insn "*aarch64_simd_bsl_alt" - [(set (match_operand:VDQ_I 0 "register_operand" "=w,w,w") + [(set (match_operand:VDQ_I 0 "register_operand") (xor:VDQ_I (and:VDQ_I (xor:VDQ_I - (match_operand:VDQ_I 3 "register_operand" "w,w,0") - (match_operand: 2 "register_operand" "w,0,w")) - (match_operand:VDQ_I 1 "register_operand" "0,w,w")) + (match_operand:VDQ_I 3 "register_operand") + (match_operand: 2 "register_operand")) + (match_operand:VDQ_I 1 "register_operand")) (match_dup: 2)))] "TARGET_SIMD" - "@ - bsl\\t%0., %3., %2. - bit\\t%0., %3., %1. - bif\\t%0., %2., %1." + {@ [ cons: =0 , 1 , 2 , 3 ] + [ w , 0 , w , w ] bsl\t%0., %3., %2. + [ w , w , 0 , w ] bit\t%0., %3., %1. + [ w , w , w , 0 ] bif\t%0., %2., %1. 
+ } [(set_attr "type" "neon_bsl")] ) @@ -3752,21 +3756,22 @@ (define_insn "*aarch64_simd_bsl_alt" ;; would be better calculated on the integer side. (define_insn_and_split "aarch64_simd_bsldi_internal" - [(set (match_operand:DI 0 "register_operand" "=w,w,w,&r") + [(set (match_operand:DI 0 "register_operand") (xor:DI (and:DI (xor:DI - (match_operand:DI 3 "register_operand" "w,0,w,r") - (match_operand:DI 2 "register_operand" "w,w,0,r")) - (match_operand:DI 1 "register_operand" "0,w,w,r")) + (match_operand:DI 3 "register_operand") + (match_operand:DI 2 "register_operand")) + (match_operand:DI 1 "register_operand")) (match_dup:DI 3) ))] "TARGET_SIMD" - "@ - bsl\\t%0.8b, %2.8b, %3.8b - bit\\t%0.8b, %2.8b, %1.8b - bif\\t%0.8b, %3.8b, %1.8b - #" + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: type , length ] + [ w , 0 , w , w ; neon_bsl , 4 ] bsl\t%0.8b, %2.8b, %3.8b + [ w , w , w , 0 ; neon_bsl , 4 ] bit\t%0.8b, %2.8b, %1.8b + [ w , w , 0 , w ; neon_bsl , 4 ] bif\t%0.8b, %3.8b, %1.8b + [ &r , r , r , r ; multiple , 12 ] # + } "&& REG_P (operands[0]) && GP_REGNUM_P (REGNO (operands[0]))" [(match_dup 1) (match_dup 1) (match_dup 2) (match_dup 3)] { @@ -3789,26 +3794,25 @@ (define_insn_and_split "aarch64_simd_bsldi_internal" emit_insn (gen_xordi3 (operands[0], scratch, operands[3])); DONE; } - [(set_attr "type" "neon_bsl,neon_bsl,neon_bsl,multiple") - (set_attr "length" "4,4,4,12")] ) (define_insn_and_split "aarch64_simd_bsldi_alt" - [(set (match_operand:DI 0 "register_operand" "=w,w,w,&r") + [(set (match_operand:DI 0 "register_operand") (xor:DI (and:DI (xor:DI - (match_operand:DI 3 "register_operand" "w,w,0,r") - (match_operand:DI 2 "register_operand" "w,0,w,r")) - (match_operand:DI 1 "register_operand" "0,w,w,r")) + (match_operand:DI 3 "register_operand") + (match_operand:DI 2 "register_operand")) + (match_operand:DI 1 "register_operand")) (match_dup:DI 2) ))] "TARGET_SIMD" - "@ - bsl\\t%0.8b, %3.8b, %2.8b - bit\\t%0.8b, %3.8b, %1.8b - bif\\t%0.8b, %2.8b, %1.8b - #" + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: type , length ] + [ w , 0 , w , w ; neon_bsl , 4 ] bsl\t%0.8b, %3.8b, %2.8b + [ w , w , 0 , w ; neon_bsl , 4 ] bit\t%0.8b, %3.8b, %1.8b + [ w , w , w , 0 ; neon_bsl , 4 ] bif\t%0.8b, %2.8b, %1.8b + [ &r , r , r , r ; multiple , 12 ] # + } "&& REG_P (operands[0]) && GP_REGNUM_P (REGNO (operands[0]))" [(match_dup 0) (match_dup 1) (match_dup 2) (match_dup 3)] { @@ -3831,8 +3835,6 @@ (define_insn_and_split "aarch64_simd_bsldi_alt" emit_insn (gen_xordi3 (operands[0], scratch, operands[2])); DONE; } - [(set_attr "type" "neon_bsl,neon_bsl,neon_bsl,multiple") - (set_attr "length" "4,4,4,12")] ) (define_expand "aarch64_simd_bsl" @@ -4385,15 +4387,15 @@ (define_insn "load_pair_lanes" ;; This dedicated pattern must come first. 
(define_insn "store_pair_lanes" - [(set (match_operand: 0 "aarch64_mem_pair_lanes_operand" "=Umn, Umn") + [(set (match_operand: 0 "aarch64_mem_pair_lanes_operand") (vec_concat: - (match_operand:VDCSIF 1 "register_operand" "w, r") - (match_operand:VDCSIF 2 "register_operand" "w, r")))] + (match_operand:VDCSIF 1 "register_operand") + (match_operand:VDCSIF 2 "register_operand")))] "TARGET_FLOAT" - "@ - stp\t%1, %2, %y0 - stp\t%1, %2, %y0" - [(set_attr "type" "neon_stp, store_16")] + {@ [ cons: =0 , 1 , 2 ; attrs: type ] + [ Umn , w , w ; neon_stp ] stp\t%1, %2, %y0 + [ Umn , r , r ; store_16 ] stp\t%1, %2, %y0 + } ) ;; Form a vector whose least significant half comes from operand 1 and whose @@ -4404,73 +4406,70 @@ (define_insn "store_pair_lanes" ;; the register alternatives either don't accept or themselves disparage. (define_insn "*aarch64_combine_internal" - [(set (match_operand: 0 "aarch64_reg_or_mem_pair_operand" "=w, w, w, w, Umn, Umn") + [(set (match_operand: 0 "aarch64_reg_or_mem_pair_operand") (vec_concat: - (match_operand:VDCSIF 1 "register_operand" "0, 0, 0, 0, ?w, ?r") - (match_operand:VDCSIF 2 "aarch64_simd_nonimmediate_operand" "w, ?r, ?r, Utv, w, ?r")))] + (match_operand:VDCSIF 1 "register_operand") + (match_operand:VDCSIF 2 "aarch64_simd_nonimmediate_operand")))] "TARGET_FLOAT && !BYTES_BIG_ENDIAN && (register_operand (operands[0], mode) || register_operand (operands[2], mode))" - "@ - ins\t%0.[1], %2.[0] - ins\t%0.[1], %2 - fmov\t%0.d[1], %2 - ld1\t{%0.}[1], %2 - stp\t%1, %2, %y0 - stp\t%1, %2, %y0" - [(set_attr "type" "neon_ins, neon_from_gp, f_mcr, - neon_load1_one_lane, neon_stp, store_16") - (set_attr "arch" "simd,simd,*,simd,*,*")] + {@ [ cons: =0 , 1 , 2 ; attrs: type , arch ] + [ w , 0 , w ; neon_ins , simd ] ins\t%0.[1], %2.[0] + [ w , 0 , ?r ; neon_from_gp , simd ] ins\t%0.[1], %2 + [ w , 0 , ?r ; f_mcr , * ] fmov\t%0.d[1], %2 + [ w , 0 , Utv ; neon_load1_one_lane , simd ] ld1\t{%0.}[1], %2 + [ Umn , ?w , w ; neon_stp , * ] stp\t%1, %2, %y0 + [ Umn , ?r , ?r ; store_16 , * ] stp\t%1, %2, %y0 + } ) (define_insn "*aarch64_combine_internal_be" - [(set (match_operand: 0 "aarch64_reg_or_mem_pair_operand" "=w, w, w, w, Umn, Umn") + [(set (match_operand: 0 "aarch64_reg_or_mem_pair_operand") (vec_concat: - (match_operand:VDCSIF 2 "aarch64_simd_nonimmediate_operand" "w, ?r, ?r, Utv, ?w, ?r") - (match_operand:VDCSIF 1 "register_operand" "0, 0, 0, 0, ?w, ?r")))] + (match_operand:VDCSIF 2 "aarch64_simd_nonimmediate_operand") + (match_operand:VDCSIF 1 "register_operand")))] "TARGET_FLOAT && BYTES_BIG_ENDIAN && (register_operand (operands[0], mode) || register_operand (operands[2], mode))" - "@ - ins\t%0.[1], %2.[0] - ins\t%0.[1], %2 - fmov\t%0.d[1], %2 - ld1\t{%0.}[1], %2 - stp\t%2, %1, %y0 - stp\t%2, %1, %y0" - [(set_attr "type" "neon_ins, neon_from_gp, f_mcr, neon_load1_one_lane, neon_stp, store_16") - (set_attr "arch" "simd,simd,*,simd,*,*")] + {@ [ cons: =0 , 1 , 2 ; attrs: type , arch ] + [ w , 0 , w ; neon_ins , simd ] ins\t%0.[1], %2.[0] + [ w , 0 , ?r ; neon_from_gp , simd ] ins\t%0.[1], %2 + [ w , 0 , ?r ; f_mcr , * ] fmov\t%0.d[1], %2 + [ w , 0 , Utv ; neon_load1_one_lane , simd ] ld1\t{%0.}[1], %2 + [ Umn , ?w , ?w ; neon_stp , * ] stp\t%2, %1, %y0 + [ Umn , ?r , ?r ; store_16 , * ] stp\t%2, %1, %y0 + } ) ;; In this insn, operand 1 should be low, and operand 2 the high part of the ;; dest vector. 
(define_insn "*aarch64_combinez" - [(set (match_operand: 0 "register_operand" "=w,w,w") + [(set (match_operand: 0 "register_operand") (vec_concat: - (match_operand:VDCSIF 1 "nonimmediate_operand" "w,?r,m") + (match_operand:VDCSIF 1 "nonimmediate_operand") (match_operand:VDCSIF 2 "aarch64_simd_or_scalar_imm_zero")))] "TARGET_FLOAT && !BYTES_BIG_ENDIAN" - "@ - fmov\\t%0, %1 - fmov\t%0, %1 - ldr\\t%0, %1" - [(set_attr "type" "neon_move, neon_from_gp, neon_load1_1reg")] + {@ [ cons: =0 , 1 ; attrs: type ] + [ w , w ; neon_move ] fmov\t%0, %1 + [ w , ?r ; neon_from_gp ] fmov\t%0, %1 + [ w , m ; neon_load1_1reg ] ldr\t%0, %1 + } ) (define_insn "*aarch64_combinez_be" - [(set (match_operand: 0 "register_operand" "=w,w,w") + [(set (match_operand: 0 "register_operand") (vec_concat: (match_operand:VDCSIF 2 "aarch64_simd_or_scalar_imm_zero") - (match_operand:VDCSIF 1 "nonimmediate_operand" "w,?r,m")))] + (match_operand:VDCSIF 1 "nonimmediate_operand")))] "TARGET_FLOAT && BYTES_BIG_ENDIAN" - "@ - fmov\\t%0, %1 - fmov\t%0, %1 - ldr\\t%0, %1" - [(set_attr "type" "neon_move, neon_from_gp, neon_load1_1reg")] + {@ [ cons: =0 , 1 ; attrs: type ] + [ w , w ; neon_move ] fmov\t%0, %1 + [ w , ?r ; neon_from_gp ] fmov\t%0, %1 + [ w , m ; neon_load1_1reg ] ldr\t%0, %1 + } ) ;; Form a vector whose first half (in array order) comes from operand 1 @@ -7051,17 +7050,17 @@ (define_expand "aarch64_sqrshrun2_n" ;; have different ideas of what should be passed to this pattern. (define_insn "aarch64_cm" - [(set (match_operand: 0 "register_operand" "=w,w") + [(set (match_operand: 0 "register_operand") (neg: (COMPARISONS: - (match_operand:VDQ_I 1 "register_operand" "w,w") - (match_operand:VDQ_I 2 "aarch64_simd_reg_or_zero" "w,ZDz") + (match_operand:VDQ_I 1 "register_operand") + (match_operand:VDQ_I 2 "aarch64_simd_reg_or_zero") )))] "TARGET_SIMD" - "@ - cm\t%0, %, % - cm\t%0, %1, #0" - [(set_attr "type" "neon_compare, neon_compare_zero")] + {@ [ cons: =0 , 1 , 2 ; attrs: type ] + [ w , w , w ; neon_compare ] cm\t%0, %, % + [ w , w , ZDz ; neon_compare_zero ] cm\t%0, %1, #0 + } ) (define_insn_and_split "aarch64_cmdi" @@ -7100,17 +7099,17 @@ (define_insn_and_split "aarch64_cmdi" ) (define_insn "*aarch64_cmdi" - [(set (match_operand:DI 0 "register_operand" "=w,w") + [(set (match_operand:DI 0 "register_operand") (neg:DI (COMPARISONS:DI - (match_operand:DI 1 "register_operand" "w,w") - (match_operand:DI 2 "aarch64_simd_reg_or_zero" "w,ZDz") + (match_operand:DI 1 "register_operand") + (match_operand:DI 2 "aarch64_simd_reg_or_zero") )))] "TARGET_SIMD && reload_completed" - "@ - cm\t%d0, %d, %d - cm\t%d0, %d1, #0" - [(set_attr "type" "neon_compare, neon_compare_zero")] + {@ [ cons: =0 , 1 , 2 ; attrs: type ] + [ w , w , w ; neon_compare ] cm\t%d0, %d, %d + [ w , w , ZDz ; neon_compare_zero ] cm\t%d0, %d1, #0 + } ) ;; cm(hs|hi) @@ -7268,16 +7267,17 @@ (define_insn "*aarch64_cmtstdi" ;; fcm(eq|ge|gt|le|lt) (define_insn "aarch64_cm" - [(set (match_operand: 0 "register_operand" "=w,w") + [(set (match_operand: 0 "register_operand") (neg: (COMPARISONS: - (match_operand:VHSDF_HSDF 1 "register_operand" "w,w") - (match_operand:VHSDF_HSDF 2 "aarch64_simd_reg_or_zero" "w,YDz") + (match_operand:VHSDF_HSDF 1 "register_operand") + (match_operand:VHSDF_HSDF 2 "aarch64_simd_reg_or_zero") )))] "TARGET_SIMD" - "@ - fcm\t%0, %, % - fcm\t%0, %1, 0" + {@ [ cons: =0 , 1 , 2 ] + [ w , w , w ] fcm\t%0, %, % + [ w , w , YDz ] fcm\t%0, %1, 0 + } [(set_attr "type" "neon_fp_compare_")] ) @@ -7880,33 +7880,29 @@ (define_insn "aarch64_st1_x4_" ) (define_insn 
"*aarch64_mov" - [(set (match_operand:VSTRUCT_QD 0 "aarch64_simd_nonimmediate_operand" "=w,Utv,w") - (match_operand:VSTRUCT_QD 1 "aarch64_simd_general_operand" " w,w,Utv"))] + [(set (match_operand:VSTRUCT_QD 0 "aarch64_simd_nonimmediate_operand") + (match_operand:VSTRUCT_QD 1 "aarch64_simd_general_operand"))] "TARGET_SIMD && !BYTES_BIG_ENDIAN && (register_operand (operands[0], mode) || register_operand (operands[1], mode))" - "@ - # - st1\\t{%S1. - %1.}, %0 - ld1\\t{%S0. - %0.}, %1" - [(set_attr "type" "multiple,neon_store_reg_q,\ - neon_load_reg_q") - (set_attr "length" ",4,4")] + {@ [ cons: =0 , 1 ; attrs: type , length ] + [ w , w ; multiple , ] # + [ Utv , w ; neon_store_reg_q , 4 ] st1\t{%S1. - %1.}, %0 + [ w , Utv ; neon_load_reg_q , 4 ] ld1\t{%S0. - %0.}, %1 + } ) (define_insn "*aarch64_mov" - [(set (match_operand:VSTRUCT 0 "aarch64_simd_nonimmediate_operand" "=w,Utv,w") - (match_operand:VSTRUCT 1 "aarch64_simd_general_operand" " w,w,Utv"))] + [(set (match_operand:VSTRUCT 0 "aarch64_simd_nonimmediate_operand") + (match_operand:VSTRUCT 1 "aarch64_simd_general_operand"))] "TARGET_SIMD && !BYTES_BIG_ENDIAN && (register_operand (operands[0], mode) || register_operand (operands[1], mode))" - "@ - # - st1\\t{%S1.16b - %1.16b}, %0 - ld1\\t{%S0.16b - %0.16b}, %1" - [(set_attr "type" "multiple,neon_store_reg_q,\ - neon_load_reg_q") - (set_attr "length" ",4,4")] + {@ [ cons: =0 , 1 ; attrs: type , length ] + [ w , w ; multiple , ] # + [ Utv , w ; neon_store_reg_q , 4 ] st1\t{%S1.16b - %1.16b}, %0 + [ w , Utv ; neon_load_reg_q , 4 ] ld1\t{%S0.16b - %0.16b}, %1 + } ) (define_insn "*aarch64_movv8di" @@ -7939,50 +7935,45 @@ (define_insn "aarch64_be_st1" ) (define_insn "*aarch64_be_mov" - [(set (match_operand:VSTRUCT_2D 0 "nonimmediate_operand" "=w,m,w") - (match_operand:VSTRUCT_2D 1 "general_operand" " w,w,m"))] + [(set (match_operand:VSTRUCT_2D 0 "nonimmediate_operand") + (match_operand:VSTRUCT_2D 1 "general_operand"))] "TARGET_FLOAT && (!TARGET_SIMD || BYTES_BIG_ENDIAN) && (register_operand (operands[0], mode) || register_operand (operands[1], mode))" - "@ - # - stp\\t%d1, %R1, %0 - ldp\\t%d0, %R0, %1" - [(set_attr "type" "multiple,neon_stp,neon_ldp") - (set_attr "length" "8,4,4")] + {@ [ cons: =0 , 1 ; attrs: type , length ] + [ w , w ; multiple , 8 ] # + [ m , w ; neon_stp , 4 ] stp\t%d1, %R1, %0 + [ w , m ; neon_ldp , 4 ] ldp\t%d0, %R0, %1 + } ) (define_insn "*aarch64_be_mov" - [(set (match_operand:VSTRUCT_2Q 0 "nonimmediate_operand" "=w,m,w") - (match_operand:VSTRUCT_2Q 1 "general_operand" " w,w,m"))] + [(set (match_operand:VSTRUCT_2Q 0 "nonimmediate_operand") + (match_operand:VSTRUCT_2Q 1 "general_operand"))] "TARGET_FLOAT && (!TARGET_SIMD || BYTES_BIG_ENDIAN) && (register_operand (operands[0], mode) || register_operand (operands[1], mode))" - "@ - # - stp\\t%q1, %R1, %0 - ldp\\t%q0, %R0, %1" - [(set_attr "type" "multiple,neon_stp_q,neon_ldp_q") - (set_attr "arch" "simd,*,*") - (set_attr "length" "8,4,4")] + {@ [ cons: =0 , 1 ; attrs: type , arch , length ] + [ w , w ; multiple , simd , 8 ] # + [ m , w ; neon_stp_q , * , 4 ] stp\t%q1, %R1, %0 + [ w , m ; neon_ldp_q , * , 4 ] ldp\t%q0, %R0, %1 + } ) (define_insn "*aarch64_be_movoi" - [(set (match_operand:OI 0 "nonimmediate_operand" "=w,m,w") - (match_operand:OI 1 "general_operand" " w,w,m"))] + [(set (match_operand:OI 0 "nonimmediate_operand") + (match_operand:OI 1 "general_operand"))] "TARGET_FLOAT && (!TARGET_SIMD || BYTES_BIG_ENDIAN) && (register_operand (operands[0], OImode) || register_operand (operands[1], OImode))" - "@ - # - 
stp\\t%q1, %R1, %0 - ldp\\t%q0, %R0, %1" - [(set_attr "type" "multiple,neon_stp_q,neon_ldp_q") - (set_attr "arch" "simd,*,*") - (set_attr "length" "8,4,4")] + {@ [ cons: =0 , 1 ; attrs: type , arch , length ] + [ w , w ; multiple , simd , 8 ] # + [ m , w ; neon_stp_q , * , 4 ] stp\t%q1, %R1, %0 + [ w , m ; neon_ldp_q , * , 4 ] ldp\t%q0, %R0, %1 + } ) (define_insn "*aarch64_be_mov" diff --git a/gcc/config/aarch64/aarch64-sve.md b/gcc/config/aarch64/aarch64-sve.md index da5534c3e32..a643e78dd8c 100644 --- a/gcc/config/aarch64/aarch64-sve.md +++ b/gcc/config/aarch64/aarch64-sve.md @@ -687,33 +687,35 @@ (define_expand "movmisalign" ;; and after RA; before RA we want the predicated load and store patterns to ;; be used instead. (define_insn "*aarch64_sve_mov_ldr_str" - [(set (match_operand:SVE_FULL 0 "aarch64_sve_nonimmediate_operand" "=w, Utr, w, w") - (match_operand:SVE_FULL 1 "aarch64_sve_general_operand" "Utr, w, w, Dn"))] + [(set (match_operand:SVE_FULL 0 "aarch64_sve_nonimmediate_operand") + (match_operand:SVE_FULL 1 "aarch64_sve_general_operand"))] "TARGET_SVE && (mode == VNx16QImode || !BYTES_BIG_ENDIAN) && ((lra_in_progress || reload_completed) || (register_operand (operands[0], mode) && nonmemory_operand (operands[1], mode)))" - "@ - ldr\t%0, %1 - str\t%1, %0 - mov\t%0.d, %1.d - * return aarch64_output_sve_mov_immediate (operands[1]);" + {@ [ cons: =0 , 1 ] + [ w , Utr ] ldr\t%0, %1 + [ Utr , w ] str\t%1, %0 + [ w , w ] mov\t%0.d, %1.d + [ w , Dn ] << aarch64_output_sve_mov_immediate (operands[1]); + } ) ;; Unpredicated moves that cannot use LDR and STR, i.e. partial vectors ;; or vectors for which little-endian ordering isn't acceptable. Memory ;; accesses require secondary reloads. (define_insn "*aarch64_sve_mov_no_ldr_str" - [(set (match_operand:SVE_ALL 0 "register_operand" "=w, w") - (match_operand:SVE_ALL 1 "aarch64_nonmemory_operand" "w, Dn"))] + [(set (match_operand:SVE_ALL 0 "register_operand") + (match_operand:SVE_ALL 1 "aarch64_nonmemory_operand"))] "TARGET_SVE && mode != VNx16QImode && (BYTES_BIG_ENDIAN || maybe_ne (BYTES_PER_SVE_VECTOR, GET_MODE_SIZE (mode)))" - "@ - mov\t%0.d, %1.d - * return aarch64_output_sve_mov_immediate (operands[1]);" + {@ [ cons: =0 , 1 ] + [ w , w ] mov\t%0.d, %1.d + [ w , Dn ] << aarch64_output_sve_mov_immediate (operands[1]); + } ) ;; Handle memory reloads for modes that can't use LDR and STR. We use @@ -743,18 +745,19 @@ (define_expand "aarch64_sve_reload_mem" ;; Note that this pattern is generated directly by aarch64_emit_sve_pred_move, ;; so changes to this pattern will need changes there as well. 
(define_insn_and_split "@aarch64_pred_mov" - [(set (match_operand:SVE_ALL 0 "nonimmediate_operand" "=w, w, m") + [(set (match_operand:SVE_ALL 0 "nonimmediate_operand") (unspec:SVE_ALL - [(match_operand: 1 "register_operand" "Upl, Upl, Upl") - (match_operand:SVE_ALL 2 "nonimmediate_operand" "w, m, w")] + [(match_operand: 1 "register_operand") + (match_operand:SVE_ALL 2 "nonimmediate_operand")] UNSPEC_PRED_X))] "TARGET_SVE && (register_operand (operands[0], mode) || register_operand (operands[2], mode))" - "@ - # - ld1\t%0., %1/z, %2 - st1\t%2., %1, %0" + {@ [ cons: =0 , 1 , 2 ] + [ w , Upl , w ] # + [ w , Upl , m ] ld1\t%0., %1/z, %2 + [ m , Upl , w ] st1\t%2., %1, %0 + } "&& register_operand (operands[0], mode) && register_operand (operands[2], mode)" [(set (match_dup 0) (match_dup 2))] @@ -949,16 +952,17 @@ (define_expand "mov" ) (define_insn "*aarch64_sve_mov" - [(set (match_operand:PRED_ALL 0 "nonimmediate_operand" "=Upa, m, Upa, Upa") - (match_operand:PRED_ALL 1 "aarch64_mov_operand" "Upa, Upa, m, Dn"))] + [(set (match_operand:PRED_ALL 0 "nonimmediate_operand") + (match_operand:PRED_ALL 1 "aarch64_mov_operand"))] "TARGET_SVE && (register_operand (operands[0], mode) || register_operand (operands[1], mode))" - "@ - mov\t%0.b, %1.b - str\t%1, %0 - ldr\t%0, %1 - * return aarch64_output_sve_mov_immediate (operands[1]);" + {@ [ cons: =0 , 1 ] + [ Upa , Upa ] mov\t%0.b, %1.b + [ m , Upa ] str\t%1, %0 + [ Upa , m ] ldr\t%0, %1 + [ Upa , Dn ] << aarch64_output_sve_mov_immediate (operands[1]); + } ) ;; Match PTRUES Pn.B when both the predicate and flags are useful. @@ -1079,13 +1083,14 @@ (define_insn_and_rewrite "*aarch64_sve_ptrue_ptest" ;; Write to the FFR and start a new FFRT scheduling region. (define_insn "aarch64_wrffr" [(set (reg:VNx16BI FFR_REGNUM) - (match_operand:VNx16BI 0 "aarch64_simd_reg_or_minus_one" "Dm, Upa")) + (match_operand:VNx16BI 0 "aarch64_simd_reg_or_minus_one")) (set (reg:VNx16BI FFRT_REGNUM) (unspec:VNx16BI [(match_dup 0)] UNSPEC_WRFFR))] "TARGET_SVE" - "@ - setffr - wrffr\t%0.b" + {@ [ cons: 0 ] + [ Dm ] setffr + [ Upa ] wrffr\t%0.b + } ) ;; [L2 in the block comment above about FFR handling] @@ -2331,21 +2336,22 @@ (define_expand "scatter_store" (define_insn "mask_scatter_store" [(set (mem:BLK (scratch)) (unspec:BLK - [(match_operand:VNx4BI 5 "register_operand" "Upl, Upl, Upl, Upl, Upl, Upl") - (match_operand:DI 0 "aarch64_sve_gather_offset_" "Z, vgw, rk, rk, rk, rk") - (match_operand:VNx4SI 1 "register_operand" "w, w, w, w, w, w") - (match_operand:DI 2 "const_int_operand" "Ui1, Ui1, Z, Ui1, Z, Ui1") - (match_operand:DI 3 "aarch64_gather_scale_operand_" "Ui1, Ui1, Ui1, Ui1, i, i") - (match_operand:SVE_4 4 "register_operand" "w, w, w, w, w, w")] + [(match_operand:VNx4BI 5 "register_operand") + (match_operand:DI 0 "aarch64_sve_gather_offset_") + (match_operand:VNx4SI 1 "register_operand") + (match_operand:DI 2 "const_int_operand") + (match_operand:DI 3 "aarch64_gather_scale_operand_") + (match_operand:SVE_4 4 "register_operand")] UNSPEC_ST1_SCATTER))] "TARGET_SVE" - "@ - st1\t%4.s, %5, [%1.s] - st1\t%4.s, %5, [%1.s, #%0] - st1\t%4.s, %5, [%0, %1.s, sxtw] - st1\t%4.s, %5, [%0, %1.s, uxtw] - st1\t%4.s, %5, [%0, %1.s, sxtw %p3] - st1\t%4.s, %5, [%0, %1.s, uxtw %p3]" + {@ [ cons: 0 , 1 , 2 , 3 , 4 , 5 ] + [ Z , w , Ui1 , Ui1 , w , Upl ] st1\t%4.s, %5, [%1.s] + [ vgw , w , Ui1 , Ui1 , w , Upl ] st1\t%4.s, %5, [%1.s, #%0] + [ rk , w , Z , Ui1 , w , Upl ] st1\t%4.s, %5, [%0, %1.s, sxtw] + [ rk , w , Ui1 , Ui1 , w , Upl ] st1\t%4.s, %5, [%0, %1.s, uxtw] + [ rk , w , Z , i , w , 
Upl ] st1\t%4.s, %5, [%0, %1.s, sxtw %p3] + [ rk , w , Ui1 , i , w , Upl ] st1\t%4.s, %5, [%0, %1.s, uxtw %p3] + } ) ;; Predicated scatter stores for 64-bit elements. The value of operand 2 @@ -2353,40 +2359,42 @@ (define_insn "mask_scatter_store" (define_insn "mask_scatter_store" [(set (mem:BLK (scratch)) (unspec:BLK - [(match_operand:VNx2BI 5 "register_operand" "Upl, Upl, Upl, Upl") - (match_operand:DI 0 "aarch64_sve_gather_offset_" "Z, vgd, rk, rk") - (match_operand:VNx2DI 1 "register_operand" "w, w, w, w") + [(match_operand:VNx2BI 5 "register_operand") + (match_operand:DI 0 "aarch64_sve_gather_offset_") + (match_operand:VNx2DI 1 "register_operand") (match_operand:DI 2 "const_int_operand") - (match_operand:DI 3 "aarch64_gather_scale_operand_" "Ui1, Ui1, Ui1, i") - (match_operand:SVE_2 4 "register_operand" "w, w, w, w")] + (match_operand:DI 3 "aarch64_gather_scale_operand_") + (match_operand:SVE_2 4 "register_operand")] UNSPEC_ST1_SCATTER))] "TARGET_SVE" - "@ - st1\t%4.d, %5, [%1.d] - st1\t%4.d, %5, [%1.d, #%0] - st1\t%4.d, %5, [%0, %1.d] - st1\t%4.d, %5, [%0, %1.d, lsl %p3]" + {@ [ cons: 0 , 1 , 3 , 4 , 5 ] + [ Z , w , Ui1 , w , Upl ] st1\t%4.d, %5, [%1.d] + [ vgd , w , Ui1 , w , Upl ] st1\t%4.d, %5, [%1.d, #%0] + [ rk , w , Ui1 , w , Upl ] st1\t%4.d, %5, [%0, %1.d] + [ rk , w , i , w , Upl ] st1\t%4.d, %5, [%0, %1.d, lsl %p3] + } ) ;; Likewise, but with the offset being extended from 32 bits. (define_insn_and_rewrite "*mask_scatter_store_xtw_unpacked" [(set (mem:BLK (scratch)) (unspec:BLK - [(match_operand:VNx2BI 5 "register_operand" "Upl, Upl") - (match_operand:DI 0 "register_operand" "rk, rk") + [(match_operand:VNx2BI 5 "register_operand") + (match_operand:DI 0 "register_operand") (unspec:VNx2DI [(match_operand 6) (ANY_EXTEND:VNx2DI - (match_operand:VNx2SI 1 "register_operand" "w, w"))] + (match_operand:VNx2SI 1 "register_operand"))] UNSPEC_PRED_X) (match_operand:DI 2 "const_int_operand") - (match_operand:DI 3 "aarch64_gather_scale_operand_" "Ui1, i") - (match_operand:SVE_2 4 "register_operand" "w, w")] + (match_operand:DI 3 "aarch64_gather_scale_operand_") + (match_operand:SVE_2 4 "register_operand")] UNSPEC_ST1_SCATTER))] "TARGET_SVE" - "@ - st1\t%4.d, %5, [%0, %1.d, xtw] - st1\t%4.d, %5, [%0, %1.d, xtw %p3]" + {@ [ cons: 0 , 1 , 3 , 4 , 5 ] + [ rk , w , Ui1 , w , Upl ] st1\t%4.d, %5, [%0, %1.d, xtw] + [ rk , w , i , w , Upl ] st1\t%4.d, %5, [%0, %1.d, xtw %p3] + } "&& !CONSTANT_P (operands[6])" { operands[6] = CONSTM1_RTX (mode); @@ -2398,22 +2406,23 @@ (define_insn_and_rewrite "*mask_scatter_store_xtw_unp (define_insn_and_rewrite "*mask_scatter_store_sxtw" [(set (mem:BLK (scratch)) (unspec:BLK - [(match_operand:VNx2BI 5 "register_operand" "Upl, Upl") - (match_operand:DI 0 "register_operand" "rk, rk") + [(match_operand:VNx2BI 5 "register_operand") + (match_operand:DI 0 "register_operand") (unspec:VNx2DI [(match_operand 6) (sign_extend:VNx2DI (truncate:VNx2SI - (match_operand:VNx2DI 1 "register_operand" "w, w")))] + (match_operand:VNx2DI 1 "register_operand")))] UNSPEC_PRED_X) (match_operand:DI 2 "const_int_operand") - (match_operand:DI 3 "aarch64_gather_scale_operand_" "Ui1, i") - (match_operand:SVE_2 4 "register_operand" "w, w")] + (match_operand:DI 3 "aarch64_gather_scale_operand_") + (match_operand:SVE_2 4 "register_operand")] UNSPEC_ST1_SCATTER))] "TARGET_SVE" - "@ - st1\t%4.d, %5, [%0, %1.d, sxtw] - st1\t%4.d, %5, [%0, %1.d, sxtw %p3]" + {@ [ cons: 0 , 1 , 3 , 4 , 5 ] + [ rk , w , Ui1 , w , Upl ] st1\t%4.d, %5, [%0, %1.d, sxtw] + [ rk , w , i , w , Upl ] st1\t%4.d, %5, [%0, 
%1.d, sxtw %p3] + } "&& !CONSTANT_P (operands[6])" { operands[6] = CONSTM1_RTX (mode); @@ -2425,19 +2434,20 @@ (define_insn_and_rewrite "*mask_scatter_store_sxtw" (define_insn "*mask_scatter_store_uxtw" [(set (mem:BLK (scratch)) (unspec:BLK - [(match_operand:VNx2BI 5 "register_operand" "Upl, Upl") - (match_operand:DI 0 "aarch64_reg_or_zero" "rk, rk") + [(match_operand:VNx2BI 5 "register_operand") + (match_operand:DI 0 "aarch64_reg_or_zero") (and:VNx2DI - (match_operand:VNx2DI 1 "register_operand" "w, w") + (match_operand:VNx2DI 1 "register_operand") (match_operand:VNx2DI 6 "aarch64_sve_uxtw_immediate")) (match_operand:DI 2 "const_int_operand") - (match_operand:DI 3 "aarch64_gather_scale_operand_" "Ui1, i") - (match_operand:SVE_2 4 "register_operand" "w, w")] + (match_operand:DI 3 "aarch64_gather_scale_operand_") + (match_operand:SVE_2 4 "register_operand")] UNSPEC_ST1_SCATTER))] "TARGET_SVE" - "@ - st1\t%4.d, %5, [%0, %1.d, uxtw] - st1\t%4.d, %5, [%0, %1.d, uxtw %p3]" + {@ [ cons: 0 , 1 , 3 , 4 , 5 ] + [ rk , w , Ui1 , w , Upl ] st1\t%4.d, %5, [%0, %1.d, uxtw] + [ rk , w , i , w , Upl ] st1\t%4.d, %5, [%0, %1.d, uxtw %p3] + } ) ;; ------------------------------------------------------------------------- @@ -2454,22 +2464,23 @@ (define_insn "*mask_scatter_store_uxtw" (define_insn "@aarch64_scatter_store_trunc" [(set (mem:BLK (scratch)) (unspec:BLK - [(match_operand:VNx4BI 5 "register_operand" "Upl, Upl, Upl, Upl, Upl, Upl") + [(match_operand:VNx4BI 5 "register_operand") (match_operand:DI 0 "aarch64_sve_gather_offset_" "Z, vg, rk, rk, rk, rk") - (match_operand:VNx4SI 1 "register_operand" "w, w, w, w, w, w") - (match_operand:DI 2 "const_int_operand" "Ui1, Ui1, Z, Ui1, Z, Ui1") + (match_operand:VNx4SI 1 "register_operand") + (match_operand:DI 2 "const_int_operand") (match_operand:DI 3 "aarch64_gather_scale_operand_" "Ui1, Ui1, Ui1, Ui1, i, i") (truncate:VNx4_NARROW - (match_operand:VNx4_WIDE 4 "register_operand" "w, w, w, w, w, w"))] + (match_operand:VNx4_WIDE 4 "register_operand"))] UNSPEC_ST1_SCATTER))] "TARGET_SVE" - "@ - st1\t%4.s, %5, [%1.s] - st1\t%4.s, %5, [%1.s, #%0] - st1\t%4.s, %5, [%0, %1.s, sxtw] - st1\t%4.s, %5, [%0, %1.s, uxtw] - st1\t%4.s, %5, [%0, %1.s, sxtw %p3] - st1\t%4.s, %5, [%0, %1.s, uxtw %p3]" + {@ [ cons: 1 , 2 , 4 , 5 ] + [ w , Ui1 , w , Upl ] st1\t%4.s, %5, [%1.s] + [ w , Ui1 , w , Upl ] st1\t%4.s, %5, [%1.s, #%0] + [ w , Z , w , Upl ] st1\t%4.s, %5, [%0, %1.s, sxtw] + [ w , Ui1 , w , Upl ] st1\t%4.s, %5, [%0, %1.s, uxtw] + [ w , Z , w , Upl ] st1\t%4.s, %5, [%0, %1.s, sxtw %p3] + [ w , Ui1 , w , Upl ] st1\t%4.s, %5, [%0, %1.s, uxtw %p3] + } ) ;; Predicated truncating scatter stores for 64-bit elements. 
The value of @@ -2477,43 +2488,45 @@ (define_insn "@aarch64_scatter_store_trunc" (define_insn "@aarch64_scatter_store_trunc" [(set (mem:BLK (scratch)) (unspec:BLK - [(match_operand:VNx2BI 5 "register_operand" "Upl, Upl, Upl, Upl") + [(match_operand:VNx2BI 5 "register_operand") (match_operand:DI 0 "aarch64_sve_gather_offset_" "Z, vg, rk, rk") - (match_operand:VNx2DI 1 "register_operand" "w, w, w, w") + (match_operand:VNx2DI 1 "register_operand") (match_operand:DI 2 "const_int_operand") (match_operand:DI 3 "aarch64_gather_scale_operand_" "Ui1, Ui1, Ui1, i") (truncate:VNx2_NARROW - (match_operand:VNx2_WIDE 4 "register_operand" "w, w, w, w"))] + (match_operand:VNx2_WIDE 4 "register_operand"))] UNSPEC_ST1_SCATTER))] "TARGET_SVE" - "@ - st1\t%4.d, %5, [%1.d] - st1\t%4.d, %5, [%1.d, #%0] - st1\t%4.d, %5, [%0, %1.d] - st1\t%4.d, %5, [%0, %1.d, lsl %p3]" + {@ [ cons: 1 , 4 , 5 ] + [ w , w , Upl ] st1\t%4.d, %5, [%1.d] + [ w , w , Upl ] st1\t%4.d, %5, [%1.d, #%0] + [ w , w , Upl ] st1\t%4.d, %5, [%0, %1.d] + [ w , w , Upl ] st1\t%4.d, %5, [%0, %1.d, lsl %p3] + } ) ;; Likewise, but with the offset being sign-extended from 32 bits. (define_insn_and_rewrite "*aarch64_scatter_store_trunc_sxtw" [(set (mem:BLK (scratch)) (unspec:BLK - [(match_operand:VNx2BI 5 "register_operand" "Upl, Upl") - (match_operand:DI 0 "register_operand" "rk, rk") + [(match_operand:VNx2BI 5 "register_operand") + (match_operand:DI 0 "register_operand") (unspec:VNx2DI [(match_operand 6) (sign_extend:VNx2DI (truncate:VNx2SI - (match_operand:VNx2DI 1 "register_operand" "w, w")))] + (match_operand:VNx2DI 1 "register_operand")))] UNSPEC_PRED_X) (match_operand:DI 2 "const_int_operand") (match_operand:DI 3 "aarch64_gather_scale_operand_" "Ui1, i") (truncate:VNx2_NARROW - (match_operand:VNx2_WIDE 4 "register_operand" "w, w"))] + (match_operand:VNx2_WIDE 4 "register_operand"))] UNSPEC_ST1_SCATTER))] "TARGET_SVE" - "@ - st1\t%4.d, %5, [%0, %1.d, sxtw] - st1\t%4.d, %5, [%0, %1.d, sxtw %p3]" + {@ [ cons: 0 , 1 , 4 , 5 ] + [ rk , w , w , Upl ] st1\t%4.d, %5, [%0, %1.d, sxtw] + [ rk , w , w , Upl ] st1\t%4.d, %5, [%0, %1.d, sxtw %p3] + } "&& !rtx_equal_p (operands[5], operands[6])" { operands[6] = copy_rtx (operands[5]); @@ -2524,20 +2537,21 @@ (define_insn_and_rewrite "*aarch64_scatter_store_trunc_uxtw" [(set (mem:BLK (scratch)) (unspec:BLK - [(match_operand:VNx2BI 5 "register_operand" "Upl, Upl") - (match_operand:DI 0 "aarch64_reg_or_zero" "rk, rk") + [(match_operand:VNx2BI 5 "register_operand") + (match_operand:DI 0 "aarch64_reg_or_zero") (and:VNx2DI - (match_operand:VNx2DI 1 "register_operand" "w, w") + (match_operand:VNx2DI 1 "register_operand") (match_operand:VNx2DI 6 "aarch64_sve_uxtw_immediate")) (match_operand:DI 2 "const_int_operand") (match_operand:DI 3 "aarch64_gather_scale_operand_" "Ui1, i") (truncate:VNx2_NARROW - (match_operand:VNx2_WIDE 4 "register_operand" "w, w"))] + (match_operand:VNx2_WIDE 4 "register_operand"))] UNSPEC_ST1_SCATTER))] "TARGET_SVE" - "@ - st1\t%4.d, %5, [%0, %1.d, uxtw] - st1\t%4.d, %5, [%0, %1.d, uxtw %p3]" + {@ [ cons: 0 , 1 , 4 , 5 ] + [ rk , w , w , Upl ] st1\t%4.d, %5, [%0, %1.d, uxtw] + [ rk , w , w , Upl ] st1\t%4.d, %5, [%0, %1.d, uxtw %p3] + } ) ;; ========================================================================= @@ -2587,15 +2601,16 @@ (define_expand "vec_duplicate" ;; the load at the first opportunity in order to allow the PTRUE to be ;; optimized with surrounding code. 
(define_insn_and_split "*vec_duplicate_reg" - [(set (match_operand:SVE_ALL 0 "register_operand" "=w, w, w") + [(set (match_operand:SVE_ALL 0 "register_operand") (vec_duplicate:SVE_ALL - (match_operand: 1 "aarch64_sve_dup_operand" "r, w, Uty"))) + (match_operand: 1 "aarch64_sve_dup_operand"))) (clobber (match_scratch:VNx16BI 2 "=X, X, Upl"))] "TARGET_SVE" - "@ - mov\t%0., %1 - mov\t%0., %1 - #" + {@ [ cons: =0 , 1 ; attrs: length ] + [ w , r ; 4 ] mov\t%0., %1 + [ w , w ; 4 ] mov\t%0., %1 + [ w , Uty ; 8 ] # + } "&& MEM_P (operands[1])" [(const_int 0)] { @@ -2607,7 +2622,6 @@ (define_insn_and_split "*vec_duplicate_reg" CONST0_RTX (mode))); DONE; } - [(set_attr "length" "4,4,8")] ) ;; Duplicate an Advanced SIMD vector to fill an SVE vector (LE version). @@ -2726,18 +2740,18 @@ (define_expand "vec_init" ;; Shift an SVE vector left and insert a scalar into element 0. (define_insn "vec_shl_insert_" - [(set (match_operand:SVE_FULL 0 "register_operand" "=?w, w, ??&w, ?&w") + [(set (match_operand:SVE_FULL 0 "register_operand") (unspec:SVE_FULL - [(match_operand:SVE_FULL 1 "register_operand" "0, 0, w, w") - (match_operand: 2 "aarch64_reg_or_zero" "rZ, w, rZ, w")] + [(match_operand:SVE_FULL 1 "register_operand") + (match_operand: 2 "aarch64_reg_or_zero")] UNSPEC_INSR))] "TARGET_SVE" - "@ - insr\t%0., %2 - insr\t%0., %2 - movprfx\t%0, %1\;insr\t%0., %2 - movprfx\t%0, %1\;insr\t%0., %2" - [(set_attr "movprfx" "*,*,yes,yes")] + {@ [ cons: =0 , 1 , 2 ; attrs: movprfx ] + [ ?w , 0 , rZ ; * ] insr\t%0., %2 + [ w , 0 , w ; * ] insr\t%0., %2 + [ ??&w , w , rZ ; yes ] movprfx\t%0, %1\;insr\t%0., %2 + [ ?&w , w , w ; yes ] movprfx\t%0, %1\;insr\t%0., %2 + } ) ;; ------------------------------------------------------------------------- @@ -2748,15 +2762,16 @@ (define_insn "vec_shl_insert_" ;; ------------------------------------------------------------------------- (define_insn "vec_series" - [(set (match_operand:SVE_I 0 "register_operand" "=w, w, w") + [(set (match_operand:SVE_I 0 "register_operand") (vec_series:SVE_I - (match_operand: 1 "aarch64_sve_index_operand" "Usi, r, r") - (match_operand: 2 "aarch64_sve_index_operand" "r, Usi, r")))] + (match_operand: 1 "aarch64_sve_index_operand") + (match_operand: 2 "aarch64_sve_index_operand")))] "TARGET_SVE" - "@ - index\t%0., #%1, %2 - index\t%0., %1, #%2 - index\t%0., %1, %2" + {@ [ cons: =0 , 1 , 2 ] + [ w , Usi , r ] index\t%0., #%1, %2 + [ w , r , Usi ] index\t%0., %1, #%2 + [ w , r , r ] index\t%0., %1, %2 + } ) ;; Optimize {x, x, x, x, ...} + {0, n, 2*n, 3*n, ...} if n is in range @@ -2955,15 +2970,16 @@ (define_insn "*vec_extract_ext" ;; Extract the last active element of operand 1 into operand 0. ;; If no elements are active, extract the last inactive element instead. (define_insn "@extract__" - [(set (match_operand: 0 "register_operand" "=?r, w") + [(set (match_operand: 0 "register_operand") (unspec: - [(match_operand: 1 "register_operand" "Upl, Upl") - (match_operand:SVE_FULL 2 "register_operand" "w, w")] + [(match_operand: 1 "register_operand") + (match_operand:SVE_FULL 2 "register_operand")] LAST))] "TARGET_SVE" - "@ - last\t%0, %1, %2. - last\t%0, %1, %2." + {@ [ cons: =0 , 1 , 2 ] + [ ?r , Upl , w ] last\t%0, %1, %2. + [ w , Upl , w ] last\t%0, %1, %2. + } ) ;; ------------------------------------------------------------------------- @@ -3023,17 +3039,17 @@ (define_expand "2" ;; Integer unary arithmetic predicated with a PTRUE. 
(define_insn "@aarch64_pred_" - [(set (match_operand:SVE_I 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_I 0 "register_operand") (unspec:SVE_I - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (SVE_INT_UNARY:SVE_I - (match_operand:SVE_I 2 "register_operand" "0, w"))] + (match_operand:SVE_I 2 "register_operand"))] UNSPEC_PRED_X))] "TARGET_SVE" - "@ - \t%0., %1/m, %2. - movprfx\t%0, %2\;\t%0., %1/m, %2." - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 ; attrs: movprfx ] + [ w , Upl , 0 ; * ] \t%0., %1/m, %2. + [ ?&w , Upl , w ; yes ] movprfx\t%0, %2\;\t%0., %1/m, %2. + } ) ;; Predicated integer unary arithmetic with merging. @@ -3050,18 +3066,18 @@ (define_expand "@cond_" ;; Predicated integer unary arithmetic, merging with the first input. (define_insn "*cond__2" - [(set (match_operand:SVE_I 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_I 0 "register_operand") (unspec:SVE_I - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (SVE_INT_UNARY:SVE_I - (match_operand:SVE_I 2 "register_operand" "0, w")) + (match_operand:SVE_I 2 "register_operand")) (match_dup 2)] UNSPEC_SEL))] "TARGET_SVE" - "@ - \t%0., %1/m, %0. - movprfx\t%0, %2\;\t%0., %1/m, %2." - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 ; attrs: movprfx ] + [ w , Upl , 0 ; * ] \t%0., %1/m, %0. + [ ?&w , Upl , w ; yes ] movprfx\t%0, %2\;\t%0., %1/m, %2. + } ) ;; Predicated integer unary arithmetic, merging with an independent value. @@ -3072,19 +3088,19 @@ (define_insn "*cond__2" ;; as earlyclobber helps to make the instruction more regular to the ;; register allocator. (define_insn "*cond__any" - [(set (match_operand:SVE_I 0 "register_operand" "=&w, ?&w, ?&w") + [(set (match_operand:SVE_I 0 "register_operand") (unspec:SVE_I - [(match_operand: 1 "register_operand" "Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (SVE_INT_UNARY:SVE_I - (match_operand:SVE_I 2 "register_operand" "w, w, w")) - (match_operand:SVE_I 3 "aarch64_simd_reg_or_zero" "0, Dz, w")] + (match_operand:SVE_I 2 "register_operand")) + (match_operand:SVE_I 3 "aarch64_simd_reg_or_zero")] UNSPEC_SEL))] "TARGET_SVE && !rtx_equal_p (operands[2], operands[3])" - "@ - \t%0., %1/m, %2. - movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %2. - movprfx\t%0, %3\;\t%0., %1/m, %2." - [(set_attr "movprfx" "*,yes,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ &w , Upl , w , 0 ; * ] \t%0., %1/m, %2. + [ ?&w , Upl , w , Dz ; yes ] movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %2. + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %3\;\t%0., %1/m, %2. + } ) ;; ------------------------------------------------------------------------- @@ -3099,18 +3115,18 @@ (define_insn "*cond__any" ;; Predicated integer unary operations. (define_insn "@aarch64_pred_" - [(set (match_operand:SVE_FULL_I 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_I 0 "register_operand") (unspec:SVE_FULL_I - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_I - [(match_operand:SVE_FULL_I 2 "register_operand" "0, w")] + [(match_operand:SVE_FULL_I 2 "register_operand")] SVE_INT_UNARY)] UNSPEC_PRED_X))] "TARGET_SVE && >= " - "@ - \t%0., %1/m, %2. - movprfx\t%0, %2\;\t%0., %1/m, %2." - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 ; attrs: movprfx ] + [ w , Upl , 0 ; * ] \t%0., %1/m, %2. + [ ?&w , Upl , w ; yes ] movprfx\t%0, %2\;\t%0., %1/m, %2. 
+ } ) ;; Another way of expressing the REVB, REVH and REVW patterns, with this @@ -3118,36 +3134,36 @@ (define_insn "@aarch64_pred_" ;; of lanes and the data mode decides the granularity of the reversal within ;; each lane. (define_insn "@aarch64_sve_revbhw_" - [(set (match_operand:SVE_ALL 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_ALL 0 "register_operand") (unspec:SVE_ALL - [(match_operand:PRED_HSD 1 "register_operand" "Upl, Upl") + [(match_operand:PRED_HSD 1 "register_operand") (unspec:SVE_ALL - [(match_operand:SVE_ALL 2 "register_operand" "0, w")] + [(match_operand:SVE_ALL 2 "register_operand")] UNSPEC_REVBHW)] UNSPEC_PRED_X))] "TARGET_SVE && > " - "@ - rev\t%0., %1/m, %2. - movprfx\t%0, %2\;rev\t%0., %1/m, %2." - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 ; attrs: movprfx ] + [ w , Upl , 0 ; * ] rev\t%0., %1/m, %2. + [ ?&w , Upl , w ; yes ] movprfx\t%0, %2\;rev\t%0., %1/m, %2. + } ) ;; Predicated integer unary operations with merging. (define_insn "@cond_" - [(set (match_operand:SVE_FULL_I 0 "register_operand" "=w, ?&w, ?&w") + [(set (match_operand:SVE_FULL_I 0 "register_operand") (unspec:SVE_FULL_I - [(match_operand: 1 "register_operand" "Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_I - [(match_operand:SVE_FULL_I 2 "register_operand" "w, w, w")] + [(match_operand:SVE_FULL_I 2 "register_operand")] SVE_INT_UNARY) - (match_operand:SVE_FULL_I 3 "aarch64_simd_reg_or_zero" "0, Dz, w")] + (match_operand:SVE_FULL_I 3 "aarch64_simd_reg_or_zero")] UNSPEC_SEL))] "TARGET_SVE && >= " - "@ - \t%0., %1/m, %2. - movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %2. - movprfx\t%0, %3\;\t%0., %1/m, %2." - [(set_attr "movprfx" "*,yes,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , Upl , w , 0 ; * ] \t%0., %1/m, %2. + [ ?&w , Upl , w , Dz ; yes ] movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %2. + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %3\;\t%0., %1/m, %2. + } ) ;; ------------------------------------------------------------------------- @@ -3178,53 +3194,53 @@ (define_expand "2" ;; Predicated sign and zero extension from a narrower mode. (define_insn "*2" - [(set (match_operand:SVE_HSDI 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_HSDI 0 "register_operand") (unspec:SVE_HSDI - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (ANY_EXTEND:SVE_HSDI - (match_operand:SVE_PARTIAL_I 2 "register_operand" "0, w"))] + (match_operand:SVE_PARTIAL_I 2 "register_operand"))] UNSPEC_PRED_X))] "TARGET_SVE && (~ & ) == 0" - "@ - xt\t%0., %1/m, %2. - movprfx\t%0, %2\;xt\t%0., %1/m, %2." - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 ; attrs: movprfx ] + [ w , Upl , 0 ; * ] xt\t%0., %1/m, %2. + [ ?&w , Upl , w ; yes ] movprfx\t%0, %2\;xt\t%0., %1/m, %2. + } ) ;; Predicated truncate-and-sign-extend operations. (define_insn "@aarch64_pred_sxt" - [(set (match_operand:SVE_FULL_HSDI 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_HSDI 0 "register_operand") (unspec:SVE_FULL_HSDI - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (sign_extend:SVE_FULL_HSDI (truncate:SVE_PARTIAL_I - (match_operand:SVE_FULL_HSDI 2 "register_operand" "0, w")))] + (match_operand:SVE_FULL_HSDI 2 "register_operand")))] UNSPEC_PRED_X))] "TARGET_SVE && (~ & ) == 0" - "@ - sxt\t%0., %1/m, %2. - movprfx\t%0, %2\;sxt\t%0., %1/m, %2." - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 ; attrs: movprfx ] + [ w , Upl , 0 ; * ] sxt\t%0., %1/m, %2. 
+ [ ?&w , Upl , w ; yes ] movprfx\t%0, %2\;sxt\t%0., %1/m, %2. + } ) ;; Predicated truncate-and-sign-extend operations with merging. (define_insn "@aarch64_cond_sxt" - [(set (match_operand:SVE_FULL_HSDI 0 "register_operand" "=w, ?&w, ?&w") + [(set (match_operand:SVE_FULL_HSDI 0 "register_operand") (unspec:SVE_FULL_HSDI - [(match_operand: 1 "register_operand" "Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (sign_extend:SVE_FULL_HSDI (truncate:SVE_PARTIAL_I - (match_operand:SVE_FULL_HSDI 2 "register_operand" "w, w, w"))) - (match_operand:SVE_FULL_HSDI 3 "aarch64_simd_reg_or_zero" "0, Dz, w")] + (match_operand:SVE_FULL_HSDI 2 "register_operand"))) + (match_operand:SVE_FULL_HSDI 3 "aarch64_simd_reg_or_zero")] UNSPEC_SEL))] "TARGET_SVE && (~ & ) == 0" - "@ - sxt\t%0., %1/m, %2. - movprfx\t%0., %1/z, %2.\;sxt\t%0., %1/m, %2. - movprfx\t%0, %3\;sxt\t%0., %1/m, %2." - [(set_attr "movprfx" "*,yes,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , Upl , w , 0 ; * ] sxt\t%0., %1/m, %2. + [ ?&w , Upl , w , Dz ; yes ] movprfx\t%0., %1/z, %2.\;sxt\t%0., %1/m, %2. + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %3\;sxt\t%0., %1/m, %2. + } ) ;; Predicated truncate-and-zero-extend operations, merging with the @@ -3233,19 +3249,19 @@ (define_insn "@aarch64_cond_sxt" ;; The canonical form of this operation is an AND of a constant rather ;; than (zero_extend (truncate ...)). (define_insn "*cond_uxt_2" - [(set (match_operand:SVE_I 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_I 0 "register_operand") (unspec:SVE_I - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (and:SVE_I - (match_operand:SVE_I 2 "register_operand" "0, w") + (match_operand:SVE_I 2 "register_operand") (match_operand:SVE_I 3 "aarch64_sve_uxt_immediate")) (match_dup 2)] UNSPEC_SEL))] "TARGET_SVE" - "@ - uxt%e3\t%0., %1/m, %0. - movprfx\t%0, %2\;uxt%e3\t%0., %1/m, %2." - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 ; attrs: movprfx ] + [ w , Upl , 0 ; * ] uxt%e3\t%0., %1/m, %0. + [ ?&w , Upl , w ; yes ] movprfx\t%0, %2\;uxt%e3\t%0., %1/m, %2. + } ) ;; Predicated truncate-and-zero-extend operations, merging with an @@ -3257,20 +3273,20 @@ (define_insn "*cond_uxt_2" ;; as early-clobber helps to make the instruction more regular to the ;; register allocator. (define_insn "*cond_uxt_any" - [(set (match_operand:SVE_I 0 "register_operand" "=&w, ?&w, ?&w") + [(set (match_operand:SVE_I 0 "register_operand") (unspec:SVE_I - [(match_operand: 1 "register_operand" "Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (and:SVE_I - (match_operand:SVE_I 2 "register_operand" "w, w, w") + (match_operand:SVE_I 2 "register_operand") (match_operand:SVE_I 3 "aarch64_sve_uxt_immediate")) - (match_operand:SVE_I 4 "aarch64_simd_reg_or_zero" "0, Dz, w")] + (match_operand:SVE_I 4 "aarch64_simd_reg_or_zero")] UNSPEC_SEL))] "TARGET_SVE && !rtx_equal_p (operands[2], operands[4])" - "@ - uxt%e3\t%0., %1/m, %2. - movprfx\t%0., %1/z, %2.\;uxt%e3\t%0., %1/m, %2. - movprfx\t%0, %4\;uxt%e3\t%0., %1/m, %2." - [(set_attr "movprfx" "*,yes,yes")] + {@ [ cons: =0 , 1 , 2 , 4 ; attrs: movprfx ] + [ &w , Upl , w , 0 ; * ] uxt%e3\t%0., %1/m, %2. + [ ?&w , Upl , w , Dz ; yes ] movprfx\t%0., %1/z, %2.\;uxt%e3\t%0., %1/m, %2. + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %4\;uxt%e3\t%0., %1/m, %2. 
+ } ) ;; ------------------------------------------------------------------------- @@ -3325,23 +3341,23 @@ (define_expand "@aarch64_pred_cnot" ) (define_insn "*cnot" - [(set (match_operand:SVE_I 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_I 0 "register_operand") (unspec:SVE_I [(unspec: - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (match_operand:SI 5 "aarch64_sve_ptrue_flag") (eq: - (match_operand:SVE_I 2 "register_operand" "0, w") + (match_operand:SVE_I 2 "register_operand") (match_operand:SVE_I 3 "aarch64_simd_imm_zero"))] UNSPEC_PRED_Z) (match_operand:SVE_I 4 "aarch64_simd_imm_one") (match_dup 3)] UNSPEC_SEL))] "TARGET_SVE" - "@ - cnot\t%0., %1/m, %2. - movprfx\t%0, %2\;cnot\t%0., %1/m, %2." - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 ; attrs: movprfx ] + [ w , Upl , 0 ; * ] cnot\t%0., %1/m, %2. + [ ?&w , Upl , w ; yes ] movprfx\t%0, %2\;cnot\t%0., %1/m, %2. + } ) ;; Predicated logical inverse with merging. @@ -3372,16 +3388,16 @@ (define_expand "@cond_cnot" ;; Predicated logical inverse, merging with the first input. (define_insn_and_rewrite "*cond_cnot_2" - [(set (match_operand:SVE_I 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_I 0 "register_operand") (unspec:SVE_I - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") ;; Logical inverse of operand 2 (as above). (unspec:SVE_I [(unspec: [(match_operand 5) (const_int SVE_KNOWN_PTRUE) (eq: - (match_operand:SVE_I 2 "register_operand" "0, w") + (match_operand:SVE_I 2 "register_operand") (match_operand:SVE_I 3 "aarch64_simd_imm_zero"))] UNSPEC_PRED_Z) (match_operand:SVE_I 4 "aarch64_simd_imm_one") @@ -3390,14 +3406,14 @@ (define_insn_and_rewrite "*cond_cnot_2" (match_dup 2)] UNSPEC_SEL))] "TARGET_SVE" - "@ - cnot\t%0., %1/m, %0. - movprfx\t%0, %2\;cnot\t%0., %1/m, %2." + {@ [ cons: =0 , 1 , 2 ; attrs: movprfx ] + [ w , Upl , 0 ; * ] cnot\t%0., %1/m, %0. + [ ?&w , Upl , w ; yes ] movprfx\t%0, %2\;cnot\t%0., %1/m, %2. + } "&& !CONSTANT_P (operands[5])" { operands[5] = CONSTM1_RTX (mode); } - [(set_attr "movprfx" "*,yes")] ) ;; Predicated logical inverse, merging with an independent value. @@ -3408,33 +3424,33 @@ (define_insn_and_rewrite "*cond_cnot_2" ;; as earlyclobber helps to make the instruction more regular to the ;; register allocator. (define_insn_and_rewrite "*cond_cnot_any" - [(set (match_operand:SVE_I 0 "register_operand" "=&w, ?&w, ?&w") + [(set (match_operand:SVE_I 0 "register_operand") (unspec:SVE_I - [(match_operand: 1 "register_operand" "Upl, Upl, Upl") + [(match_operand: 1 "register_operand") ;; Logical inverse of operand 2 (as above). (unspec:SVE_I [(unspec: [(match_operand 5) (const_int SVE_KNOWN_PTRUE) (eq: - (match_operand:SVE_I 2 "register_operand" "w, w, w") + (match_operand:SVE_I 2 "register_operand") (match_operand:SVE_I 3 "aarch64_simd_imm_zero"))] UNSPEC_PRED_Z) (match_operand:SVE_I 4 "aarch64_simd_imm_one") (match_dup 3)] UNSPEC_SEL) - (match_operand:SVE_I 6 "aarch64_simd_reg_or_zero" "0, Dz, w")] + (match_operand:SVE_I 6 "aarch64_simd_reg_or_zero")] UNSPEC_SEL))] "TARGET_SVE && !rtx_equal_p (operands[2], operands[6])" - "@ - cnot\t%0., %1/m, %2. - movprfx\t%0., %1/z, %2.\;cnot\t%0., %1/m, %2. - movprfx\t%0, %6\;cnot\t%0., %1/m, %2." + {@ [ cons: =0 , 1 , 2 , 6 ; attrs: movprfx ] + [ &w , Upl , w , 0 ; * ] cnot\t%0., %1/m, %2. + [ ?&w , Upl , w , Dz ; yes ] movprfx\t%0., %1/z, %2.\;cnot\t%0., %1/m, %2. + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %6\;cnot\t%0., %1/m, %2. 
+ } "&& !CONSTANT_P (operands[5])" { operands[5] = CONSTM1_RTX (mode); } - [(set_attr "movprfx" "*,yes,yes")] ) ;; ------------------------------------------------------------------------- @@ -3499,17 +3515,17 @@ (define_expand "2" ;; Predicated floating-point unary operations. (define_insn "@aarch64_pred_" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (match_operand:SI 3 "aarch64_sve_gp_strictness") - (match_operand:SVE_FULL_F 2 "register_operand" "0, w")] + (match_operand:SVE_FULL_F 2 "register_operand")] SVE_COND_FP_UNARY))] "TARGET_SVE" - "@ - \t%0., %1/m, %2. - movprfx\t%0, %2\;\t%0., %1/m, %2." - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 ; attrs: movprfx ] + [ w , Upl , 0 ; * ] \t%0., %1/m, %2. + [ ?&w , Upl , w ; yes ] movprfx\t%0, %2\;\t%0., %1/m, %2. + } ) ;; Predicated floating-point unary arithmetic with merging. @@ -3529,43 +3545,43 @@ (define_expand "@cond_" ;; Predicated floating-point unary arithmetic, merging with the first input. (define_insn_and_rewrite "*cond__2_relaxed" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_F [(match_operand 3) (const_int SVE_RELAXED_GP) - (match_operand:SVE_FULL_F 2 "register_operand" "0, w")] + (match_operand:SVE_FULL_F 2 "register_operand")] SVE_COND_FP_UNARY) (match_dup 2)] UNSPEC_SEL))] "TARGET_SVE" - "@ - \t%0., %1/m, %0. - movprfx\t%0, %2\;\t%0., %1/m, %2." + {@ [ cons: =0 , 1 , 2 ; attrs: movprfx ] + [ w , Upl , 0 ; * ] \t%0., %1/m, %0. + [ ?&w , Upl , w ; yes ] movprfx\t%0, %2\;\t%0., %1/m, %2. + } "&& !rtx_equal_p (operands[1], operands[3])" { operands[3] = copy_rtx (operands[1]); } - [(set_attr "movprfx" "*,yes")] ) (define_insn "*cond__2_strict" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_F [(match_dup 1) (const_int SVE_STRICT_GP) - (match_operand:SVE_FULL_F 2 "register_operand" "0, w")] + (match_operand:SVE_FULL_F 2 "register_operand")] SVE_COND_FP_UNARY) (match_dup 2)] UNSPEC_SEL))] "TARGET_SVE" - "@ - \t%0., %1/m, %0. - movprfx\t%0, %2\;\t%0., %1/m, %2." - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 ; attrs: movprfx ] + [ w , Upl , 0 ; * ] \t%0., %1/m, %0. + [ ?&w , Upl , w ; yes ] movprfx\t%0, %2\;\t%0., %1/m, %2. + } ) ;; Predicated floating-point unary arithmetic, merging with an independent @@ -3577,45 +3593,45 @@ (define_insn "*cond__2_strict" ;; as earlyclobber helps to make the instruction more regular to the ;; register allocator. 
(define_insn_and_rewrite "*cond__any_relaxed" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=&w, ?&w, ?&w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_F [(match_operand 4) (const_int SVE_RELAXED_GP) - (match_operand:SVE_FULL_F 2 "register_operand" "w, w, w")] + (match_operand:SVE_FULL_F 2 "register_operand")] SVE_COND_FP_UNARY) - (match_operand:SVE_FULL_F 3 "aarch64_simd_reg_or_zero" "0, Dz, w")] + (match_operand:SVE_FULL_F 3 "aarch64_simd_reg_or_zero")] UNSPEC_SEL))] "TARGET_SVE && !rtx_equal_p (operands[2], operands[3])" - "@ - \t%0., %1/m, %2. - movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %2. - movprfx\t%0, %3\;\t%0., %1/m, %2." + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ &w , Upl , w , 0 ; * ] \t%0., %1/m, %2. + [ ?&w , Upl , w , Dz ; yes ] movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %2. + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %3\;\t%0., %1/m, %2. + } "&& !rtx_equal_p (operands[1], operands[4])" { operands[4] = copy_rtx (operands[1]); } - [(set_attr "movprfx" "*,yes,yes")] ) (define_insn "*cond__any_strict" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=&w, ?&w, ?&w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_F [(match_dup 1) (const_int SVE_STRICT_GP) - (match_operand:SVE_FULL_F 2 "register_operand" "w, w, w")] + (match_operand:SVE_FULL_F 2 "register_operand")] SVE_COND_FP_UNARY) - (match_operand:SVE_FULL_F 3 "aarch64_simd_reg_or_zero" "0, Dz, w")] + (match_operand:SVE_FULL_F 3 "aarch64_simd_reg_or_zero")] UNSPEC_SEL))] "TARGET_SVE && !rtx_equal_p (operands[2], operands[3])" - "@ - \t%0., %1/m, %2. - movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %2. - movprfx\t%0, %3\;\t%0., %1/m, %2." - [(set_attr "movprfx" "*,yes,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ &w , Upl , w , 0 ; * ] \t%0., %1/m, %2. + [ ?&w , Upl , w , Dz ; yes ] movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %2. + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %3\;\t%0., %1/m, %2. + } ) ;; ------------------------------------------------------------------------- @@ -3754,19 +3770,20 @@ (define_expand "3" ;; and would make the instruction seem less uniform to the register ;; allocator. (define_insn_and_split "@aarch64_pred_" - [(set (match_operand:SVE_I 0 "register_operand" "=w, w, ?&w, ?&w") + [(set (match_operand:SVE_I 0 "register_operand") (unspec:SVE_I - [(match_operand: 1 "register_operand" "Upl, Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (SVE_INT_BINARY_IMM:SVE_I - (match_operand:SVE_I 2 "register_operand" "%0, 0, w, w") - (match_operand:SVE_I 3 "aarch64_sve__operand" ", w, , w"))] + (match_operand:SVE_I 2 "register_operand") + (match_operand:SVE_I 3 "aarch64_sve__operand"))] UNSPEC_PRED_X))] "TARGET_SVE" - "@ - # - \t%0., %1/m, %0., %3. - # - movprfx\t%0, %2\;\t%0., %1/m, %0., %3." + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , Upl , %0 , ; * ] # + [ w , Upl , 0 , w ; * ] \t%0., %1/m, %0., %3. + [ ?&w , Upl , w , ; yes ] # + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %2\;\t%0., %1/m, %0., %3. + } ; Split the unpredicated form after reload, so that we don't have ; the unnecessary PTRUE. 
"&& reload_completed @@ -3774,7 +3791,6 @@ (define_insn_and_split "@aarch64_pred_" [(set (match_dup 0) (SVE_INT_BINARY_IMM:SVE_I (match_dup 2) (match_dup 3)))] "" - [(set_attr "movprfx" "*,*,yes,yes")] ) ;; Unpredicated binary operations with a constant (post-RA only). @@ -3807,57 +3823,58 @@ (define_expand "@cond_" ;; Predicated integer operations, merging with the first input. (define_insn "*cond__2" - [(set (match_operand:SVE_I 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_I 0 "register_operand") (unspec:SVE_I - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (SVE_INT_BINARY:SVE_I - (match_operand:SVE_I 2 "register_operand" "0, w") - (match_operand:SVE_I 3 "register_operand" "w, w")) + (match_operand:SVE_I 2 "register_operand") + (match_operand:SVE_I 3 "register_operand")) (match_dup 2)] UNSPEC_SEL))] "TARGET_SVE" - "@ - \t%0., %1/m, %0., %3. - movprfx\t%0, %2\;\t%0., %1/m, %0., %3." - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , Upl , 0 , w ; * ] \t%0., %1/m, %0., %3. + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %2\;\t%0., %1/m, %0., %3. + } ) ;; Predicated integer operations, merging with the second input. (define_insn "*cond__3" - [(set (match_operand:SVE_I 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_I 0 "register_operand") (unspec:SVE_I - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (SVE_INT_BINARY:SVE_I - (match_operand:SVE_I 2 "register_operand" "w, w") - (match_operand:SVE_I 3 "register_operand" "0, w")) + (match_operand:SVE_I 2 "register_operand") + (match_operand:SVE_I 3 "register_operand")) (match_dup 3)] UNSPEC_SEL))] "TARGET_SVE" - "@ - \t%0., %1/m, %0., %2. - movprfx\t%0, %3\;\t%0., %1/m, %0., %2." - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , Upl , w , 0 ; * ] \t%0., %1/m, %0., %2. + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %3\;\t%0., %1/m, %0., %2. + } ) ;; Predicated integer operations, merging with an independent value. (define_insn_and_rewrite "*cond__any" - [(set (match_operand:SVE_I 0 "register_operand" "=&w, &w, &w, &w, ?&w") + [(set (match_operand:SVE_I 0 "register_operand") (unspec:SVE_I - [(match_operand: 1 "register_operand" "Upl, Upl, Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (SVE_INT_BINARY:SVE_I - (match_operand:SVE_I 2 "register_operand" "0, w, w, w, w") - (match_operand:SVE_I 3 "register_operand" "w, 0, w, w, w")) - (match_operand:SVE_I 4 "aarch64_simd_reg_or_zero" "Dz, Dz, Dz, 0, w")] + (match_operand:SVE_I 2 "register_operand") + (match_operand:SVE_I 3 "register_operand")) + (match_operand:SVE_I 4 "aarch64_simd_reg_or_zero")] UNSPEC_SEL))] "TARGET_SVE && !rtx_equal_p (operands[2], operands[4]) && !rtx_equal_p (operands[3], operands[4])" - "@ - movprfx\t%0., %1/z, %0.\;\t%0., %1/m, %0., %3. - movprfx\t%0., %1/z, %0.\;\t%0., %1/m, %0., %2. - movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %0., %3. - movprfx\t%0., %1/m, %2.\;\t%0., %1/m, %0., %3. - #" + {@ [ cons: =0 , 1 , 2 , 3 , 4 ] + [ &w , Upl , 0 , w , Dz ] movprfx\t%0., %1/z, %0.\;\t%0., %1/m, %0., %3. + [ &w , Upl , w , 0 , Dz ] movprfx\t%0., %1/z, %0.\;\t%0., %1/m, %0., %2. + [ &w , Upl , w , w , Dz ] movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %0., %3. + [ &w , Upl , w , w , 0 ] movprfx\t%0., %1/m, %2.\;\t%0., %1/m, %0., %3. 
+ [ ?&w , Upl , w , w , w ] # + } "&& reload_completed && register_operand (operands[4], mode) && !rtx_equal_p (operands[0], operands[4])" @@ -3886,19 +3903,19 @@ (define_insn_and_rewrite "*cond__any" ;; ------------------------------------------------------------------------- (define_insn "add3" - [(set (match_operand:SVE_I 0 "register_operand" "=w, w, w, ?w, ?w, w") + [(set (match_operand:SVE_I 0 "register_operand") (plus:SVE_I - (match_operand:SVE_I 1 "register_operand" "%0, 0, 0, w, w, w") - (match_operand:SVE_I 2 "aarch64_sve_add_operand" "vsa, vsn, vsi, vsa, vsn, w")))] + (match_operand:SVE_I 1 "register_operand") + (match_operand:SVE_I 2 "aarch64_sve_add_operand")))] "TARGET_SVE" - "@ - add\t%0., %0., #%D2 - sub\t%0., %0., #%N2 - * return aarch64_output_sve_vector_inc_dec (\"%0.\", operands[2]); - movprfx\t%0, %1\;add\t%0., %0., #%D2 - movprfx\t%0, %1\;sub\t%0., %0., #%N2 - add\t%0., %1., %2." - [(set_attr "movprfx" "*,*,*,yes,yes,*")] + {@ [ cons: =0 , 1 , 2 ; attrs: movprfx ] + [ w , %0 , vsa ; * ] add\t%0., %0., #%D2 + [ w , 0 , vsn ; * ] sub\t%0., %0., #%N2 + [ w , 0 , vsi ; * ] << aarch64_output_sve_vector_inc_dec ("%0.", operands[2]); + [ ?w , w , vsa ; yes ] movprfx\t%0, %1\;add\t%0., %0., #%D2 + [ ?w , w , vsn ; yes ] movprfx\t%0, %1\;sub\t%0., %0., #%N2 + [ w , w , w ; * ] add\t%0., %1., %2. + } ) ;; Merging forms are handled through SVE_INT_BINARY. @@ -3912,16 +3929,16 @@ (define_insn "add3" ;; ------------------------------------------------------------------------- (define_insn "sub3" - [(set (match_operand:SVE_I 0 "register_operand" "=w, w, ?&w") + [(set (match_operand:SVE_I 0 "register_operand") (minus:SVE_I - (match_operand:SVE_I 1 "aarch64_sve_arith_operand" "w, vsa, vsa") - (match_operand:SVE_I 2 "register_operand" "w, 0, w")))] + (match_operand:SVE_I 1 "aarch64_sve_arith_operand") + (match_operand:SVE_I 2 "register_operand")))] "TARGET_SVE" - "@ - sub\t%0., %1., %2. - subr\t%0., %0., #%D1 - movprfx\t%0, %2\;subr\t%0., %0., #%D1" - [(set_attr "movprfx" "*,*,yes")] + {@ [ cons: =0 , 1 , 2 ; attrs: movprfx ] + [ w , w , w ; * ] sub\t%0., %1., %2. + [ w , vsa , 0 ; * ] subr\t%0., %0., #%D1 + [ ?&w , vsa , w ; yes ] movprfx\t%0, %2\;subr\t%0., %0., #%D1 + } ) ;; Merging forms are handled through SVE_INT_BINARY. @@ -4095,13 +4112,13 @@ (define_expand "abd3" ;; Predicated integer absolute difference. (define_insn "@aarch64_pred_abd" - [(set (match_operand:SVE_I 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_I 0 "register_operand") (minus:SVE_I (unspec:SVE_I - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (USMAX:SVE_I - (match_operand:SVE_I 2 "register_operand" "%0, w") - (match_operand:SVE_I 3 "register_operand" "w, w"))] + (match_operand:SVE_I 2 "register_operand") + (match_operand:SVE_I 3 "register_operand"))] UNSPEC_PRED_X) (unspec:SVE_I [(match_dup 1) @@ -4110,10 +4127,10 @@ (define_insn "@aarch64_pred_abd" (match_dup 3))] UNSPEC_PRED_X)))] "TARGET_SVE" - "@ - abd\t%0., %1/m, %0., %3. - movprfx\t%0, %2\;abd\t%0., %1/m, %0., %3." - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , Upl , %0 , w ; * ] abd\t%0., %1/m, %0., %3. + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %2\;abd\t%0., %1/m, %0., %3. + } ) (define_expand "@aarch64_cond_abd" @@ -4143,15 +4160,15 @@ (define_expand "@aarch64_cond_abd" ;; Predicated integer absolute difference, merging with the first input. 
(define_insn_and_rewrite "*aarch64_cond_abd_2" - [(set (match_operand:SVE_I 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_I 0 "register_operand") (unspec:SVE_I - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (minus:SVE_I (unspec:SVE_I [(match_operand 4) (USMAX:SVE_I - (match_operand:SVE_I 2 "register_operand" "0, w") - (match_operand:SVE_I 3 "register_operand" "w, w"))] + (match_operand:SVE_I 2 "register_operand") + (match_operand:SVE_I 3 "register_operand"))] UNSPEC_PRED_X) (unspec:SVE_I [(match_operand 5) @@ -4162,27 +4179,27 @@ (define_insn_and_rewrite "*aarch64_cond_abd_2" (match_dup 2)] UNSPEC_SEL))] "TARGET_SVE" - "@ - abd\t%0., %1/m, %0., %3. - movprfx\t%0, %2\;abd\t%0., %1/m, %0., %3." + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , Upl , 0 , w ; * ] abd\t%0., %1/m, %0., %3. + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %2\;abd\t%0., %1/m, %0., %3. + } "&& (!CONSTANT_P (operands[4]) || !CONSTANT_P (operands[5]))" { operands[4] = operands[5] = CONSTM1_RTX (mode); } - [(set_attr "movprfx" "*,yes")] ) ;; Predicated integer absolute difference, merging with the second input. (define_insn_and_rewrite "*aarch64_cond_abd_3" - [(set (match_operand:SVE_I 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_I 0 "register_operand") (unspec:SVE_I - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (minus:SVE_I (unspec:SVE_I [(match_operand 4) (USMAX:SVE_I - (match_operand:SVE_I 2 "register_operand" "w, w") - (match_operand:SVE_I 3 "register_operand" "0, w"))] + (match_operand:SVE_I 2 "register_operand") + (match_operand:SVE_I 3 "register_operand"))] UNSPEC_PRED_X) (unspec:SVE_I [(match_operand 5) @@ -4193,27 +4210,27 @@ (define_insn_and_rewrite "*aarch64_cond_abd_3" (match_dup 3)] UNSPEC_SEL))] "TARGET_SVE" - "@ - abd\t%0., %1/m, %0., %2. - movprfx\t%0, %3\;abd\t%0., %1/m, %0., %2." + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , Upl , w , 0 ; * ] abd\t%0., %1/m, %0., %2. + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %3\;abd\t%0., %1/m, %0., %2. + } "&& (!CONSTANT_P (operands[4]) || !CONSTANT_P (operands[5]))" { operands[4] = operands[5] = CONSTM1_RTX (mode); } - [(set_attr "movprfx" "*,yes")] ) ;; Predicated integer absolute difference, merging with an independent value. (define_insn_and_rewrite "*aarch64_cond_abd_any" - [(set (match_operand:SVE_I 0 "register_operand" "=&w, &w, &w, &w, ?&w") + [(set (match_operand:SVE_I 0 "register_operand") (unspec:SVE_I - [(match_operand: 1 "register_operand" "Upl, Upl, Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (minus:SVE_I (unspec:SVE_I [(match_operand 5) (USMAX:SVE_I - (match_operand:SVE_I 2 "register_operand" "0, w, w, w, w") - (match_operand:SVE_I 3 "register_operand" "w, 0, w, w, w"))] + (match_operand:SVE_I 2 "register_operand") + (match_operand:SVE_I 3 "register_operand"))] UNSPEC_PRED_X) (unspec:SVE_I [(match_operand 6) @@ -4221,17 +4238,18 @@ (define_insn_and_rewrite "*aarch64_cond_abd_any" (match_dup 2) (match_dup 3))] UNSPEC_PRED_X)) - (match_operand:SVE_I 4 "aarch64_simd_reg_or_zero" "Dz, Dz, Dz, 0, w")] + (match_operand:SVE_I 4 "aarch64_simd_reg_or_zero")] UNSPEC_SEL))] "TARGET_SVE && !rtx_equal_p (operands[2], operands[4]) && !rtx_equal_p (operands[3], operands[4])" - "@ - movprfx\t%0., %1/z, %0.\;abd\t%0., %1/m, %0., %3. - movprfx\t%0., %1/z, %0.\;abd\t%0., %1/m, %0., %2. - movprfx\t%0., %1/z, %2.\;abd\t%0., %1/m, %0., %3. - movprfx\t%0., %1/m, %2.\;abd\t%0., %1/m, %0., %3. 
- #" + {@ [ cons: =0 , 1 , 2 , 3 , 4 ] + [ &w , Upl , 0 , w , Dz ] movprfx\t%0., %1/z, %0.\;abd\t%0., %1/m, %0., %3. + [ &w , Upl , w , 0 , Dz ] movprfx\t%0., %1/z, %0.\;abd\t%0., %1/m, %0., %2. + [ &w , Upl , w , w , Dz ] movprfx\t%0., %1/z, %2.\;abd\t%0., %1/m, %0., %3. + [ &w , Upl , w , w , 0 ] movprfx\t%0., %1/m, %2.\;abd\t%0., %1/m, %0., %3. + [ ?&w , Upl , w , w , w ] # + } "&& 1" { if (!CONSTANT_P (operands[5]) || !CONSTANT_P (operands[6])) @@ -4261,32 +4279,32 @@ (define_insn_and_rewrite "*aarch64_cond_abd_any" ;; Unpredicated saturating signed addition and subtraction. (define_insn "@aarch64_sve_" - [(set (match_operand:SVE_FULL_I 0 "register_operand" "=w, w, ?&w, ?&w, w") + [(set (match_operand:SVE_FULL_I 0 "register_operand") (SBINQOPS:SVE_FULL_I - (match_operand:SVE_FULL_I 1 "register_operand" "0, 0, w, w, w") - (match_operand:SVE_FULL_I 2 "aarch64_sve_sqadd_operand" "vsQ, vsS, vsQ, vsS, w")))] + (match_operand:SVE_FULL_I 1 "register_operand") + (match_operand:SVE_FULL_I 2 "aarch64_sve_sqadd_operand")))] "TARGET_SVE" - "@ - \t%0., %0., #%D2 - \t%0., %0., #%N2 - movprfx\t%0, %1\;\t%0., %0., #%D2 - movprfx\t%0, %1\;\t%0., %0., #%N2 - \t%0., %1., %2." - [(set_attr "movprfx" "*,*,yes,yes,*")] + {@ [ cons: =0 , 1 , 2 ; attrs: movprfx ] + [ w , 0 , vsQ ; * ] \t%0., %0., #%D2 + [ w , 0 , vsS ; * ] \t%0., %0., #%N2 + [ ?&w , w , vsQ ; yes ] movprfx\t%0, %1\;\t%0., %0., #%D2 + [ ?&w , w , vsS ; yes ] movprfx\t%0, %1\;\t%0., %0., #%N2 + [ w , w , w ; * ] \t%0., %1., %2. + } ) ;; Unpredicated saturating unsigned addition and subtraction. (define_insn "@aarch64_sve_" - [(set (match_operand:SVE_FULL_I 0 "register_operand" "=w, ?&w, w") + [(set (match_operand:SVE_FULL_I 0 "register_operand") (UBINQOPS:SVE_FULL_I - (match_operand:SVE_FULL_I 1 "register_operand" "0, w, w") - (match_operand:SVE_FULL_I 2 "aarch64_sve_arith_operand" "vsa, vsa, w")))] + (match_operand:SVE_FULL_I 1 "register_operand") + (match_operand:SVE_FULL_I 2 "aarch64_sve_arith_operand")))] "TARGET_SVE" - "@ - \t%0., %0., #%D2 - movprfx\t%0, %1\;\t%0., %0., #%D2 - \t%0., %1., %2." - [(set_attr "movprfx" "*,yes,*")] + {@ [ cons: =0 , 1 , 2 ; attrs: movprfx ] + [ w , 0 , vsa ; * ] \t%0., %0., #%D2 + [ ?&w , w , vsa ; yes ] movprfx\t%0, %1\;\t%0., %0., #%D2 + [ w , w , w ; * ] \t%0., %1., %2. + } ) ;; ------------------------------------------------------------------------- @@ -4315,19 +4333,19 @@ (define_expand "mul3_highpart" ;; Predicated highpart multiplication. (define_insn "@aarch64_pred_" - [(set (match_operand:SVE_I 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_I 0 "register_operand") (unspec:SVE_I - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_I - [(match_operand:SVE_I 2 "register_operand" "%0, w") - (match_operand:SVE_I 3 "register_operand" "w, w")] + [(match_operand:SVE_I 2 "register_operand") + (match_operand:SVE_I 3 "register_operand")] MUL_HIGHPART)] UNSPEC_PRED_X))] "TARGET_SVE" - "@ - mulh\t%0., %1/m, %0., %3. - movprfx\t%0, %2\;mulh\t%0., %1/m, %0., %3." - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , Upl , %0 , w ; * ] mulh\t%0., %1/m, %0., %3. + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %2\;mulh\t%0., %1/m, %0., %3. + } ) ;; Predicated highpart multiplications with merging. @@ -4351,36 +4369,38 @@ (define_expand "@cond_" ;; Predicated highpart multiplications, merging with the first input. 
(define_insn "*cond__2" - [(set (match_operand:SVE_FULL_I 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_I 0 "register_operand") (unspec:SVE_FULL_I - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_I - [(match_operand:SVE_FULL_I 2 "register_operand" "0, w") - (match_operand:SVE_FULL_I 3 "register_operand" "w, w")] + [(match_operand:SVE_FULL_I 2 "register_operand") + (match_operand:SVE_FULL_I 3 "register_operand")] MUL_HIGHPART) (match_dup 2)] UNSPEC_SEL))] "TARGET_SVE" - "@ - \t%0., %1/m, %0., %3. - movprfx\t%0, %2\;\t%0., %1/m, %0., %3." - [(set_attr "movprfx" "*,yes")]) + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , Upl , 0 , w ; * ] \t%0., %1/m, %0., %3. + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %2\;\t%0., %1/m, %0., %3. + } +) ;; Predicated highpart multiplications, merging with zero. (define_insn "*cond__z" - [(set (match_operand:SVE_FULL_I 0 "register_operand" "=&w, &w") + [(set (match_operand:SVE_FULL_I 0 "register_operand") (unspec:SVE_FULL_I - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_I - [(match_operand:SVE_FULL_I 2 "register_operand" "%0, w") - (match_operand:SVE_FULL_I 3 "register_operand" "w, w")] + [(match_operand:SVE_FULL_I 2 "register_operand") + (match_operand:SVE_FULL_I 3 "register_operand")] MUL_HIGHPART) (match_operand:SVE_FULL_I 4 "aarch64_simd_imm_zero")] UNSPEC_SEL))] "TARGET_SVE" - "@ - movprfx\t%0., %1/z, %0.\;\t%0., %1/m, %0., %3. - movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %0., %3." + {@ [ cons: =0 , 1 , 2 , 3 ] + [ &w , Upl , %0 , w ] movprfx\t%0., %1/z, %0.\;\t%0., %1/m, %0., %3. + [ &w , Upl , w , w ] movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %0., %3. + } [(set_attr "movprfx" "yes")]) ;; ------------------------------------------------------------------------- @@ -4410,19 +4430,19 @@ (define_expand "3" ;; Integer division predicated with a PTRUE. (define_insn "@aarch64_pred_" - [(set (match_operand:SVE_FULL_SDI 0 "register_operand" "=w, w, ?&w") + [(set (match_operand:SVE_FULL_SDI 0 "register_operand") (unspec:SVE_FULL_SDI - [(match_operand: 1 "register_operand" "Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (SVE_INT_BINARY_SD:SVE_FULL_SDI - (match_operand:SVE_FULL_SDI 2 "register_operand" "0, w, w") - (match_operand:SVE_FULL_SDI 3 "register_operand" "w, 0, w"))] + (match_operand:SVE_FULL_SDI 2 "register_operand") + (match_operand:SVE_FULL_SDI 3 "register_operand"))] UNSPEC_PRED_X))] "TARGET_SVE" - "@ - \t%0., %1/m, %0., %3. - r\t%0., %1/m, %0., %2. - movprfx\t%0, %2\;\t%0., %1/m, %0., %3." - [(set_attr "movprfx" "*,*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , Upl , 0 , w ; * ] \t%0., %1/m, %0., %3. + [ w , Upl , w , 0 ; * ] r\t%0., %1/m, %0., %2. + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %2\;\t%0., %1/m, %0., %3. + } ) ;; Predicated integer division with merging. @@ -4440,57 +4460,58 @@ (define_expand "@cond_" ;; Predicated integer division, merging with the first input. 
(define_insn "*cond__2" - [(set (match_operand:SVE_FULL_SDI 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_SDI 0 "register_operand") (unspec:SVE_FULL_SDI - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (SVE_INT_BINARY_SD:SVE_FULL_SDI - (match_operand:SVE_FULL_SDI 2 "register_operand" "0, w") - (match_operand:SVE_FULL_SDI 3 "register_operand" "w, w")) + (match_operand:SVE_FULL_SDI 2 "register_operand") + (match_operand:SVE_FULL_SDI 3 "register_operand")) (match_dup 2)] UNSPEC_SEL))] "TARGET_SVE" - "@ - \t%0., %1/m, %0., %3. - movprfx\t%0, %2\;\t%0., %1/m, %0., %3." - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , Upl , 0 , w ; * ] \t%0., %1/m, %0., %3. + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %2\;\t%0., %1/m, %0., %3. + } ) ;; Predicated integer division, merging with the second input. (define_insn "*cond__3" - [(set (match_operand:SVE_FULL_SDI 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_SDI 0 "register_operand") (unspec:SVE_FULL_SDI - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (SVE_INT_BINARY_SD:SVE_FULL_SDI - (match_operand:SVE_FULL_SDI 2 "register_operand" "w, w") - (match_operand:SVE_FULL_SDI 3 "register_operand" "0, w")) + (match_operand:SVE_FULL_SDI 2 "register_operand") + (match_operand:SVE_FULL_SDI 3 "register_operand")) (match_dup 3)] UNSPEC_SEL))] "TARGET_SVE" - "@ - \t%0., %1/m, %0., %2. - movprfx\t%0, %3\;\t%0., %1/m, %0., %2." - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , Upl , w , 0 ; * ] \t%0., %1/m, %0., %2. + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %3\;\t%0., %1/m, %0., %2. + } ) ;; Predicated integer division, merging with an independent value. (define_insn_and_rewrite "*cond__any" - [(set (match_operand:SVE_FULL_SDI 0 "register_operand" "=&w, &w, &w, &w, ?&w") + [(set (match_operand:SVE_FULL_SDI 0 "register_operand") (unspec:SVE_FULL_SDI - [(match_operand: 1 "register_operand" "Upl, Upl, Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (SVE_INT_BINARY_SD:SVE_FULL_SDI - (match_operand:SVE_FULL_SDI 2 "register_operand" "0, w, w, w, w") - (match_operand:SVE_FULL_SDI 3 "register_operand" "w, 0, w, w, w")) - (match_operand:SVE_FULL_SDI 4 "aarch64_simd_reg_or_zero" "Dz, Dz, Dz, 0, w")] + (match_operand:SVE_FULL_SDI 2 "register_operand") + (match_operand:SVE_FULL_SDI 3 "register_operand")) + (match_operand:SVE_FULL_SDI 4 "aarch64_simd_reg_or_zero")] UNSPEC_SEL))] "TARGET_SVE && !rtx_equal_p (operands[2], operands[4]) && !rtx_equal_p (operands[3], operands[4])" - "@ - movprfx\t%0., %1/z, %0.\;\t%0., %1/m, %0., %3. - movprfx\t%0., %1/z, %0.\;\t%0., %1/m, %0., %2. - movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %0., %3. - movprfx\t%0., %1/m, %2.\;\t%0., %1/m, %0., %3. - #" + {@ [ cons: =0 , 1 , 2 , 3 , 4 ] + [ &w , Upl , 0 , w , Dz ] movprfx\t%0., %1/z, %0.\;\t%0., %1/m, %0., %3. + [ &w , Upl , w , 0 , Dz ] movprfx\t%0., %1/z, %0.\;\t%0., %1/m, %0., %2. + [ &w , Upl , w , w , Dz ] movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %0., %3. + [ &w , Upl , w , w , 0 ] movprfx\t%0., %1/m, %2.\;\t%0., %1/m, %0., %3. + [ ?&w , Upl , w , w , w ] # + } "&& reload_completed && register_operand (operands[4], mode) && !rtx_equal_p (operands[0], operands[4])" @@ -4513,16 +4534,16 @@ (define_insn_and_rewrite "*cond__any" ;; Unpredicated integer binary logical operations. 
(define_insn "3" - [(set (match_operand:SVE_I 0 "register_operand" "=w, ?w, w") + [(set (match_operand:SVE_I 0 "register_operand") (LOGICAL:SVE_I - (match_operand:SVE_I 1 "register_operand" "%0, w, w") - (match_operand:SVE_I 2 "aarch64_sve_logical_operand" "vsl, vsl, w")))] + (match_operand:SVE_I 1 "register_operand") + (match_operand:SVE_I 2 "aarch64_sve_logical_operand")))] "TARGET_SVE" - "@ - \t%0., %0., #%C2 - movprfx\t%0, %1\;\t%0., %0., #%C2 - \t%0.d, %1.d, %2.d" - [(set_attr "movprfx" "*,yes,*")] + {@ [ cons: =0 , 1 , 2 ; attrs: movprfx ] + [ w , %0 , vsl ; * ] \t%0., %0., #%C2 + [ ?w , w , vsl ; yes ] movprfx\t%0, %1\;\t%0., %0., #%C2 + [ w , w , w ; * ] \t%0.d, %1.d, %2.d + } ) ;; Merging forms are handled through SVE_INT_BINARY. @@ -4582,39 +4603,40 @@ (define_expand "@cond_bic" ;; Predicated integer BIC, merging with the first input. (define_insn "*cond_bic_2" - [(set (match_operand:SVE_I 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_I 0 "register_operand") (unspec:SVE_I - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (and:SVE_I (not:SVE_I - (match_operand:SVE_I 3 "register_operand" "w, w")) - (match_operand:SVE_I 2 "register_operand" "0, w")) + (match_operand:SVE_I 3 "register_operand")) + (match_operand:SVE_I 2 "register_operand")) (match_dup 2)] UNSPEC_SEL))] "TARGET_SVE" - "@ - bic\t%0., %1/m, %0., %3. - movprfx\t%0, %2\;bic\t%0., %1/m, %0., %3." - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , Upl , 0 , w ; * ] bic\t%0., %1/m, %0., %3. + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %2\;bic\t%0., %1/m, %0., %3. + } ) ;; Predicated integer BIC, merging with an independent value. (define_insn_and_rewrite "*cond_bic_any" - [(set (match_operand:SVE_I 0 "register_operand" "=&w, &w, &w, ?&w") + [(set (match_operand:SVE_I 0 "register_operand") (unspec:SVE_I - [(match_operand: 1 "register_operand" "Upl, Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (and:SVE_I (not:SVE_I - (match_operand:SVE_I 3 "register_operand" "w, w, w, w")) - (match_operand:SVE_I 2 "register_operand" "0, w, w, w")) - (match_operand:SVE_I 4 "aarch64_simd_reg_or_zero" "Dz, Dz, 0, w")] + (match_operand:SVE_I 3 "register_operand")) + (match_operand:SVE_I 2 "register_operand")) + (match_operand:SVE_I 4 "aarch64_simd_reg_or_zero")] UNSPEC_SEL))] "TARGET_SVE && !rtx_equal_p (operands[2], operands[4])" - "@ - movprfx\t%0., %1/z, %0.\;bic\t%0., %1/m, %0., %3. - movprfx\t%0., %1/z, %2.\;bic\t%0., %1/m, %0., %3. - movprfx\t%0., %1/m, %2.\;bic\t%0., %1/m, %0., %3. - #" + {@ [ cons: =0 , 1 , 2 , 3 , 4 ] + [ &w , Upl , 0 , w , Dz ] movprfx\t%0., %1/z, %0.\;bic\t%0., %1/m, %0., %3. + [ &w , Upl , w , w , Dz ] movprfx\t%0., %1/z, %2.\;bic\t%0., %1/m, %0., %3. + [ &w , Upl , w , w , 0 ] movprfx\t%0., %1/m, %2.\;bic\t%0., %1/m, %0., %3. + [ ?&w , Upl , w , w , w ] # + } "&& reload_completed && register_operand (operands[4], mode) && !rtx_equal_p (operands[0], operands[4])" @@ -4684,24 +4706,24 @@ (define_expand "v3" ;; likely to gain much and would make the instruction seem less uniform ;; to the register allocator. 
(define_insn_and_split "@aarch64_pred_" - [(set (match_operand:SVE_I 0 "register_operand" "=w, w, w, ?&w") + [(set (match_operand:SVE_I 0 "register_operand") (unspec:SVE_I - [(match_operand: 1 "register_operand" "Upl, Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (ASHIFT:SVE_I - (match_operand:SVE_I 2 "register_operand" "w, 0, w, w") - (match_operand:SVE_I 3 "aarch64_sve_shift_operand" "D, w, 0, w"))] + (match_operand:SVE_I 2 "register_operand") + (match_operand:SVE_I 3 "aarch64_sve_shift_operand"))] UNSPEC_PRED_X))] "TARGET_SVE" - "@ - # - \t%0., %1/m, %0., %3. - r\t%0., %1/m, %3., %2. - movprfx\t%0, %2\;\t%0., %1/m, %0., %3." + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , Upl , w , D ; * ] # + [ w , Upl , 0 , w ; * ] \t%0., %1/m, %0., %3. + [ w , Upl , w , 0 ; * ] r\t%0., %1/m, %3., %2. + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %2\;\t%0., %1/m, %0., %3. + } "&& reload_completed && !register_operand (operands[3], mode)" [(set (match_dup 0) (ASHIFT:SVE_I (match_dup 2) (match_dup 3)))] "" - [(set_attr "movprfx" "*,*,*,yes")] ) ;; Unpredicated shift operations by a constant (post-RA only). @@ -4718,36 +4740,37 @@ (define_insn "*post_ra_v3" ;; Predicated integer shift, merging with the first input. (define_insn "*cond__2_const" - [(set (match_operand:SVE_I 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_I 0 "register_operand") (unspec:SVE_I - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (ASHIFT:SVE_I - (match_operand:SVE_I 2 "register_operand" "0, w") + (match_operand:SVE_I 2 "register_operand") (match_operand:SVE_I 3 "aarch64_simd_shift_imm")) (match_dup 2)] UNSPEC_SEL))] "TARGET_SVE" - "@ - \t%0., %1/m, %0., #%3 - movprfx\t%0, %2\;\t%0., %1/m, %0., #%3" - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 ; attrs: movprfx ] + [ w , Upl , 0 ; * ] \t%0., %1/m, %0., #%3 + [ ?&w , Upl , w ; yes ] movprfx\t%0, %2\;\t%0., %1/m, %0., #%3 + } ) ;; Predicated integer shift, merging with an independent value. (define_insn_and_rewrite "*cond__any_const" - [(set (match_operand:SVE_I 0 "register_operand" "=w, &w, ?&w") + [(set (match_operand:SVE_I 0 "register_operand") (unspec:SVE_I - [(match_operand: 1 "register_operand" "Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (ASHIFT:SVE_I - (match_operand:SVE_I 2 "register_operand" "w, w, w") + (match_operand:SVE_I 2 "register_operand") (match_operand:SVE_I 3 "aarch64_simd_shift_imm")) - (match_operand:SVE_I 4 "aarch64_simd_reg_or_zero" "Dz, 0, w")] + (match_operand:SVE_I 4 "aarch64_simd_reg_or_zero")] UNSPEC_SEL))] "TARGET_SVE && !rtx_equal_p (operands[2], operands[4])" - "@ - movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %0., #%3 - movprfx\t%0., %1/m, %2.\;\t%0., %1/m, %0., #%3 - #" + {@ [ cons: =0 , 1 , 2 , 4 ] + [ w , Upl , w , Dz ] movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %0., #%3 + [ &w , Upl , w , 0 ] movprfx\t%0., %1/m, %2.\;\t%0., %1/m, %0., #%3 + [ ?&w , Upl , w , w ] # + } "&& reload_completed && register_operand (operands[4], mode) && !rtx_equal_p (operands[0], operands[4])" @@ -4787,36 +4810,38 @@ (define_expand "@cond_" ;; Predicated shifts of narrow elements by 64-bit amounts, merging with ;; the first input. 
(define_insn "*cond__m" - [(set (match_operand:SVE_FULL_BHSI 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_BHSI 0 "register_operand") (unspec:SVE_FULL_BHSI - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_BHSI - [(match_operand:SVE_FULL_BHSI 2 "register_operand" "0, w") - (match_operand:VNx2DI 3 "register_operand" "w, w")] + [(match_operand:SVE_FULL_BHSI 2 "register_operand") + (match_operand:VNx2DI 3 "register_operand")] SVE_SHIFT_WIDE) (match_dup 2)] UNSPEC_SEL))] "TARGET_SVE" - "@ - \t%0., %1/m, %0., %3.d - movprfx\t%0, %2\;\t%0., %1/m, %0., %3.d" - [(set_attr "movprfx" "*, yes")]) + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , Upl , 0 , w ; * ] \t%0., %1/m, %0., %3.d + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %2\;\t%0., %1/m, %0., %3.d + } +) ;; Predicated shifts of narrow elements by 64-bit amounts, merging with zero. (define_insn "*cond__z" - [(set (match_operand:SVE_FULL_BHSI 0 "register_operand" "=&w, &w") + [(set (match_operand:SVE_FULL_BHSI 0 "register_operand") (unspec:SVE_FULL_BHSI - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_BHSI - [(match_operand:SVE_FULL_BHSI 2 "register_operand" "0, w") - (match_operand:VNx2DI 3 "register_operand" "w, w")] + [(match_operand:SVE_FULL_BHSI 2 "register_operand") + (match_operand:VNx2DI 3 "register_operand")] SVE_SHIFT_WIDE) (match_operand:SVE_FULL_BHSI 4 "aarch64_simd_imm_zero")] UNSPEC_SEL))] "TARGET_SVE" - "@ - movprfx\t%0., %1/z, %0.\;\t%0., %1/m, %0., %3.d - movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %0., %3.d" + {@ [ cons: =0 , 1 , 2 , 3 ] + [ &w , Upl , 0 , w ] movprfx\t%0., %1/z, %0.\;\t%0., %1/m, %0., %3.d + [ &w , Upl , w , w ] movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %0., %3.d + } [(set_attr "movprfx" "yes")]) ;; ------------------------------------------------------------------------- @@ -4847,19 +4872,20 @@ (define_expand "sdiv_pow23" ;; Predicated ASRD. (define_insn "*sdiv_pow23" - [(set (match_operand:SVE_I 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_I 0 "register_operand") (unspec:SVE_I - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_I - [(match_operand:SVE_I 2 "register_operand" "0, w") + [(match_operand:SVE_I 2 "register_operand") (match_operand:SVE_I 3 "aarch64_simd_rshift_imm")] UNSPEC_ASRD)] UNSPEC_PRED_X))] "TARGET_SVE" - "@ - asrd\t%0., %1/m, %0., #%3 - movprfx\t%0, %2\;asrd\t%0., %1/m, %0., #%3" - [(set_attr "movprfx" "*,yes")]) + {@ [ cons: =0 , 1 , 2 ; attrs: movprfx ] + [ w , Upl , 0 ; * ] asrd\t%0., %1/m, %0., #%3 + [ ?&w , Upl , w ; yes ] movprfx\t%0, %2\;asrd\t%0., %1/m, %0., #%3 + } +) ;; Predicated shift with merging. (define_expand "@cond_" @@ -4883,47 +4909,49 @@ (define_expand "@cond_" ;; Predicated shift, merging with the first input. 
(define_insn_and_rewrite "*cond__2" - [(set (match_operand:SVE_I 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_I 0 "register_operand") (unspec:SVE_I - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_I [(match_operand 4) (unspec:SVE_I - [(match_operand:SVE_I 2 "register_operand" "0, w") + [(match_operand:SVE_I 2 "register_operand") (match_operand:SVE_I 3 "aarch64_simd_shift_imm")] SVE_INT_SHIFT_IMM)] UNSPEC_PRED_X) (match_dup 2)] UNSPEC_SEL))] "TARGET_SVE" - "@ - \t%0., %1/m, %0., #%3 - movprfx\t%0, %2\;\t%0., %1/m, %0., #%3" + {@ [ cons: =0 , 1 , 2 ; attrs: movprfx ] + [ w , Upl , 0 ; * ] \t%0., %1/m, %0., #%3 + [ ?&w , Upl , w ; yes ] movprfx\t%0, %2\;\t%0., %1/m, %0., #%3 + } "&& !CONSTANT_P (operands[4])" { operands[4] = CONSTM1_RTX (mode); } - [(set_attr "movprfx" "*,yes")]) +) ;; Predicated shift, merging with an independent value. (define_insn_and_rewrite "*cond__any" - [(set (match_operand:SVE_I 0 "register_operand" "=w, &w, ?&w") + [(set (match_operand:SVE_I 0 "register_operand") (unspec:SVE_I - [(match_operand: 1 "register_operand" "Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_I [(match_operand 5) (unspec:SVE_I - [(match_operand:SVE_I 2 "register_operand" "w, w, w") + [(match_operand:SVE_I 2 "register_operand") (match_operand:SVE_I 3 "aarch64_simd_shift_imm")] SVE_INT_SHIFT_IMM)] UNSPEC_PRED_X) - (match_operand:SVE_I 4 "aarch64_simd_reg_or_zero" "Dz, 0, w")] + (match_operand:SVE_I 4 "aarch64_simd_reg_or_zero")] UNSPEC_SEL))] "TARGET_SVE && !rtx_equal_p (operands[2], operands[4])" - "@ - movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %0., #%3 - movprfx\t%0., %1/m, %2.\;\t%0., %1/m, %0., #%3 - #" + {@ [ cons: =0 , 1 , 2 , 4 ] + [ w , Upl , w , Dz ] movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %0., #%3 + [ &w , Upl , w , 0 ] movprfx\t%0., %1/m, %2.\;\t%0., %1/m, %0., #%3 + [ ?&w , Upl , w , w ] # + } "&& reload_completed && register_operand (operands[4], mode) && !rtx_equal_p (operands[0], operands[4])" @@ -4959,18 +4987,18 @@ (define_insn "@aarch64_sve_" ;; Predicated floating-point binary operations that take an integer ;; as their second operand. (define_insn "@aarch64_pred_" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (match_operand:SI 4 "aarch64_sve_gp_strictness") - (match_operand:SVE_FULL_F 2 "register_operand" "0, w") - (match_operand: 3 "register_operand" "w, w")] + (match_operand:SVE_FULL_F 2 "register_operand") + (match_operand: 3 "register_operand")] SVE_COND_FP_BINARY_INT))] "TARGET_SVE" - "@ - \t%0., %1/m, %0., %3. - movprfx\t%0, %2\;\t%0., %1/m, %0., %3." - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , Upl , 0 , w ; * ] \t%0., %1/m, %0., %3. + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %2\;\t%0., %1/m, %0., %3. + } ) ;; Predicated floating-point binary operations with merging, taking an @@ -4993,68 +5021,69 @@ (define_expand "@cond_" ;; Predicated floating-point binary operations that take an integer as their ;; second operand, with inactive lanes coming from the first operand. 
(define_insn_and_rewrite "*cond__2_relaxed" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_F [(match_operand 4) (const_int SVE_RELAXED_GP) - (match_operand:SVE_FULL_F 2 "register_operand" "0, w") - (match_operand: 3 "register_operand" "w, w")] + (match_operand:SVE_FULL_F 2 "register_operand") + (match_operand: 3 "register_operand")] SVE_COND_FP_BINARY_INT) (match_dup 2)] UNSPEC_SEL))] "TARGET_SVE" - "@ - \t%0., %1/m, %0., %3. - movprfx\t%0, %2\;\t%0., %1/m, %0., %3." + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , Upl , 0 , w ; * ] \t%0., %1/m, %0., %3. + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %2\;\t%0., %1/m, %0., %3. + } "&& !rtx_equal_p (operands[1], operands[4])" { operands[4] = copy_rtx (operands[1]); } - [(set_attr "movprfx" "*,yes")] ) (define_insn "*cond__2_strict" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_F [(match_dup 1) (const_int SVE_STRICT_GP) - (match_operand:SVE_FULL_F 2 "register_operand" "0, w") - (match_operand: 3 "register_operand" "w, w")] + (match_operand:SVE_FULL_F 2 "register_operand") + (match_operand: 3 "register_operand")] SVE_COND_FP_BINARY_INT) (match_dup 2)] UNSPEC_SEL))] "TARGET_SVE" - "@ - \t%0., %1/m, %0., %3. - movprfx\t%0, %2\;\t%0., %1/m, %0., %3." - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , Upl , 0 , w ; * ] \t%0., %1/m, %0., %3. + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %2\;\t%0., %1/m, %0., %3. + } ) ;; Predicated floating-point binary operations that take an integer as ;; their second operand, with the values of inactive lanes being distinct ;; from the other inputs. (define_insn_and_rewrite "*cond__any_relaxed" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=&w, &w, &w, ?&w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_F [(match_operand 5) (const_int SVE_RELAXED_GP) - (match_operand:SVE_FULL_F 2 "register_operand" "0, w, w, w") - (match_operand: 3 "register_operand" "w, w, w, w")] + (match_operand:SVE_FULL_F 2 "register_operand") + (match_operand: 3 "register_operand")] SVE_COND_FP_BINARY_INT) - (match_operand:SVE_FULL_F 4 "aarch64_simd_reg_or_zero" "Dz, Dz, 0, w")] + (match_operand:SVE_FULL_F 4 "aarch64_simd_reg_or_zero")] UNSPEC_SEL))] "TARGET_SVE && !rtx_equal_p (operands[2], operands[4])" - "@ - movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %0., %3. - movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %0., %3. - movprfx\t%0., %1/m, %2.\;\t%0., %1/m, %0., %3. - #" + {@ [ cons: =0 , 1 , 2 , 3 , 4 ] + [ &w , Upl , 0 , w , Dz ] movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %0., %3. + [ &w , Upl , w , w , Dz ] movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %0., %3. + [ &w , Upl , w , w , 0 ] movprfx\t%0., %1/m, %2.\;\t%0., %1/m, %0., %3. 
+ [ ?&w , Upl , w , w , w ] # + } "&& 1" { if (reload_completed @@ -5074,23 +5103,24 @@ (define_insn_and_rewrite "*cond__any_relaxed" ) (define_insn_and_rewrite "*cond__any_strict" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=&w, &w, &w, ?&w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_F [(match_dup 1) (const_int SVE_STRICT_GP) - (match_operand:SVE_FULL_F 2 "register_operand" "0, w, w, w") - (match_operand: 3 "register_operand" "w, w, w, w")] + (match_operand:SVE_FULL_F 2 "register_operand") + (match_operand: 3 "register_operand")] SVE_COND_FP_BINARY_INT) - (match_operand:SVE_FULL_F 4 "aarch64_simd_reg_or_zero" "Dz, Dz, 0, w")] + (match_operand:SVE_FULL_F 4 "aarch64_simd_reg_or_zero")] UNSPEC_SEL))] "TARGET_SVE && !rtx_equal_p (operands[2], operands[4])" - "@ - movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %0., %3. - movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %0., %3. - movprfx\t%0., %1/m, %2.\;\t%0., %1/m, %0., %3. - #" + {@ [ cons: =0 , 1 , 2 , 3 , 4 ] + [ &w , Upl , 0 , w , Dz ] movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %0., %3. + [ &w , Upl , w , w , Dz ] movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %0., %3. + [ &w , Upl , w , w , 0 ] movprfx\t%0., %1/m, %2.\;\t%0., %1/m, %0., %3. + [ ?&w , Upl , w , w , w ] # + } "&& reload_completed && register_operand (operands[4], mode) && !rtx_equal_p (operands[0], operands[4])" @@ -5170,19 +5200,19 @@ (define_expand "3" ;; Predicated floating-point binary operations that have no immediate forms. (define_insn "@aarch64_pred_" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=w, w, ?&w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (match_operand:SI 4 "aarch64_sve_gp_strictness") - (match_operand:SVE_FULL_F 2 "register_operand" "0, w, w") - (match_operand:SVE_FULL_F 3 "register_operand" "w, 0, w")] + (match_operand:SVE_FULL_F 2 "register_operand") + (match_operand:SVE_FULL_F 3 "register_operand")] SVE_COND_FP_BINARY_REG))] "TARGET_SVE" - "@ - \t%0., %1/m, %0., %3. - \t%0., %1/m, %0., %2. - movprfx\t%0, %2\;\t%0., %1/m, %0., %3." - [(set_attr "movprfx" "*,*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , Upl , 0 , w ; * ] \t%0., %1/m, %0., %3. + [ w , Upl , w , 0 ; * ] \t%0., %1/m, %0., %2. + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %2\;\t%0., %1/m, %0., %3. + } ) ;; Predicated floating-point operations with merging. @@ -5203,155 +5233,156 @@ (define_expand "@cond_" ;; Predicated floating-point operations, merging with the first input. (define_insn_and_rewrite "*cond__2_relaxed" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_F [(match_operand 4) (const_int SVE_RELAXED_GP) - (match_operand:SVE_FULL_F 2 "register_operand" "0, w") - (match_operand:SVE_FULL_F 3 "register_operand" "w, w")] + (match_operand:SVE_FULL_F 2 "register_operand") + (match_operand:SVE_FULL_F 3 "register_operand")] SVE_COND_FP_BINARY) (match_dup 2)] UNSPEC_SEL))] "TARGET_SVE" - "@ - \t%0., %1/m, %0., %3. - movprfx\t%0, %2\;\t%0., %1/m, %0., %3." + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , Upl , 0 , w ; * ] \t%0., %1/m, %0., %3. 
+ [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %2\;\t%0., %1/m, %0., %3. + } "&& !rtx_equal_p (operands[1], operands[4])" { operands[4] = copy_rtx (operands[1]); } - [(set_attr "movprfx" "*,yes")] ) (define_insn "*cond__2_strict" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_F [(match_dup 1) (const_int SVE_STRICT_GP) - (match_operand:SVE_FULL_F 2 "register_operand" "0, w") - (match_operand:SVE_FULL_F 3 "register_operand" "w, w")] + (match_operand:SVE_FULL_F 2 "register_operand") + (match_operand:SVE_FULL_F 3 "register_operand")] SVE_COND_FP_BINARY) (match_dup 2)] UNSPEC_SEL))] "TARGET_SVE" - "@ - \t%0., %1/m, %0., %3. - movprfx\t%0, %2\;\t%0., %1/m, %0., %3." - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , Upl , 0 , w ; * ] \t%0., %1/m, %0., %3. + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %2\;\t%0., %1/m, %0., %3. + } ) ;; Same for operations that take a 1-bit constant. (define_insn_and_rewrite "*cond__2_const_relaxed" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=w, ?w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_F [(match_operand 4) (const_int SVE_RELAXED_GP) - (match_operand:SVE_FULL_F 2 "register_operand" "0, w") + (match_operand:SVE_FULL_F 2 "register_operand") (match_operand:SVE_FULL_F 3 "")] SVE_COND_FP_BINARY_I1) (match_dup 2)] UNSPEC_SEL))] "TARGET_SVE" - "@ - \t%0., %1/m, %0., #%3 - movprfx\t%0, %2\;\t%0., %1/m, %0., #%3" + {@ [ cons: =0 , 1 , 2 ; attrs: movprfx ] + [ w , Upl , 0 ; * ] \t%0., %1/m, %0., #%3 + [ ?w , Upl , w ; yes ] movprfx\t%0, %2\;\t%0., %1/m, %0., #%3 + } "&& !rtx_equal_p (operands[1], operands[4])" { operands[4] = copy_rtx (operands[1]); } - [(set_attr "movprfx" "*,yes")] ) (define_insn "*cond__2_const_strict" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=w, ?w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_F [(match_dup 1) (const_int SVE_STRICT_GP) - (match_operand:SVE_FULL_F 2 "register_operand" "0, w") + (match_operand:SVE_FULL_F 2 "register_operand") (match_operand:SVE_FULL_F 3 "")] SVE_COND_FP_BINARY_I1) (match_dup 2)] UNSPEC_SEL))] "TARGET_SVE" - "@ - \t%0., %1/m, %0., #%3 - movprfx\t%0, %2\;\t%0., %1/m, %0., #%3" - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 ; attrs: movprfx ] + [ w , Upl , 0 ; * ] \t%0., %1/m, %0., #%3 + [ ?w , Upl , w ; yes ] movprfx\t%0, %2\;\t%0., %1/m, %0., #%3 + } ) ;; Predicated floating-point operations, merging with the second input. (define_insn_and_rewrite "*cond__3_relaxed" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_F [(match_operand 4) (const_int SVE_RELAXED_GP) - (match_operand:SVE_FULL_F 2 "register_operand" "w, w") - (match_operand:SVE_FULL_F 3 "register_operand" "0, w")] + (match_operand:SVE_FULL_F 2 "register_operand") + (match_operand:SVE_FULL_F 3 "register_operand")] SVE_COND_FP_BINARY) (match_dup 3)] UNSPEC_SEL))] "TARGET_SVE" - "@ - \t%0., %1/m, %0., %2. 
- movprfx\t%0, %3\;\t%0., %1/m, %0., %2." + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , Upl , w , 0 ; * ] \t%0., %1/m, %0., %2. + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %3\;\t%0., %1/m, %0., %2. + } "&& !rtx_equal_p (operands[1], operands[4])" { operands[4] = copy_rtx (operands[1]); } - [(set_attr "movprfx" "*,yes")] ) (define_insn "*cond__3_strict" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_F [(match_dup 1) (const_int SVE_STRICT_GP) - (match_operand:SVE_FULL_F 2 "register_operand" "w, w") - (match_operand:SVE_FULL_F 3 "register_operand" "0, w")] + (match_operand:SVE_FULL_F 2 "register_operand") + (match_operand:SVE_FULL_F 3 "register_operand")] SVE_COND_FP_BINARY) (match_dup 3)] UNSPEC_SEL))] "TARGET_SVE" - "@ - \t%0., %1/m, %0., %2. - movprfx\t%0, %3\;\t%0., %1/m, %0., %2." - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , Upl , w , 0 ; * ] \t%0., %1/m, %0., %2. + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %3\;\t%0., %1/m, %0., %2. + } ) ;; Predicated floating-point operations, merging with an independent value. (define_insn_and_rewrite "*cond__any_relaxed" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=&w, &w, &w, &w, ?&w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl, Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_F [(match_operand 5) (const_int SVE_RELAXED_GP) - (match_operand:SVE_FULL_F 2 "register_operand" "0, w, w, w, w") - (match_operand:SVE_FULL_F 3 "register_operand" "w, 0, w, w, w")] + (match_operand:SVE_FULL_F 2 "register_operand") + (match_operand:SVE_FULL_F 3 "register_operand")] SVE_COND_FP_BINARY) - (match_operand:SVE_FULL_F 4 "aarch64_simd_reg_or_zero" "Dz, Dz, Dz, 0, w")] + (match_operand:SVE_FULL_F 4 "aarch64_simd_reg_or_zero")] UNSPEC_SEL))] "TARGET_SVE && !rtx_equal_p (operands[2], operands[4]) && !rtx_equal_p (operands[3], operands[4])" - "@ - movprfx\t%0., %1/z, %0.\;\t%0., %1/m, %0., %3. - movprfx\t%0., %1/z, %0.\;\t%0., %1/m, %0., %2. - movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %0., %3. - movprfx\t%0., %1/m, %2.\;\t%0., %1/m, %0., %3. - #" + {@ [ cons: =0 , 1 , 2 , 3 , 4 ] + [ &w , Upl , 0 , w , Dz ] movprfx\t%0., %1/z, %0.\;\t%0., %1/m, %0., %3. + [ &w , Upl , w , 0 , Dz ] movprfx\t%0., %1/z, %0.\;\t%0., %1/m, %0., %2. + [ &w , Upl , w , w , Dz ] movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %0., %3. + [ &w , Upl , w , w , 0 ] movprfx\t%0., %1/m, %2.\;\t%0., %1/m, %0., %3. 
+ [ ?&w , Upl , w , w , w ] # + } "&& 1" { if (reload_completed @@ -5371,26 +5402,27 @@ (define_insn_and_rewrite "*cond__any_relaxed" ) (define_insn_and_rewrite "*cond__any_strict" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=&w, &w, &w, &w, ?&w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl, Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_F [(match_dup 1) (const_int SVE_STRICT_GP) - (match_operand:SVE_FULL_F 2 "register_operand" "0, w, w, w, w") - (match_operand:SVE_FULL_F 3 "register_operand" "w, 0, w, w, w")] + (match_operand:SVE_FULL_F 2 "register_operand") + (match_operand:SVE_FULL_F 3 "register_operand")] SVE_COND_FP_BINARY) - (match_operand:SVE_FULL_F 4 "aarch64_simd_reg_or_zero" "Dz, Dz, Dz, 0, w")] + (match_operand:SVE_FULL_F 4 "aarch64_simd_reg_or_zero")] UNSPEC_SEL))] "TARGET_SVE && !rtx_equal_p (operands[2], operands[4]) && !rtx_equal_p (operands[3], operands[4])" - "@ - movprfx\t%0., %1/z, %0.\;\t%0., %1/m, %0., %3. - movprfx\t%0., %1/z, %0.\;\t%0., %1/m, %0., %2. - movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %0., %3. - movprfx\t%0., %1/m, %2.\;\t%0., %1/m, %0., %3. - #" + {@ [ cons: =0 , 1 , 2 , 3 , 4 ] + [ &w , Upl , 0 , w , Dz ] movprfx\t%0., %1/z, %0.\;\t%0., %1/m, %0., %3. + [ &w , Upl , w , 0 , Dz ] movprfx\t%0., %1/z, %0.\;\t%0., %1/m, %0., %2. + [ &w , Upl , w , w , Dz ] movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %0., %3. + [ &w , Upl , w , w , 0 ] movprfx\t%0., %1/m, %2.\;\t%0., %1/m, %0., %3. + [ ?&w , Upl , w , w , w ] # + } "&& reload_completed && register_operand (operands[4], mode) && !rtx_equal_p (operands[0], operands[4])" @@ -5404,22 +5436,23 @@ (define_insn_and_rewrite "*cond__any_strict" ;; Same for operations that take a 1-bit constant. 
(define_insn_and_rewrite "*cond__any_const_relaxed" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=w, w, ?w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_F [(match_operand 5) (const_int SVE_RELAXED_GP) - (match_operand:SVE_FULL_F 2 "register_operand" "w, w, w") + (match_operand:SVE_FULL_F 2 "register_operand") (match_operand:SVE_FULL_F 3 "")] SVE_COND_FP_BINARY_I1) - (match_operand:SVE_FULL_F 4 "aarch64_simd_reg_or_zero" "Dz, 0, w")] + (match_operand:SVE_FULL_F 4 "aarch64_simd_reg_or_zero")] UNSPEC_SEL))] "TARGET_SVE && !rtx_equal_p (operands[2], operands[4])" - "@ - movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %0., #%3 - movprfx\t%0., %1/m, %2.\;\t%0., %1/m, %0., #%3 - #" + {@ [ cons: =0 , 1 , 2 , 4 ] + [ w , Upl , w , Dz ] movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %0., #%3 + [ w , Upl , w , 0 ] movprfx\t%0., %1/m, %2.\;\t%0., %1/m, %0., #%3 + [ ?w , Upl , w , w ] # + } "&& 1" { if (reload_completed @@ -5439,22 +5472,23 @@ (define_insn_and_rewrite "*cond__any_const_relaxed" ) (define_insn_and_rewrite "*cond__any_const_strict" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=w, w, ?w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_F [(match_dup 1) (const_int SVE_STRICT_GP) - (match_operand:SVE_FULL_F 2 "register_operand" "w, w, w") + (match_operand:SVE_FULL_F 2 "register_operand") (match_operand:SVE_FULL_F 3 "")] SVE_COND_FP_BINARY_I1) - (match_operand:SVE_FULL_F 4 "aarch64_simd_reg_or_zero" "Dz, 0, w")] + (match_operand:SVE_FULL_F 4 "aarch64_simd_reg_or_zero")] UNSPEC_SEL))] "TARGET_SVE && !rtx_equal_p (operands[2], operands[4])" - "@ - movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %0., #%3 - movprfx\t%0., %1/m, %2.\;\t%0., %1/m, %0., #%3 - #" + {@ [ cons: =0 , 1 , 2 , 4 ] + [ w , Upl , w , Dz ] movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %0., #%3 + [ w , Upl , w , 0 ] movprfx\t%0., %1/m, %2.\;\t%0., %1/m, %0., #%3 + [ ?w , Upl , w , w ] # + } "&& reload_completed && register_operand (operands[4], mode) && !rtx_equal_p (operands[0], operands[4])" @@ -5476,22 +5510,23 @@ (define_insn_and_rewrite "*cond__any_const_strict" ;; Predicated floating-point addition. (define_insn_and_split "@aarch64_pred_" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=w, w, w, w, ?&w, ?&w, ?&w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl, Upl, Upl, Upl, Upl, Upl") - (match_operand:SI 4 "aarch64_sve_gp_strictness" "i, i, Z, Ui1, i, i, Ui1") - (match_operand:SVE_FULL_F 2 "register_operand" "%0, 0, w, 0, w, w, w") - (match_operand:SVE_FULL_F 3 "aarch64_sve_float_arith_with_sub_operand" "vsA, vsN, w, w, vsA, vsN, w")] + [(match_operand: 1 "register_operand") + (match_operand:SI 4 "aarch64_sve_gp_strictness") + (match_operand:SVE_FULL_F 2 "register_operand") + (match_operand:SVE_FULL_F 3 "aarch64_sve_float_arith_with_sub_operand")] SVE_COND_FP_ADD))] "TARGET_SVE" - "@ - fadd\t%0., %1/m, %0., #%3 - fsub\t%0., %1/m, %0., #%N3 - # - fadd\t%0., %1/m, %0., %3. - movprfx\t%0, %2\;fadd\t%0., %1/m, %0., #%3 - movprfx\t%0, %2\;fsub\t%0., %1/m, %0., #%N3 - movprfx\t%0, %2\;fadd\t%0., %1/m, %0., %3." 
+ {@ [ cons: =0 , 1 , 2 , 3 , 4 ; attrs: movprfx ] + [ w , Upl , %0 , vsA , i ; * ] fadd\t%0., %1/m, %0., #%3 + [ w , Upl , 0 , vsN , i ; * ] fsub\t%0., %1/m, %0., #%N3 + [ w , Upl , w , w , Z ; * ] # + [ w , Upl , 0 , w , Ui1 ; * ] fadd\t%0., %1/m, %0., %3. + [ ?&w , Upl , w , vsA , i ; yes ] movprfx\t%0, %2\;fadd\t%0., %1/m, %0., #%3 + [ ?&w , Upl , w , vsN , i ; yes ] movprfx\t%0, %2\;fsub\t%0., %1/m, %0., #%N3 + [ ?&w , Upl , w , w , Ui1 ; yes ] movprfx\t%0, %2\;fadd\t%0., %1/m, %0., %3. + } ; Split the unpredicated form after reload, so that we don't have ; the unnecessary PTRUE. "&& reload_completed @@ -5499,79 +5534,79 @@ (define_insn_and_split "@aarch64_pred_" && INTVAL (operands[4]) == SVE_RELAXED_GP" [(set (match_dup 0) (plus:SVE_FULL_F (match_dup 2) (match_dup 3)))] "" - [(set_attr "movprfx" "*,*,*,*,yes,yes,yes")] ) ;; Predicated floating-point addition of a constant, merging with the ;; first input. (define_insn_and_rewrite "*cond_add_2_const_relaxed" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=w, w, ?w, ?w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_F [(match_operand 4) (const_int SVE_RELAXED_GP) - (match_operand:SVE_FULL_F 2 "register_operand" "0, 0, w, w") - (match_operand:SVE_FULL_F 3 "aarch64_sve_float_arith_with_sub_immediate" "vsA, vsN, vsA, vsN")] + (match_operand:SVE_FULL_F 2 "register_operand") + (match_operand:SVE_FULL_F 3 "aarch64_sve_float_arith_with_sub_immediate")] UNSPEC_COND_FADD) (match_dup 2)] UNSPEC_SEL))] "TARGET_SVE" - "@ - fadd\t%0., %1/m, %0., #%3 - fsub\t%0., %1/m, %0., #%N3 - movprfx\t%0, %2\;fadd\t%0., %1/m, %0., #%3 - movprfx\t%0, %2\;fsub\t%0., %1/m, %0., #%N3" + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , Upl , 0 , vsA ; * ] fadd\t%0., %1/m, %0., #%3 + [ w , Upl , 0 , vsN ; * ] fsub\t%0., %1/m, %0., #%N3 + [ ?w , Upl , w , vsA ; yes ] movprfx\t%0, %2\;fadd\t%0., %1/m, %0., #%3 + [ ?w , Upl , w , vsN ; yes ] movprfx\t%0, %2\;fsub\t%0., %1/m, %0., #%N3 + } "&& !rtx_equal_p (operands[1], operands[4])" { operands[4] = copy_rtx (operands[1]); } - [(set_attr "movprfx" "*,*,yes,yes")] ) (define_insn "*cond_add_2_const_strict" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=w, w, ?w, ?w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_F [(match_dup 1) (const_int SVE_STRICT_GP) - (match_operand:SVE_FULL_F 2 "register_operand" "0, 0, w, w") - (match_operand:SVE_FULL_F 3 "aarch64_sve_float_arith_with_sub_immediate" "vsA, vsN, vsA, vsN")] + (match_operand:SVE_FULL_F 2 "register_operand") + (match_operand:SVE_FULL_F 3 "aarch64_sve_float_arith_with_sub_immediate")] UNSPEC_COND_FADD) (match_dup 2)] UNSPEC_SEL))] "TARGET_SVE" - "@ - fadd\t%0., %1/m, %0., #%3 - fsub\t%0., %1/m, %0., #%N3 - movprfx\t%0, %2\;fadd\t%0., %1/m, %0., #%3 - movprfx\t%0, %2\;fsub\t%0., %1/m, %0., #%N3" - [(set_attr "movprfx" "*,*,yes,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , Upl , 0 , vsA ; * ] fadd\t%0., %1/m, %0., #%3 + [ w , Upl , 0 , vsN ; * ] fsub\t%0., %1/m, %0., #%N3 + [ ?w , Upl , w , vsA ; yes ] movprfx\t%0, %2\;fadd\t%0., %1/m, %0., #%3 + [ ?w , Upl , w , vsN ; yes ] movprfx\t%0, %2\;fsub\t%0., %1/m, %0., #%N3 + } ) ;; Predicated floating-point addition of a constant, merging with an ;; independent value. 
(define_insn_and_rewrite "*cond_add_any_const_relaxed" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=w, w, w, w, ?w, ?w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl, Upl, Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_F [(match_operand 5) (const_int SVE_RELAXED_GP) - (match_operand:SVE_FULL_F 2 "register_operand" "w, w, w, w, w, w") - (match_operand:SVE_FULL_F 3 "aarch64_sve_float_arith_with_sub_immediate" "vsA, vsN, vsA, vsN, vsA, vsN")] + (match_operand:SVE_FULL_F 2 "register_operand") + (match_operand:SVE_FULL_F 3 "aarch64_sve_float_arith_with_sub_immediate")] UNSPEC_COND_FADD) - (match_operand:SVE_FULL_F 4 "aarch64_simd_reg_or_zero" "Dz, Dz, 0, 0, w, w")] + (match_operand:SVE_FULL_F 4 "aarch64_simd_reg_or_zero")] UNSPEC_SEL))] "TARGET_SVE && !rtx_equal_p (operands[2], operands[4])" - "@ - movprfx\t%0., %1/z, %2.\;fadd\t%0., %1/m, %0., #%3 - movprfx\t%0., %1/z, %2.\;fsub\t%0., %1/m, %0., #%N3 - movprfx\t%0., %1/m, %2.\;fadd\t%0., %1/m, %0., #%3 - movprfx\t%0., %1/m, %2.\;fsub\t%0., %1/m, %0., #%N3 - # - #" + {@ [ cons: =0 , 1 , 2 , 3 , 4 ] + [ w , Upl , w , vsA , Dz ] movprfx\t%0., %1/z, %2.\;fadd\t%0., %1/m, %0., #%3 + [ w , Upl , w , vsN , Dz ] movprfx\t%0., %1/z, %2.\;fsub\t%0., %1/m, %0., #%N3 + [ w , Upl , w , vsA , 0 ] movprfx\t%0., %1/m, %2.\;fadd\t%0., %1/m, %0., #%3 + [ w , Upl , w , vsN , 0 ] movprfx\t%0., %1/m, %2.\;fsub\t%0., %1/m, %0., #%N3 + [ ?w , Upl , w , vsA , w ] # + [ ?w , Upl , w , vsN , w ] # + } "&& 1" { if (reload_completed @@ -5591,25 +5626,26 @@ (define_insn_and_rewrite "*cond_add_any_const_relaxed" ) (define_insn_and_rewrite "*cond_add_any_const_strict" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=w, w, w, w, ?w, ?w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl, Upl, Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_F [(match_dup 1) (const_int SVE_STRICT_GP) - (match_operand:SVE_FULL_F 2 "register_operand" "w, w, w, w, w, w") - (match_operand:SVE_FULL_F 3 "aarch64_sve_float_arith_with_sub_immediate" "vsA, vsN, vsA, vsN, vsA, vsN")] + (match_operand:SVE_FULL_F 2 "register_operand") + (match_operand:SVE_FULL_F 3 "aarch64_sve_float_arith_with_sub_immediate")] UNSPEC_COND_FADD) - (match_operand:SVE_FULL_F 4 "aarch64_simd_reg_or_zero" "Dz, Dz, 0, 0, w, w")] + (match_operand:SVE_FULL_F 4 "aarch64_simd_reg_or_zero")] UNSPEC_SEL))] "TARGET_SVE && !rtx_equal_p (operands[2], operands[4])" - "@ - movprfx\t%0., %1/z, %2.\;fadd\t%0., %1/m, %0., #%3 - movprfx\t%0., %1/z, %2.\;fsub\t%0., %1/m, %0., #%N3 - movprfx\t%0., %1/m, %2.\;fadd\t%0., %1/m, %0., #%3 - movprfx\t%0., %1/m, %2.\;fsub\t%0., %1/m, %0., #%N3 - # - #" + {@ [ cons: =0 , 1 , 2 , 3 , 4 ] + [ w , Upl , w , vsA , Dz ] movprfx\t%0., %1/z, %2.\;fadd\t%0., %1/m, %0., #%3 + [ w , Upl , w , vsN , Dz ] movprfx\t%0., %1/z, %2.\;fsub\t%0., %1/m, %0., #%N3 + [ w , Upl , w , vsA , 0 ] movprfx\t%0., %1/m, %2.\;fadd\t%0., %1/m, %0., #%3 + [ w , Upl , w , vsN , 0 ] movprfx\t%0., %1/m, %2.\;fsub\t%0., %1/m, %0., #%N3 + [ ?w , Upl , w , vsA , w ] # + [ ?w , Upl , w , vsN , w ] # + } "&& reload_completed && register_operand (operands[4], mode) && !rtx_equal_p (operands[0], operands[4])" @@ -5632,18 +5668,18 @@ (define_insn_and_rewrite "*cond_add_any_const_strict" ;; Predicated FCADD. 
(define_insn "@aarch64_pred_" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (match_operand:SI 4 "aarch64_sve_gp_strictness") - (match_operand:SVE_FULL_F 2 "register_operand" "0, w") - (match_operand:SVE_FULL_F 3 "register_operand" "w, w")] + (match_operand:SVE_FULL_F 2 "register_operand") + (match_operand:SVE_FULL_F 3 "register_operand")] SVE_COND_FCADD))] "TARGET_SVE" - "@ - fcadd\t%0., %1/m, %0., %3., # - movprfx\t%0, %2\;fcadd\t%0., %1/m, %0., %3., #" - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , Upl , 0 , w ; * ] fcadd\t%0., %1/m, %0., %3., # + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %2\;fcadd\t%0., %1/m, %0., %3., # + } ) ;; Predicated FCADD with merging. @@ -5678,66 +5714,67 @@ (define_expand "@cadd3" ;; Predicated FCADD, merging with the first input. (define_insn_and_rewrite "*cond__2_relaxed" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_F [(match_operand 4) (const_int SVE_RELAXED_GP) - (match_operand:SVE_FULL_F 2 "register_operand" "0, w") - (match_operand:SVE_FULL_F 3 "register_operand" "w, w")] + (match_operand:SVE_FULL_F 2 "register_operand") + (match_operand:SVE_FULL_F 3 "register_operand")] SVE_COND_FCADD) (match_dup 2)] UNSPEC_SEL))] "TARGET_SVE" - "@ - fcadd\t%0., %1/m, %0., %3., # - movprfx\t%0, %2\;fcadd\t%0., %1/m, %0., %3., #" + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , Upl , 0 , w ; * ] fcadd\t%0., %1/m, %0., %3., # + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %2\;fcadd\t%0., %1/m, %0., %3., # + } "&& !rtx_equal_p (operands[1], operands[4])" { operands[4] = copy_rtx (operands[1]); } - [(set_attr "movprfx" "*,yes")] ) (define_insn "*cond__2_strict" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_F [(match_dup 1) (const_int SVE_STRICT_GP) - (match_operand:SVE_FULL_F 2 "register_operand" "0, w") - (match_operand:SVE_FULL_F 3 "register_operand" "w, w")] + (match_operand:SVE_FULL_F 2 "register_operand") + (match_operand:SVE_FULL_F 3 "register_operand")] SVE_COND_FCADD) (match_dup 2)] UNSPEC_SEL))] "TARGET_SVE" - "@ - fcadd\t%0., %1/m, %0., %3., # - movprfx\t%0, %2\;fcadd\t%0., %1/m, %0., %3., #" - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , Upl , 0 , w ; * ] fcadd\t%0., %1/m, %0., %3., # + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %2\;fcadd\t%0., %1/m, %0., %3., # + } ) ;; Predicated FCADD, merging with an independent value. 
(define_insn_and_rewrite "*cond__any_relaxed" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=&w, &w, &w, ?&w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_F [(match_operand 5) (const_int SVE_RELAXED_GP) - (match_operand:SVE_FULL_F 2 "register_operand" "w, 0, w, w") - (match_operand:SVE_FULL_F 3 "register_operand" "w, w, w, w")] + (match_operand:SVE_FULL_F 2 "register_operand") + (match_operand:SVE_FULL_F 3 "register_operand")] SVE_COND_FCADD) - (match_operand:SVE_FULL_F 4 "aarch64_simd_reg_or_zero" "Dz, Dz, 0, w")] + (match_operand:SVE_FULL_F 4 "aarch64_simd_reg_or_zero")] UNSPEC_SEL))] "TARGET_SVE && !rtx_equal_p (operands[2], operands[4])" - "@ - movprfx\t%0., %1/z, %2.\;fcadd\t%0., %1/m, %0., %3., # - movprfx\t%0., %1/z, %0.\;fcadd\t%0., %1/m, %0., %3., # - movprfx\t%0., %1/m, %2.\;fcadd\t%0., %1/m, %0., %3., # - #" + {@ [ cons: =0 , 1 , 2 , 3 , 4 ] + [ &w , Upl , w , w , Dz ] movprfx\t%0., %1/z, %2.\;fcadd\t%0., %1/m, %0., %3., # + [ &w , Upl , 0 , w , Dz ] movprfx\t%0., %1/z, %0.\;fcadd\t%0., %1/m, %0., %3., # + [ &w , Upl , w , w , 0 ] movprfx\t%0., %1/m, %2.\;fcadd\t%0., %1/m, %0., %3., # + [ ?&w , Upl , w , w , w ] # + } "&& 1" { if (reload_completed @@ -5757,23 +5794,24 @@ (define_insn_and_rewrite "*cond__any_relaxed" ) (define_insn_and_rewrite "*cond__any_strict" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=&w, &w, &w, ?&w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_F [(match_dup 1) (const_int SVE_STRICT_GP) - (match_operand:SVE_FULL_F 2 "register_operand" "w, 0, w, w") - (match_operand:SVE_FULL_F 3 "register_operand" "w, w, w, w")] + (match_operand:SVE_FULL_F 2 "register_operand") + (match_operand:SVE_FULL_F 3 "register_operand")] SVE_COND_FCADD) - (match_operand:SVE_FULL_F 4 "aarch64_simd_reg_or_zero" "Dz, Dz, 0, w")] + (match_operand:SVE_FULL_F 4 "aarch64_simd_reg_or_zero")] UNSPEC_SEL))] "TARGET_SVE && !rtx_equal_p (operands[2], operands[4])" - "@ - movprfx\t%0., %1/z, %2.\;fcadd\t%0., %1/m, %0., %3., # - movprfx\t%0., %1/z, %0.\;fcadd\t%0., %1/m, %0., %3., # - movprfx\t%0., %1/m, %2.\;fcadd\t%0., %1/m, %0., %3., # - #" + {@ [ cons: =0 , 1 , 2 , 3 , 4 ] + [ &w , Upl , w , w , Dz ] movprfx\t%0., %1/z, %2.\;fcadd\t%0., %1/m, %0., %3., # + [ &w , Upl , 0 , w , Dz ] movprfx\t%0., %1/z, %0.\;fcadd\t%0., %1/m, %0., %3., # + [ &w , Upl , w , w , 0 ] movprfx\t%0., %1/m, %2.\;fcadd\t%0., %1/m, %0., %3., # + [ ?&w , Upl , w , w , w ] # + } "&& reload_completed && register_operand (operands[4], mode) && !rtx_equal_p (operands[0], operands[4])" @@ -5795,21 +5833,22 @@ (define_insn_and_rewrite "*cond__any_strict" ;; Predicated floating-point subtraction. 
(define_insn_and_split "@aarch64_pred_" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=w, w, w, w, ?&w, ?&w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl, Upl, Upl, Upl, Upl") - (match_operand:SI 4 "aarch64_sve_gp_strictness" "i, Z, Ui1, Ui1, i, Ui1") - (match_operand:SVE_FULL_F 2 "aarch64_sve_float_arith_operand" "vsA, w, 0, w, vsA, w") - (match_operand:SVE_FULL_F 3 "register_operand" "0, w, w, 0, w, w")] + [(match_operand: 1 "register_operand") + (match_operand:SI 4 "aarch64_sve_gp_strictness") + (match_operand:SVE_FULL_F 2 "aarch64_sve_float_arith_operand") + (match_operand:SVE_FULL_F 3 "register_operand")] SVE_COND_FP_SUB))] "TARGET_SVE" - "@ - fsubr\t%0., %1/m, %0., #%2 - # - fsub\t%0., %1/m, %0., %3. - fsubr\t%0., %1/m, %0., %2. - movprfx\t%0, %3\;fsubr\t%0., %1/m, %0., #%2 - movprfx\t%0, %2\;fsub\t%0., %1/m, %0., %3." + {@ [ cons: =0 , 1 , 2 , 3 , 4 ; attrs: movprfx ] + [ w , Upl , vsA , 0 , i ; * ] fsubr\t%0., %1/m, %0., #%2 + [ w , Upl , w , w , Z ; * ] # + [ w , Upl , 0 , w , Ui1 ; * ] fsub\t%0., %1/m, %0., %3. + [ w , Upl , w , 0 , Ui1 ; * ] fsubr\t%0., %1/m, %0., %2. + [ ?&w , Upl , vsA , w , i ; yes ] movprfx\t%0, %3\;fsubr\t%0., %1/m, %0., #%2 + [ ?&w , Upl , w , w , Ui1 ; yes ] movprfx\t%0, %2\;fsub\t%0., %1/m, %0., %3. + } ; Split the unpredicated form after reload, so that we don't have ; the unnecessary PTRUE. "&& reload_completed @@ -5817,72 +5856,72 @@ (define_insn_and_split "@aarch64_pred_" && INTVAL (operands[4]) == SVE_RELAXED_GP" [(set (match_dup 0) (minus:SVE_FULL_F (match_dup 2) (match_dup 3)))] "" - [(set_attr "movprfx" "*,*,*,*,yes,yes")] ) ;; Predicated floating-point subtraction from a constant, merging with the ;; second input. (define_insn_and_rewrite "*cond_sub_3_const_relaxed" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=w, ?w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_F [(match_operand 4) (const_int SVE_RELAXED_GP) (match_operand:SVE_FULL_F 2 "aarch64_sve_float_arith_immediate") - (match_operand:SVE_FULL_F 3 "register_operand" "0, w")] + (match_operand:SVE_FULL_F 3 "register_operand")] UNSPEC_COND_FSUB) (match_dup 3)] UNSPEC_SEL))] "TARGET_SVE" - "@ - fsubr\t%0., %1/m, %0., #%2 - movprfx\t%0, %3\;fsubr\t%0., %1/m, %0., #%2" + {@ [ cons: =0 , 1 , 3 ; attrs: movprfx ] + [ w , Upl , 0 ; * ] fsubr\t%0., %1/m, %0., #%2 + [ ?w , Upl , w ; yes ] movprfx\t%0, %3\;fsubr\t%0., %1/m, %0., #%2 + } "&& !rtx_equal_p (operands[1], operands[4])" { operands[4] = copy_rtx (operands[1]); } - [(set_attr "movprfx" "*,yes")] ) (define_insn "*cond_sub_3_const_strict" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=w, ?w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_F [(match_dup 1) (const_int SVE_STRICT_GP) (match_operand:SVE_FULL_F 2 "aarch64_sve_float_arith_immediate") - (match_operand:SVE_FULL_F 3 "register_operand" "0, w")] + (match_operand:SVE_FULL_F 3 "register_operand")] UNSPEC_COND_FSUB) (match_dup 3)] UNSPEC_SEL))] "TARGET_SVE" - "@ - fsubr\t%0., %1/m, %0., #%2 - movprfx\t%0, %3\;fsubr\t%0., %1/m, %0., #%2" - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 3 ; attrs: movprfx ] + [ w , Upl , 0 ; * ] fsubr\t%0., %1/m, %0., #%2 + [ ?w , Upl , w ; yes ] movprfx\t%0, 
%3\;fsubr\t%0., %1/m, %0., #%2 + } ) ;; Predicated floating-point subtraction from a constant, merging with an ;; independent value. (define_insn_and_rewrite "*cond_sub_const_relaxed" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=w, w, ?w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_F [(match_operand 5) (const_int SVE_RELAXED_GP) (match_operand:SVE_FULL_F 2 "aarch64_sve_float_arith_immediate") - (match_operand:SVE_FULL_F 3 "register_operand" "w, w, w")] + (match_operand:SVE_FULL_F 3 "register_operand")] UNSPEC_COND_FSUB) - (match_operand:SVE_FULL_F 4 "aarch64_simd_reg_or_zero" "Dz, 0, w")] + (match_operand:SVE_FULL_F 4 "aarch64_simd_reg_or_zero")] UNSPEC_SEL))] "TARGET_SVE && !rtx_equal_p (operands[3], operands[4])" - "@ - movprfx\t%0., %1/z, %3.\;fsubr\t%0., %1/m, %0., #%2 - movprfx\t%0., %1/m, %3.\;fsubr\t%0., %1/m, %0., #%2 - #" + {@ [ cons: =0 , 1 , 3 , 4 ] + [ w , Upl , w , Dz ] movprfx\t%0., %1/z, %3.\;fsubr\t%0., %1/m, %0., #%2 + [ w , Upl , w , 0 ] movprfx\t%0., %1/m, %3.\;fsubr\t%0., %1/m, %0., #%2 + [ ?w , Upl , w , w ] # + } "&& 1" { if (reload_completed @@ -5902,22 +5941,23 @@ (define_insn_and_rewrite "*cond_sub_const_relaxed" ) (define_insn_and_rewrite "*cond_sub_const_strict" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=w, w, ?w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_F [(match_dup 1) (const_int SVE_STRICT_GP) (match_operand:SVE_FULL_F 2 "aarch64_sve_float_arith_immediate") - (match_operand:SVE_FULL_F 3 "register_operand" "w, w, w")] + (match_operand:SVE_FULL_F 3 "register_operand")] UNSPEC_COND_FSUB) - (match_operand:SVE_FULL_F 4 "aarch64_simd_reg_or_zero" "Dz, 0, w")] + (match_operand:SVE_FULL_F 4 "aarch64_simd_reg_or_zero")] UNSPEC_SEL))] "TARGET_SVE && !rtx_equal_p (operands[3], operands[4])" - "@ - movprfx\t%0., %1/z, %3.\;fsubr\t%0., %1/m, %0., #%2 - movprfx\t%0., %1/m, %3.\;fsubr\t%0., %1/m, %0., #%2 - #" + {@ [ cons: =0 , 1 , 3 , 4 ] + [ w , Upl , w , Dz ] movprfx\t%0., %1/z, %3.\;fsubr\t%0., %1/m, %0., #%2 + [ w , Upl , w , 0 ] movprfx\t%0., %1/m, %3.\;fsubr\t%0., %1/m, %0., #%2 + [ ?w , Upl , w , w ] # + } "&& reload_completed && register_operand (operands[4], mode) && !rtx_equal_p (operands[0], operands[4])" @@ -5955,45 +5995,45 @@ (define_expand "@aarch64_pred_abd" ;; Predicated floating-point absolute difference. (define_insn_and_rewrite "*aarch64_pred_abd_relaxed" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (match_operand:SI 4 "aarch64_sve_gp_strictness") (unspec:SVE_FULL_F [(match_operand 5) (const_int SVE_RELAXED_GP) - (match_operand:SVE_FULL_F 2 "register_operand" "%0, w") - (match_operand:SVE_FULL_F 3 "register_operand" "w, w")] + (match_operand:SVE_FULL_F 2 "register_operand") + (match_operand:SVE_FULL_F 3 "register_operand")] UNSPEC_COND_FSUB)] UNSPEC_COND_FABS))] "TARGET_SVE" - "@ - fabd\t%0., %1/m, %0., %3. - movprfx\t%0, %2\;fabd\t%0., %1/m, %0., %3." + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , Upl , %0 , w ; * ] fabd\t%0., %1/m, %0., %3. + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %2\;fabd\t%0., %1/m, %0., %3. 
+ } "&& !rtx_equal_p (operands[1], operands[5])" { operands[5] = copy_rtx (operands[1]); } - [(set_attr "movprfx" "*,yes")] ) (define_insn "*aarch64_pred_abd_strict" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (match_operand:SI 4 "aarch64_sve_gp_strictness") (unspec:SVE_FULL_F [(match_dup 1) (const_int SVE_STRICT_GP) - (match_operand:SVE_FULL_F 2 "register_operand" "%0, w") - (match_operand:SVE_FULL_F 3 "register_operand" "w, w")] + (match_operand:SVE_FULL_F 2 "register_operand") + (match_operand:SVE_FULL_F 3 "register_operand")] UNSPEC_COND_FSUB)] UNSPEC_COND_FABS))] "TARGET_SVE" - "@ - fabd\t%0., %1/m, %0., %3. - movprfx\t%0, %2\;fabd\t%0., %1/m, %0., %3." - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , Upl , %0 , w ; * ] fabd\t%0., %1/m, %0., %3. + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %2\;fabd\t%0., %1/m, %0., %3. + } ) (define_expand "@aarch64_cond_abd" @@ -6021,138 +6061,139 @@ (define_expand "@aarch64_cond_abd" ;; Predicated floating-point absolute difference, merging with the first ;; input. (define_insn_and_rewrite "*aarch64_cond_abd_2_relaxed" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_F [(match_operand 4) (const_int SVE_RELAXED_GP) (unspec:SVE_FULL_F [(match_operand 5) (const_int SVE_RELAXED_GP) - (match_operand:SVE_FULL_F 2 "register_operand" "0, w") - (match_operand:SVE_FULL_F 3 "register_operand" "w, w")] + (match_operand:SVE_FULL_F 2 "register_operand") + (match_operand:SVE_FULL_F 3 "register_operand")] UNSPEC_COND_FSUB)] UNSPEC_COND_FABS) (match_dup 2)] UNSPEC_SEL))] "TARGET_SVE" - "@ - fabd\t%0., %1/m, %0., %3. - movprfx\t%0, %2\;fabd\t%0., %1/m, %0., %3." + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , Upl , 0 , w ; * ] fabd\t%0., %1/m, %0., %3. + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %2\;fabd\t%0., %1/m, %0., %3. + } "&& (!rtx_equal_p (operands[1], operands[4]) || !rtx_equal_p (operands[1], operands[5]))" { operands[4] = copy_rtx (operands[1]); operands[5] = copy_rtx (operands[1]); } - [(set_attr "movprfx" "*,yes")] ) (define_insn "*aarch64_cond_abd_2_strict" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_F [(match_dup 1) (match_operand:SI 4 "aarch64_sve_gp_strictness") (unspec:SVE_FULL_F [(match_dup 1) (match_operand:SI 5 "aarch64_sve_gp_strictness") - (match_operand:SVE_FULL_F 2 "register_operand" "0, w") - (match_operand:SVE_FULL_F 3 "register_operand" "w, w")] + (match_operand:SVE_FULL_F 2 "register_operand") + (match_operand:SVE_FULL_F 3 "register_operand")] UNSPEC_COND_FSUB)] UNSPEC_COND_FABS) (match_dup 2)] UNSPEC_SEL))] "TARGET_SVE" - "@ - fabd\t%0., %1/m, %0., %3. - movprfx\t%0, %2\;fabd\t%0., %1/m, %0., %3." - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , Upl , 0 , w ; * ] fabd\t%0., %1/m, %0., %3. + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %2\;fabd\t%0., %1/m, %0., %3. + } ) ;; Predicated floating-point absolute difference, merging with the second ;; input. 
(define_insn_and_rewrite "*aarch64_cond_abd_3_relaxed" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_F [(match_operand 4) (const_int SVE_RELAXED_GP) (unspec:SVE_FULL_F [(match_operand 5) (const_int SVE_RELAXED_GP) - (match_operand:SVE_FULL_F 2 "register_operand" "w, w") - (match_operand:SVE_FULL_F 3 "register_operand" "0, w")] + (match_operand:SVE_FULL_F 2 "register_operand") + (match_operand:SVE_FULL_F 3 "register_operand")] UNSPEC_COND_FSUB)] UNSPEC_COND_FABS) (match_dup 3)] UNSPEC_SEL))] "TARGET_SVE" - "@ - fabd\t%0., %1/m, %0., %2. - movprfx\t%0, %3\;fabd\t%0., %1/m, %0., %2." + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , Upl , w , 0 ; * ] fabd\t%0., %1/m, %0., %2. + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %3\;fabd\t%0., %1/m, %0., %2. + } "&& (!rtx_equal_p (operands[1], operands[4]) || !rtx_equal_p (operands[1], operands[5]))" { operands[4] = copy_rtx (operands[1]); operands[5] = copy_rtx (operands[1]); } - [(set_attr "movprfx" "*,yes")] ) (define_insn "*aarch64_cond_abd_3_strict" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_F [(match_dup 1) (match_operand:SI 4 "aarch64_sve_gp_strictness") (unspec:SVE_FULL_F [(match_dup 1) (match_operand:SI 5 "aarch64_sve_gp_strictness") - (match_operand:SVE_FULL_F 2 "register_operand" "w, w") - (match_operand:SVE_FULL_F 3 "register_operand" "0, w")] + (match_operand:SVE_FULL_F 2 "register_operand") + (match_operand:SVE_FULL_F 3 "register_operand")] UNSPEC_COND_FSUB)] UNSPEC_COND_FABS) (match_dup 3)] UNSPEC_SEL))] "TARGET_SVE" - "@ - fabd\t%0., %1/m, %0., %2. - movprfx\t%0, %3\;fabd\t%0., %1/m, %0., %2." - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , Upl , w , 0 ; * ] fabd\t%0., %1/m, %0., %2. + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %3\;fabd\t%0., %1/m, %0., %2. + } ) ;; Predicated floating-point absolute difference, merging with an ;; independent value. (define_insn_and_rewrite "*aarch64_cond_abd_any_relaxed" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=&w, &w, &w, &w, ?&w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl, Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_F [(match_operand 5) (const_int SVE_RELAXED_GP) (unspec:SVE_FULL_F [(match_operand 6) (const_int SVE_RELAXED_GP) - (match_operand:SVE_FULL_F 2 "register_operand" "0, w, w, w, w") - (match_operand:SVE_FULL_F 3 "register_operand" "w, 0, w, w, w")] + (match_operand:SVE_FULL_F 2 "register_operand") + (match_operand:SVE_FULL_F 3 "register_operand")] UNSPEC_COND_FSUB)] UNSPEC_COND_FABS) - (match_operand:SVE_FULL_F 4 "aarch64_simd_reg_or_zero" "Dz, Dz, Dz, 0, w")] + (match_operand:SVE_FULL_F 4 "aarch64_simd_reg_or_zero")] UNSPEC_SEL))] "TARGET_SVE && !rtx_equal_p (operands[2], operands[4]) && !rtx_equal_p (operands[3], operands[4])" - "@ - movprfx\t%0., %1/z, %0.\;fabd\t%0., %1/m, %0., %3. - movprfx\t%0., %1/z, %0.\;fabd\t%0., %1/m, %0., %2. - movprfx\t%0., %1/z, %2.\;fabd\t%0., %1/m, %0., %3. - movprfx\t%0., %1/m, %2.\;fabd\t%0., %1/m, %0., %3. 
- #" + {@ [ cons: =0 , 1 , 2 , 3 , 4 ] + [ &w , Upl , 0 , w , Dz ] movprfx\t%0., %1/z, %0.\;fabd\t%0., %1/m, %0., %3. + [ &w , Upl , w , 0 , Dz ] movprfx\t%0., %1/z, %0.\;fabd\t%0., %1/m, %0., %2. + [ &w , Upl , w , w , Dz ] movprfx\t%0., %1/z, %2.\;fabd\t%0., %1/m, %0., %3. + [ &w , Upl , w , w , 0 ] movprfx\t%0., %1/m, %2.\;fabd\t%0., %1/m, %0., %3. + [ ?&w , Upl , w , w , w ] # + } "&& 1" { if (reload_completed @@ -6176,30 +6217,31 @@ (define_insn_and_rewrite "*aarch64_cond_abd_any_relaxed" ) (define_insn_and_rewrite "*aarch64_cond_abd_any_strict" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=&w, &w, &w, &w, ?&w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl, Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_F [(match_dup 1) (match_operand:SI 5 "aarch64_sve_gp_strictness") (unspec:SVE_FULL_F [(match_dup 1) (match_operand:SI 6 "aarch64_sve_gp_strictness") - (match_operand:SVE_FULL_F 2 "register_operand" "0, w, w, w, w") - (match_operand:SVE_FULL_F 3 "register_operand" "w, 0, w, w, w")] + (match_operand:SVE_FULL_F 2 "register_operand") + (match_operand:SVE_FULL_F 3 "register_operand")] UNSPEC_COND_FSUB)] UNSPEC_COND_FABS) - (match_operand:SVE_FULL_F 4 "aarch64_simd_reg_or_zero" "Dz, Dz, Dz, 0, w")] + (match_operand:SVE_FULL_F 4 "aarch64_simd_reg_or_zero")] UNSPEC_SEL))] "TARGET_SVE && !rtx_equal_p (operands[2], operands[4]) && !rtx_equal_p (operands[3], operands[4])" - "@ - movprfx\t%0., %1/z, %0.\;fabd\t%0., %1/m, %0., %3. - movprfx\t%0., %1/z, %0.\;fabd\t%0., %1/m, %0., %2. - movprfx\t%0., %1/z, %2.\;fabd\t%0., %1/m, %0., %3. - movprfx\t%0., %1/m, %2.\;fabd\t%0., %1/m, %0., %3. - #" + {@ [ cons: =0 , 1 , 2 , 3 , 4 ] + [ &w , Upl , 0 , w , Dz ] movprfx\t%0., %1/z, %0.\;fabd\t%0., %1/m, %0., %3. + [ &w , Upl , w , 0 , Dz ] movprfx\t%0., %1/z, %0.\;fabd\t%0., %1/m, %0., %2. + [ &w , Upl , w , w , Dz ] movprfx\t%0., %1/z, %2.\;fabd\t%0., %1/m, %0., %3. + [ &w , Upl , w , w , 0 ] movprfx\t%0., %1/m, %2.\;fabd\t%0., %1/m, %0., %3. + [ ?&w , Upl , w , w , w ] # + } "&& reload_completed && register_operand (operands[4], mode) && !rtx_equal_p (operands[0], operands[4])" @@ -6220,20 +6262,21 @@ (define_insn_and_rewrite "*aarch64_cond_abd_any_strict" ;; Predicated floating-point multiplication. (define_insn_and_split "@aarch64_pred_" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=w, w, w, ?&w, ?&w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl, Upl, Upl, Upl") - (match_operand:SI 4 "aarch64_sve_gp_strictness" "i, Z, Ui1, i, Ui1") - (match_operand:SVE_FULL_F 2 "register_operand" "%0, w, 0, w, w") - (match_operand:SVE_FULL_F 3 "aarch64_sve_float_mul_operand" "vsM, w, w, vsM, w")] + [(match_operand: 1 "register_operand") + (match_operand:SI 4 "aarch64_sve_gp_strictness") + (match_operand:SVE_FULL_F 2 "register_operand") + (match_operand:SVE_FULL_F 3 "aarch64_sve_float_mul_operand")] SVE_COND_FP_MUL))] "TARGET_SVE" - "@ - fmul\t%0., %1/m, %0., #%3 - # - fmul\t%0., %1/m, %0., %3. - movprfx\t%0, %2\;fmul\t%0., %1/m, %0., #%3 - movprfx\t%0, %2\;fmul\t%0., %1/m, %0., %3." + {@ [ cons: =0 , 1 , 2 , 3 , 4 ; attrs: movprfx ] + [ w , Upl , %0 , vsM , i ; * ] fmul\t%0., %1/m, %0., #%3 + [ w , Upl , w , w , Z ; * ] # + [ w , Upl , 0 , w , Ui1 ; * ] fmul\t%0., %1/m, %0., %3. 
+ [ ?&w , Upl , w , vsM , i ; yes ] movprfx\t%0, %2\;fmul\t%0., %1/m, %0., #%3 + [ ?&w , Upl , w , w , Ui1 ; yes ] movprfx\t%0, %2\;fmul\t%0., %1/m, %0., %3. + } ; Split the unpredicated form after reload, so that we don't have ; the unnecessary PTRUE. "&& reload_completed @@ -6241,7 +6284,6 @@ (define_insn_and_split "@aarch64_pred_" && INTVAL (operands[4]) == SVE_RELAXED_GP" [(set (match_dup 0) (mult:SVE_FULL_F (match_dup 2) (match_dup 3)))] "" - [(set_attr "movprfx" "*,*,*,yes,yes")] ) ;; Merging forms are handled through SVE_COND_FP_BINARY and @@ -6428,20 +6470,20 @@ (define_expand "cond_" ;; Predicated floating-point maximum/minimum. (define_insn "@aarch64_pred_" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=w, w, ?&w, ?&w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (match_operand:SI 4 "aarch64_sve_gp_strictness") - (match_operand:SVE_FULL_F 2 "register_operand" "%0, 0, w, w") - (match_operand:SVE_FULL_F 3 "aarch64_sve_float_maxmin_operand" "vsB, w, vsB, w")] + (match_operand:SVE_FULL_F 2 "register_operand") + (match_operand:SVE_FULL_F 3 "aarch64_sve_float_maxmin_operand")] SVE_COND_FP_MAXMIN))] "TARGET_SVE" - "@ - \t%0., %1/m, %0., #%3 - \t%0., %1/m, %0., %3. - movprfx\t%0, %2\;\t%0., %1/m, %0., #%3 - movprfx\t%0, %2\;\t%0., %1/m, %0., %3." - [(set_attr "movprfx" "*,*,yes,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , Upl , %0 , vsB ; * ] \t%0., %1/m, %0., #%3 + [ w , Upl , 0 , w ; * ] \t%0., %1/m, %0., %3. + [ ?&w , Upl , w , vsB ; yes ] movprfx\t%0, %2\;\t%0., %1/m, %0., #%3 + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %2\;\t%0., %1/m, %0., %3. + } ) ;; Merging forms are handled through SVE_COND_FP_BINARY and @@ -6695,21 +6737,21 @@ (define_expand "fma4" ;; Predicated integer addition of product. (define_insn "@aarch64_pred_fma" - [(set (match_operand:SVE_I 0 "register_operand" "=w, w, ?&w") + [(set (match_operand:SVE_I 0 "register_operand") (plus:SVE_I (unspec:SVE_I - [(match_operand: 1 "register_operand" "Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (mult:SVE_I - (match_operand:SVE_I 2 "register_operand" "%0, w, w") - (match_operand:SVE_I 3 "register_operand" "w, w, w"))] + (match_operand:SVE_I 2 "register_operand") + (match_operand:SVE_I 3 "register_operand"))] UNSPEC_PRED_X) - (match_operand:SVE_I 4 "register_operand" "w, 0, w")))] + (match_operand:SVE_I 4 "register_operand")))] "TARGET_SVE" - "@ - mad\t%0., %1/m, %3., %4. - mla\t%0., %1/m, %2., %3. - movprfx\t%0, %4\;mla\t%0., %1/m, %2., %3." - [(set_attr "movprfx" "*,*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 , 4 ; attrs: movprfx ] + [ w , Upl , %0 , w , w ; * ] mad\t%0., %1/m, %3., %4. + [ w , Upl , w , w , 0 ; * ] mla\t%0., %1/m, %2., %3. + [ ?&w , Upl , w , w , w ; yes ] movprfx\t%0, %4\;mla\t%0., %1/m, %2., %3. + } ) ;; Predicated integer addition of product with merging. @@ -6737,65 +6779,66 @@ (define_expand "cond_fma" ;; Predicated integer addition of product, merging with the first input. 
(define_insn "*cond_fma_2" - [(set (match_operand:SVE_I 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_I 0 "register_operand") (unspec:SVE_I - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (plus:SVE_I (mult:SVE_I - (match_operand:SVE_I 2 "register_operand" "0, w") - (match_operand:SVE_I 3 "register_operand" "w, w")) - (match_operand:SVE_I 4 "register_operand" "w, w")) + (match_operand:SVE_I 2 "register_operand") + (match_operand:SVE_I 3 "register_operand")) + (match_operand:SVE_I 4 "register_operand")) (match_dup 2)] UNSPEC_SEL))] "TARGET_SVE" - "@ - mad\t%0., %1/m, %3., %4. - movprfx\t%0, %2\;mad\t%0., %1/m, %3., %4." - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 , 4 ; attrs: movprfx ] + [ w , Upl , 0 , w , w ; * ] mad\t%0., %1/m, %3., %4. + [ ?&w , Upl , w , w , w ; yes ] movprfx\t%0, %2\;mad\t%0., %1/m, %3., %4. + } ) ;; Predicated integer addition of product, merging with the third input. (define_insn "*cond_fma_4" - [(set (match_operand:SVE_I 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_I 0 "register_operand") (unspec:SVE_I - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (plus:SVE_I (mult:SVE_I - (match_operand:SVE_I 2 "register_operand" "w, w") - (match_operand:SVE_I 3 "register_operand" "w, w")) - (match_operand:SVE_I 4 "register_operand" "0, w")) + (match_operand:SVE_I 2 "register_operand") + (match_operand:SVE_I 3 "register_operand")) + (match_operand:SVE_I 4 "register_operand")) (match_dup 4)] UNSPEC_SEL))] "TARGET_SVE" - "@ - mla\t%0., %1/m, %2., %3. - movprfx\t%0, %4\;mla\t%0., %1/m, %2., %3." - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 , 4 ; attrs: movprfx ] + [ w , Upl , w , w , 0 ; * ] mla\t%0., %1/m, %2., %3. + [ ?&w , Upl , w , w , w ; yes ] movprfx\t%0, %4\;mla\t%0., %1/m, %2., %3. + } ) ;; Predicated integer addition of product, merging with an independent value. (define_insn_and_rewrite "*cond_fma_any" - [(set (match_operand:SVE_I 0 "register_operand" "=&w, &w, &w, &w, &w, ?&w") + [(set (match_operand:SVE_I 0 "register_operand") (unspec:SVE_I - [(match_operand: 1 "register_operand" "Upl, Upl, Upl, Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (plus:SVE_I (mult:SVE_I - (match_operand:SVE_I 2 "register_operand" "w, w, 0, w, w, w") - (match_operand:SVE_I 3 "register_operand" "w, w, w, 0, w, w")) - (match_operand:SVE_I 4 "register_operand" "w, 0, w, w, w, w")) - (match_operand:SVE_I 5 "aarch64_simd_reg_or_zero" "Dz, Dz, Dz, Dz, 0, w")] + (match_operand:SVE_I 2 "register_operand") + (match_operand:SVE_I 3 "register_operand")) + (match_operand:SVE_I 4 "register_operand")) + (match_operand:SVE_I 5 "aarch64_simd_reg_or_zero")] UNSPEC_SEL))] "TARGET_SVE && !rtx_equal_p (operands[2], operands[5]) && !rtx_equal_p (operands[3], operands[5]) && !rtx_equal_p (operands[4], operands[5])" - "@ - movprfx\t%0., %1/z, %4.\;mla\t%0., %1/m, %2., %3. - movprfx\t%0., %1/z, %0.\;mla\t%0., %1/m, %2., %3. - movprfx\t%0., %1/z, %0.\;mad\t%0., %1/m, %3., %4. - movprfx\t%0., %1/z, %0.\;mad\t%0., %1/m, %2., %4. - movprfx\t%0., %1/m, %4.\;mla\t%0., %1/m, %2., %3. - #" + {@ [ cons: =0 , 1 , 2 , 3 , 4 , 5 ] + [ &w , Upl , w , w , w , Dz ] movprfx\t%0., %1/z, %4.\;mla\t%0., %1/m, %2., %3. + [ &w , Upl , w , w , 0 , Dz ] movprfx\t%0., %1/z, %0.\;mla\t%0., %1/m, %2., %3. + [ &w , Upl , 0 , w , w , Dz ] movprfx\t%0., %1/z, %0.\;mad\t%0., %1/m, %3., %4. + [ &w , Upl , w , 0 , w , Dz ] movprfx\t%0., %1/z, %0.\;mad\t%0., %1/m, %2., %4. 
+ [ &w , Upl , w , w , w , 0 ] movprfx\t%0., %1/m, %4.\;mla\t%0., %1/m, %2., %3. + [ ?&w , Upl , w , w , w , w ] # + } "&& reload_completed && register_operand (operands[5], mode) && !rtx_equal_p (operands[0], operands[5])" @@ -6836,21 +6879,21 @@ (define_expand "fnma4" ;; Predicated integer subtraction of product. (define_insn "@aarch64_pred_fnma" - [(set (match_operand:SVE_I 0 "register_operand" "=w, w, ?&w") + [(set (match_operand:SVE_I 0 "register_operand") (minus:SVE_I - (match_operand:SVE_I 4 "register_operand" "w, 0, w") + (match_operand:SVE_I 4 "register_operand") (unspec:SVE_I - [(match_operand: 1 "register_operand" "Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (mult:SVE_I - (match_operand:SVE_I 2 "register_operand" "%0, w, w") - (match_operand:SVE_I 3 "register_operand" "w, w, w"))] + (match_operand:SVE_I 2 "register_operand") + (match_operand:SVE_I 3 "register_operand"))] UNSPEC_PRED_X)))] "TARGET_SVE" - "@ - msb\t%0., %1/m, %3., %4. - mls\t%0., %1/m, %2., %3. - movprfx\t%0, %4\;mls\t%0., %1/m, %2., %3." - [(set_attr "movprfx" "*,*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 , 4 ; attrs: movprfx ] + [ w , Upl , %0 , w , w ; * ] msb\t%0., %1/m, %3., %4. + [ w , Upl , w , w , 0 ; * ] mls\t%0., %1/m, %2., %3. + [ ?&w , Upl , w , w , w ; yes ] movprfx\t%0, %4\;mls\t%0., %1/m, %2., %3. + } ) ;; Predicated integer subtraction of product with merging. @@ -6878,66 +6921,67 @@ (define_expand "cond_fnma" ;; Predicated integer subtraction of product, merging with the first input. (define_insn "*cond_fnma_2" - [(set (match_operand:SVE_I 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_I 0 "register_operand") (unspec:SVE_I - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (minus:SVE_I - (match_operand:SVE_I 4 "register_operand" "w, w") + (match_operand:SVE_I 4 "register_operand") (mult:SVE_I - (match_operand:SVE_I 2 "register_operand" "0, w") - (match_operand:SVE_I 3 "register_operand" "w, w"))) + (match_operand:SVE_I 2 "register_operand") + (match_operand:SVE_I 3 "register_operand"))) (match_dup 2)] UNSPEC_SEL))] "TARGET_SVE" - "@ - msb\t%0., %1/m, %3., %4. - movprfx\t%0, %2\;msb\t%0., %1/m, %3., %4." - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 , 4 ; attrs: movprfx ] + [ w , Upl , 0 , w , w ; * ] msb\t%0., %1/m, %3., %4. + [ ?&w , Upl , w , w , w ; yes ] movprfx\t%0, %2\;msb\t%0., %1/m, %3., %4. + } ) ;; Predicated integer subtraction of product, merging with the third input. (define_insn "*cond_fnma_4" - [(set (match_operand:SVE_I 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_I 0 "register_operand") (unspec:SVE_I - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (minus:SVE_I - (match_operand:SVE_I 4 "register_operand" "0, w") + (match_operand:SVE_I 4 "register_operand") (mult:SVE_I - (match_operand:SVE_I 2 "register_operand" "w, w") - (match_operand:SVE_I 3 "register_operand" "w, w"))) + (match_operand:SVE_I 2 "register_operand") + (match_operand:SVE_I 3 "register_operand"))) (match_dup 4)] UNSPEC_SEL))] "TARGET_SVE" - "@ - mls\t%0., %1/m, %2., %3. - movprfx\t%0, %4\;mls\t%0., %1/m, %2., %3." - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 , 4 ; attrs: movprfx ] + [ w , Upl , w , w , 0 ; * ] mls\t%0., %1/m, %2., %3. + [ ?&w , Upl , w , w , w ; yes ] movprfx\t%0, %4\;mls\t%0., %1/m, %2., %3. + } ) ;; Predicated integer subtraction of product, merging with an ;; independent value. 
(define_insn_and_rewrite "*cond_fnma_any" - [(set (match_operand:SVE_I 0 "register_operand" "=&w, &w, &w, &w, &w, ?&w") + [(set (match_operand:SVE_I 0 "register_operand") (unspec:SVE_I - [(match_operand: 1 "register_operand" "Upl, Upl, Upl, Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (minus:SVE_I - (match_operand:SVE_I 4 "register_operand" "w, 0, w, w, w, w") + (match_operand:SVE_I 4 "register_operand") (mult:SVE_I - (match_operand:SVE_I 2 "register_operand" "w, w, 0, w, w, w") - (match_operand:SVE_I 3 "register_operand" "w, w, w, 0, w, w"))) - (match_operand:SVE_I 5 "aarch64_simd_reg_or_zero" "Dz, Dz, Dz, Dz, 0, w")] + (match_operand:SVE_I 2 "register_operand") + (match_operand:SVE_I 3 "register_operand"))) + (match_operand:SVE_I 5 "aarch64_simd_reg_or_zero")] UNSPEC_SEL))] "TARGET_SVE && !rtx_equal_p (operands[2], operands[5]) && !rtx_equal_p (operands[3], operands[5]) && !rtx_equal_p (operands[4], operands[5])" - "@ - movprfx\t%0., %1/z, %4.\;mls\t%0., %1/m, %2., %3. - movprfx\t%0., %1/z, %0.\;mls\t%0., %1/m, %2., %3. - movprfx\t%0., %1/z, %0.\;msb\t%0., %1/m, %3., %4. - movprfx\t%0., %1/z, %0.\;msb\t%0., %1/m, %2., %4. - movprfx\t%0., %1/m, %4.\;mls\t%0., %1/m, %2., %3. - #" + {@ [ cons: =0 , 1 , 2 , 3 , 4 , 5 ] + [ &w , Upl , w , w , w , Dz ] movprfx\t%0., %1/z, %4.\;mls\t%0., %1/m, %2., %3. + [ &w , Upl , w , w , 0 , Dz ] movprfx\t%0., %1/z, %0.\;mls\t%0., %1/m, %2., %3. + [ &w , Upl , 0 , w , w , Dz ] movprfx\t%0., %1/z, %0.\;msb\t%0., %1/m, %3., %4. + [ &w , Upl , w , 0 , w , Dz ] movprfx\t%0., %1/z, %0.\;msb\t%0., %1/m, %2., %4. + [ &w , Upl , w , w , w , 0 ] movprfx\t%0., %1/m, %4.\;mls\t%0., %1/m, %2., %3. + [ ?&w , Upl , w , w , w , w ] # + } "&& reload_completed && register_operand (operands[5], mode) && !rtx_equal_p (operands[0], operands[5])" @@ -6961,70 +7005,70 @@ (define_insn_and_rewrite "*cond_fnma_any" ;; Four-element integer dot-product with accumulation. (define_insn "dot_prod" - [(set (match_operand:SVE_FULL_SDI 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_SDI 0 "register_operand") (plus:SVE_FULL_SDI (unspec:SVE_FULL_SDI - [(match_operand: 1 "register_operand" "w, w") - (match_operand: 2 "register_operand" "w, w")] + [(match_operand: 1 "register_operand") + (match_operand: 2 "register_operand")] DOTPROD) - (match_operand:SVE_FULL_SDI 3 "register_operand" "0, w")))] + (match_operand:SVE_FULL_SDI 3 "register_operand")))] "TARGET_SVE" - "@ - dot\\t%0., %1., %2. - movprfx\t%0, %3\;dot\\t%0., %1., %2." - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , w , w , 0 ; * ] dot\t%0., %1., %2. + [ ?&w , w , w , w ; yes ] movprfx\t%0, %3\;dot\t%0., %1., %2. + } ) ;; Four-element integer dot-product by selected lanes with accumulation. 
(define_insn "@aarch64_dot_prod_lane" - [(set (match_operand:SVE_FULL_SDI 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_SDI 0 "register_operand") (plus:SVE_FULL_SDI (unspec:SVE_FULL_SDI - [(match_operand: 1 "register_operand" "w, w") + [(match_operand: 1 "register_operand") (unspec: - [(match_operand: 2 "register_operand" ", ") + [(match_operand: 2 "register_operand") (match_operand:SI 3 "const_int_operand")] UNSPEC_SVE_LANE_SELECT)] DOTPROD) - (match_operand:SVE_FULL_SDI 4 "register_operand" "0, w")))] + (match_operand:SVE_FULL_SDI 4 "register_operand")))] "TARGET_SVE" - "@ - dot\\t%0., %1., %2.[%3] - movprfx\t%0, %4\;dot\\t%0., %1., %2.[%3]" - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 4 ; attrs: movprfx ] + [ w , w , , 0 ; * ] dot\t%0., %1., %2.[%3] + [ ?&w , w , , w ; yes ] movprfx\t%0, %4\;dot\t%0., %1., %2.[%3] + } ) (define_insn "@dot_prod" - [(set (match_operand:VNx4SI_ONLY 0 "register_operand" "=w, ?&w") + [(set (match_operand:VNx4SI_ONLY 0 "register_operand") (plus:VNx4SI_ONLY (unspec:VNx4SI_ONLY - [(match_operand: 1 "register_operand" "w, w") - (match_operand: 2 "register_operand" "w, w")] + [(match_operand: 1 "register_operand") + (match_operand: 2 "register_operand")] DOTPROD_US_ONLY) - (match_operand:VNx4SI_ONLY 3 "register_operand" "0, w")))] + (match_operand:VNx4SI_ONLY 3 "register_operand")))] "TARGET_SVE_I8MM" - "@ - dot\\t%0.s, %1.b, %2.b - movprfx\t%0, %3\;dot\\t%0.s, %1.b, %2.b" - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , w , w , 0 ; * ] dot\t%0.s, %1.b, %2.b + [ ?&w , w , w , w ; yes ] movprfx\t%0, %3\;dot\t%0.s, %1.b, %2.b + } ) (define_insn "@aarch64_dot_prod_lane" - [(set (match_operand:VNx4SI_ONLY 0 "register_operand" "=w, ?&w") + [(set (match_operand:VNx4SI_ONLY 0 "register_operand") (plus:VNx4SI_ONLY (unspec:VNx4SI_ONLY - [(match_operand: 1 "register_operand" "w, w") + [(match_operand: 1 "register_operand") (unspec: - [(match_operand: 2 "register_operand" "y, y") + [(match_operand: 2 "register_operand") (match_operand:SI 3 "const_int_operand")] UNSPEC_SVE_LANE_SELECT)] DOTPROD_I8MM) - (match_operand:VNx4SI_ONLY 4 "register_operand" "0, w")))] + (match_operand:VNx4SI_ONLY 4 "register_operand")))] "TARGET_SVE_I8MM" - "@ - dot\\t%0.s, %1.b, %2.b[%3] - movprfx\t%0, %4\;dot\\t%0.s, %1.b, %2.b[%3]" - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 4 ; attrs: movprfx ] + [ w , w , y , 0 ; * ] dot\t%0.s, %1.b, %2.b[%3] + [ ?&w , w , y , w ; yes ] movprfx\t%0, %4\;dot\t%0.s, %1.b, %2.b[%3] + } ) ;; ------------------------------------------------------------------------- @@ -7067,18 +7111,18 @@ (define_expand "sad" ;; ------------------------------------------------------------------------- (define_insn "@aarch64_sve_add_" - [(set (match_operand:VNx4SI_ONLY 0 "register_operand" "=w, ?&w") + [(set (match_operand:VNx4SI_ONLY 0 "register_operand") (plus:VNx4SI_ONLY (unspec:VNx4SI_ONLY - [(match_operand: 2 "register_operand" "w, w") - (match_operand: 3 "register_operand" "w, w")] + [(match_operand: 2 "register_operand") + (match_operand: 3 "register_operand")] MATMUL) - (match_operand:VNx4SI_ONLY 1 "register_operand" "0, w")))] + (match_operand:VNx4SI_ONLY 1 "register_operand")))] "TARGET_SVE_I8MM" - "@ - mmla\\t%0.s, %2.b, %3.b - movprfx\t%0, %1\;mmla\\t%0.s, %2.b, %3.b" - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , 0 , w , w ; * ] mmla\t%0.s, %2.b, %3.b + [ ?&w , w , w , w ; yes ] movprfx\t%0, %1\;mmla\t%0.s, %2.b, %3.b + } ) ;; 
------------------------------------------------------------------------- @@ -7113,20 +7157,20 @@ (define_expand "4" ;; Predicated floating-point ternary operations. (define_insn "@aarch64_pred_" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=w, w, ?&w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (match_operand:SI 5 "aarch64_sve_gp_strictness") - (match_operand:SVE_FULL_F 2 "register_operand" "%w, 0, w") - (match_operand:SVE_FULL_F 3 "register_operand" "w, w, w") - (match_operand:SVE_FULL_F 4 "register_operand" "0, w, w")] + (match_operand:SVE_FULL_F 2 "register_operand") + (match_operand:SVE_FULL_F 3 "register_operand") + (match_operand:SVE_FULL_F 4 "register_operand")] SVE_COND_FP_TERNARY))] "TARGET_SVE" - "@ - \t%0., %1/m, %2., %3. - \t%0., %1/m, %3., %4. - movprfx\t%0, %4\;\t%0., %1/m, %2., %3." - [(set_attr "movprfx" "*,*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 , 4 ; attrs: movprfx ] + [ w , Upl , %w , w , 0 ; * ] \t%0., %1/m, %2., %3. + [ w , Upl , 0 , w , w ; * ] \t%0., %1/m, %3., %4. + [ ?&w , Upl , w , w , w ; yes ] movprfx\t%0, %4\;\t%0., %1/m, %2., %3. + } ) ;; Predicated floating-point ternary operations with merging. @@ -7154,121 +7198,122 @@ (define_expand "@cond_" ;; Predicated floating-point ternary operations, merging with the ;; first input. (define_insn_and_rewrite "*cond__2_relaxed" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_F [(match_operand 5) (const_int SVE_RELAXED_GP) - (match_operand:SVE_FULL_F 2 "register_operand" "0, w") - (match_operand:SVE_FULL_F 3 "register_operand" "w, w") - (match_operand:SVE_FULL_F 4 "register_operand" "w, w")] + (match_operand:SVE_FULL_F 2 "register_operand") + (match_operand:SVE_FULL_F 3 "register_operand") + (match_operand:SVE_FULL_F 4 "register_operand")] SVE_COND_FP_TERNARY) (match_dup 2)] UNSPEC_SEL))] "TARGET_SVE" - "@ - \t%0., %1/m, %3., %4. - movprfx\t%0, %2\;\t%0., %1/m, %3., %4." + {@ [ cons: =0 , 1 , 2 , 3 , 4 ; attrs: movprfx ] + [ w , Upl , 0 , w , w ; * ] \t%0., %1/m, %3., %4. + [ ?&w , Upl , w , w , w ; yes ] movprfx\t%0, %2\;\t%0., %1/m, %3., %4. + } "&& !rtx_equal_p (operands[1], operands[5])" { operands[5] = copy_rtx (operands[1]); } - [(set_attr "movprfx" "*,yes")] ) (define_insn "*cond__2_strict" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_F [(match_dup 1) (const_int SVE_STRICT_GP) - (match_operand:SVE_FULL_F 2 "register_operand" "0, w") - (match_operand:SVE_FULL_F 3 "register_operand" "w, w") - (match_operand:SVE_FULL_F 4 "register_operand" "w, w")] + (match_operand:SVE_FULL_F 2 "register_operand") + (match_operand:SVE_FULL_F 3 "register_operand") + (match_operand:SVE_FULL_F 4 "register_operand")] SVE_COND_FP_TERNARY) (match_dup 2)] UNSPEC_SEL))] "TARGET_SVE" - "@ - \t%0., %1/m, %3., %4. - movprfx\t%0, %2\;\t%0., %1/m, %3., %4." - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 , 4 ; attrs: movprfx ] + [ w , Upl , 0 , w , w ; * ] \t%0., %1/m, %3., %4. + [ ?&w , Upl , w , w , w ; yes ] movprfx\t%0, %2\;\t%0., %1/m, %3., %4. 
+ } ) ;; Predicated floating-point ternary operations, merging with the ;; third input. (define_insn_and_rewrite "*cond__4_relaxed" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_F [(match_operand 5) (const_int SVE_RELAXED_GP) - (match_operand:SVE_FULL_F 2 "register_operand" "w, w") - (match_operand:SVE_FULL_F 3 "register_operand" "w, w") - (match_operand:SVE_FULL_F 4 "register_operand" "0, w")] + (match_operand:SVE_FULL_F 2 "register_operand") + (match_operand:SVE_FULL_F 3 "register_operand") + (match_operand:SVE_FULL_F 4 "register_operand")] SVE_COND_FP_TERNARY) (match_dup 4)] UNSPEC_SEL))] "TARGET_SVE" - "@ - \t%0., %1/m, %2., %3. - movprfx\t%0, %4\;\t%0., %1/m, %2., %3." + {@ [ cons: =0 , 1 , 2 , 3 , 4 ; attrs: movprfx ] + [ w , Upl , w , w , 0 ; * ] \t%0., %1/m, %2., %3. + [ ?&w , Upl , w , w , w ; yes ] movprfx\t%0, %4\;\t%0., %1/m, %2., %3. + } "&& !rtx_equal_p (operands[1], operands[5])" { operands[5] = copy_rtx (operands[1]); } - [(set_attr "movprfx" "*,yes")] ) (define_insn "*cond__4_strict" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_F [(match_dup 1) (const_int SVE_STRICT_GP) - (match_operand:SVE_FULL_F 2 "register_operand" "w, w") - (match_operand:SVE_FULL_F 3 "register_operand" "w, w") - (match_operand:SVE_FULL_F 4 "register_operand" "0, w")] + (match_operand:SVE_FULL_F 2 "register_operand") + (match_operand:SVE_FULL_F 3 "register_operand") + (match_operand:SVE_FULL_F 4 "register_operand")] SVE_COND_FP_TERNARY) (match_dup 4)] UNSPEC_SEL))] "TARGET_SVE" - "@ - \t%0., %1/m, %2., %3. - movprfx\t%0, %4\;\t%0., %1/m, %2., %3." - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 , 4 ; attrs: movprfx ] + [ w , Upl , w , w , 0 ; * ] \t%0., %1/m, %2., %3. + [ ?&w , Upl , w , w , w ; yes ] movprfx\t%0, %4\;\t%0., %1/m, %2., %3. + } ) ;; Predicated floating-point ternary operations, merging with an ;; independent value. (define_insn_and_rewrite "*cond__any_relaxed" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=&w, &w, &w, &w, &w, ?&w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl, Upl, Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_F [(match_operand 6) (const_int SVE_RELAXED_GP) - (match_operand:SVE_FULL_F 2 "register_operand" "w, w, 0, w, w, w") - (match_operand:SVE_FULL_F 3 "register_operand" "w, w, w, 0, w, w") - (match_operand:SVE_FULL_F 4 "register_operand" "w, 0, w, w, w, w")] + (match_operand:SVE_FULL_F 2 "register_operand") + (match_operand:SVE_FULL_F 3 "register_operand") + (match_operand:SVE_FULL_F 4 "register_operand")] SVE_COND_FP_TERNARY) - (match_operand:SVE_FULL_F 5 "aarch64_simd_reg_or_zero" "Dz, Dz, Dz, Dz, 0, w")] + (match_operand:SVE_FULL_F 5 "aarch64_simd_reg_or_zero")] UNSPEC_SEL))] "TARGET_SVE && !rtx_equal_p (operands[2], operands[5]) && !rtx_equal_p (operands[3], operands[5]) && !rtx_equal_p (operands[4], operands[5])" - "@ - movprfx\t%0., %1/z, %4.\;\t%0., %1/m, %2., %3. - movprfx\t%0., %1/z, %0.\;\t%0., %1/m, %2., %3. - movprfx\t%0., %1/z, %0.\;\t%0., %1/m, %3., %4. - movprfx\t%0., %1/z, %0.\;\t%0., %1/m, %2., %4. 
- movprfx\t%0., %1/m, %4.\;\t%0., %1/m, %2., %3. - #" + {@ [ cons: =0 , 1 , 2 , 3 , 4 , 5 ] + [ &w , Upl , w , w , w , Dz ] movprfx\t%0., %1/z, %4.\;\t%0., %1/m, %2., %3. + [ &w , Upl , w , w , 0 , Dz ] movprfx\t%0., %1/z, %0.\;\t%0., %1/m, %2., %3. + [ &w , Upl , 0 , w , w , Dz ] movprfx\t%0., %1/z, %0.\;\t%0., %1/m, %3., %4. + [ &w , Upl , w , 0 , w , Dz ] movprfx\t%0., %1/z, %0.\;\t%0., %1/m, %2., %4. + [ &w , Upl , w , w , w , 0 ] movprfx\t%0., %1/m, %4.\;\t%0., %1/m, %2., %3. + [ ?&w , Upl , w , w , w , w ] # + } "&& 1" { if (reload_completed @@ -7288,29 +7333,30 @@ (define_insn_and_rewrite "*cond__any_relaxed" ) (define_insn_and_rewrite "*cond__any_strict" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=&w, &w, &w, &w, &w, ?&w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl, Upl, Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_F [(match_dup 1) (const_int SVE_STRICT_GP) - (match_operand:SVE_FULL_F 2 "register_operand" "w, w, 0, w, w, w") - (match_operand:SVE_FULL_F 3 "register_operand" "w, w, w, 0, w, w") - (match_operand:SVE_FULL_F 4 "register_operand" "w, 0, w, w, w, w")] + (match_operand:SVE_FULL_F 2 "register_operand") + (match_operand:SVE_FULL_F 3 "register_operand") + (match_operand:SVE_FULL_F 4 "register_operand")] SVE_COND_FP_TERNARY) - (match_operand:SVE_FULL_F 5 "aarch64_simd_reg_or_zero" "Dz, Dz, Dz, Dz, 0, w")] + (match_operand:SVE_FULL_F 5 "aarch64_simd_reg_or_zero")] UNSPEC_SEL))] "TARGET_SVE && !rtx_equal_p (operands[2], operands[5]) && !rtx_equal_p (operands[3], operands[5]) && !rtx_equal_p (operands[4], operands[5])" - "@ - movprfx\t%0., %1/z, %4.\;\t%0., %1/m, %2., %3. - movprfx\t%0., %1/z, %0.\;\t%0., %1/m, %2., %3. - movprfx\t%0., %1/z, %0.\;\t%0., %1/m, %3., %4. - movprfx\t%0., %1/z, %0.\;\t%0., %1/m, %2., %4. - movprfx\t%0., %1/m, %4.\;\t%0., %1/m, %2., %3. - #" + {@ [ cons: =0 , 1 , 2 , 3 , 4 , 5 ] + [ &w , Upl , w , w , w , Dz ] movprfx\t%0., %1/z, %4.\;\t%0., %1/m, %2., %3. + [ &w , Upl , w , w , 0 , Dz ] movprfx\t%0., %1/z, %0.\;\t%0., %1/m, %2., %3. + [ &w , Upl , 0 , w , w , Dz ] movprfx\t%0., %1/z, %0.\;\t%0., %1/m, %3., %4. + [ &w , Upl , w , 0 , w , Dz ] movprfx\t%0., %1/z, %0.\;\t%0., %1/m, %2., %4. + [ &w , Upl , w , w , w , 0 ] movprfx\t%0., %1/m, %4.\;\t%0., %1/m, %2., %3. + [ ?&w , Upl , w , w , w , w ] # + } "&& reload_completed && register_operand (operands[5], mode) && !rtx_equal_p (operands[0], operands[5])" @@ -7325,20 +7371,20 @@ (define_insn_and_rewrite "*cond__any_strict" ;; Unpredicated FMLA and FMLS by selected lanes. It doesn't seem worth using ;; (fma ...) since target-independent code won't understand the indexing. 
(define_insn "@aarch64__lane_" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand:SVE_FULL_F 1 "register_operand" "w, w") + [(match_operand:SVE_FULL_F 1 "register_operand") (unspec:SVE_FULL_F - [(match_operand:SVE_FULL_F 2 "register_operand" ", ") + [(match_operand:SVE_FULL_F 2 "register_operand") (match_operand:SI 3 "const_int_operand")] UNSPEC_SVE_LANE_SELECT) - (match_operand:SVE_FULL_F 4 "register_operand" "0, w")] + (match_operand:SVE_FULL_F 4 "register_operand")] SVE_FP_TERNARY_LANE))] "TARGET_SVE" - "@ - \t%0., %1., %2.[%3] - movprfx\t%0, %4\;\t%0., %1., %2.[%3]" - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 4 ; attrs: movprfx ] + [ w , w , , 0 ; * ] \t%0., %1., %2.[%3] + [ ?&w , w , , w ; yes ] movprfx\t%0, %4\;\t%0., %1., %2.[%3] + } ) ;; ------------------------------------------------------------------------- @@ -7350,19 +7396,19 @@ (define_insn "@aarch64__lane_" ;; Predicated FCMLA. (define_insn "@aarch64_pred_" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (match_operand:SI 5 "aarch64_sve_gp_strictness") - (match_operand:SVE_FULL_F 2 "register_operand" "w, w") - (match_operand:SVE_FULL_F 3 "register_operand" "w, w") - (match_operand:SVE_FULL_F 4 "register_operand" "0, w")] + (match_operand:SVE_FULL_F 2 "register_operand") + (match_operand:SVE_FULL_F 3 "register_operand") + (match_operand:SVE_FULL_F 4 "register_operand")] SVE_COND_FCMLA))] "TARGET_SVE" - "@ - fcmla\t%0., %1/m, %2., %3., # - movprfx\t%0, %4\;fcmla\t%0., %1/m, %2., %3., #" - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 , 4 ; attrs: movprfx ] + [ w , Upl , w , w , 0 ; * ] fcmla\t%0., %1/m, %2., %3., # + [ ?&w , Upl , w , w , w ; yes ] movprfx\t%0, %4\;fcmla\t%0., %1/m, %2., %3., # + } ) ;; unpredicated optab pattern for auto-vectorizer @@ -7440,69 +7486,70 @@ (define_expand "@cond_" ;; Predicated FCMLA, merging with the third input. 
(define_insn_and_rewrite "*cond__4_relaxed" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_F [(match_operand 5) (const_int SVE_RELAXED_GP) - (match_operand:SVE_FULL_F 2 "register_operand" "w, w") - (match_operand:SVE_FULL_F 3 "register_operand" "w, w") - (match_operand:SVE_FULL_F 4 "register_operand" "0, w")] + (match_operand:SVE_FULL_F 2 "register_operand") + (match_operand:SVE_FULL_F 3 "register_operand") + (match_operand:SVE_FULL_F 4 "register_operand")] SVE_COND_FCMLA) (match_dup 4)] UNSPEC_SEL))] "TARGET_SVE" - "@ - fcmla\t%0., %1/m, %2., %3., # - movprfx\t%0, %4\;fcmla\t%0., %1/m, %2., %3., #" + {@ [ cons: =0 , 1 , 2 , 3 , 4 ; attrs: movprfx ] + [ w , Upl , w , w , 0 ; * ] fcmla\t%0., %1/m, %2., %3., # + [ ?&w , Upl , w , w , w ; yes ] movprfx\t%0, %4\;fcmla\t%0., %1/m, %2., %3., # + } "&& !rtx_equal_p (operands[1], operands[5])" { operands[5] = copy_rtx (operands[1]); } - [(set_attr "movprfx" "*,yes")] ) (define_insn "*cond__4_strict" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_F [(match_dup 1) (const_int SVE_STRICT_GP) - (match_operand:SVE_FULL_F 2 "register_operand" "w, w") - (match_operand:SVE_FULL_F 3 "register_operand" "w, w") - (match_operand:SVE_FULL_F 4 "register_operand" "0, w")] + (match_operand:SVE_FULL_F 2 "register_operand") + (match_operand:SVE_FULL_F 3 "register_operand") + (match_operand:SVE_FULL_F 4 "register_operand")] SVE_COND_FCMLA) (match_dup 4)] UNSPEC_SEL))] "TARGET_SVE" - "@ - fcmla\t%0., %1/m, %2., %3., # - movprfx\t%0, %4\;fcmla\t%0., %1/m, %2., %3., #" - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 , 4 ; attrs: movprfx ] + [ w , Upl , w , w , 0 ; * ] fcmla\t%0., %1/m, %2., %3., # + [ ?&w , Upl , w , w , w ; yes ] movprfx\t%0, %4\;fcmla\t%0., %1/m, %2., %3., # + } ) ;; Predicated FCMLA, merging with an independent value. 
(define_insn_and_rewrite "*cond__any_relaxed" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=&w, &w, &w, ?&w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_F [(match_operand 6) (const_int SVE_RELAXED_GP) - (match_operand:SVE_FULL_F 2 "register_operand" "w, w, w, w") - (match_operand:SVE_FULL_F 3 "register_operand" "w, w, w, w") - (match_operand:SVE_FULL_F 4 "register_operand" "w, 0, w, w")] + (match_operand:SVE_FULL_F 2 "register_operand") + (match_operand:SVE_FULL_F 3 "register_operand") + (match_operand:SVE_FULL_F 4 "register_operand")] SVE_COND_FCMLA) - (match_operand:SVE_FULL_F 5 "aarch64_simd_reg_or_zero" "Dz, Dz, 0, w")] + (match_operand:SVE_FULL_F 5 "aarch64_simd_reg_or_zero")] UNSPEC_SEL))] "TARGET_SVE && !rtx_equal_p (operands[4], operands[5])" - "@ - movprfx\t%0., %1/z, %4.\;fcmla\t%0., %1/m, %2., %3., # - movprfx\t%0., %1/z, %0.\;fcmla\t%0., %1/m, %2., %3., # - movprfx\t%0., %1/m, %4.\;fcmla\t%0., %1/m, %2., %3., # - #" + {@ [ cons: =0 , 1 , 2 , 3 , 4 , 5 ] + [ &w , Upl , w , w , w , Dz ] movprfx\t%0., %1/z, %4.\;fcmla\t%0., %1/m, %2., %3., # + [ &w , Upl , w , w , 0 , Dz ] movprfx\t%0., %1/z, %0.\;fcmla\t%0., %1/m, %2., %3., # + [ &w , Upl , w , w , w , 0 ] movprfx\t%0., %1/m, %4.\;fcmla\t%0., %1/m, %2., %3., # + [ ?&w , Upl , w , w , w , w ] # + } "&& 1" { if (reload_completed @@ -7522,24 +7569,25 @@ (define_insn_and_rewrite "*cond__any_relaxed" ) (define_insn_and_rewrite "*cond__any_strict" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=&w, &w, &w, ?&w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_F [(match_dup 1) (const_int SVE_STRICT_GP) - (match_operand:SVE_FULL_F 2 "register_operand" "w, w, w, w") - (match_operand:SVE_FULL_F 3 "register_operand" "w, w, w, w") - (match_operand:SVE_FULL_F 4 "register_operand" "w, 0, w, w")] + (match_operand:SVE_FULL_F 2 "register_operand") + (match_operand:SVE_FULL_F 3 "register_operand") + (match_operand:SVE_FULL_F 4 "register_operand")] SVE_COND_FCMLA) - (match_operand:SVE_FULL_F 5 "aarch64_simd_reg_or_zero" "Dz, Dz, 0, w")] + (match_operand:SVE_FULL_F 5 "aarch64_simd_reg_or_zero")] UNSPEC_SEL))] "TARGET_SVE && !rtx_equal_p (operands[4], operands[5])" - "@ - movprfx\t%0., %1/z, %4.\;fcmla\t%0., %1/m, %2., %3., # - movprfx\t%0., %1/z, %0.\;fcmla\t%0., %1/m, %2., %3., # - movprfx\t%0., %1/m, %4.\;fcmla\t%0., %1/m, %2., %3., # - #" + {@ [ cons: =0 , 1 , 2 , 3 , 4 , 5 ] + [ &w , Upl , w , w , w , Dz ] movprfx\t%0., %1/z, %4.\;fcmla\t%0., %1/m, %2., %3., # + [ &w , Upl , w , w , 0 , Dz ] movprfx\t%0., %1/z, %0.\;fcmla\t%0., %1/m, %2., %3., # + [ &w , Upl , w , w , w , 0 ] movprfx\t%0., %1/m, %4.\;fcmla\t%0., %1/m, %2., %3., # + [ ?&w , Upl , w , w , w , w ] # + } "&& reload_completed && register_operand (operands[5], mode) && !rtx_equal_p (operands[0], operands[5])" @@ -7553,20 +7601,20 @@ (define_insn_and_rewrite "*cond__any_strict" ;; Unpredicated FCMLA with indexing. 
(define_insn "@aarch64__lane_" - [(set (match_operand:SVE_FULL_HSF 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_HSF 0 "register_operand") (unspec:SVE_FULL_HSF - [(match_operand:SVE_FULL_HSF 1 "register_operand" "w, w") + [(match_operand:SVE_FULL_HSF 1 "register_operand") (unspec:SVE_FULL_HSF - [(match_operand:SVE_FULL_HSF 2 "register_operand" ", ") + [(match_operand:SVE_FULL_HSF 2 "register_operand") (match_operand:SI 3 "const_int_operand")] UNSPEC_SVE_LANE_SELECT) - (match_operand:SVE_FULL_HSF 4 "register_operand" "0, w")] + (match_operand:SVE_FULL_HSF 4 "register_operand")] FCMLA))] "TARGET_SVE" - "@ - fcmla\t%0., %1., %2.[%3], # - movprfx\t%0, %4\;fcmla\t%0., %1., %2.[%3], #" - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 4 ; attrs: movprfx ] + [ w , w , , 0 ; * ] fcmla\t%0., %1., %2.[%3], # + [ ?&w , w , , w ; yes ] movprfx\t%0, %4\;fcmla\t%0., %1., %2.[%3], # + } ) ;; ------------------------------------------------------------------------- @@ -7577,17 +7625,17 @@ (define_insn "@aarch64__lane_" ;; ------------------------------------------------------------------------- (define_insn "@aarch64_sve_tmad" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand:SVE_FULL_F 1 "register_operand" "0, w") - (match_operand:SVE_FULL_F 2 "register_operand" "w, w") + [(match_operand:SVE_FULL_F 1 "register_operand") + (match_operand:SVE_FULL_F 2 "register_operand") (match_operand:DI 3 "const_int_operand")] UNSPEC_FTMAD))] "TARGET_SVE" - "@ - ftmad\t%0., %0., %2., #%3 - movprfx\t%0, %1\;ftmad\t%0., %0., %2., #%3" - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 ; attrs: movprfx ] + [ w , 0 , w ; * ] ftmad\t%0., %0., %2., #%3 + [ ?&w , w , w ; yes ] movprfx\t%0, %1\;ftmad\t%0., %0., %2., #%3 + } ) ;; ------------------------------------------------------------------------- @@ -7601,33 +7649,33 @@ (define_insn "@aarch64_sve_tmad" ;; ------------------------------------------------------------------------- (define_insn "@aarch64_sve_vnx4sf" - [(set (match_operand:VNx4SF 0 "register_operand" "=w, ?&w") + [(set (match_operand:VNx4SF 0 "register_operand") (unspec:VNx4SF - [(match_operand:VNx4SF 1 "register_operand" "0, w") - (match_operand:VNx8BF 2 "register_operand" "w, w") - (match_operand:VNx8BF 3 "register_operand" "w, w")] + [(match_operand:VNx4SF 1 "register_operand") + (match_operand:VNx8BF 2 "register_operand") + (match_operand:VNx8BF 3 "register_operand")] SVE_BFLOAT_TERNARY_LONG))] "TARGET_SVE_BF16" - "@ - \t%0.s, %2.h, %3.h - movprfx\t%0, %1\;\t%0.s, %2.h, %3.h" - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , 0 , w , w ; * ] \t%0.s, %2.h, %3.h + [ ?&w , w , w , w ; yes ] movprfx\t%0, %1\;\t%0.s, %2.h, %3.h + } ) ;; The immediate range is enforced before generating the instruction. 
(define_insn "@aarch64_sve__lanevnx4sf" - [(set (match_operand:VNx4SF 0 "register_operand" "=w, ?&w") + [(set (match_operand:VNx4SF 0 "register_operand") (unspec:VNx4SF - [(match_operand:VNx4SF 1 "register_operand" "0, w") - (match_operand:VNx8BF 2 "register_operand" "w, w") - (match_operand:VNx8BF 3 "register_operand" "y, y") + [(match_operand:VNx4SF 1 "register_operand") + (match_operand:VNx8BF 2 "register_operand") + (match_operand:VNx8BF 3 "register_operand") (match_operand:SI 4 "const_int_operand")] SVE_BFLOAT_TERNARY_LONG_LANE))] "TARGET_SVE_BF16" - "@ - \t%0.s, %2.h, %3.h[%4] - movprfx\t%0, %1\;\t%0.s, %2.h, %3.h[%4]" - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , 0 , w , y ; * ] \t%0.s, %2.h, %3.h[%4] + [ ?&w , w , w , y ; yes ] movprfx\t%0, %1\;\t%0.s, %2.h, %3.h[%4] + } ) ;; ------------------------------------------------------------------------- @@ -7639,17 +7687,17 @@ (define_insn "@aarch64_sve__lanevnx4sf" ;; The mode iterator enforces the target requirements. (define_insn "@aarch64_sve_" - [(set (match_operand:SVE_MATMULF 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_MATMULF 0 "register_operand") (unspec:SVE_MATMULF - [(match_operand:SVE_MATMULF 2 "register_operand" "w, w") - (match_operand:SVE_MATMULF 3 "register_operand" "w, w") - (match_operand:SVE_MATMULF 1 "register_operand" "0, w")] + [(match_operand:SVE_MATMULF 2 "register_operand") + (match_operand:SVE_MATMULF 3 "register_operand") + (match_operand:SVE_MATMULF 1 "register_operand")] FMMLA))] "TARGET_SVE" - "@ - \\t%0., %2., %3. - movprfx\t%0, %1\;\\t%0., %2., %3." - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , 0 , w , w ; * ] \t%0., %2., %3. + [ ?&w , w , w , w ; yes ] movprfx\t%0, %1\;\t%0., %2., %3. + } ) ;; ========================================================================= @@ -7700,24 +7748,24 @@ (define_expand "@vcond_mask_" ;; For the other instructions, using the element size is more natural, ;; so we do that for SEL as well. (define_insn "*vcond_mask_" - [(set (match_operand:SVE_ALL 0 "register_operand" "=w, w, w, w, ?w, ?&w, ?&w") + [(set (match_operand:SVE_ALL 0 "register_operand") (unspec:SVE_ALL - [(match_operand: 3 "register_operand" "Upa, Upa, Upa, Upa, Upl, Upa, Upa") - (match_operand:SVE_ALL 1 "aarch64_sve_reg_or_dup_imm" "w, vss, vss, Ufc, Ufc, vss, Ufc") - (match_operand:SVE_ALL 2 "aarch64_simd_reg_or_zero" "w, 0, Dz, 0, Dz, w, w")] + [(match_operand: 3 "register_operand") + (match_operand:SVE_ALL 1 "aarch64_sve_reg_or_dup_imm") + (match_operand:SVE_ALL 2 "aarch64_simd_reg_or_zero")] UNSPEC_SEL))] "TARGET_SVE && (!register_operand (operands[1], mode) || register_operand (operands[2], mode))" - "@ - sel\t%0., %3, %1., %2. - mov\t%0., %3/m, #%I1 - mov\t%0., %3/z, #%I1 - fmov\t%0., %3/m, #%1 - movprfx\t%0., %3/z, %0.\;fmov\t%0., %3/m, #%1 - movprfx\t%0, %2\;mov\t%0., %3/m, #%I1 - movprfx\t%0, %2\;fmov\t%0., %3/m, #%1" - [(set_attr "movprfx" "*,*,*,*,yes,yes,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , w , w , Upa ; * ] sel\t%0., %3, %1., %2. 
+ [ w , vss , 0 , Upa ; * ] mov\t%0., %3/m, #%I1 + [ w , vss , Dz , Upa ; * ] mov\t%0., %3/z, #%I1 + [ w , Ufc , 0 , Upa ; * ] fmov\t%0., %3/m, #%1 + [ ?w , Ufc , Dz , Upl ; yes ] movprfx\t%0., %3/z, %0.\;fmov\t%0., %3/m, #%1 + [ ?&w , vss , w , Upa ; yes ] movprfx\t%0, %2\;mov\t%0., %3/m, #%I1 + [ ?&w , Ufc , w , Upa ; yes ] movprfx\t%0, %2\;fmov\t%0., %3/m, #%1 + } ) ;; Optimize selects between a duplicated scalar variable and another vector, @@ -7725,22 +7773,22 @@ (define_insn "*vcond_mask_" ;; of GPRs as being more expensive than duplicates of FPRs, since they ;; involve a cross-file move. (define_insn "@aarch64_sel_dup" - [(set (match_operand:SVE_ALL 0 "register_operand" "=?w, w, ??w, ?&w, ??&w, ?&w") + [(set (match_operand:SVE_ALL 0 "register_operand") (unspec:SVE_ALL - [(match_operand: 3 "register_operand" "Upl, Upl, Upl, Upl, Upl, Upl") + [(match_operand: 3 "register_operand") (vec_duplicate:SVE_ALL - (match_operand: 1 "register_operand" "r, w, r, w, r, w")) - (match_operand:SVE_ALL 2 "aarch64_simd_reg_or_zero" "0, 0, Dz, Dz, w, w")] + (match_operand: 1 "register_operand")) + (match_operand:SVE_ALL 2 "aarch64_simd_reg_or_zero")] UNSPEC_SEL))] "TARGET_SVE" - "@ - mov\t%0., %3/m, %1 - mov\t%0., %3/m, %1 - movprfx\t%0., %3/z, %0.\;mov\t%0., %3/m, %1 - movprfx\t%0., %3/z, %0.\;mov\t%0., %3/m, %1 - movprfx\t%0, %2\;mov\t%0., %3/m, %1 - movprfx\t%0, %2\;mov\t%0., %3/m, %1" - [(set_attr "movprfx" "*,*,yes,yes,yes,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ ?w , r , 0 , Upl ; * ] mov\t%0., %3/m, %1 + [ w , w , 0 , Upl ; * ] mov\t%0., %3/m, %1 + [ ??w , r , Dz , Upl ; yes ] movprfx\t%0., %3/z, %0.\;mov\t%0., %3/m, %1 + [ ?&w , w , Dz , Upl ; yes ] movprfx\t%0., %3/z, %0.\;mov\t%0., %3/m, %1 + [ ??&w , r , w , Upl ; yes ] movprfx\t%0, %2\;mov\t%0., %3/m, %1 + [ ?&w , w , w , Upl ; yes ] movprfx\t%0, %2\;mov\t%0., %3/m, %1 + } ) ;; ------------------------------------------------------------------------- @@ -7878,19 +7926,20 @@ (define_expand "vec_cmpu" ;; - the predicate result bit is in the undefined part of a VNx2BI, ;; so its value doesn't matter anyway. (define_insn "@aarch64_pred_cmp" - [(set (match_operand: 0 "register_operand" "=Upa, Upa") + [(set (match_operand: 0 "register_operand") (unspec: - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (match_operand:SI 2 "aarch64_sve_ptrue_flag") (SVE_INT_CMP: - (match_operand:SVE_I 3 "register_operand" "w, w") - (match_operand:SVE_I 4 "aarch64_sve_cmp__operand" ", w"))] + (match_operand:SVE_I 3 "register_operand") + (match_operand:SVE_I 4 "aarch64_sve_cmp__operand"))] UNSPEC_PRED_Z)) (clobber (reg:CC_NZC CC_REGNUM))] "TARGET_SVE" - "@ - cmp\t%0., %1/z, %3., #%4 - cmp\t%0., %1/z, %3., %4." + {@ [ cons: =0 , 1 , 3 , 4 ] + [ Upa , Upl , w , ] cmp\t%0., %1/z, %3., #%4 + [ Upa , Upl , w , w ] cmp\t%0., %1/z, %3., %4. 
+ } ) ;; Predicated integer comparisons in which both the flag and predicate @@ -7898,18 +7947,18 @@ (define_insn "@aarch64_pred_cmp" (define_insn_and_rewrite "*cmp_cc" [(set (reg:CC_NZC CC_REGNUM) (unspec:CC_NZC - [(match_operand:VNx16BI 1 "register_operand" "Upl, Upl") + [(match_operand:VNx16BI 1 "register_operand") (match_operand 4) (match_operand:SI 5 "aarch64_sve_ptrue_flag") (unspec: [(match_operand 6) (match_operand:SI 7 "aarch64_sve_ptrue_flag") (SVE_INT_CMP: - (match_operand:SVE_I 2 "register_operand" "w, w") - (match_operand:SVE_I 3 "aarch64_sve_cmp__operand" ", w"))] + (match_operand:SVE_I 2 "register_operand") + (match_operand:SVE_I 3 "aarch64_sve_cmp__operand"))] UNSPEC_PRED_Z)] UNSPEC_PTEST)) - (set (match_operand: 0 "register_operand" "=Upa, Upa") + (set (match_operand: 0 "register_operand") (unspec: [(match_dup 6) (match_dup 7) @@ -7919,9 +7968,10 @@ (define_insn_and_rewrite "*cmp_cc" UNSPEC_PRED_Z))] "TARGET_SVE && aarch64_sve_same_pred_for_ptest_p (&operands[4], &operands[6])" - "@ - cmp\t%0., %1/z, %2., #%3 - cmp\t%0., %1/z, %2., %3." + {@ [ cons: =0 , 1 , 2 , 3 ] + [ Upa , Upl , w , ] cmp\t%0., %1/z, %2., #%3 + [ Upa , Upl , w , w ] cmp\t%0., %1/z, %2., %3. + } "&& !rtx_equal_p (operands[4], operands[6])" { operands[6] = copy_rtx (operands[4]); @@ -7934,23 +7984,24 @@ (define_insn_and_rewrite "*cmp_cc" (define_insn_and_rewrite "*cmp_ptest" [(set (reg:CC_NZC CC_REGNUM) (unspec:CC_NZC - [(match_operand:VNx16BI 1 "register_operand" "Upl, Upl") + [(match_operand:VNx16BI 1 "register_operand") (match_operand 4) (match_operand:SI 5 "aarch64_sve_ptrue_flag") (unspec: [(match_operand 6) (match_operand:SI 7 "aarch64_sve_ptrue_flag") (SVE_INT_CMP: - (match_operand:SVE_I 2 "register_operand" "w, w") - (match_operand:SVE_I 3 "aarch64_sve_cmp__operand" ", w"))] + (match_operand:SVE_I 2 "register_operand") + (match_operand:SVE_I 3 "aarch64_sve_cmp__operand"))] UNSPEC_PRED_Z)] UNSPEC_PTEST)) (clobber (match_scratch: 0 "=Upa, Upa"))] "TARGET_SVE && aarch64_sve_same_pred_for_ptest_p (&operands[4], &operands[6])" - "@ - cmp\t%0., %1/z, %2., #%3 - cmp\t%0., %1/z, %2., %3." + {@ [ cons: 1 , 2 , 3 ] + [ Upl , w , ] cmp\t%0., %1/z, %2., #%3 + [ Upl , w , w ] cmp\t%0., %1/z, %2., %3. + } "&& !rtx_equal_p (operands[4], operands[6])" { operands[6] = copy_rtx (operands[4]); @@ -8171,17 +8222,18 @@ (define_expand "vec_cmp" ;; Predicated floating-point comparisons. (define_insn "@aarch64_pred_fcm" - [(set (match_operand: 0 "register_operand" "=Upa, Upa") + [(set (match_operand: 0 "register_operand") (unspec: - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (match_operand:SI 2 "aarch64_sve_ptrue_flag") - (match_operand:SVE_FULL_F 3 "register_operand" "w, w") - (match_operand:SVE_FULL_F 4 "aarch64_simd_reg_or_zero" "Dz, w")] + (match_operand:SVE_FULL_F 3 "register_operand") + (match_operand:SVE_FULL_F 4 "aarch64_simd_reg_or_zero")] SVE_COND_FP_CMP_I0))] "TARGET_SVE" - "@ - fcm\t%0., %1/z, %3., #0.0 - fcm\t%0., %1/z, %3., %4." + {@ [ cons: =0 , 1 , 3 , 4 ] + [ Upa , Upl , w , Dz ] fcm\t%0., %1/z, %3., #0.0 + [ Upa , Upl , w , w ] fcm\t%0., %1/z, %3., %4. + } ) ;; Same for unordered comparisons. @@ -8563,29 +8615,31 @@ (define_insn "aarch64_ptest" ;; Set operand 0 to the last active element in operand 3, or to tied ;; operand 1 if no elements are active. 
(define_insn "@fold_extract__" - [(set (match_operand: 0 "register_operand" "=?r, w") + [(set (match_operand: 0 "register_operand") (unspec: - [(match_operand: 1 "register_operand" "0, 0") - (match_operand: 2 "register_operand" "Upl, Upl") - (match_operand:SVE_FULL 3 "register_operand" "w, w")] + [(match_operand: 1 "register_operand") + (match_operand: 2 "register_operand") + (match_operand:SVE_FULL 3 "register_operand")] CLAST))] "TARGET_SVE" - "@ - clast\t%0, %2, %0, %3. - clast\t%0, %2, %0, %3." + {@ [ cons: =0 , 1 , 2 , 3 ] + [ ?r , 0 , Upl , w ] clast\t%0, %2, %0, %3. + [ w , 0 , Upl , w ] clast\t%0, %2, %0, %3. + } ) (define_insn "@aarch64_fold_extract_vector__" - [(set (match_operand:SVE_FULL 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL 0 "register_operand") (unspec:SVE_FULL - [(match_operand:SVE_FULL 1 "register_operand" "0, w") - (match_operand: 2 "register_operand" "Upl, Upl") - (match_operand:SVE_FULL 3 "register_operand" "w, w")] + [(match_operand:SVE_FULL 1 "register_operand") + (match_operand: 2 "register_operand") + (match_operand:SVE_FULL 3 "register_operand")] CLAST))] "TARGET_SVE" - "@ - clast\t%0., %2, %0., %3. - movprfx\t%0, %1\;clast\t%0., %2, %0., %3." + {@ [ cons: =0 , 1 , 2 , 3 ] + [ w , 0 , Upl , w ] clast\t%0., %2, %0., %3. + [ ?&w , w , Upl , w ] movprfx\t%0, %1\;clast\t%0., %2, %0., %3. + } ) ;; ------------------------------------------------------------------------- @@ -8852,17 +8906,17 @@ (define_insn "@aarch64_sve_rev" ;; Like EXT, but start at the first active element. (define_insn "@aarch64_sve_splice" - [(set (match_operand:SVE_FULL 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL 0 "register_operand") (unspec:SVE_FULL - [(match_operand: 1 "register_operand" "Upl, Upl") - (match_operand:SVE_FULL 2 "register_operand" "0, w") - (match_operand:SVE_FULL 3 "register_operand" "w, w")] + [(match_operand: 1 "register_operand") + (match_operand:SVE_FULL 2 "register_operand") + (match_operand:SVE_FULL 3 "register_operand")] UNSPEC_SVE_SPLICE))] "TARGET_SVE" - "@ - splice\t%0., %1, %0., %3. - movprfx\t%0, %2\;splice\t%0., %1, %0., %3." - [(set_attr "movprfx" "*, yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , Upl , 0 , w ; * ] splice\t%0., %1, %0., %3. + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %2\;splice\t%0., %1, %0., %3. + } ) ;; Permutes that take half the elements from one vector and half the @@ -9044,32 +9098,32 @@ (define_expand "2" ;; Predicated float-to-integer conversion, either to the same width or wider. (define_insn "@aarch64_sve__nontrunc" - [(set (match_operand:SVE_FULL_HSDI 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_HSDI 0 "register_operand") (unspec:SVE_FULL_HSDI - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (match_operand:SI 3 "aarch64_sve_gp_strictness") - (match_operand:SVE_FULL_F 2 "register_operand" "0, w")] + (match_operand:SVE_FULL_F 2 "register_operand")] SVE_COND_FCVTI))] "TARGET_SVE && >= " - "@ - fcvtz\t%0., %1/m, %2. - movprfx\t%0, %2\;fcvtz\t%0., %1/m, %2." - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 ; attrs: movprfx ] + [ w , Upl , 0 ; * ] fcvtz\t%0., %1/m, %2. + [ ?&w , Upl , w ; yes ] movprfx\t%0, %2\;fcvtz\t%0., %1/m, %2. + } ) ;; Predicated narrowing float-to-integer conversion. 
(define_insn "@aarch64_sve__trunc" - [(set (match_operand:VNx4SI_ONLY 0 "register_operand" "=w, ?&w") + [(set (match_operand:VNx4SI_ONLY 0 "register_operand") (unspec:VNx4SI_ONLY - [(match_operand:VNx2BI 1 "register_operand" "Upl, Upl") + [(match_operand:VNx2BI 1 "register_operand") (match_operand:SI 3 "aarch64_sve_gp_strictness") - (match_operand:VNx2DF_ONLY 2 "register_operand" "0, w")] + (match_operand:VNx2DF_ONLY 2 "register_operand")] SVE_COND_FCVTI))] "TARGET_SVE" - "@ - fcvtz\t%0., %1/m, %2. - movprfx\t%0, %2\;fcvtz\t%0., %1/m, %2." - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 ; attrs: movprfx ] + [ w , Upl , 0 ; * ] fcvtz\t%0., %1/m, %2. + [ ?&w , Upl , w ; yes ] movprfx\t%0, %2\;fcvtz\t%0., %1/m, %2. + } ) ;; Predicated float-to-integer conversion with merging, either to the same @@ -9094,45 +9148,45 @@ (define_expand "@cond__nontrunc" ;; alternatives earlyclobber makes things more consistent for the ;; register allocator. (define_insn_and_rewrite "*cond__nontrunc_relaxed" - [(set (match_operand:SVE_FULL_HSDI 0 "register_operand" "=&w, &w, ?&w") + [(set (match_operand:SVE_FULL_HSDI 0 "register_operand") (unspec:SVE_FULL_HSDI - [(match_operand: 1 "register_operand" "Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_HSDI [(match_operand 4) (const_int SVE_RELAXED_GP) - (match_operand:SVE_FULL_F 2 "register_operand" "w, w, w")] + (match_operand:SVE_FULL_F 2 "register_operand")] SVE_COND_FCVTI) - (match_operand:SVE_FULL_HSDI 3 "aarch64_simd_reg_or_zero" "0, Dz, w")] + (match_operand:SVE_FULL_HSDI 3 "aarch64_simd_reg_or_zero")] UNSPEC_SEL))] "TARGET_SVE && >= " - "@ - fcvtz\t%0., %1/m, %2. - movprfx\t%0., %1/z, %2.\;fcvtz\t%0., %1/m, %2. - movprfx\t%0, %3\;fcvtz\t%0., %1/m, %2." + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ &w , Upl , w , 0 ; * ] fcvtz\t%0., %1/m, %2. + [ &w , Upl , w , Dz ; yes ] movprfx\t%0., %1/z, %2.\;fcvtz\t%0., %1/m, %2. + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %3\;fcvtz\t%0., %1/m, %2. + } "&& !rtx_equal_p (operands[1], operands[4])" { operands[4] = copy_rtx (operands[1]); } - [(set_attr "movprfx" "*,yes,yes")] ) (define_insn "*cond__nontrunc_strict" - [(set (match_operand:SVE_FULL_HSDI 0 "register_operand" "=&w, &w, ?&w") + [(set (match_operand:SVE_FULL_HSDI 0 "register_operand") (unspec:SVE_FULL_HSDI - [(match_operand: 1 "register_operand" "Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_HSDI [(match_dup 1) (const_int SVE_STRICT_GP) - (match_operand:SVE_FULL_F 2 "register_operand" "w, w, w")] + (match_operand:SVE_FULL_F 2 "register_operand")] SVE_COND_FCVTI) - (match_operand:SVE_FULL_HSDI 3 "aarch64_simd_reg_or_zero" "0, Dz, w")] + (match_operand:SVE_FULL_HSDI 3 "aarch64_simd_reg_or_zero")] UNSPEC_SEL))] "TARGET_SVE && >= " - "@ - fcvtz\t%0., %1/m, %2. - movprfx\t%0., %1/z, %2.\;fcvtz\t%0., %1/m, %2. - movprfx\t%0, %3\;fcvtz\t%0., %1/m, %2." - [(set_attr "movprfx" "*,yes,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ &w , Upl , w , 0 ; * ] fcvtz\t%0., %1/m, %2. + [ &w , Upl , w , Dz ; yes ] movprfx\t%0., %1/z, %2.\;fcvtz\t%0., %1/m, %2. + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %3\;fcvtz\t%0., %1/m, %2. + } ) ;; Predicated narrowing float-to-integer conversion with merging. 
@@ -9151,22 +9205,22 @@ (define_expand "@cond__trunc" ) (define_insn "*cond__trunc" - [(set (match_operand:VNx4SI_ONLY 0 "register_operand" "=&w, &w, ?&w") + [(set (match_operand:VNx4SI_ONLY 0 "register_operand") (unspec:VNx4SI_ONLY - [(match_operand:VNx2BI 1 "register_operand" "Upl, Upl, Upl") + [(match_operand:VNx2BI 1 "register_operand") (unspec:VNx4SI_ONLY [(match_dup 1) (match_operand:SI 4 "aarch64_sve_gp_strictness") - (match_operand:VNx2DF_ONLY 2 "register_operand" "w, w, w")] + (match_operand:VNx2DF_ONLY 2 "register_operand")] SVE_COND_FCVTI) - (match_operand:VNx4SI_ONLY 3 "aarch64_simd_reg_or_zero" "0, Dz, w")] + (match_operand:VNx4SI_ONLY 3 "aarch64_simd_reg_or_zero")] UNSPEC_SEL))] "TARGET_SVE" - "@ - fcvtz\t%0., %1/m, %2. - movprfx\t%0., %1/z, %2.\;fcvtz\t%0., %1/m, %2. - movprfx\t%0, %3\;fcvtz\t%0., %1/m, %2." - [(set_attr "movprfx" "*,yes,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ &w , Upl , w , 0 ; * ] fcvtz\t%0., %1/m, %2. + [ &w , Upl , w , Dz ; yes ] movprfx\t%0., %1/z, %2.\;fcvtz\t%0., %1/m, %2. + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %3\;fcvtz\t%0., %1/m, %2. + } ) ;; ------------------------------------------------------------------------- @@ -9231,32 +9285,32 @@ (define_expand "2" ;; Predicated integer-to-float conversion, either to the same width or ;; narrower. (define_insn "@aarch64_sve__nonextend" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (match_operand:SI 3 "aarch64_sve_gp_strictness") - (match_operand:SVE_FULL_HSDI 2 "register_operand" "0, w")] + (match_operand:SVE_FULL_HSDI 2 "register_operand")] SVE_COND_ICVTF))] "TARGET_SVE && >= " - "@ - cvtf\t%0., %1/m, %2. - movprfx\t%0, %2\;cvtf\t%0., %1/m, %2." - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 ; attrs: movprfx ] + [ w , Upl , 0 ; * ] cvtf\t%0., %1/m, %2. + [ ?&w , Upl , w ; yes ] movprfx\t%0, %2\;cvtf\t%0., %1/m, %2. + } ) ;; Predicated widening integer-to-float conversion. (define_insn "@aarch64_sve__extend" - [(set (match_operand:VNx2DF_ONLY 0 "register_operand" "=w, ?&w") + [(set (match_operand:VNx2DF_ONLY 0 "register_operand") (unspec:VNx2DF_ONLY - [(match_operand:VNx2BI 1 "register_operand" "Upl, Upl") + [(match_operand:VNx2BI 1 "register_operand") (match_operand:SI 3 "aarch64_sve_gp_strictness") - (match_operand:VNx4SI_ONLY 2 "register_operand" "0, w")] + (match_operand:VNx4SI_ONLY 2 "register_operand")] SVE_COND_ICVTF))] "TARGET_SVE" - "@ - cvtf\t%0., %1/m, %2. - movprfx\t%0, %2\;cvtf\t%0., %1/m, %2." - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 ; attrs: movprfx ] + [ w , Upl , 0 ; * ] cvtf\t%0., %1/m, %2. + [ ?&w , Upl , w ; yes ] movprfx\t%0, %2\;cvtf\t%0., %1/m, %2. + } ) ;; Predicated integer-to-float conversion with merging, either to the same @@ -9281,45 +9335,45 @@ (define_expand "@cond__nonextend" ;; alternatives earlyclobber makes things more consistent for the ;; register allocator. 
(define_insn_and_rewrite "*cond__nonextend_relaxed" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=&w, &w, ?&w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_F [(match_operand 4) (const_int SVE_RELAXED_GP) - (match_operand:SVE_FULL_HSDI 2 "register_operand" "w, w, w")] + (match_operand:SVE_FULL_HSDI 2 "register_operand")] SVE_COND_ICVTF) - (match_operand:SVE_FULL_F 3 "aarch64_simd_reg_or_zero" "0, Dz, w")] + (match_operand:SVE_FULL_F 3 "aarch64_simd_reg_or_zero")] UNSPEC_SEL))] "TARGET_SVE && >= " - "@ - cvtf\t%0., %1/m, %2. - movprfx\t%0., %1/z, %2.\;cvtf\t%0., %1/m, %2. - movprfx\t%0, %3\;cvtf\t%0., %1/m, %2." + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ &w , Upl , w , 0 ; * ] cvtf\t%0., %1/m, %2. + [ &w , Upl , w , Dz ; yes ] movprfx\t%0., %1/z, %2.\;cvtf\t%0., %1/m, %2. + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %3\;cvtf\t%0., %1/m, %2. + } "&& !rtx_equal_p (operands[1], operands[4])" { operands[4] = copy_rtx (operands[1]); } - [(set_attr "movprfx" "*,yes,yes")] ) (define_insn "*cond__nonextend_strict" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=&w, &w, ?&w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_F [(match_dup 1) (const_int SVE_STRICT_GP) - (match_operand:SVE_FULL_HSDI 2 "register_operand" "w, w, w")] + (match_operand:SVE_FULL_HSDI 2 "register_operand")] SVE_COND_ICVTF) - (match_operand:SVE_FULL_F 3 "aarch64_simd_reg_or_zero" "0, Dz, w")] + (match_operand:SVE_FULL_F 3 "aarch64_simd_reg_or_zero")] UNSPEC_SEL))] "TARGET_SVE && >= " - "@ - cvtf\t%0., %1/m, %2. - movprfx\t%0., %1/z, %2.\;cvtf\t%0., %1/m, %2. - movprfx\t%0, %3\;cvtf\t%0., %1/m, %2." - [(set_attr "movprfx" "*,yes,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ &w , Upl , w , 0 ; * ] cvtf\t%0., %1/m, %2. + [ &w , Upl , w , Dz ; yes ] movprfx\t%0., %1/z, %2.\;cvtf\t%0., %1/m, %2. + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %3\;cvtf\t%0., %1/m, %2. + } ) ;; Predicated widening integer-to-float conversion with merging. @@ -9338,22 +9392,22 @@ (define_expand "@cond__extend" ) (define_insn "*cond__extend" - [(set (match_operand:VNx2DF_ONLY 0 "register_operand" "=w, ?&w, ?&w") + [(set (match_operand:VNx2DF_ONLY 0 "register_operand") (unspec:VNx2DF_ONLY - [(match_operand:VNx2BI 1 "register_operand" "Upl, Upl, Upl") + [(match_operand:VNx2BI 1 "register_operand") (unspec:VNx2DF_ONLY [(match_dup 1) (match_operand:SI 4 "aarch64_sve_gp_strictness") - (match_operand:VNx4SI_ONLY 2 "register_operand" "w, w, w")] + (match_operand:VNx4SI_ONLY 2 "register_operand")] SVE_COND_ICVTF) - (match_operand:VNx2DF_ONLY 3 "aarch64_simd_reg_or_zero" "0, Dz, w")] + (match_operand:VNx2DF_ONLY 3 "aarch64_simd_reg_or_zero")] UNSPEC_SEL))] "TARGET_SVE" - "@ - cvtf\t%0., %1/m, %2. - movprfx\t%0., %1/z, %2.\;cvtf\t%0., %1/m, %2. - movprfx\t%0, %3\;cvtf\t%0., %1/m, %2." - [(set_attr "movprfx" "*,yes,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , Upl , w , 0 ; * ] cvtf\t%0., %1/m, %2. + [ ?&w , Upl , w , Dz ; yes ] movprfx\t%0., %1/z, %2.\;cvtf\t%0., %1/m, %2. + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %3\;cvtf\t%0., %1/m, %2. + } ) ;; ------------------------------------------------------------------------- @@ -9429,17 +9483,17 @@ (define_expand "vec_pack_trunc_" ;; Predicated float-to-float truncation. 
(define_insn "@aarch64_sve__trunc" - [(set (match_operand:SVE_FULL_HSF 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_HSF 0 "register_operand") (unspec:SVE_FULL_HSF - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (match_operand:SI 3 "aarch64_sve_gp_strictness") - (match_operand:SVE_FULL_SDF 2 "register_operand" "0, w")] + (match_operand:SVE_FULL_SDF 2 "register_operand")] SVE_COND_FCVT))] "TARGET_SVE && > " - "@ - fcvt\t%0., %1/m, %2. - movprfx\t%0, %2\;fcvt\t%0., %1/m, %2." - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 ; attrs: movprfx ] + [ w , Upl , 0 ; * ] fcvt\t%0., %1/m, %2. + [ ?&w , Upl , w ; yes ] movprfx\t%0, %2\;fcvt\t%0., %1/m, %2. + } ) ;; Predicated float-to-float truncation with merging. @@ -9458,22 +9512,22 @@ (define_expand "@cond__trunc" ) (define_insn "*cond__trunc" - [(set (match_operand:SVE_FULL_HSF 0 "register_operand" "=w, ?&w, ?&w") + [(set (match_operand:SVE_FULL_HSF 0 "register_operand") (unspec:SVE_FULL_HSF - [(match_operand: 1 "register_operand" "Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_HSF [(match_dup 1) (match_operand:SI 4 "aarch64_sve_gp_strictness") - (match_operand:SVE_FULL_SDF 2 "register_operand" "w, w, w")] + (match_operand:SVE_FULL_SDF 2 "register_operand")] SVE_COND_FCVT) - (match_operand:SVE_FULL_HSF 3 "aarch64_simd_reg_or_zero" "0, Dz, w")] + (match_operand:SVE_FULL_HSF 3 "aarch64_simd_reg_or_zero")] UNSPEC_SEL))] "TARGET_SVE && > " - "@ - fcvt\t%0., %1/m, %2. - movprfx\t%0., %1/z, %2.\;fcvt\t%0., %1/m, %2. - movprfx\t%0, %3\;fcvt\t%0., %1/m, %2." - [(set_attr "movprfx" "*,yes,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , Upl , w , 0 ; * ] fcvt\t%0., %1/m, %2. + [ ?&w , Upl , w , Dz ; yes ] movprfx\t%0., %1/z, %2.\;fcvt\t%0., %1/m, %2. + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %3\;fcvt\t%0., %1/m, %2. + } ) ;; ------------------------------------------------------------------------- @@ -9486,17 +9540,17 @@ (define_insn "*cond__trunc" ;; Predicated BFCVT. (define_insn "@aarch64_sve__trunc" - [(set (match_operand:VNx8BF_ONLY 0 "register_operand" "=w, ?&w") + [(set (match_operand:VNx8BF_ONLY 0 "register_operand") (unspec:VNx8BF_ONLY - [(match_operand:VNx4BI 1 "register_operand" "Upl, Upl") + [(match_operand:VNx4BI 1 "register_operand") (match_operand:SI 3 "aarch64_sve_gp_strictness") - (match_operand:VNx4SF_ONLY 2 "register_operand" "0, w")] + (match_operand:VNx4SF_ONLY 2 "register_operand")] SVE_COND_FCVT))] "TARGET_SVE_BF16" - "@ - bfcvt\t%0.h, %1/m, %2.s - movprfx\t%0, %2\;bfcvt\t%0.h, %1/m, %2.s" - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 ; attrs: movprfx ] + [ w , Upl , 0 ; * ] bfcvt\t%0.h, %1/m, %2.s + [ ?&w , Upl , w ; yes ] movprfx\t%0, %2\;bfcvt\t%0.h, %1/m, %2.s + } ) ;; Predicated BFCVT with merging. 
@@ -9515,22 +9569,22 @@ (define_expand "@cond__trunc" ) (define_insn "*cond__trunc" - [(set (match_operand:VNx8BF_ONLY 0 "register_operand" "=w, ?&w, ?&w") + [(set (match_operand:VNx8BF_ONLY 0 "register_operand") (unspec:VNx8BF_ONLY - [(match_operand:VNx4BI 1 "register_operand" "Upl, Upl, Upl") + [(match_operand:VNx4BI 1 "register_operand") (unspec:VNx8BF_ONLY [(match_dup 1) (match_operand:SI 4 "aarch64_sve_gp_strictness") - (match_operand:VNx4SF_ONLY 2 "register_operand" "w, w, w")] + (match_operand:VNx4SF_ONLY 2 "register_operand")] SVE_COND_FCVT) - (match_operand:VNx8BF_ONLY 3 "aarch64_simd_reg_or_zero" "0, Dz, w")] + (match_operand:VNx8BF_ONLY 3 "aarch64_simd_reg_or_zero")] UNSPEC_SEL))] "TARGET_SVE_BF16" - "@ - bfcvt\t%0.h, %1/m, %2.s - movprfx\t%0.s, %1/z, %2.s\;bfcvt\t%0.h, %1/m, %2.s - movprfx\t%0, %3\;bfcvt\t%0.h, %1/m, %2.s" - [(set_attr "movprfx" "*,yes,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , Upl , w , 0 ; * ] bfcvt\t%0.h, %1/m, %2.s + [ ?&w , Upl , w , Dz ; yes ] movprfx\t%0.s, %1/z, %2.s\;bfcvt\t%0.h, %1/m, %2.s + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %3\;bfcvt\t%0.h, %1/m, %2.s + } ) ;; Predicated BFCVTNT. This doesn't give a natural aarch64_pred_*/cond_* @@ -9586,17 +9640,17 @@ (define_expand "vec_unpacks__" ;; Predicated float-to-float extension. (define_insn "@aarch64_sve__nontrunc" - [(set (match_operand:SVE_FULL_SDF 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_SDF 0 "register_operand") (unspec:SVE_FULL_SDF - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (match_operand:SI 3 "aarch64_sve_gp_strictness") - (match_operand:SVE_FULL_HSF 2 "register_operand" "0, w")] + (match_operand:SVE_FULL_HSF 2 "register_operand")] SVE_COND_FCVT))] "TARGET_SVE && > " - "@ - fcvt\t%0., %1/m, %2. - movprfx\t%0, %2\;fcvt\t%0., %1/m, %2." - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 ; attrs: movprfx ] + [ w , Upl , 0 ; * ] fcvt\t%0., %1/m, %2. + [ ?&w , Upl , w ; yes ] movprfx\t%0, %2\;fcvt\t%0., %1/m, %2. + } ) ;; Predicated float-to-float extension with merging. @@ -9615,22 +9669,22 @@ (define_expand "@cond__nontrunc" ) (define_insn "*cond__nontrunc" - [(set (match_operand:SVE_FULL_SDF 0 "register_operand" "=w, ?&w, ?&w") + [(set (match_operand:SVE_FULL_SDF 0 "register_operand") (unspec:SVE_FULL_SDF - [(match_operand: 1 "register_operand" "Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_SDF [(match_dup 1) (match_operand:SI 4 "aarch64_sve_gp_strictness") - (match_operand:SVE_FULL_HSF 2 "register_operand" "w, w, w")] + (match_operand:SVE_FULL_HSF 2 "register_operand")] SVE_COND_FCVT) - (match_operand:SVE_FULL_SDF 3 "aarch64_simd_reg_or_zero" "0, Dz, w")] + (match_operand:SVE_FULL_SDF 3 "aarch64_simd_reg_or_zero")] UNSPEC_SEL))] "TARGET_SVE && > " - "@ - fcvt\t%0., %1/m, %2. - movprfx\t%0., %1/z, %2.\;fcvt\t%0., %1/m, %2. - movprfx\t%0, %3\;fcvt\t%0., %1/m, %2." - [(set_attr "movprfx" "*,yes,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , Upl , w , 0 ; * ] fcvt\t%0., %1/m, %2. + [ ?&w , Upl , w , Dz ; yes ] movprfx\t%0., %1/z, %2.\;fcvt\t%0., %1/m, %2. + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %3\;fcvt\t%0., %1/m, %2. + } ) ;; ------------------------------------------------------------------------- @@ -9703,16 +9757,17 @@ (define_insn "@aarch64_sve_punpk_" ;; zeroing forms, these instructions don't operate elementwise and so ;; don't fit the IFN_COND model. 
(define_insn "@aarch64_brk" - [(set (match_operand:VNx16BI 0 "register_operand" "=Upa, Upa") + [(set (match_operand:VNx16BI 0 "register_operand") (unspec:VNx16BI - [(match_operand:VNx16BI 1 "register_operand" "Upa, Upa") - (match_operand:VNx16BI 2 "register_operand" "Upa, Upa") - (match_operand:VNx16BI 3 "aarch64_simd_reg_or_zero" "Dz, 0")] + [(match_operand:VNx16BI 1 "register_operand") + (match_operand:VNx16BI 2 "register_operand") + (match_operand:VNx16BI 3 "aarch64_simd_reg_or_zero")] SVE_BRK_UNARY))] "TARGET_SVE" - "@ - brk\t%0.b, %1/z, %2.b - brk\t%0.b, %1/m, %2.b" + {@ [ cons: =0 , 1 , 2 , 3 ] + [ Upa , Upa , Upa , Dz ] brk\t%0.b, %1/z, %2.b + [ Upa , Upa , Upa , 0 ] brk\t%0.b, %1/m, %2.b + } ) ;; Same, but also producing a flags result. @@ -10433,25 +10488,25 @@ (define_expand "@aarch64_sve__cntp" ) (define_insn_and_rewrite "*aarch64_sve__cntp" - [(set (match_operand:VNx2DI 0 "register_operand" "=w, ?&w") + [(set (match_operand:VNx2DI 0 "register_operand") (ANY_PLUS:VNx2DI (vec_duplicate:VNx2DI (zero_extend:DI (unspec:SI [(match_operand 3) (const_int SVE_KNOWN_PTRUE) - (match_operand: 2 "register_operand" "Upa, Upa")] + (match_operand: 2 "register_operand")] UNSPEC_CNTP))) - (match_operand:VNx2DI_ONLY 1 "register_operand" "0, w")))] + (match_operand:VNx2DI_ONLY 1 "register_operand")))] "TARGET_SVE" - "@ - p\t%0.d, %2 - movprfx\t%0, %1\;p\t%0.d, %2" + {@ [ cons: =0 , 1 , 2 ; attrs: movprfx ] + [ w , 0 , Upa ; * ] p\t%0.d, %2 + [ ?&w , w , Upa ; yes ] movprfx\t%0, %1\;p\t%0.d, %2 + } "&& !CONSTANT_P (operands[3])" { operands[3] = CONSTM1_RTX (mode); } - [(set_attr "movprfx" "*,yes")] ) ;; Increment a vector of SIs by the number of set bits in a predicate. @@ -10473,24 +10528,24 @@ (define_expand "@aarch64_sve__cntp" ) (define_insn_and_rewrite "*aarch64_sve__cntp" - [(set (match_operand:VNx4SI 0 "register_operand" "=w, ?&w") + [(set (match_operand:VNx4SI 0 "register_operand") (ANY_PLUS:VNx4SI (vec_duplicate:VNx4SI (unspec:SI [(match_operand 3) (const_int SVE_KNOWN_PTRUE) - (match_operand: 2 "register_operand" "Upa, Upa")] + (match_operand: 2 "register_operand")] UNSPEC_CNTP)) - (match_operand:VNx4SI_ONLY 1 "register_operand" "0, w")))] + (match_operand:VNx4SI_ONLY 1 "register_operand")))] "TARGET_SVE" - "@ - p\t%0.s, %2 - movprfx\t%0, %1\;p\t%0.s, %2" + {@ [ cons: =0 , 1 , 2 ; attrs: movprfx ] + [ w , 0 , Upa ; * ] p\t%0.s, %2 + [ ?&w , w , Upa ; yes ] movprfx\t%0, %1\;p\t%0.s, %2 + } "&& !CONSTANT_P (operands[3])" { operands[3] = CONSTM1_RTX (mode); } - [(set_attr "movprfx" "*,yes")] ) ;; Increment a vector of HIs by the number of set bits in a predicate. 
@@ -10513,25 +10568,25 @@ (define_expand "@aarch64_sve__cntp" ) (define_insn_and_rewrite "*aarch64_sve__cntp" - [(set (match_operand:VNx8HI 0 "register_operand" "=w, ?&w") + [(set (match_operand:VNx8HI 0 "register_operand") (ANY_PLUS:VNx8HI (vec_duplicate:VNx8HI (match_operator:HI 3 "subreg_lowpart_operator" [(unspec:SI [(match_operand 4) (const_int SVE_KNOWN_PTRUE) - (match_operand: 2 "register_operand" "Upa, Upa")] + (match_operand: 2 "register_operand")] UNSPEC_CNTP)])) - (match_operand:VNx8HI_ONLY 1 "register_operand" "0, w")))] + (match_operand:VNx8HI_ONLY 1 "register_operand")))] "TARGET_SVE" - "@ - p\t%0.h, %2 - movprfx\t%0, %1\;p\t%0.h, %2" + {@ [ cons: =0 , 1 , 2 ; attrs: movprfx ] + [ w , 0 , Upa ; * ] p\t%0.h, %2 + [ ?&w , w , Upa ; yes ] movprfx\t%0, %1\;p\t%0.h, %2 + } "&& !CONSTANT_P (operands[4])" { operands[4] = CONSTM1_RTX (mode); } - [(set_attr "movprfx" "*,yes")] ) ;; ------------------------------------------------------------------------- @@ -10666,25 +10721,25 @@ (define_expand "@aarch64_sve__cntp" ) (define_insn_and_rewrite "*aarch64_sve__cntp" - [(set (match_operand:VNx2DI 0 "register_operand" "=w, ?&w") + [(set (match_operand:VNx2DI 0 "register_operand") (ANY_MINUS:VNx2DI - (match_operand:VNx2DI_ONLY 1 "register_operand" "0, w") + (match_operand:VNx2DI_ONLY 1 "register_operand") (vec_duplicate:VNx2DI (zero_extend:DI (unspec:SI [(match_operand 3) (const_int SVE_KNOWN_PTRUE) - (match_operand: 2 "register_operand" "Upa, Upa")] + (match_operand: 2 "register_operand")] UNSPEC_CNTP)))))] "TARGET_SVE" - "@ - p\t%0.d, %2 - movprfx\t%0, %1\;p\t%0.d, %2" + {@ [ cons: =0 , 1 , 2 ; attrs: movprfx ] + [ w , 0 , Upa ; * ] p\t%0.d, %2 + [ ?&w , w , Upa ; yes ] movprfx\t%0, %1\;p\t%0.d, %2 + } "&& !CONSTANT_P (operands[3])" { operands[3] = CONSTM1_RTX (mode); } - [(set_attr "movprfx" "*,yes")] ) ;; Decrement a vector of SIs by the number of set bits in a predicate. @@ -10706,24 +10761,24 @@ (define_expand "@aarch64_sve__cntp" ) (define_insn_and_rewrite "*aarch64_sve__cntp" - [(set (match_operand:VNx4SI 0 "register_operand" "=w, ?&w") + [(set (match_operand:VNx4SI 0 "register_operand") (ANY_MINUS:VNx4SI - (match_operand:VNx4SI_ONLY 1 "register_operand" "0, w") + (match_operand:VNx4SI_ONLY 1 "register_operand") (vec_duplicate:VNx4SI (unspec:SI [(match_operand 3) (const_int SVE_KNOWN_PTRUE) - (match_operand: 2 "register_operand" "Upa, Upa")] + (match_operand: 2 "register_operand")] UNSPEC_CNTP))))] "TARGET_SVE" - "@ - p\t%0.s, %2 - movprfx\t%0, %1\;p\t%0.s, %2" + {@ [ cons: =0 , 1 , 2 ; attrs: movprfx ] + [ w , 0 , Upa ; * ] p\t%0.s, %2 + [ ?&w , w , Upa ; yes ] movprfx\t%0, %1\;p\t%0.s, %2 + } "&& !CONSTANT_P (operands[3])" { operands[3] = CONSTM1_RTX (mode); } - [(set_attr "movprfx" "*,yes")] ) ;; Decrement a vector of HIs by the number of set bits in a predicate. 
@@ -10746,23 +10801,23 @@ (define_expand "@aarch64_sve__cntp" ) (define_insn_and_rewrite "*aarch64_sve__cntp" - [(set (match_operand:VNx8HI 0 "register_operand" "=w, ?&w") + [(set (match_operand:VNx8HI 0 "register_operand") (ANY_MINUS:VNx8HI - (match_operand:VNx8HI_ONLY 1 "register_operand" "0, w") + (match_operand:VNx8HI_ONLY 1 "register_operand") (vec_duplicate:VNx8HI (match_operator:HI 3 "subreg_lowpart_operator" [(unspec:SI [(match_operand 4) (const_int SVE_KNOWN_PTRUE) - (match_operand: 2 "register_operand" "Upa, Upa")] + (match_operand: 2 "register_operand")] UNSPEC_CNTP)]))))] "TARGET_SVE" - "@ - p\t%0.h, %2 - movprfx\t%0, %1\;p\t%0.h, %2" + {@ [ cons: =0 , 1 , 2 ; attrs: movprfx ] + [ w , 0 , Upa ; * ] p\t%0.h, %2 + [ ?&w , w , Upa ; yes ] movprfx\t%0, %1\;p\t%0.h, %2 + } "&& !CONSTANT_P (operands[4])" { operands[4] = CONSTM1_RTX (mode); } - [(set_attr "movprfx" "*,yes")] ) diff --git a/gcc/config/aarch64/aarch64-sve2.md b/gcc/config/aarch64/aarch64-sve2.md index 7a77e9b7502..ffa964d6060 100644 --- a/gcc/config/aarch64/aarch64-sve2.md +++ b/gcc/config/aarch64/aarch64-sve2.md @@ -159,33 +159,35 @@ (define_insn_and_rewrite "@aarch64_gather_ldnt_" [(set (mem:BLK (scratch)) (unspec:BLK - [(match_operand: 0 "register_operand" "Upl, Upl") - (match_operand:DI 1 "aarch64_reg_or_zero" "Z, r") - (match_operand: 2 "register_operand" "w, w") - (match_operand:SVE_FULL_SD 3 "register_operand" "w, w")] + [(match_operand: 0 "register_operand") + (match_operand:DI 1 "aarch64_reg_or_zero") + (match_operand: 2 "register_operand") + (match_operand:SVE_FULL_SD 3 "register_operand")] UNSPEC_STNT1_SCATTER))] "TARGET_SVE" - "@ - stnt1\t%3., %0, [%2.] - stnt1\t%3., %0, [%2., %1]" + {@ [ cons: 0 , 1 , 2 , 3 ] + [ Upl , Z , w , w ] stnt1\t%3., %0, [%2.] + [ Upl , r , w , w ] stnt1\t%3., %0, [%2., %1] + } ) ;; Truncating stores. (define_insn "@aarch64_scatter_stnt_" [(set (mem:BLK (scratch)) (unspec:BLK - [(match_operand: 0 "register_operand" "Upl, Upl") - (match_operand:DI 1 "aarch64_reg_or_zero" "Z, r") - (match_operand: 2 "register_operand" "w, w") + [(match_operand: 0 "register_operand") + (match_operand:DI 1 "aarch64_reg_or_zero") + (match_operand: 2 "register_operand") (truncate:SVE_PARTIAL_I - (match_operand:SVE_FULL_SDI 3 "register_operand" "w, w"))] + (match_operand:SVE_FULL_SDI 3 "register_operand"))] UNSPEC_STNT1_SCATTER))] "TARGET_SVE2 && (~ & ) == 0" - "@ - stnt1\t%3., %0, [%2.] - stnt1\t%3., %0, [%2., %1]" + {@ [ cons: 0 , 1 , 2 , 3 ] + [ Upl , Z , w , w ] stnt1\t%3., %0, [%2.] + [ Upl , r , w , w ] stnt1\t%3., %0, [%2., %1] + } ) ;; ========================================================================= @@ -214,16 +216,16 @@ (define_insn "@aarch64_mul_lane_" ;; The 2nd and 3rd alternatives are valid for just TARGET_SVE as well but ;; we include them here to allow matching simpler, unpredicated RTL. (define_insn "*aarch64_mul_unpredicated_" - [(set (match_operand:SVE_I 0 "register_operand" "=w,w,?&w") + [(set (match_operand:SVE_I 0 "register_operand") (mult:SVE_I - (match_operand:SVE_I 1 "register_operand" "w,0,w") - (match_operand:SVE_I 2 "aarch64_sve_vsm_operand" "w,vsm,vsm")))] + (match_operand:SVE_I 1 "register_operand") + (match_operand:SVE_I 2 "aarch64_sve_vsm_operand")))] "TARGET_SVE2" - "@ - mul\t%0., %1., %2. - mul\t%0., %0., #%2 - movprfx\t%0, %1\;mul\t%0., %0., #%2" - [(set_attr "movprfx" "*,*,yes")] + {@ [ cons: =0 , 1 , 2 ; attrs: movprfx ] + [ w , w , w ; * ] mul\t%0., %1., %2. 
+ [ w , 0 , vsm ; * ] mul\t%0., %0., #%2 + [ ?&w , w , vsm ; yes ] movprfx\t%0, %1\;mul\t%0., %0., #%2 + } ) ;; ------------------------------------------------------------------------- @@ -349,20 +351,20 @@ (define_insn "@aarch64_sve_suqadd_const" ;; General predicated binary arithmetic. All operations handled here ;; are commutative or have a reversed form. (define_insn "@aarch64_pred_" - [(set (match_operand:SVE_FULL_I 0 "register_operand" "=w, w, ?&w") + [(set (match_operand:SVE_FULL_I 0 "register_operand") (unspec:SVE_FULL_I - [(match_operand: 1 "register_operand" "Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_I - [(match_operand:SVE_FULL_I 2 "register_operand" "0, w, w") - (match_operand:SVE_FULL_I 3 "register_operand" "w, 0, w")] + [(match_operand:SVE_FULL_I 2 "register_operand") + (match_operand:SVE_FULL_I 3 "register_operand")] SVE2_COND_INT_BINARY_REV)] UNSPEC_PRED_X))] "TARGET_SVE2" - "@ - \t%0., %1/m, %0., %3. - \t%0., %1/m, %0., %2. - movprfx\t%0, %2\;\t%0., %1/m, %0., %3." - [(set_attr "movprfx" "*,*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , Upl , 0 , w ; * ] \t%0., %1/m, %0., %3. + [ w , Upl , w , 0 ; * ] \t%0., %1/m, %0., %2. + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %2\;\t%0., %1/m, %0., %3. + } ) ;; Predicated binary arithmetic with merging. @@ -387,77 +389,78 @@ (define_expand "@cond_" ;; Predicated binary arithmetic, merging with the first input. (define_insn_and_rewrite "*cond__2" - [(set (match_operand:SVE_FULL_I 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_I 0 "register_operand") (unspec:SVE_FULL_I - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_I [(match_operand 4) (unspec:SVE_FULL_I - [(match_operand:SVE_FULL_I 2 "register_operand" "0, w") - (match_operand:SVE_FULL_I 3 "register_operand" "w, w")] + [(match_operand:SVE_FULL_I 2 "register_operand") + (match_operand:SVE_FULL_I 3 "register_operand")] SVE2_COND_INT_BINARY)] UNSPEC_PRED_X) (match_dup 2)] UNSPEC_SEL))] "TARGET_SVE2" - "@ - \t%0., %1/m, %0., %3. - movprfx\t%0, %2\;\t%0., %1/m, %0., %3." + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , Upl , 0 , w ; * ] \t%0., %1/m, %0., %3. + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %2\;\t%0., %1/m, %0., %3. + } "&& !CONSTANT_P (operands[4])" { operands[4] = CONSTM1_RTX (mode); } - [(set_attr "movprfx" "*,yes")] ) ;; Predicated binary arithmetic, merging with the second input. (define_insn_and_rewrite "*cond__3" - [(set (match_operand:SVE_FULL_I 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_I 0 "register_operand") (unspec:SVE_FULL_I - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_I [(match_operand 4) (unspec:SVE_FULL_I - [(match_operand:SVE_FULL_I 2 "register_operand" "w, w") - (match_operand:SVE_FULL_I 3 "register_operand" "0, w")] + [(match_operand:SVE_FULL_I 2 "register_operand") + (match_operand:SVE_FULL_I 3 "register_operand")] SVE2_COND_INT_BINARY_REV)] UNSPEC_PRED_X) (match_dup 3)] UNSPEC_SEL))] "TARGET_SVE2" - "@ - \t%0., %1/m, %0., %2. - movprfx\t%0, %3\;\t%0., %1/m, %0., %2." + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , Upl , w , 0 ; * ] \t%0., %1/m, %0., %2. + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %3\;\t%0., %1/m, %0., %2. + } "&& !CONSTANT_P (operands[4])" { operands[4] = CONSTM1_RTX (mode); } - [(set_attr "movprfx" "*,yes")] ) ;; Predicated binary operations, merging with an independent value. 
(define_insn_and_rewrite "*cond__any" - [(set (match_operand:SVE_FULL_I 0 "register_operand" "=&w, &w, &w, &w, ?&w") + [(set (match_operand:SVE_FULL_I 0 "register_operand") (unspec:SVE_FULL_I - [(match_operand: 1 "register_operand" "Upl, Upl, Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_I [(match_operand 5) (unspec:SVE_FULL_I - [(match_operand:SVE_FULL_I 2 "register_operand" "0, w, w, w, w") - (match_operand:SVE_FULL_I 3 "register_operand" "w, 0, w, w, w")] + [(match_operand:SVE_FULL_I 2 "register_operand") + (match_operand:SVE_FULL_I 3 "register_operand")] SVE2_COND_INT_BINARY_REV)] UNSPEC_PRED_X) - (match_operand:SVE_FULL_I 4 "aarch64_simd_reg_or_zero" "Dz, Dz, Dz, 0, w")] + (match_operand:SVE_FULL_I 4 "aarch64_simd_reg_or_zero")] UNSPEC_SEL))] "TARGET_SVE2 && !rtx_equal_p (operands[2], operands[4]) && !rtx_equal_p (operands[3], operands[4])" - "@ - movprfx\t%0., %1/z, %0.\;\t%0., %1/m, %0., %3. - movprfx\t%0., %1/z, %0.\;\t%0., %1/m, %0., %2. - movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %0., %3. - movprfx\t%0., %1/m, %2.\;\t%0., %1/m, %0., %3. - #" + {@ [ cons: =0 , 1 , 2 , 3 , 4 ] + [ &w , Upl , 0 , w , Dz ] movprfx\t%0., %1/z, %0.\;\t%0., %1/m, %0., %3. + [ &w , Upl , w , 0 , Dz ] movprfx\t%0., %1/z, %0.\;\t%0., %1/m, %0., %2. + [ &w , Upl , w , w , Dz ] movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %0., %3. + [ &w , Upl , w , w , 0 ] movprfx\t%0., %1/m, %2.\;\t%0., %1/m, %0., %3. + [ ?&w , Upl , w , w , w ] # + } "&& 1" { if (reload_completed @@ -481,22 +484,23 @@ (define_insn_and_rewrite "*cond__any" ;; so there's no correctness requirement to handle merging with an ;; independent value. (define_insn_and_rewrite "*cond__z" - [(set (match_operand:SVE_FULL_I 0 "register_operand" "=&w, &w") + [(set (match_operand:SVE_FULL_I 0 "register_operand") (unspec:SVE_FULL_I - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_I [(match_operand 5) (unspec:SVE_FULL_I - [(match_operand:SVE_FULL_I 2 "register_operand" "0, w") - (match_operand:SVE_FULL_I 3 "register_operand" "w, w")] + [(match_operand:SVE_FULL_I 2 "register_operand") + (match_operand:SVE_FULL_I 3 "register_operand")] SVE2_COND_INT_BINARY_NOREV)] UNSPEC_PRED_X) (match_operand:SVE_FULL_I 4 "aarch64_simd_imm_zero")] UNSPEC_SEL))] "TARGET_SVE2" - "@ - movprfx\t%0., %1/z, %0.\;\t%0., %1/m, %0., %3. - movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %0., %3." + {@ [ cons: =0 , 1 , 2 , 3 ] + [ &w , Upl , 0 , w ] movprfx\t%0., %1/z, %0.\;\t%0., %1/m, %0., %3. + [ &w , Upl , w , w ] movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %0., %3. + } "&& !CONSTANT_P (operands[5])" { operands[5] = CONSTM1_RTX (mode); @@ -547,22 +551,22 @@ (define_insn "@aarch64_sve__lane_" ;; Predicated left shifts. (define_insn "@aarch64_pred_" - [(set (match_operand:SVE_FULL_I 0 "register_operand" "=w, w, w, ?&w, ?&w") + [(set (match_operand:SVE_FULL_I 0 "register_operand") (unspec:SVE_FULL_I - [(match_operand: 1 "register_operand" "Upl, Upl, Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_I - [(match_operand:SVE_FULL_I 2 "register_operand" "0, 0, w, w, w") - (match_operand:SVE_FULL_I 3 "aarch64_sve_shift_operand" "D, w, 0, D, w")] + [(match_operand:SVE_FULL_I 2 "register_operand") + (match_operand:SVE_FULL_I 3 "aarch64_sve_shift_operand")] SVE2_COND_INT_SHIFT)] UNSPEC_PRED_X))] "TARGET_SVE2" - "@ - \t%0., %1/m, %0., #%3 - \t%0., %1/m, %0., %3. - r\t%0., %1/m, %0., %2. - movprfx\t%0, %2\;\t%0., %1/m, %0., #%3 - movprfx\t%0, %2\;\t%0., %1/m, %0., %3." 
- [(set_attr "movprfx" "*,*,*,yes,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , Upl , 0 , D ; * ] \t%0., %1/m, %0., #%3 + [ w , Upl , 0 , w ; * ] \t%0., %1/m, %0., %3. + [ w , Upl , w , 0 ; * ] r\t%0., %1/m, %0., %2. + [ ?&w , Upl , w , D ; yes ] movprfx\t%0, %2\;\t%0., %1/m, %0., #%3 + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %2\;\t%0., %1/m, %0., %3. + } ) ;; Predicated left shifts with merging. @@ -587,83 +591,84 @@ (define_expand "@cond_" ;; Predicated left shifts, merging with the first input. (define_insn_and_rewrite "*cond__2" - [(set (match_operand:SVE_FULL_I 0 "register_operand" "=w, w, ?&w, ?&w") + [(set (match_operand:SVE_FULL_I 0 "register_operand") (unspec:SVE_FULL_I - [(match_operand: 1 "register_operand" "Upl, Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_I [(match_operand 4) (unspec:SVE_FULL_I - [(match_operand:SVE_FULL_I 2 "register_operand" "0, 0, w, w") - (match_operand:SVE_FULL_I 3 "aarch64_sve_shift_operand" "D, w, D, w")] + [(match_operand:SVE_FULL_I 2 "register_operand") + (match_operand:SVE_FULL_I 3 "aarch64_sve_shift_operand")] SVE2_COND_INT_SHIFT)] UNSPEC_PRED_X) (match_dup 2)] UNSPEC_SEL))] "TARGET_SVE2" - "@ - \t%0., %1/m, %0., #%3 - \t%0., %1/m, %0., %3. - movprfx\t%0, %2\;\t%0., %1/m, %0., #%3 - movprfx\t%0, %2\;\t%0., %1/m, %0., %3." + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , Upl , 0 , D ; * ] \t%0., %1/m, %0., #%3 + [ w , Upl , 0 , w ; * ] \t%0., %1/m, %0., %3. + [ ?&w , Upl , w , D ; yes ] movprfx\t%0, %2\;\t%0., %1/m, %0., #%3 + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %2\;\t%0., %1/m, %0., %3. + } "&& !CONSTANT_P (operands[4])" { operands[4] = CONSTM1_RTX (mode); } - [(set_attr "movprfx" "*,*,yes,yes")] ) ;; Predicated left shifts, merging with the second input. (define_insn_and_rewrite "*cond__3" - [(set (match_operand:SVE_FULL_I 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_I 0 "register_operand") (unspec:SVE_FULL_I - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_I [(match_operand 4) (unspec:SVE_FULL_I - [(match_operand:SVE_FULL_I 2 "register_operand" "w, w") - (match_operand:SVE_FULL_I 3 "register_operand" "0, w")] + [(match_operand:SVE_FULL_I 2 "register_operand") + (match_operand:SVE_FULL_I 3 "register_operand")] SVE2_COND_INT_SHIFT)] UNSPEC_PRED_X) (match_dup 3)] UNSPEC_SEL))] "TARGET_SVE2" - "@ - r\t%0., %1/m, %0., %2. - movprfx\t%0, %3\;r\t%0., %1/m, %0., %2." + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , Upl , w , 0 ; * ] r\t%0., %1/m, %0., %2. + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %3\;r\t%0., %1/m, %0., %2. + } "&& !CONSTANT_P (operands[4])" { operands[4] = CONSTM1_RTX (mode); } - [(set_attr "movprfx" "*,yes")] ) ;; Predicated left shifts, merging with an independent value. 
(define_insn_and_rewrite "*cond__any" - [(set (match_operand:SVE_FULL_I 0 "register_operand" "=&w, &w, &w, &w, &w, &w, &w, ?&w, ?&w") + [(set (match_operand:SVE_FULL_I 0 "register_operand") (unspec:SVE_FULL_I - [(match_operand: 1 "register_operand" "Upl, Upl, Upl, Upl, Upl, Upl, Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_I [(match_operand 5) (unspec:SVE_FULL_I - [(match_operand:SVE_FULL_I 2 "register_operand" "0, 0, w, w, w, w, w, w, w") - (match_operand:SVE_FULL_I 3 "aarch64_sve_shift_operand" "D, w, 0, D, w, D, w, D, w")] + [(match_operand:SVE_FULL_I 2 "register_operand") + (match_operand:SVE_FULL_I 3 "aarch64_sve_shift_operand")] SVE2_COND_INT_SHIFT)] UNSPEC_PRED_X) - (match_operand:SVE_FULL_I 4 "aarch64_simd_reg_or_zero" "Dz, Dz, Dz, Dz, Dz, 0, 0, w, w")] + (match_operand:SVE_FULL_I 4 "aarch64_simd_reg_or_zero")] UNSPEC_SEL))] "TARGET_SVE2 && !rtx_equal_p (operands[2], operands[4]) && (CONSTANT_P (operands[4]) || !rtx_equal_p (operands[3], operands[4]))" - "@ - movprfx\t%0., %1/z, %0.\;\t%0., %1/m, %0., #%3 - movprfx\t%0., %1/z, %0.\;\t%0., %1/m, %0., %3. - movprfx\t%0., %1/z, %0.\;r\t%0., %1/m, %0., %2. - movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %0., #%3 - movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %0., %3. - movprfx\t%0., %1/m, %2.\;\t%0., %1/m, %0., #%3 - movprfx\t%0., %1/m, %2.\;\t%0., %1/m, %0., %3. - # - #" + {@ [ cons: =0 , 1 , 2 , 3 , 4 ] + [ &w , Upl , 0 , D , Dz ] movprfx\t%0., %1/z, %0.\;\t%0., %1/m, %0., #%3 + [ &w , Upl , 0 , w , Dz ] movprfx\t%0., %1/z, %0.\;\t%0., %1/m, %0., %3. + [ &w , Upl , w , 0 , Dz ] movprfx\t%0., %1/z, %0.\;r\t%0., %1/m, %0., %2. + [ &w , Upl , w , D , Dz ] movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %0., #%3 + [ &w , Upl , w , w , Dz ] movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %0., %3. + [ &w , Upl , w , D , 0 ] movprfx\t%0., %1/m, %2.\;\t%0., %1/m, %0., #%3 + [ &w , Upl , w , w , 0 ] movprfx\t%0., %1/m, %2.\;\t%0., %1/m, %0., %3. + [ ?&w , Upl , w , D , w ] # + [ ?&w , Upl , w , w , w ] # + } "&& 1" { if (reload_completed @@ -701,34 +706,34 @@ (define_insn_and_rewrite "*cond__any" ;; ------------------------------------------------------------------------- (define_insn "@aarch64_sve_" - [(set (match_operand:SVE_FULL_I 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_I 0 "register_operand") (unspec:SVE_FULL_I - [(match_operand:SVE_FULL_I 2 "register_operand" "w, w") - (match_operand:SVE_FULL_I 3 "register_operand" "w, w") - (match_operand:SVE_FULL_I 1 "register_operand" "0, w")] + [(match_operand:SVE_FULL_I 2 "register_operand") + (match_operand:SVE_FULL_I 3 "register_operand") + (match_operand:SVE_FULL_I 1 "register_operand")] SVE2_INT_TERNARY))] "TARGET_SVE2" - "@ - \t%0., %2., %3. - movprfx\t%0, %1\;\t%0., %2., %3." - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , 0 , w , w ; * ] \t%0., %2., %3. + [ ?&w , w , w , w ; yes ] movprfx\t%0, %1\;\t%0., %2., %3. 
+ } ) (define_insn "@aarch64_sve__lane_" - [(set (match_operand:SVE_FULL_HSDI 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_HSDI 0 "register_operand") (unspec:SVE_FULL_HSDI - [(match_operand:SVE_FULL_HSDI 2 "register_operand" "w, w") + [(match_operand:SVE_FULL_HSDI 2 "register_operand") (unspec:SVE_FULL_HSDI - [(match_operand:SVE_FULL_HSDI 3 "register_operand" ", ") + [(match_operand:SVE_FULL_HSDI 3 "register_operand") (match_operand:SI 4 "const_int_operand")] UNSPEC_SVE_LANE_SELECT) - (match_operand:SVE_FULL_HSDI 1 "register_operand" "0, w")] + (match_operand:SVE_FULL_HSDI 1 "register_operand")] SVE2_INT_TERNARY_LANE))] "TARGET_SVE2" - "@ - \t%0., %2., %3.[%4] - movprfx\t%0, %1\;\t%0., %2., %3.[%4]" - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , 0 , w , ; * ] \t%0., %2., %3.[%4] + [ ?&w , w , w , ; yes ] movprfx\t%0, %1\;\t%0., %2., %3.[%4] + } ) ;; ------------------------------------------------------------------------- @@ -740,37 +745,37 @@ (define_insn "@aarch64_sve__lane_" ;; ------------------------------------------------------------------------- (define_insn "@aarch64_sve_add_mul_lane_" - [(set (match_operand:SVE_FULL_HSDI 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_HSDI 0 "register_operand") (plus:SVE_FULL_HSDI (mult:SVE_FULL_HSDI (unspec:SVE_FULL_HSDI - [(match_operand:SVE_FULL_HSDI 3 "register_operand" ", ") + [(match_operand:SVE_FULL_HSDI 3 "register_operand") (match_operand:SI 4 "const_int_operand")] UNSPEC_SVE_LANE_SELECT) - (match_operand:SVE_FULL_HSDI 2 "register_operand" "w, w")) - (match_operand:SVE_FULL_HSDI 1 "register_operand" "0, w")))] + (match_operand:SVE_FULL_HSDI 2 "register_operand")) + (match_operand:SVE_FULL_HSDI 1 "register_operand")))] "TARGET_SVE2" - "@ - mla\t%0., %2., %3.[%4] - movprfx\t%0, %1\;mla\t%0., %2., %3.[%4]" - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , 0 , w , ; * ] mla\t%0., %2., %3.[%4] + [ ?&w , w , w , ; yes ] movprfx\t%0, %1\;mla\t%0., %2., %3.[%4] + } ) (define_insn "@aarch64_sve_sub_mul_lane_" - [(set (match_operand:SVE_FULL_HSDI 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_HSDI 0 "register_operand") (minus:SVE_FULL_HSDI - (match_operand:SVE_FULL_HSDI 1 "register_operand" "0, w") + (match_operand:SVE_FULL_HSDI 1 "register_operand") (mult:SVE_FULL_HSDI (unspec:SVE_FULL_HSDI - [(match_operand:SVE_FULL_HSDI 3 "register_operand" ", ") + [(match_operand:SVE_FULL_HSDI 3 "register_operand") (match_operand:SI 4 "const_int_operand")] UNSPEC_SVE_LANE_SELECT) - (match_operand:SVE_FULL_HSDI 2 "register_operand" "w, w"))))] + (match_operand:SVE_FULL_HSDI 2 "register_operand"))))] "TARGET_SVE2" - "@ - mls\t%0., %2., %3.[%4] - movprfx\t%0, %1\;mls\t%0., %2., %3.[%4]" - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , 0 , w , ; * ] mls\t%0., %2., %3.[%4] + [ ?&w , w , w , ; yes ] movprfx\t%0, %1\;mls\t%0., %2., %3.[%4] + } ) ;; ------------------------------------------------------------------------- @@ -781,17 +786,17 @@ (define_insn "@aarch64_sve_sub_mul_lane_" ;; ------------------------------------------------------------------------- (define_insn "@aarch64_sve2_xar" - [(set (match_operand:SVE_FULL_I 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_I 0 "register_operand") (rotatert:SVE_FULL_I (xor:SVE_FULL_I - (match_operand:SVE_FULL_I 1 "register_operand" "%0, w") - (match_operand:SVE_FULL_I 2 "register_operand" "w, w")) + (match_operand:SVE_FULL_I 1 
"register_operand") + (match_operand:SVE_FULL_I 2 "register_operand")) (match_operand:SVE_FULL_I 3 "aarch64_simd_rshift_imm")))] "TARGET_SVE2" - "@ - xar\t%0., %0., %2., #%3 - movprfx\t%0, %1\;xar\t%0., %0., %2., #%3" - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 ; attrs: movprfx ] + [ w , %0 , w ; * ] xar\t%0., %0., %2., #%3 + [ ?&w , w , w ; yes ] movprfx\t%0, %1\;xar\t%0., %0., %2., #%3 + } ) ;; ------------------------------------------------------------------------- @@ -825,86 +830,86 @@ (define_expand "@aarch64_sve2_bcax" ) (define_insn_and_rewrite "*aarch64_sve2_bcax" - [(set (match_operand:SVE_FULL_I 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_I 0 "register_operand") (xor:SVE_FULL_I (and:SVE_FULL_I (unspec:SVE_FULL_I [(match_operand 4) (not:SVE_FULL_I - (match_operand:SVE_FULL_I 3 "register_operand" "w, w"))] + (match_operand:SVE_FULL_I 3 "register_operand"))] UNSPEC_PRED_X) - (match_operand:SVE_FULL_I 2 "register_operand" "w, w")) - (match_operand:SVE_FULL_I 1 "register_operand" "0, w")))] + (match_operand:SVE_FULL_I 2 "register_operand")) + (match_operand:SVE_FULL_I 1 "register_operand")))] "TARGET_SVE2" - "@ - bcax\t%0.d, %0.d, %2.d, %3.d - movprfx\t%0, %1\;bcax\t%0.d, %0.d, %2.d, %3.d" + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , 0 , w , w ; * ] bcax\t%0.d, %0.d, %2.d, %3.d + [ ?&w , w , w , w ; yes ] movprfx\t%0, %1\;bcax\t%0.d, %0.d, %2.d, %3.d + } "&& !CONSTANT_P (operands[4])" { operands[4] = CONSTM1_RTX (mode); } - [(set_attr "movprfx" "*,yes")] ) ;; Unpredicated 3-way exclusive OR. (define_insn "@aarch64_sve2_eor3" - [(set (match_operand:SVE_FULL_I 0 "register_operand" "=w, w, w, ?&w") + [(set (match_operand:SVE_FULL_I 0 "register_operand") (xor:SVE_FULL_I (xor:SVE_FULL_I - (match_operand:SVE_FULL_I 1 "register_operand" "0, w, w, w") - (match_operand:SVE_FULL_I 2 "register_operand" "w, 0, w, w")) - (match_operand:SVE_FULL_I 3 "register_operand" "w, w, 0, w")))] + (match_operand:SVE_FULL_I 1 "register_operand") + (match_operand:SVE_FULL_I 2 "register_operand")) + (match_operand:SVE_FULL_I 3 "register_operand")))] "TARGET_SVE2" - "@ - eor3\t%0.d, %0.d, %2.d, %3.d - eor3\t%0.d, %0.d, %1.d, %3.d - eor3\t%0.d, %0.d, %1.d, %2.d - movprfx\t%0, %1\;eor3\t%0.d, %0.d, %2.d, %3.d" - [(set_attr "movprfx" "*,*,*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , 0 , w , w ; * ] eor3\t%0.d, %0.d, %2.d, %3.d + [ w , w , 0 , w ; * ] eor3\t%0.d, %0.d, %1.d, %3.d + [ w , w , w , 0 ; * ] eor3\t%0.d, %0.d, %1.d, %2.d + [ ?&w , w , w , w ; yes ] movprfx\t%0, %1\;eor3\t%0.d, %0.d, %2.d, %3.d + } ) ;; Use NBSL for vector NOR. (define_insn_and_rewrite "*aarch64_sve2_nor" - [(set (match_operand:SVE_FULL_I 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_I 0 "register_operand") (unspec:SVE_FULL_I [(match_operand 3) (and:SVE_FULL_I (not:SVE_FULL_I - (match_operand:SVE_FULL_I 1 "register_operand" "%0, w")) + (match_operand:SVE_FULL_I 1 "register_operand")) (not:SVE_FULL_I - (match_operand:SVE_FULL_I 2 "register_operand" "w, w")))] + (match_operand:SVE_FULL_I 2 "register_operand")))] UNSPEC_PRED_X))] "TARGET_SVE2" - "@ - nbsl\t%0.d, %0.d, %2.d, %0.d - movprfx\t%0, %1\;nbsl\t%0.d, %0.d, %2.d, %0.d" + {@ [ cons: =0 , 1 , 2 ; attrs: movprfx ] + [ w , %0 , w ; * ] nbsl\t%0.d, %0.d, %2.d, %0.d + [ ?&w , w , w ; yes ] movprfx\t%0, %1\;nbsl\t%0.d, %0.d, %2.d, %0.d + } "&& !CONSTANT_P (operands[3])" { operands[3] = CONSTM1_RTX (mode); } - [(set_attr "movprfx" "*,yes")] ) ;; Use NBSL for vector NAND. 
(define_insn_and_rewrite "*aarch64_sve2_nand" - [(set (match_operand:SVE_FULL_I 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_I 0 "register_operand") (unspec:SVE_FULL_I [(match_operand 3) (ior:SVE_FULL_I (not:SVE_FULL_I - (match_operand:SVE_FULL_I 1 "register_operand" "%0, w")) + (match_operand:SVE_FULL_I 1 "register_operand")) (not:SVE_FULL_I - (match_operand:SVE_FULL_I 2 "register_operand" "w, w")))] + (match_operand:SVE_FULL_I 2 "register_operand")))] UNSPEC_PRED_X))] "TARGET_SVE2" - "@ - nbsl\t%0.d, %0.d, %2.d, %2.d - movprfx\t%0, %1\;nbsl\t%0.d, %0.d, %2.d, %2.d" + {@ [ cons: =0 , 1 , 2 ; attrs: movprfx ] + [ w , %0 , w ; * ] nbsl\t%0.d, %0.d, %2.d, %2.d + [ ?&w , w , w ; yes ] movprfx\t%0, %1\;nbsl\t%0.d, %0.d, %2.d, %2.d + } "&& !CONSTANT_P (operands[3])" { operands[3] = CONSTM1_RTX (mode); } - [(set_attr "movprfx" "*,yes")] ) ;; Unpredicated bitwise select. @@ -922,19 +927,19 @@ (define_expand "@aarch64_sve2_bsl" ) (define_insn "*aarch64_sve2_bsl" - [(set (match_operand:SVE_FULL_I 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_I 0 "register_operand") (xor:SVE_FULL_I (and:SVE_FULL_I (xor:SVE_FULL_I - (match_operand:SVE_FULL_I 1 "register_operand" ", w") - (match_operand:SVE_FULL_I 2 "register_operand" ", w")) - (match_operand:SVE_FULL_I 3 "register_operand" "w, w")) + (match_operand:SVE_FULL_I 1 "register_operand") + (match_operand:SVE_FULL_I 2 "register_operand")) + (match_operand:SVE_FULL_I 3 "register_operand")) (match_dup BSL_DUP)))] "TARGET_SVE2" - "@ - bsl\t%0.d, %0.d, %.d, %3.d - movprfx\t%0, %\;bsl\t%0.d, %0.d, %.d, %3.d" - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , , , w ; * ] bsl\t%0.d, %0.d, %.d, %3.d + [ ?&w , w , w , w ; yes ] movprfx\t%0, %\;bsl\t%0.d, %0.d, %.d, %3.d + } ) ;; Unpredicated bitwise inverted select. @@ -959,27 +964,27 @@ (define_expand "@aarch64_sve2_nbsl" ) (define_insn_and_rewrite "*aarch64_sve2_nbsl" - [(set (match_operand:SVE_FULL_I 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_I 0 "register_operand") (unspec:SVE_FULL_I [(match_operand 4) (not:SVE_FULL_I (xor:SVE_FULL_I (and:SVE_FULL_I (xor:SVE_FULL_I - (match_operand:SVE_FULL_I 1 "register_operand" ", w") - (match_operand:SVE_FULL_I 2 "register_operand" ", w")) - (match_operand:SVE_FULL_I 3 "register_operand" "w, w")) + (match_operand:SVE_FULL_I 1 "register_operand") + (match_operand:SVE_FULL_I 2 "register_operand")) + (match_operand:SVE_FULL_I 3 "register_operand")) (match_dup BSL_DUP)))] UNSPEC_PRED_X))] "TARGET_SVE2" - "@ - nbsl\t%0.d, %0.d, %.d, %3.d - movprfx\t%0, %\;nbsl\t%0.d, %0.d, %.d, %3.d" + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , , , w ; * ] nbsl\t%0.d, %0.d, %.d, %3.d + [ ?&w , w , w , w ; yes ] movprfx\t%0, %\;nbsl\t%0.d, %0.d, %.d, %3.d + } "&& !CONSTANT_P (operands[4])" { operands[4] = CONSTM1_RTX (mode); } - [(set_attr "movprfx" "*,yes")] ) ;; Unpredicated bitwise select with inverted first operand. 
@@ -1004,27 +1009,27 @@ (define_expand "@aarch64_sve2_bsl1n" ) (define_insn_and_rewrite "*aarch64_sve2_bsl1n" - [(set (match_operand:SVE_FULL_I 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_I 0 "register_operand") (xor:SVE_FULL_I (and:SVE_FULL_I (unspec:SVE_FULL_I [(match_operand 4) (not:SVE_FULL_I (xor:SVE_FULL_I - (match_operand:SVE_FULL_I 1 "register_operand" ", w") - (match_operand:SVE_FULL_I 2 "register_operand" ", w")))] + (match_operand:SVE_FULL_I 1 "register_operand") + (match_operand:SVE_FULL_I 2 "register_operand")))] UNSPEC_PRED_X) - (match_operand:SVE_FULL_I 3 "register_operand" "w, w")) + (match_operand:SVE_FULL_I 3 "register_operand")) (match_dup BSL_DUP)))] "TARGET_SVE2" - "@ - bsl1n\t%0.d, %0.d, %.d, %3.d - movprfx\t%0, %\;bsl1n\t%0.d, %0.d, %.d, %3.d" + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , , , w ; * ] bsl1n\t%0.d, %0.d, %.d, %3.d + [ ?&w , w , w , w ; yes ] movprfx\t%0, %\;bsl1n\t%0.d, %0.d, %.d, %3.d + } "&& !CONSTANT_P (operands[4])" { operands[4] = CONSTM1_RTX (mode); } - [(set_attr "movprfx" "*,yes")] ) ;; Unpredicated bitwise select with inverted second operand. @@ -1050,55 +1055,55 @@ (define_expand "@aarch64_sve2_bsl2n" ) (define_insn_and_rewrite "*aarch64_sve2_bsl2n" - [(set (match_operand:SVE_FULL_I 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_I 0 "register_operand") (ior:SVE_FULL_I (and:SVE_FULL_I - (match_operand:SVE_FULL_I 1 "register_operand" ", w") - (match_operand:SVE_FULL_I 2 "register_operand" ", w")) + (match_operand:SVE_FULL_I 1 "register_operand") + (match_operand:SVE_FULL_I 2 "register_operand")) (unspec:SVE_FULL_I [(match_operand 4) (and:SVE_FULL_I (not:SVE_FULL_I - (match_operand:SVE_FULL_I 3 "register_operand" "w, w")) + (match_operand:SVE_FULL_I 3 "register_operand")) (not:SVE_FULL_I (match_dup BSL_DUP)))] UNSPEC_PRED_X)))] "TARGET_SVE2" - "@ - bsl2n\t%0.d, %0.d, %3.d, %.d - movprfx\t%0, %\;bsl2n\t%0.d, %0.d, %3.d, %.d" + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , , , w ; * ] bsl2n\t%0.d, %0.d, %3.d, %.d + [ ?&w , w , w , w ; yes ] movprfx\t%0, %\;bsl2n\t%0.d, %0.d, %3.d, %.d + } "&& !CONSTANT_P (operands[4])" { operands[4] = CONSTM1_RTX (mode); } - [(set_attr "movprfx" "*,yes")] ) ;; Unpredicated bitwise select with inverted second operand, alternative form. ;; (bsl_dup ? 
bsl_mov : ~op3) == ((bsl_dup & bsl_mov) | (~bsl_dup & ~op3)) (define_insn_and_rewrite "*aarch64_sve2_bsl2n" - [(set (match_operand:SVE_FULL_I 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_I 0 "register_operand") (ior:SVE_FULL_I (and:SVE_FULL_I - (match_operand:SVE_FULL_I 1 "register_operand" ", w") - (match_operand:SVE_FULL_I 2 "register_operand" ", w")) + (match_operand:SVE_FULL_I 1 "register_operand") + (match_operand:SVE_FULL_I 2 "register_operand")) (unspec:SVE_FULL_I [(match_operand 4) (and:SVE_FULL_I (not:SVE_FULL_I (match_dup BSL_DUP)) (not:SVE_FULL_I - (match_operand:SVE_FULL_I 3 "register_operand" "w, w")))] + (match_operand:SVE_FULL_I 3 "register_operand")))] UNSPEC_PRED_X)))] "TARGET_SVE2" - "@ - bsl2n\t%0.d, %0.d, %3.d, %.d - movprfx\t%0, %\;bsl2n\t%0.d, %0.d, %3.d, %.d" + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , , , w ; * ] bsl2n\t%0.d, %0.d, %3.d, %.d + [ ?&w , w , w , w ; yes ] movprfx\t%0, %\;bsl2n\t%0.d, %0.d, %3.d, %.d + } "&& !CONSTANT_P (operands[4])" { operands[4] = CONSTM1_RTX (mode); } - [(set_attr "movprfx" "*,yes")] ) ;; ------------------------------------------------------------------------- @@ -1131,40 +1136,40 @@ (define_expand "@aarch64_sve_add_" ;; Pattern-match SSRA and USRA as a predicated operation whose predicate ;; isn't needed. (define_insn_and_rewrite "*aarch64_sve2_sra" - [(set (match_operand:SVE_FULL_I 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_I 0 "register_operand") (plus:SVE_FULL_I (unspec:SVE_FULL_I [(match_operand 4) (SHIFTRT:SVE_FULL_I - (match_operand:SVE_FULL_I 2 "register_operand" "w, w") + (match_operand:SVE_FULL_I 2 "register_operand") (match_operand:SVE_FULL_I 3 "aarch64_simd_rshift_imm"))] UNSPEC_PRED_X) - (match_operand:SVE_FULL_I 1 "register_operand" "0, w")))] + (match_operand:SVE_FULL_I 1 "register_operand")))] "TARGET_SVE2" - "@ - sra\t%0., %2., #%3 - movprfx\t%0, %1\;sra\t%0., %2., #%3" + {@ [ cons: =0 , 1 , 2 ; attrs: movprfx ] + [ w , 0 , w ; * ] sra\t%0., %2., #%3 + [ ?&w , w , w ; yes ] movprfx\t%0, %1\;sra\t%0., %2., #%3 + } "&& !CONSTANT_P (operands[4])" { operands[4] = CONSTM1_RTX (mode); } - [(set_attr "movprfx" "*,yes")] ) ;; SRSRA and URSRA. (define_insn "@aarch64_sve_add_" - [(set (match_operand:SVE_FULL_I 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_I 0 "register_operand") (plus:SVE_FULL_I (unspec:SVE_FULL_I - [(match_operand:SVE_FULL_I 2 "register_operand" "w, w") + [(match_operand:SVE_FULL_I 2 "register_operand") (match_operand:SVE_FULL_I 3 "aarch64_simd_rshift_imm")] VRSHR_N) - (match_operand:SVE_FULL_I 1 "register_operand" "0, w")))] + (match_operand:SVE_FULL_I 1 "register_operand")))] "TARGET_SVE2" - "@ - sra\t%0., %2., #%3 - movprfx\t%0, %1\;sra\t%0., %2., #%3" - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 ; attrs: movprfx ] + [ w , 0 , w ; * ] sra\t%0., %2., #%3 + [ ?&w , w , w ; yes ] movprfx\t%0, %1\;sra\t%0., %2., #%3 + } ) ;; ------------------------------------------------------------------------- @@ -1222,14 +1227,14 @@ (define_expand "@aarch64_sve2_aba" ;; Pattern-match SABA and UABA as an absolute-difference-and-accumulate ;; operation whose predicates aren't needed. 
(define_insn "*aarch64_sve2_aba" - [(set (match_operand:SVE_FULL_I 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_I 0 "register_operand") (plus:SVE_FULL_I (minus:SVE_FULL_I (unspec:SVE_FULL_I [(match_operand 4) (USMAX:SVE_FULL_I - (match_operand:SVE_FULL_I 2 "register_operand" "w, w") - (match_operand:SVE_FULL_I 3 "register_operand" "w, w"))] + (match_operand:SVE_FULL_I 2 "register_operand") + (match_operand:SVE_FULL_I 3 "register_operand"))] UNSPEC_PRED_X) (unspec:SVE_FULL_I [(match_operand 5) @@ -1237,12 +1242,12 @@ (define_insn "*aarch64_sve2_aba" (match_dup 2) (match_dup 3))] UNSPEC_PRED_X)) - (match_operand:SVE_FULL_I 1 "register_operand" "0, w")))] + (match_operand:SVE_FULL_I 1 "register_operand")))] "TARGET_SVE2" - "@ - aba\t%0., %2., %3. - movprfx\t%0, %1\;aba\t%0., %2., %3." - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , 0 , w , w ; * ] aba\t%0., %2., %3. + [ ?&w , w , w , w ; yes ] movprfx\t%0, %1\;aba\t%0., %2., %3. + } ) ;; ========================================================================= @@ -1370,142 +1375,142 @@ (define_insn "@aarch64_sve_" ;; Non-saturating MLA operations. (define_insn "@aarch64_sve_add_" - [(set (match_operand:SVE_FULL_HSDI 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_HSDI 0 "register_operand") (plus:SVE_FULL_HSDI (unspec:SVE_FULL_HSDI - [(match_operand: 2 "register_operand" "w, w") - (match_operand: 3 "register_operand" "w, w")] + [(match_operand: 2 "register_operand") + (match_operand: 3 "register_operand")] SVE2_INT_ADD_BINARY_LONG) - (match_operand:SVE_FULL_HSDI 1 "register_operand" "0, w")))] + (match_operand:SVE_FULL_HSDI 1 "register_operand")))] "TARGET_SVE2" - "@ - \t%0., %2., %3. - movprfx\t%0, %1\;\t%0., %2., %3." - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , 0 , w , w ; * ] \t%0., %2., %3. + [ ?&w , w , w , w ; yes ] movprfx\t%0, %1\;\t%0., %2., %3. + } ) ;; Non-saturating MLA operations with lane select. (define_insn "@aarch64_sve_add__lane_" - [(set (match_operand:SVE_FULL_SDI 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_SDI 0 "register_operand") (plus:SVE_FULL_SDI (unspec:SVE_FULL_SDI - [(match_operand: 2 "register_operand" "w, w") + [(match_operand: 2 "register_operand") (unspec: - [(match_operand: 3 "register_operand" ", ") + [(match_operand: 3 "register_operand") (match_operand:SI 4 "const_int_operand")] UNSPEC_SVE_LANE_SELECT)] SVE2_INT_ADD_BINARY_LONG_LANE) - (match_operand:SVE_FULL_SDI 1 "register_operand" "0, w")))] + (match_operand:SVE_FULL_SDI 1 "register_operand")))] "TARGET_SVE2" - "@ - \t%0., %2., %3.[%4] - movprfx\t%0, %1\;\t%0., %2., %3.[%4]" - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , 0 , w , ; * ] \t%0., %2., %3.[%4] + [ ?&w , w , w , ; yes ] movprfx\t%0, %1\;\t%0., %2., %3.[%4] + } ) ;; Saturating MLA operations. (define_insn "@aarch64_sve_qadd_" - [(set (match_operand:SVE_FULL_HSDI 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_HSDI 0 "register_operand") (ss_plus:SVE_FULL_HSDI (unspec:SVE_FULL_HSDI - [(match_operand: 2 "register_operand" "w, w") - (match_operand: 3 "register_operand" "w, w")] + [(match_operand: 2 "register_operand") + (match_operand: 3 "register_operand")] SVE2_INT_QADD_BINARY_LONG) - (match_operand:SVE_FULL_HSDI 1 "register_operand" "0, w")))] + (match_operand:SVE_FULL_HSDI 1 "register_operand")))] "TARGET_SVE2" - "@ - \t%0., %2., %3. - movprfx\t%0, %1\;\t%0., %2., %3." 
- [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , 0 , w , w ; * ] \t%0., %2., %3. + [ ?&w , w , w , w ; yes ] movprfx\t%0, %1\;\t%0., %2., %3. + } ) ;; Saturating MLA operations with lane select. (define_insn "@aarch64_sve_qadd__lane_" - [(set (match_operand:SVE_FULL_SDI 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_SDI 0 "register_operand") (ss_plus:SVE_FULL_SDI (unspec:SVE_FULL_SDI - [(match_operand: 2 "register_operand" "w, w") + [(match_operand: 2 "register_operand") (unspec: - [(match_operand: 3 "register_operand" ", ") + [(match_operand: 3 "register_operand") (match_operand:SI 4 "const_int_operand")] UNSPEC_SVE_LANE_SELECT)] SVE2_INT_QADD_BINARY_LONG_LANE) - (match_operand:SVE_FULL_SDI 1 "register_operand" "0, w")))] + (match_operand:SVE_FULL_SDI 1 "register_operand")))] "TARGET_SVE2" - "@ - \t%0., %2., %3.[%4] - movprfx\t%0, %1\;\t%0., %2., %3.[%4]" - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , 0 , w , ; * ] \t%0., %2., %3.[%4] + [ ?&w , w , w , ; yes ] movprfx\t%0, %1\;\t%0., %2., %3.[%4] + } ) ;; Non-saturating MLS operations. (define_insn "@aarch64_sve_sub_" - [(set (match_operand:SVE_FULL_HSDI 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_HSDI 0 "register_operand") (minus:SVE_FULL_HSDI - (match_operand:SVE_FULL_HSDI 1 "register_operand" "0, w") + (match_operand:SVE_FULL_HSDI 1 "register_operand") (unspec:SVE_FULL_HSDI - [(match_operand: 2 "register_operand" "w, w") - (match_operand: 3 "register_operand" "w, w")] + [(match_operand: 2 "register_operand") + (match_operand: 3 "register_operand")] SVE2_INT_SUB_BINARY_LONG)))] "TARGET_SVE2" - "@ - \t%0., %2., %3. - movprfx\t%0, %1\;\t%0., %2., %3." - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , 0 , w , w ; * ] \t%0., %2., %3. + [ ?&w , w , w , w ; yes ] movprfx\t%0, %1\;\t%0., %2., %3. + } ) ;; Non-saturating MLS operations with lane select. (define_insn "@aarch64_sve_sub__lane_" - [(set (match_operand:SVE_FULL_SDI 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_SDI 0 "register_operand") (minus:SVE_FULL_SDI - (match_operand:SVE_FULL_SDI 1 "register_operand" "0, w") + (match_operand:SVE_FULL_SDI 1 "register_operand") (unspec:SVE_FULL_SDI - [(match_operand: 2 "register_operand" "w, w") + [(match_operand: 2 "register_operand") (unspec: - [(match_operand: 3 "register_operand" ", ") + [(match_operand: 3 "register_operand") (match_operand:SI 4 "const_int_operand")] UNSPEC_SVE_LANE_SELECT)] SVE2_INT_SUB_BINARY_LONG_LANE)))] "TARGET_SVE2" - "@ - \t%0., %2., %3.[%4] - movprfx\t%0, %1\;\t%0., %2., %3.[%4]" - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , 0 , w , ; * ] \t%0., %2., %3.[%4] + [ ?&w , w , w , ; yes ] movprfx\t%0, %1\;\t%0., %2., %3.[%4] + } ) ;; Saturating MLS operations. (define_insn "@aarch64_sve_qsub_" - [(set (match_operand:SVE_FULL_HSDI 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_HSDI 0 "register_operand") (ss_minus:SVE_FULL_HSDI - (match_operand:SVE_FULL_HSDI 1 "register_operand" "0, w") + (match_operand:SVE_FULL_HSDI 1 "register_operand") (unspec:SVE_FULL_HSDI - [(match_operand: 2 "register_operand" "w, w") - (match_operand: 3 "register_operand" "w, w")] + [(match_operand: 2 "register_operand") + (match_operand: 3 "register_operand")] SVE2_INT_QSUB_BINARY_LONG)))] "TARGET_SVE2" - "@ - \t%0., %2., %3. - movprfx\t%0, %1\;\t%0., %2., %3." 
- [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , 0 , w , w ; * ] \t%0., %2., %3. + [ ?&w , w , w , w ; yes ] movprfx\t%0, %1\;\t%0., %2., %3. + } ) ;; Saturating MLS operations with lane select. (define_insn "@aarch64_sve_qsub__lane_" - [(set (match_operand:SVE_FULL_SDI 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_SDI 0 "register_operand") (ss_minus:SVE_FULL_SDI - (match_operand:SVE_FULL_SDI 1 "register_operand" "0, w") + (match_operand:SVE_FULL_SDI 1 "register_operand") (unspec:SVE_FULL_SDI - [(match_operand: 2 "register_operand" "w, w") + [(match_operand: 2 "register_operand") (unspec: - [(match_operand: 3 "register_operand" ", ") + [(match_operand: 3 "register_operand") (match_operand:SI 4 "const_int_operand")] UNSPEC_SVE_LANE_SELECT)] SVE2_INT_QSUB_BINARY_LONG_LANE)))] "TARGET_SVE2" - "@ - \t%0., %2., %3.[%4] - movprfx\t%0, %1\;\t%0., %2., %3.[%4]" - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , 0 , w , ; * ] \t%0., %2., %3.[%4] + [ ?&w , w , w , ; yes ] movprfx\t%0, %1\;\t%0., %2., %3.[%4] + } ) ;; ------------------------------------------------------------------------- ;; ---- [FP] Long multiplication with accumulation @@ -1518,34 +1523,34 @@ (define_insn "@aarch64_sve_qsub__lane_" ;; ------------------------------------------------------------------------- (define_insn "@aarch64_sve_" - [(set (match_operand:VNx4SF_ONLY 0 "register_operand" "=w, ?&w") + [(set (match_operand:VNx4SF_ONLY 0 "register_operand") (unspec:VNx4SF_ONLY - [(match_operand: 1 "register_operand" "w, w") - (match_operand: 2 "register_operand" "w, w") - (match_operand:VNx4SF_ONLY 3 "register_operand" "0, w")] + [(match_operand: 1 "register_operand") + (match_operand: 2 "register_operand") + (match_operand:VNx4SF_ONLY 3 "register_operand")] SVE2_FP_TERNARY_LONG))] "TARGET_SVE2" - "@ - \t%0., %1., %2. - movprfx\t%0, %3\;\t%0., %1., %2." - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , w , w , 0 ; * ] \t%0., %1., %2. + [ ?&w , w , w , w ; yes ] movprfx\t%0, %3\;\t%0., %1., %2. 
+ } ) (define_insn "@aarch64__lane_" - [(set (match_operand:VNx4SF_ONLY 0 "register_operand" "=w, ?&w") + [(set (match_operand:VNx4SF_ONLY 0 "register_operand") (unspec:VNx4SF_ONLY - [(match_operand: 1 "register_operand" "w, w") + [(match_operand: 1 "register_operand") (unspec: - [(match_operand: 2 "register_operand" ", ") + [(match_operand: 2 "register_operand") (match_operand:SI 3 "const_int_operand")] UNSPEC_SVE_LANE_SELECT) - (match_operand:VNx4SF_ONLY 4 "register_operand" "0, w")] + (match_operand:VNx4SF_ONLY 4 "register_operand")] SVE2_FP_TERNARY_LONG_LANE))] "TARGET_SVE2" - "@ - \t%0., %1., %2.[%3] - movprfx\t%0, %4\;\t%0., %1., %2.[%3]" - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 4 ; attrs: movprfx ] + [ w , w , , 0 ; * ] \t%0., %1., %2.[%3] + [ ?&w , w , , w ; yes ] movprfx\t%0, %4\;\t%0., %1., %2.[%3] + } ) ;; ========================================================================= @@ -1698,17 +1703,17 @@ (define_insn "@aarch64_sve_" ;; ------------------------------------------------------------------------- (define_insn "@aarch64_pred_" - [(set (match_operand:SVE_FULL_I 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_I 0 "register_operand") (unspec:SVE_FULL_I - [(match_operand: 1 "register_operand" "Upl, Upl") - (match_operand:SVE_FULL_I 2 "register_operand" "0, w") - (match_operand:SVE_FULL_I 3 "register_operand" "w, w")] + [(match_operand: 1 "register_operand") + (match_operand:SVE_FULL_I 2 "register_operand") + (match_operand:SVE_FULL_I 3 "register_operand")] SVE2_INT_BINARY_PAIR))] "TARGET_SVE2" - "@ - \t%0., %1/m, %0., %3. - movprfx\t%0, %2\;\t%0., %1/m, %0., %3." - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , Upl , 0 , w ; * ] \t%0., %1/m, %0., %3. + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %2\;\t%0., %1/m, %0., %3. + } ) ;; ------------------------------------------------------------------------- @@ -1723,17 +1728,17 @@ (define_insn "@aarch64_pred_" ;; ------------------------------------------------------------------------- (define_insn "@aarch64_pred_" - [(set (match_operand:SVE_FULL_F 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_F 0 "register_operand") (unspec:SVE_FULL_F - [(match_operand: 1 "register_operand" "Upl, Upl") - (match_operand:SVE_FULL_F 2 "register_operand" "0, w") - (match_operand:SVE_FULL_F 3 "register_operand" "w, w")] + [(match_operand: 1 "register_operand") + (match_operand:SVE_FULL_F 2 "register_operand") + (match_operand:SVE_FULL_F 3 "register_operand")] SVE2_FP_BINARY_PAIR))] "TARGET_SVE2" - "@ - \t%0., %1/m, %0., %3. - movprfx\t%0, %2\;\t%0., %1/m, %0., %3." - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , Upl , 0 , w ; * ] \t%0., %1/m, %0., %3. + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %2\;\t%0., %1/m, %0., %3. + } ) ;; ------------------------------------------------------------------------- @@ -1767,43 +1772,44 @@ (define_expand "@cond_" ;; Predicated pairwise absolute difference and accumulate, merging with ;; the first input. 
(define_insn_and_rewrite "*cond__2" - [(set (match_operand:SVE_FULL_HSDI 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_HSDI 0 "register_operand") (unspec:SVE_FULL_HSDI - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_HSDI [(match_operand 4) - (match_operand:SVE_FULL_HSDI 2 "register_operand" "0, w") - (match_operand: 3 "register_operand" "w, w")] + (match_operand:SVE_FULL_HSDI 2 "register_operand") + (match_operand: 3 "register_operand")] SVE2_INT_BINARY_PAIR_LONG) (match_dup 2)] UNSPEC_SEL))] "TARGET_SVE2" - "@ - \t%0., %1/m, %3. - movprfx\t%0, %2\;\t%0., %1/m, %3." + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , Upl , 0 , w ; * ] \t%0., %1/m, %3. + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %2\;\t%0., %1/m, %3. + } "&& !CONSTANT_P (operands[4])" { operands[4] = CONSTM1_RTX (mode); } - [(set_attr "movprfx" "*,yes")] ) ;; Predicated pairwise absolute difference and accumulate, merging with zero. (define_insn_and_rewrite "*cond__z" - [(set (match_operand:SVE_FULL_HSDI 0 "register_operand" "=&w, &w") + [(set (match_operand:SVE_FULL_HSDI 0 "register_operand") (unspec:SVE_FULL_HSDI - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:SVE_FULL_HSDI [(match_operand 5) - (match_operand:SVE_FULL_HSDI 2 "register_operand" "0, w") - (match_operand: 3 "register_operand" "w, w")] + (match_operand:SVE_FULL_HSDI 2 "register_operand") + (match_operand: 3 "register_operand")] SVE2_INT_BINARY_PAIR_LONG) (match_operand:SVE_FULL_HSDI 4 "aarch64_simd_imm_zero")] UNSPEC_SEL))] "TARGET_SVE2" - "@ - movprfx\t%0., %1/z, %0.\;\t%0., %1/m, %3. - movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %3." + {@ [ cons: =0 , 1 , 2 , 3 ] + [ &w , Upl , 0 , w ] movprfx\t%0., %1/z, %0.\;\t%0., %1/m, %3. + [ &w , Upl , w , w ] movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %3. 
+ } "&& !CONSTANT_P (operands[5])" { operands[5] = CONSTM1_RTX (mode); @@ -1824,16 +1830,16 @@ (define_insn_and_rewrite "*cond__z" ;; ------------------------------------------------------------------------- (define_insn "@aarch64_sve_" - [(set (match_operand:SVE_FULL_I 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_I 0 "register_operand") (unspec:SVE_FULL_I - [(match_operand:SVE_FULL_I 1 "register_operand" "0, w") - (match_operand:SVE_FULL_I 2 "register_operand" "w, w")] + [(match_operand:SVE_FULL_I 1 "register_operand") + (match_operand:SVE_FULL_I 2 "register_operand")] SVE2_INT_CADD))] "TARGET_SVE2" - "@ - \t%0., %0., %2., # - movprfx\t%0, %1\;\t%0., %0., %2., #" - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 ; attrs: movprfx ] + [ w , 0 , w ; * ] \t%0., %0., %2., # + [ ?&w , w , w ; yes ] movprfx\t%0, %1\;\t%0., %0., %2., # + } ) ;; unpredicated optab pattern for auto-vectorizer @@ -1855,34 +1861,34 @@ (define_expand "cadd3" ;; ------------------------------------------------------------------------- (define_insn "@aarch64_sve_" - [(set (match_operand:SVE_FULL_I 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_I 0 "register_operand") (unspec:SVE_FULL_I - [(match_operand:SVE_FULL_I 1 "register_operand" "0, w") - (match_operand:SVE_FULL_I 2 "register_operand" "w, w") - (match_operand:SVE_FULL_I 3 "register_operand" "w, w")] + [(match_operand:SVE_FULL_I 1 "register_operand") + (match_operand:SVE_FULL_I 2 "register_operand") + (match_operand:SVE_FULL_I 3 "register_operand")] SVE2_INT_CMLA))] "TARGET_SVE2" - "@ - \t%0., %2., %3., # - movprfx\t%0, %1\;\t%0., %2., %3., #" - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , 0 , w , w ; * ] \t%0., %2., %3., # + [ ?&w , w , w , w ; yes ] movprfx\t%0, %1\;\t%0., %2., %3., # + } ) (define_insn "@aarch64__lane_" - [(set (match_operand:SVE_FULL_HSI 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_HSI 0 "register_operand") (unspec:SVE_FULL_HSI - [(match_operand:SVE_FULL_HSI 1 "register_operand" "0, w") - (match_operand:SVE_FULL_HSI 2 "register_operand" "w, w") + [(match_operand:SVE_FULL_HSI 1 "register_operand") + (match_operand:SVE_FULL_HSI 2 "register_operand") (unspec:SVE_FULL_HSI - [(match_operand:SVE_FULL_HSI 3 "register_operand" ", ") + [(match_operand:SVE_FULL_HSI 3 "register_operand") (match_operand:SI 4 "const_int_operand")] UNSPEC_SVE_LANE_SELECT)] SVE2_INT_CMLA))] "TARGET_SVE2" - "@ - \t%0., %2., %3.[%4], # - movprfx\t%0, %1\;\t%0., %2., %3.[%4], #" - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , 0 , w , ; * ] \t%0., %2., %3.[%4], # + [ ?&w , w , w , ; yes ] movprfx\t%0, %1\;\t%0., %2., %3.[%4], # + } ) ;; unpredicated optab pattern for auto-vectorizer @@ -1935,34 +1941,34 @@ (define_expand "cmul3" ;; ------------------------------------------------------------------------- (define_insn "@aarch64_sve_" - [(set (match_operand:SVE_FULL_SDI 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_SDI 0 "register_operand") (unspec:SVE_FULL_SDI - [(match_operand:SVE_FULL_SDI 1 "register_operand" "0, w") - (match_operand: 2 "register_operand" "w, w") - (match_operand: 3 "register_operand" "w, w")] + [(match_operand:SVE_FULL_SDI 1 "register_operand") + (match_operand: 2 "register_operand") + (match_operand: 3 "register_operand")] SVE2_INT_CDOT))] "TARGET_SVE2" - "@ - \t%0., %2., %3., # - movprfx\t%0, %1\;\t%0., %2., %3., #" - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w 
, 0 , w , w ; * ] \t%0., %2., %3., # + [ ?&w , w , w , w ; yes ] movprfx\t%0, %1\;\t%0., %2., %3., # + } ) (define_insn "@aarch64__lane_" - [(set (match_operand:SVE_FULL_SDI 0 "register_operand" "=w, ?&w") + [(set (match_operand:SVE_FULL_SDI 0 "register_operand") (unspec:SVE_FULL_SDI - [(match_operand:SVE_FULL_SDI 1 "register_operand" "0, w") - (match_operand: 2 "register_operand" "w, w") + [(match_operand:SVE_FULL_SDI 1 "register_operand") + (match_operand: 2 "register_operand") (unspec: - [(match_operand: 3 "register_operand" ", ") + [(match_operand: 3 "register_operand") (match_operand:SI 4 "const_int_operand")] UNSPEC_SVE_LANE_SELECT)] SVE2_INT_CDOT))] "TARGET_SVE2" - "@ - \t%0., %2., %3.[%4], # - movprfx\t%0, %1\;\t%0., %2., %3.[%4], #" - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , 0 , w , ; * ] \t%0., %2., %3.[%4], # + [ ?&w , w , w , ; yes ] movprfx\t%0, %1\;\t%0., %2., %3.[%4], # + } ) ;; ========================================================================= @@ -2067,17 +2073,17 @@ (define_insn "@aarch64_sve_cvtnt" ;; Predicated FCVTX (equivalent to what would be FCVTXNB, except that ;; it supports MOVPRFX). (define_insn "@aarch64_pred_" - [(set (match_operand:VNx4SF_ONLY 0 "register_operand" "=w, ?&w") + [(set (match_operand:VNx4SF_ONLY 0 "register_operand") (unspec:VNx4SF_ONLY - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (match_operand:SI 3 "aarch64_sve_gp_strictness") - (match_operand: 2 "register_operand" "0, w")] + (match_operand: 2 "register_operand")] SVE2_COND_FP_UNARY_NARROWB))] "TARGET_SVE2" - "@ - \t%0., %1/m, %2. - movprfx\t%0, %2\;\t%0., %1/m, %2." - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 ; attrs: movprfx ] + [ w , Upl , 0 ; * ] \t%0., %1/m, %2. + [ ?&w , Upl , w ; yes ] movprfx\t%0, %2\;\t%0., %1/m, %2. + } ) ;; Predicated FCVTX with merging. @@ -2096,45 +2102,45 @@ (define_expand "@cond_" ) (define_insn_and_rewrite "*cond__any_relaxed" - [(set (match_operand:VNx4SF_ONLY 0 "register_operand" "=&w, &w, &w") + [(set (match_operand:VNx4SF_ONLY 0 "register_operand") (unspec:VNx4SF_ONLY - [(match_operand: 1 "register_operand" "Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:VNx4SF_ONLY [(match_operand 4) (const_int SVE_RELAXED_GP) - (match_operand: 2 "register_operand" "w, w, w")] + (match_operand: 2 "register_operand")] SVE2_COND_FP_UNARY_NARROWB) - (match_operand:VNx4SF_ONLY 3 "aarch64_simd_reg_or_zero" "0, Dz, w")] + (match_operand:VNx4SF_ONLY 3 "aarch64_simd_reg_or_zero")] UNSPEC_SEL))] "TARGET_SVE2 && !rtx_equal_p (operands[2], operands[3])" - "@ - \t%0., %1/m, %2. - movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %2. - movprfx\t%0, %3\;\t%0., %1/m, %2." + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ &w , Upl , w , 0 ; * ] \t%0., %1/m, %2. + [ &w , Upl , w , Dz ; yes ] movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %2. + [ &w , Upl , w , w ; yes ] movprfx\t%0, %3\;\t%0., %1/m, %2. 
+ } "&& !rtx_equal_p (operands[1], operands[4])" { operands[4] = copy_rtx (operands[1]); } - [(set_attr "movprfx" "*,yes,yes")] ) (define_insn "*cond__any_strict" - [(set (match_operand:VNx4SF_ONLY 0 "register_operand" "=&w, &w, &w") + [(set (match_operand:VNx4SF_ONLY 0 "register_operand") (unspec:VNx4SF_ONLY - [(match_operand: 1 "register_operand" "Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:VNx4SF_ONLY [(match_dup 1) (const_int SVE_STRICT_GP) - (match_operand: 2 "register_operand" "w, w, w")] + (match_operand: 2 "register_operand")] SVE2_COND_FP_UNARY_NARROWB) - (match_operand:VNx4SF_ONLY 3 "aarch64_simd_reg_or_zero" "0, Dz, w")] + (match_operand:VNx4SF_ONLY 3 "aarch64_simd_reg_or_zero")] UNSPEC_SEL))] "TARGET_SVE2 && !rtx_equal_p (operands[2], operands[3])" - "@ - \t%0., %1/m, %2. - movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %2. - movprfx\t%0, %3\;\t%0., %1/m, %2." - [(set_attr "movprfx" "*,yes,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ &w , Upl , w , 0 ; * ] \t%0., %1/m, %2. + [ &w , Upl , w , Dz ; yes ] movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %2. + [ &w , Upl , w , w ; yes ] movprfx\t%0, %3\;\t%0., %1/m, %2. + } ) ;; Predicated FCVTXNT. This doesn't give a natural aarch64_pred_*/cond_* @@ -2168,18 +2174,18 @@ (define_insn "@aarch64_sve2_cvtxnt" ;; Predicated integer unary operations. (define_insn "@aarch64_pred_" - [(set (match_operand:VNx4SI_ONLY 0 "register_operand" "=w, ?&w") + [(set (match_operand:VNx4SI_ONLY 0 "register_operand") (unspec:VNx4SI_ONLY - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:VNx4SI_ONLY - [(match_operand:VNx4SI_ONLY 2 "register_operand" "0, w")] + [(match_operand:VNx4SI_ONLY 2 "register_operand")] SVE2_U32_UNARY)] UNSPEC_PRED_X))] "TARGET_SVE2" - "@ - \t%0., %1/m, %2. - movprfx\t%0, %2\;\t%0., %1/m, %2." - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 ; attrs: movprfx ] + [ w , Upl , 0 ; * ] \t%0., %1/m, %2. + [ ?&w , Upl , w ; yes ] movprfx\t%0, %2\;\t%0., %1/m, %2. + } ) ;; Predicated integer unary operations with merging. @@ -2202,27 +2208,27 @@ (define_expand "@cond_" ) (define_insn_and_rewrite "*cond_" - [(set (match_operand:VNx4SI_ONLY 0 "register_operand" "=w, ?&w, ?&w") + [(set (match_operand:VNx4SI_ONLY 0 "register_operand") (unspec:VNx4SI_ONLY - [(match_operand: 1 "register_operand" "Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (unspec:VNx4SI_ONLY [(match_operand 4) (unspec:VNx4SI_ONLY - [(match_operand:VNx4SI_ONLY 2 "register_operand" "w, w, w")] + [(match_operand:VNx4SI_ONLY 2 "register_operand")] SVE2_U32_UNARY)] UNSPEC_PRED_X) - (match_operand:VNx4SI_ONLY 3 "aarch64_simd_reg_or_zero" "0, Dz, w")] + (match_operand:VNx4SI_ONLY 3 "aarch64_simd_reg_or_zero")] UNSPEC_SEL))] "TARGET_SVE2" - "@ - \t%0., %1/m, %2. - movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %2. - movprfx\t%0, %3\;\t%0., %1/m, %2." + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ w , Upl , w , 0 ; * ] \t%0., %1/m, %2. + [ ?&w , Upl , w , Dz ; yes ] movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %2. + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %3\;\t%0., %1/m, %2. + } "&& !CONSTANT_P (operands[4])" { operands[4] = CONSTM1_RTX (mode); } - [(set_attr "movprfx" "*,yes,yes")] ) ;; ------------------------------------------------------------------------- @@ -2234,17 +2240,17 @@ (define_insn_and_rewrite "*cond_" ;; Predicated FLOGB. 
(define_insn "@aarch64_pred_" - [(set (match_operand: 0 "register_operand" "=w, ?&w") + [(set (match_operand: 0 "register_operand") (unspec: - [(match_operand: 1 "register_operand" "Upl, Upl") + [(match_operand: 1 "register_operand") (match_operand:SI 3 "aarch64_sve_gp_strictness") - (match_operand:SVE_FULL_F 2 "register_operand" "0, w")] + (match_operand:SVE_FULL_F 2 "register_operand")] SVE2_COND_INT_UNARY_FP))] "TARGET_SVE2" - "@ - \t%0., %1/m, %2. - movprfx\t%0, %2\;\t%0., %1/m, %2." - [(set_attr "movprfx" "*,yes")] + {@ [ cons: =0 , 1 , 2 ; attrs: movprfx ] + [ w , Upl , 0 ; * ] \t%0., %1/m, %2. + [ ?&w , Upl , w ; yes ] movprfx\t%0, %2\;\t%0., %1/m, %2. + } ) ;; Predicated FLOGB with merging. @@ -2263,45 +2269,45 @@ (define_expand "@cond_" ) (define_insn_and_rewrite "*cond_" - [(set (match_operand: 0 "register_operand" "=&w, ?&w, ?&w") + [(set (match_operand: 0 "register_operand") (unspec: - [(match_operand: 1 "register_operand" "Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (unspec: [(match_operand 4) (const_int SVE_RELAXED_GP) - (match_operand:SVE_FULL_F 2 "register_operand" "w, w, w")] + (match_operand:SVE_FULL_F 2 "register_operand")] SVE2_COND_INT_UNARY_FP) - (match_operand: 3 "aarch64_simd_reg_or_zero" "0, Dz, w")] + (match_operand: 3 "aarch64_simd_reg_or_zero")] UNSPEC_SEL))] "TARGET_SVE2 && !rtx_equal_p (operands[2], operands[3])" - "@ - \t%0., %1/m, %2. - movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %2. - movprfx\t%0, %3\;\t%0., %1/m, %2." + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ &w , Upl , w , 0 ; * ] \t%0., %1/m, %2. + [ ?&w , Upl , w , Dz ; yes ] movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %2. + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %3\;\t%0., %1/m, %2. + } "&& !rtx_equal_p (operands[1], operands[4])" { operands[4] = copy_rtx (operands[1]); } - [(set_attr "movprfx" "*,yes,yes")] ) (define_insn "*cond__strict" - [(set (match_operand: 0 "register_operand" "=&w, ?&w, ?&w") + [(set (match_operand: 0 "register_operand") (unspec: - [(match_operand: 1 "register_operand" "Upl, Upl, Upl") + [(match_operand: 1 "register_operand") (unspec: [(match_dup 1) (const_int SVE_STRICT_GP) - (match_operand:SVE_FULL_F 2 "register_operand" "w, w, w")] + (match_operand:SVE_FULL_F 2 "register_operand")] SVE2_COND_INT_UNARY_FP) - (match_operand: 3 "aarch64_simd_reg_or_zero" "0, Dz, w")] + (match_operand: 3 "aarch64_simd_reg_or_zero")] UNSPEC_SEL))] "TARGET_SVE2 && !rtx_equal_p (operands[2], operands[3])" - "@ - \t%0., %1/m, %2. - movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %2. - movprfx\t%0, %3\;\t%0., %1/m, %2." - [(set_attr "movprfx" "*,yes,yes")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: movprfx ] + [ &w , Upl , w , 0 ; * ] \t%0., %1/m, %2. + [ ?&w , Upl , w , Dz ; yes ] movprfx\t%0., %1/z, %2.\;\t%0., %1/m, %2. + [ ?&w , Upl , w , w ; yes ] movprfx\t%0, %3\;\t%0., %1/m, %2. 
+ } ) ;; ------------------------------------------------------------------------- diff --git a/gcc/config/aarch64/aarch64.md b/gcc/config/aarch64/aarch64.md index 6f7827bd8c9..e245bb2a540 100644 --- a/gcc/config/aarch64/aarch64.md +++ b/gcc/config/aarch64/aarch64.md @@ -533,23 +533,23 @@ (define_expand "cbranchcc4" "") (define_insn "@ccmp" - [(set (match_operand:CC_ONLY 1 "cc_register" "") + [(set (match_operand:CC_ONLY 1 "cc_register") (if_then_else:CC_ONLY (match_operator 4 "aarch64_comparison_operator" - [(match_operand 0 "cc_register" "") + [(match_operand 0 "cc_register") (const_int 0)]) (compare:CC_ONLY - (match_operand:GPI 2 "register_operand" "r,r,r") - (match_operand:GPI 3 "aarch64_ccmp_operand" "r,Uss,Usn")) + (match_operand:GPI 2 "register_operand") + (match_operand:GPI 3 "aarch64_ccmp_operand")) (unspec:CC_ONLY [(match_operand 5 "immediate_operand")] UNSPEC_NZCV)))] "" - "@ - ccmp\\t%2, %3, %k5, %m4 - ccmp\\t%2, %3, %k5, %m4 - ccmn\\t%2, #%n3, %k5, %m4" - [(set_attr "type" "alus_sreg,alus_imm,alus_imm")] + {@ [ cons: 2 , 3 ; attrs: type ] + [ r , r ; alus_sreg ] ccmp\t%2, %3, %k5, %m4 + [ r , Uss ; alus_imm ] ccmp\t%2, %3, %k5, %m4 + [ r , Usn ; alus_imm ] ccmn\t%2, #%n3, %k5, %m4 + } ) (define_insn "@ccmp" @@ -570,23 +570,23 @@ (define_insn "@ccmp" ) (define_insn "@ccmp_rev" - [(set (match_operand:CC_ONLY 1 "cc_register" "") + [(set (match_operand:CC_ONLY 1 "cc_register") (if_then_else:CC_ONLY (match_operator 4 "aarch64_comparison_operator" - [(match_operand 0 "cc_register" "") + [(match_operand 0 "cc_register") (const_int 0)]) (unspec:CC_ONLY [(match_operand 5 "immediate_operand")] UNSPEC_NZCV) (compare:CC_ONLY - (match_operand:GPI 2 "register_operand" "r,r,r") - (match_operand:GPI 3 "aarch64_ccmp_operand" "r,Uss,Usn"))))] + (match_operand:GPI 2 "register_operand") + (match_operand:GPI 3 "aarch64_ccmp_operand"))))] "" - "@ - ccmp\\t%2, %3, %k5, %M4 - ccmp\\t%2, %3, %k5, %M4 - ccmn\\t%2, #%n3, %k5, %M4" - [(set_attr "type" "alus_sreg,alus_imm,alus_imm")] + {@ [ cons: 2 , 3 ; attrs: type ] + [ r , r ; alus_sreg ] ccmp\t%2, %3, %k5, %M4 + [ r , Uss ; alus_imm ] ccmp\t%2, %3, %k5, %M4 + [ r , Usn ; alus_imm ] ccmn\t%2, #%n3, %k5, %M4 + } ) (define_insn "@ccmp_rev" @@ -1056,15 +1056,16 @@ (define_expand "call" ) (define_insn "*call_insn" - [(call (mem:DI (match_operand:DI 0 "aarch64_call_insn_operand" "Ucr, Usf")) + [(call (mem:DI (match_operand:DI 0 "aarch64_call_insn_operand")) (match_operand 1 "" "")) (unspec:DI [(match_operand:DI 2 "const_int_operand")] UNSPEC_CALLEE_ABI) (clobber (reg:DI LR_REGNUM))] "" - "@ - * return aarch64_indirect_call_asm (operands[0]); - bl\\t%c0" - [(set_attr "type" "call, call")]) + {@ [ cons: 0 ; attrs: type ] + [ Ucr ; call ] << aarch64_indirect_call_asm (operands[0]); + [ Usf ; call ] bl\t%c0 + } +) (define_expand "call_value" [(parallel @@ -1083,15 +1084,15 @@ (define_expand "call_value" (define_insn "*call_value_insn" [(set (match_operand 0 "" "") - (call (mem:DI (match_operand:DI 1 "aarch64_call_insn_operand" "Ucr, Usf")) + (call (mem:DI (match_operand:DI 1 "aarch64_call_insn_operand")) (match_operand 2 "" ""))) (unspec:DI [(match_operand:DI 3 "const_int_operand")] UNSPEC_CALLEE_ABI) (clobber (reg:DI LR_REGNUM))] "" - "@ - * return aarch64_indirect_call_asm (operands[1]); - bl\\t%c1" - [(set_attr "type" "call, call")] + {@ [ cons: 1 ; attrs: type ] + [ Ucr ; call ] << aarch64_indirect_call_asm (operands[1]); + [ Usf ; call ] bl\t%c1 + } ) (define_expand "sibcall" @@ -1459,78 +1460,69 @@ (define_expand "mov" ) (define_insn "*mov_aarch64" - 
[(set (match_operand:HFBF 0 "nonimmediate_operand" "=w,w ,w ,w ,?r,?r,w,w,w ,w ,w,m,r,m ,r") - (match_operand:HFBF 1 "general_operand" "Y ,?rY,?r,?rY, w, w,w,w,Ufc,Uvi,m,w,m,rY,r"))] + [(set (match_operand:HFBF 0 "nonimmediate_operand") + (match_operand:HFBF 1 "general_operand"))] "TARGET_FLOAT && (register_operand (operands[0], mode) || aarch64_reg_or_fp_zero (operands[1], mode))" - "@ - movi\\t%0.4h, #0 - fmov\\t%h0, %w1 - dup\\t%w0.4h, %w1 - fmov\\t%s0, %w1 - umov\\t%w0, %1.h[0] - fmov\\t%w0, %s1 - mov\\t%0.h[0], %1.h[0] - fmov\\t%s0, %s1 - fmov\\t%h0, %1 - * return aarch64_output_scalar_simd_mov_immediate (operands[1], HImode); - ldr\\t%h0, %1 - str\\t%h1, %0 - ldrh\\t%w0, %1 - strh\\t%w1, %0 - mov\\t%w0, %w1" - [(set_attr "type" "neon_move,f_mcr,neon_move,f_mcr,neon_to_gp,f_mrc, - neon_move,fmov,fconsts,neon_move,f_loads,f_stores, - load_4,store_4,mov_reg") - (set_attr "arch" "simd,fp16,simd,*,simd,*,simd,*,fp16,simd,*,*,*,*,*")] + {@ [ cons: =0 , 1 ; attrs: type , arch ] + [ w , Y ; neon_move , simd ] movi\t%0.4h, #0 + [ w , ?rY ; f_mcr , fp16 ] fmov\t%h0, %w1 + [ w , ?r ; neon_move , simd ] dup\t%w0.4h, %w1 + [ w , ?rY ; f_mcr , * ] fmov\t%s0, %w1 + [ ?r , w ; neon_to_gp , simd ] umov\t%w0, %1.h[0] + [ ?r , w ; f_mrc , * ] fmov\t%w0, %s1 + [ w , w ; neon_move , simd ] mov\t%0.h[0], %1.h[0] + [ w , w ; fmov , * ] fmov\t%s0, %s1 + [ w , Ufc ; fconsts , fp16 ] fmov\t%h0, %1 + [ w , Uvi ; neon_move , simd ] << aarch64_output_scalar_simd_mov_immediate (operands[1], HImode); + [ w , m ; f_loads , * ] ldr\t%h0, %1 + [ m , w ; f_stores , * ] str\t%h1, %0 + [ r , m ; load_4 , * ] ldrh\t%w0, %1 + [ m , rY ; store_4 , * ] strh\t%w1, %0 + [ r , r ; mov_reg , * ] mov\t%w0, %w1 + } ) (define_insn "*mov_aarch64" - [(set (match_operand:SFD 0 "nonimmediate_operand" "=w,w ,?r,w,w ,w ,w,m,r,m ,r,r") - (match_operand:SFD 1 "general_operand" "Y ,?rY, w,w,Ufc,Uvi,m,w,m,rY,r,M"))] + [(set (match_operand:SFD 0 "nonimmediate_operand") + (match_operand:SFD 1 "general_operand"))] "TARGET_FLOAT && (register_operand (operands[0], mode) || aarch64_reg_or_fp_zero (operands[1], mode))" - "@ - movi\\t%0.2s, #0 - fmov\\t%s0, %w1 - fmov\\t%w0, %s1 - fmov\\t%s0, %s1 - fmov\\t%s0, %1 - * return aarch64_output_scalar_simd_mov_immediate (operands[1], SImode); - ldr\\t%s0, %1 - str\\t%s1, %0 - ldr\\t%w0, %1 - str\\t%w1, %0 - mov\\t%w0, %w1 - mov\\t%w0, %1" - [(set_attr "type" "neon_move,f_mcr,f_mrc,fmov,fconsts,neon_move,\ - f_loads,f_stores,load_4,store_4,mov_reg,\ - fconsts") - (set_attr "arch" "simd,*,*,*,*,simd,*,*,*,*,*,*")] + {@ [ cons: =0 , 1 ; attrs: type , arch ] + [ w , Y ; neon_move , simd ] movi\t%0.2s, #0 + [ w , ?rY ; f_mcr , * ] fmov\t%s0, %w1 + [ ?r , w ; f_mrc , * ] fmov\t%w0, %s1 + [ w , w ; fmov , * ] fmov\t%s0, %s1 + [ w , Ufc ; fconsts , * ] fmov\t%s0, %1 + [ w , Uvi ; neon_move , simd ] << aarch64_output_scalar_simd_mov_immediate (operands[1], SImode); + [ w , m ; f_loads , * ] ldr\t%s0, %1 + [ m , w ; f_stores , * ] str\t%s1, %0 + [ r , m ; load_4 , * ] ldr\t%w0, %1 + [ m , rY ; store_4 , * ] str\t%w1, %0 + [ r , r ; mov_reg , * ] mov\t%w0, %w1 + [ r , M ; fconsts , * ] mov\t%w0, %1 + } ) (define_insn "*mov_aarch64" - [(set (match_operand:DFD 0 "nonimmediate_operand" "=w, w ,?r,w,w ,w ,w,m,r,m ,r,r") - (match_operand:DFD 1 "general_operand" "Y , ?rY, w,w,Ufc,Uvi,m,w,m,rY,r,O"))] + [(set (match_operand:DFD 0 "nonimmediate_operand") + (match_operand:DFD 1 "general_operand"))] "TARGET_FLOAT && (register_operand (operands[0], mode) || aarch64_reg_or_fp_zero (operands[1], mode))" - "@ - movi\\t%d0, #0 - 
fmov\\t%d0, %x1 - fmov\\t%x0, %d1 - fmov\\t%d0, %d1 - fmov\\t%d0, %1 - * return aarch64_output_scalar_simd_mov_immediate (operands[1], DImode); - ldr\\t%d0, %1 - str\\t%d1, %0 - ldr\\t%x0, %1 - str\\t%x1, %0 - mov\\t%x0, %x1 - * return aarch64_is_mov_xn_imm (INTVAL (operands[1])) ? \"mov\\t%x0, %1\" : \"mov\\t%w0, %1\";" - [(set_attr "type" "neon_move,f_mcr,f_mrc,fmov,fconstd,neon_move,\ - f_loadd,f_stored,load_8,store_8,mov_reg,\ - fconstd") - (set_attr "arch" "simd,*,*,*,*,simd,*,*,*,*,*,*")] + {@ [ cons: =0 , 1 ; attrs: type , arch ] + [ w , Y ; neon_move , simd ] movi\t%d0, #0 + [ w , ?rY ; f_mcr , * ] fmov\t%d0, %x1 + [ ?r , w ; f_mrc , * ] fmov\t%x0, %d1 + [ w , w ; fmov , * ] fmov\t%d0, %d1 + [ w , Ufc ; fconstd , * ] fmov\t%d0, %1 + [ w , Uvi ; neon_move , simd ] << aarch64_output_scalar_simd_mov_immediate (operands[1], DImode); + [ w , m ; f_loadd , * ] ldr\t%d0, %1 + [ m , w ; f_stored , * ] str\t%d1, %0 + [ r , m ; load_8 , * ] ldr\t%x0, %1 + [ m , rY ; store_8 , * ] str\t%x1, %0 + [ r , r ; mov_reg , * ] mov\t%x0, %x1 + [ r , O ; fconstd , * ] << aarch64_is_mov_xn_imm (INTVAL (operands[1])) ? "mov\t%x0, %1" : "mov\t%w0, %1"; + } ) (define_split @@ -1728,36 +1720,34 @@ (define_expand "setmemdi" ;; Operands 1 and 3 are tied together by the final condition; so we allow ;; fairly lax checking on the second memory operation. (define_insn "load_pair_sw_" - [(set (match_operand:SX 0 "register_operand" "=r,w") - (match_operand:SX 1 "aarch64_mem_pair_operand" "Ump,Ump")) - (set (match_operand:SX2 2 "register_operand" "=r,w") - (match_operand:SX2 3 "memory_operand" "m,m"))] + [(set (match_operand:SX 0 "register_operand") + (match_operand:SX 1 "aarch64_mem_pair_operand")) + (set (match_operand:SX2 2 "register_operand") + (match_operand:SX2 3 "memory_operand"))] "rtx_equal_p (XEXP (operands[3], 0), plus_constant (Pmode, XEXP (operands[1], 0), GET_MODE_SIZE (mode)))" - "@ - ldp\\t%w0, %w2, %z1 - ldp\\t%s0, %s2, %z1" - [(set_attr "type" "load_8,neon_load1_2reg") - (set_attr "arch" "*,fp")] + {@ [ cons: =0 , 1 , =2 , 3 ; attrs: type , arch ] + [ r , Ump , r , m ; load_8 , * ] ldp\t%w0, %w2, %z1 + [ w , Ump , w , m ; neon_load1_2reg , fp ] ldp\t%s0, %s2, %z1 + } ) ;; Storing different modes that can still be merged (define_insn "load_pair_dw_" - [(set (match_operand:DX 0 "register_operand" "=r,w") - (match_operand:DX 1 "aarch64_mem_pair_operand" "Ump,Ump")) - (set (match_operand:DX2 2 "register_operand" "=r,w") - (match_operand:DX2 3 "memory_operand" "m,m"))] + [(set (match_operand:DX 0 "register_operand") + (match_operand:DX 1 "aarch64_mem_pair_operand")) + (set (match_operand:DX2 2 "register_operand") + (match_operand:DX2 3 "memory_operand"))] "rtx_equal_p (XEXP (operands[3], 0), plus_constant (Pmode, XEXP (operands[1], 0), GET_MODE_SIZE (mode)))" - "@ - ldp\\t%x0, %x2, %z1 - ldp\\t%d0, %d2, %z1" - [(set_attr "type" "load_16,neon_load1_2reg") - (set_attr "arch" "*,fp")] + {@ [ cons: =0 , 1 , =2 , 3 ; attrs: type , arch ] + [ r , Ump , r , m ; load_16 , * ] ldp\t%x0, %x2, %z1 + [ w , Ump , w , m ; neon_load1_2reg , fp ] ldp\t%d0, %d2, %z1 + } ) (define_insn "load_pair_dw_tftf" @@ -1778,36 +1768,34 @@ (define_insn "load_pair_dw_tftf" ;; Operands 0 and 2 are tied together by the final condition; so we allow ;; fairly lax checking on the second memory operation. 
(define_insn "store_pair_sw_" - [(set (match_operand:SX 0 "aarch64_mem_pair_operand" "=Ump,Ump") - (match_operand:SX 1 "aarch64_reg_zero_or_fp_zero" "rYZ,w")) - (set (match_operand:SX2 2 "memory_operand" "=m,m") - (match_operand:SX2 3 "aarch64_reg_zero_or_fp_zero" "rYZ,w"))] + [(set (match_operand:SX 0 "aarch64_mem_pair_operand") + (match_operand:SX 1 "aarch64_reg_zero_or_fp_zero")) + (set (match_operand:SX2 2 "memory_operand") + (match_operand:SX2 3 "aarch64_reg_zero_or_fp_zero"))] "rtx_equal_p (XEXP (operands[2], 0), plus_constant (Pmode, XEXP (operands[0], 0), GET_MODE_SIZE (mode)))" - "@ - stp\\t%w1, %w3, %z0 - stp\\t%s1, %s3, %z0" - [(set_attr "type" "store_8,neon_store1_2reg") - (set_attr "arch" "*,fp")] + {@ [ cons: =0 , 1 , =2 , 3 ; attrs: type , arch ] + [ Ump , rYZ , m , rYZ ; store_8 , * ] stp\t%w1, %w3, %z0 + [ Ump , w , m , w ; neon_store1_2reg , fp ] stp\t%s1, %s3, %z0 + } ) ;; Storing different modes that can still be merged (define_insn "store_pair_dw_" - [(set (match_operand:DX 0 "aarch64_mem_pair_operand" "=Ump,Ump") - (match_operand:DX 1 "aarch64_reg_zero_or_fp_zero" "rYZ,w")) - (set (match_operand:DX2 2 "memory_operand" "=m,m") - (match_operand:DX2 3 "aarch64_reg_zero_or_fp_zero" "rYZ,w"))] + [(set (match_operand:DX 0 "aarch64_mem_pair_operand") + (match_operand:DX 1 "aarch64_reg_zero_or_fp_zero")) + (set (match_operand:DX2 2 "memory_operand") + (match_operand:DX2 3 "aarch64_reg_zero_or_fp_zero"))] "rtx_equal_p (XEXP (operands[2], 0), plus_constant (Pmode, XEXP (operands[0], 0), GET_MODE_SIZE (mode)))" - "@ - stp\\t%x1, %x3, %z0 - stp\\t%d1, %d3, %z0" - [(set_attr "type" "store_16,neon_store1_2reg") - (set_attr "arch" "*,fp")] + {@ [ cons: =0 , 1 , =2 , 3 ; attrs: type , arch ] + [ Ump , rYZ , m , rYZ ; store_16 , * ] stp\t%x1, %x3, %z0 + [ Ump , w , m , w ; neon_store1_2reg , fp ] stp\t%d1, %d3, %z0 + } ) (define_insn "store_pair_dw_tftf" @@ -1935,13 +1923,13 @@ (define_expand "sidi2" ) (define_insn "*extendsidi2_aarch64" - [(set (match_operand:DI 0 "register_operand" "=r,r") - (sign_extend:DI (match_operand:SI 1 "nonimmediate_operand" "r,m")))] + [(set (match_operand:DI 0 "register_operand") + (sign_extend:DI (match_operand:SI 1 "nonimmediate_operand")))] "" - "@ - sxtw\t%0, %w1 - ldrsw\t%0, %1" - [(set_attr "type" "extend,load_4")] + {@ [ cons: =0 , 1 ; attrs: type ] + [ r , r ; extend ] sxtw\t%0, %w1 + [ r , m ; load_4 ] ldrsw\t%0, %1 + } ) (define_insn "*load_pair_extendsidi2_aarch64" @@ -1958,34 +1946,32 @@ (define_insn "*load_pair_extendsidi2_aarch64" ) (define_insn "*zero_extendsidi2_aarch64" - [(set (match_operand:DI 0 "register_operand" "=r,r,w,w,r,w") - (zero_extend:DI (match_operand:SI 1 "nonimmediate_operand" "r,m,r,m,w,w")))] - "" - "@ - uxtw\t%0, %w1 - ldr\t%w0, %1 - fmov\t%s0, %w1 - ldr\t%s0, %1 - fmov\t%w0, %s1 - fmov\t%s0, %s1" - [(set_attr "type" "mov_reg,load_4,f_mcr,f_loads,f_mrc,fmov") - (set_attr "arch" "*,*,fp,fp,fp,fp")] + [(set (match_operand:DI 0 "register_operand") + (zero_extend:DI (match_operand:SI 1 "nonimmediate_operand")))] + "" + {@ [ cons: =0 , 1 ; attrs: type , arch ] + [ r , r ; mov_reg , * ] uxtw\t%0, %w1 + [ r , m ; load_4 , * ] ldr\t%w0, %1 + [ w , r ; f_mcr , fp ] fmov\t%s0, %w1 + [ w , m ; f_loads , fp ] ldr\t%s0, %1 + [ r , w ; f_mrc , fp ] fmov\t%w0, %s1 + [ w , w ; fmov , fp ] fmov\t%s0, %s1 + } ) (define_insn "*load_pair_zero_extendsidi2_aarch64" - [(set (match_operand:DI 0 "register_operand" "=r,w") - (zero_extend:DI (match_operand:SI 1 "aarch64_mem_pair_operand" "Ump,Ump"))) - (set (match_operand:DI 2 "register_operand" 
"=r,w") - (zero_extend:DI (match_operand:SI 3 "memory_operand" "m,m")))] + [(set (match_operand:DI 0 "register_operand") + (zero_extend:DI (match_operand:SI 1 "aarch64_mem_pair_operand"))) + (set (match_operand:DI 2 "register_operand") + (zero_extend:DI (match_operand:SI 3 "memory_operand")))] "rtx_equal_p (XEXP (operands[3], 0), plus_constant (Pmode, XEXP (operands[1], 0), GET_MODE_SIZE (SImode)))" - "@ - ldp\t%w0, %w2, %z1 - ldp\t%s0, %s2, %z1" - [(set_attr "type" "load_8,neon_load1_2reg") - (set_attr "arch" "*,fp")] + {@ [ cons: =0 , 1 , =2 , 3 ; attrs: type , arch ] + [ r , Ump , r , m ; load_8 , * ] ldp\t%w0, %w2, %z1 + [ w , Ump , w , m ; neon_load1_2reg , fp ] ldp\t%s0, %s2, %z1 + } ) (define_expand "2" @@ -1995,28 +1981,26 @@ (define_expand "2" ) (define_insn "*extend2_aarch64" - [(set (match_operand:GPI 0 "register_operand" "=r,r,r") - (sign_extend:GPI (match_operand:SHORT 1 "nonimmediate_operand" "r,m,w")))] + [(set (match_operand:GPI 0 "register_operand") + (sign_extend:GPI (match_operand:SHORT 1 "nonimmediate_operand")))] "" - "@ - sxt\t%0, %w1 - ldrs\t%0, %1 - smov\t%0, %1.[0]" - [(set_attr "type" "extend,load_4,neon_to_gp") - (set_attr "arch" "*,*,fp")] + {@ [ cons: =0 , 1 ; attrs: type , arch ] + [ r , r ; extend , * ] sxt\t%0, %w1 + [ r , m ; load_4 , * ] ldrs\t%0, %1 + [ r , w ; neon_to_gp , fp ] smov\t%0, %1.[0] + } ) (define_insn "*zero_extend2_aarch64" - [(set (match_operand:GPI 0 "register_operand" "=r,r,w,r") - (zero_extend:GPI (match_operand:SHORT 1 "nonimmediate_operand" "r,m,m,w")))] + [(set (match_operand:GPI 0 "register_operand") + (zero_extend:GPI (match_operand:SHORT 1 "nonimmediate_operand")))] "" - "@ - and\t%0, %1, - ldr\t%w0, %1 - ldr\t%0, %1 - umov\t%w0, %1.[0]" - [(set_attr "type" "logic_imm,load_4,f_loads,neon_to_gp") - (set_attr "arch" "*,*,fp,fp")] + {@ [ cons: =0 , 1 ; attrs: type , arch ] + [ r , r ; logic_imm , * ] and\t%0, %1, + [ r , m ; load_4 , * ] ldr\t%w0, %1 + [ w , m ; f_loads , fp ] ldr\t%0, %1 + [ r , w ; neon_to_gp , fp ] umov\t%w0, %1.[0] + } ) (define_expand "qihi2" @@ -2026,23 +2010,23 @@ (define_expand "qihi2" ) (define_insn "*extendqihi2_aarch64" - [(set (match_operand:HI 0 "register_operand" "=r,r") - (sign_extend:HI (match_operand:QI 1 "nonimmediate_operand" "r,m")))] + [(set (match_operand:HI 0 "register_operand") + (sign_extend:HI (match_operand:QI 1 "nonimmediate_operand")))] "" - "@ - sxtb\t%w0, %w1 - ldrsb\t%w0, %1" - [(set_attr "type" "extend,load_4")] + {@ [ cons: =0 , 1 ; attrs: type ] + [ r , r ; extend ] sxtb\t%w0, %w1 + [ r , m ; load_4 ] ldrsb\t%w0, %1 + } ) (define_insn "*zero_extendqihi2_aarch64" - [(set (match_operand:HI 0 "register_operand" "=r,r") - (zero_extend:HI (match_operand:QI 1 "nonimmediate_operand" "r,m")))] + [(set (match_operand:HI 0 "register_operand") + (zero_extend:HI (match_operand:QI 1 "nonimmediate_operand")))] "" - "@ - and\t%w0, %w1, 255 - ldrb\t%w0, %1" - [(set_attr "type" "logic_imm,load_4")] + {@ [ cons: =0 , 1 ; attrs: type ] + [ r , r ; logic_imm ] and\t%w0, %w1, 255 + [ r , m ; load_4 ] ldrb\t%w0, %1 + } ) ;; ------------------------------------------------------------------- @@ -2088,38 +2072,37 @@ (define_expand "add3" (define_insn "*add3_aarch64" [(set - (match_operand:GPI 0 "register_operand" "=rk,rk,w,rk,r,r,rk") + (match_operand:GPI 0 "register_operand") (plus:GPI - (match_operand:GPI 1 "register_operand" "%rk,rk,w,rk,rk,0,rk") - (match_operand:GPI 2 "aarch64_pluslong_operand" "I,r,w,J,Uaa,Uai,Uav")))] - "" - "@ - add\\t%0, %1, %2 - add\\t%0, %1, %2 - add\\t%0, %1, %2 - sub\\t%0, %1, 
#%n2 - # - * return aarch64_output_sve_scalar_inc_dec (operands[2]); - * return aarch64_output_sve_addvl_addpl (operands[2]);" + (match_operand:GPI 1 "register_operand") + (match_operand:GPI 2 "aarch64_pluslong_operand")))] + "" + {@ [ cons: =0 , 1 , 2 ; attrs: type , arch ] + [ rk , %rk , I ; alu_imm , * ] add\t%0, %1, %2 + [ rk , rk , r ; alu_sreg , * ] add\t%0, %1, %2 + [ w , w , w ; neon_add , simd ] add\t%0, %1, %2 + [ rk , rk , J ; alu_imm , * ] sub\t%0, %1, #%n2 + [ r , rk , Uaa ; multiple , * ] # + [ r , 0 , Uai ; alu_imm , sve ] << aarch64_output_sve_scalar_inc_dec (operands[2]); + [ rk , rk , Uav ; alu_imm , sve ] << aarch64_output_sve_addvl_addpl (operands[2]); + } ;; The "alu_imm" types for INC/DEC and ADDVL/ADDPL are just placeholders. - [(set_attr "type" "alu_imm,alu_sreg,neon_add,alu_imm,multiple,alu_imm,alu_imm") - (set_attr "arch" "*,*,simd,*,*,sve,sve")] ) ;; zero_extend version of above (define_insn "*addsi3_aarch64_uxtw" [(set - (match_operand:DI 0 "register_operand" "=rk,rk,rk,r") + (match_operand:DI 0 "register_operand") (zero_extend:DI - (plus:SI (match_operand:SI 1 "register_operand" "%rk,rk,rk,rk") - (match_operand:SI 2 "aarch64_pluslong_operand" "I,r,J,Uaa"))))] - "" - "@ - add\\t%w0, %w1, %2 - add\\t%w0, %w1, %w2 - sub\\t%w0, %w1, #%n2 - #" - [(set_attr "type" "alu_imm,alu_sreg,alu_imm,multiple")] + (plus:SI (match_operand:SI 1 "register_operand") + (match_operand:SI 2 "aarch64_pluslong_operand"))))] + "" + {@ [ cons: =0 , 1 , 2 ; attrs: type ] + [ rk , %rk , I ; alu_imm ] add\t%w0, %w1, %2 + [ rk , rk , r ; alu_sreg ] add\t%w0, %w1, %w2 + [ rk , rk , J ; alu_imm ] sub\t%w0, %w1, #%n2 + [ r , rk , Uaa ; multiple ] # + } ) ;; If there's a free register, and we can load the constant with a @@ -2182,19 +2165,20 @@ (define_split ;; this pattern. (define_insn_and_split "*add3_poly_1" [(set - (match_operand:GPI 0 "register_operand" "=r,r,r,r,r,r,&r") + (match_operand:GPI 0 "register_operand") (plus:GPI - (match_operand:GPI 1 "register_operand" "%rk,rk,rk,rk,0,rk,rk") - (match_operand:GPI 2 "aarch64_pluslong_or_poly_operand" "I,r,J,Uaa,Uai,Uav,Uat")))] + (match_operand:GPI 1 "register_operand") + (match_operand:GPI 2 "aarch64_pluslong_or_poly_operand")))] "TARGET_SVE && operands[0] != stack_pointer_rtx" - "@ - add\\t%0, %1, %2 - add\\t%0, %1, %2 - sub\\t%0, %1, #%n2 - # - * return aarch64_output_sve_scalar_inc_dec (operands[2]); - * return aarch64_output_sve_addvl_addpl (operands[2]); - #" + {@ [ cons: =0 , 1 , 2 ; attrs: type ] + [ r , %rk , I ; alu_imm ] add\t%0, %1, %2 + [ r , rk , r ; alu_sreg ] add\t%0, %1, %2 + [ r , rk , J ; alu_imm ] sub\t%0, %1, #%n2 + [ r , rk , Uaa ; multiple ] # + [ r , 0 , Uai ; alu_imm ] << aarch64_output_sve_scalar_inc_dec (operands[2]); + [ r , rk , Uav ; alu_imm ] << aarch64_output_sve_addvl_addpl (operands[2]); + [ &r , rk , Uat ; multiple ] # + } "&& epilogue_completed && !reg_overlap_mentioned_p (operands[0], operands[1]) && aarch64_split_add_offset_immediate (operands[2], mode)" @@ -2205,7 +2189,6 @@ (define_insn_and_split "*add3_poly_1" DONE; } ;; The "alu_imm" types for INC/DEC and ADDVL/ADDPL are just placeholders. 
- [(set_attr "type" "alu_imm,alu_sreg,alu_imm,multiple,alu_imm,alu_imm,multiple")] ) (define_split @@ -2360,82 +2343,83 @@ (define_expand "uaddvti4" (define_insn "add3_compare0" [(set (reg:CC_NZ CC_REGNUM) (compare:CC_NZ - (plus:GPI (match_operand:GPI 1 "register_operand" "%rk,rk,rk") - (match_operand:GPI 2 "aarch64_plus_operand" "r,I,J")) + (plus:GPI (match_operand:GPI 1 "register_operand") + (match_operand:GPI 2 "aarch64_plus_operand")) (const_int 0))) - (set (match_operand:GPI 0 "register_operand" "=r,r,r") + (set (match_operand:GPI 0 "register_operand") (plus:GPI (match_dup 1) (match_dup 2)))] "" - "@ - adds\\t%0, %1, %2 - adds\\t%0, %1, %2 - subs\\t%0, %1, #%n2" - [(set_attr "type" "alus_sreg,alus_imm,alus_imm")] + {@ [ cons: =0 , 1 , 2 ; attrs: type ] + [ r , %rk , r ; alus_sreg ] adds\t%0, %1, %2 + [ r , rk , I ; alus_imm ] adds\t%0, %1, %2 + [ r , rk , J ; alus_imm ] subs\t%0, %1, #%n2 + } ) ;; zero_extend version of above (define_insn "*addsi3_compare0_uxtw" [(set (reg:CC_NZ CC_REGNUM) (compare:CC_NZ - (plus:SI (match_operand:SI 1 "register_operand" "%rk,rk,rk") - (match_operand:SI 2 "aarch64_plus_operand" "r,I,J")) + (plus:SI (match_operand:SI 1 "register_operand") + (match_operand:SI 2 "aarch64_plus_operand")) (const_int 0))) - (set (match_operand:DI 0 "register_operand" "=r,r,r") + (set (match_operand:DI 0 "register_operand") (zero_extend:DI (plus:SI (match_dup 1) (match_dup 2))))] "" - "@ - adds\\t%w0, %w1, %w2 - adds\\t%w0, %w1, %2 - subs\\t%w0, %w1, #%n2" - [(set_attr "type" "alus_sreg,alus_imm,alus_imm")] + {@ [ cons: =0 , 1 , 2 ; attrs: type ] + [ r , %rk , r ; alus_sreg ] adds\t%w0, %w1, %w2 + [ r , rk , I ; alus_imm ] adds\t%w0, %w1, %2 + [ r , rk , J ; alus_imm ] subs\t%w0, %w1, #%n2 + } ) (define_insn "*add3_compareC_cconly" [(set (reg:CC_C CC_REGNUM) (compare:CC_C (plus:GPI - (match_operand:GPI 0 "register_operand" "r,r,r") - (match_operand:GPI 1 "aarch64_plus_operand" "r,I,J")) + (match_operand:GPI 0 "register_operand") + (match_operand:GPI 1 "aarch64_plus_operand")) (match_dup 0)))] "" - "@ - cmn\\t%0, %1 - cmn\\t%0, %1 - cmp\\t%0, #%n1" - [(set_attr "type" "alus_sreg,alus_imm,alus_imm")] + {@ [ cons: 0 , 1 ; attrs: type ] + [ r , r ; alus_sreg ] cmn\t%0, %1 + [ r , I ; alus_imm ] cmn\t%0, %1 + [ r , J ; alus_imm ] cmp\t%0, #%n1 + } ) (define_insn "add3_compareC" [(set (reg:CC_C CC_REGNUM) (compare:CC_C (plus:GPI - (match_operand:GPI 1 "register_operand" "rk,rk,rk") - (match_operand:GPI 2 "aarch64_plus_operand" "r,I,J")) + (match_operand:GPI 1 "register_operand") + (match_operand:GPI 2 "aarch64_plus_operand")) (match_dup 1))) - (set (match_operand:GPI 0 "register_operand" "=r,r,r") + (set (match_operand:GPI 0 "register_operand") (plus:GPI (match_dup 1) (match_dup 2)))] "" - "@ - adds\\t%0, %1, %2 - adds\\t%0, %1, %2 - subs\\t%0, %1, #%n2" - [(set_attr "type" "alus_sreg,alus_imm,alus_imm")] + {@ [ cons: =0 , 1 , 2 ; attrs: type ] + [ r , rk , r ; alus_sreg ] adds\t%0, %1, %2 + [ r , rk , I ; alus_imm ] adds\t%0, %1, %2 + [ r , rk , J ; alus_imm ] subs\t%0, %1, #%n2 + } ) (define_insn "*add3_compareV_cconly_imm" [(set (reg:CC_V CC_REGNUM) (compare:CC_V (plus: - (sign_extend: (match_operand:GPI 0 "register_operand" "r,r")) - (match_operand: 1 "const_scalar_int_operand" "")) + (sign_extend: (match_operand:GPI 0 "register_operand")) + (match_operand: 1 "const_scalar_int_operand")) (sign_extend: (plus:GPI (match_dup 0) - (match_operand:GPI 2 "aarch64_plus_immediate" "I,J")))))] + (match_operand:GPI 2 "aarch64_plus_immediate")))))] "INTVAL (operands[1]) == INTVAL 
(operands[2])" - "@ - cmn\\t%0, %1 - cmp\\t%0, #%n1" + {@ [ cons: 0 , 2 ] + [ r , I ] cmn\t%0, %1 + [ r , J ] cmp\t%0, #%n1 + } [(set_attr "type" "alus_imm")] ) @@ -2456,17 +2440,17 @@ (define_insn "add3_compareV_imm" (compare:CC_V (plus: (sign_extend: - (match_operand:GPI 1 "register_operand" "rk,rk")) - (match_operand:GPI 2 "aarch64_plus_immediate" "I,J")) + (match_operand:GPI 1 "register_operand")) + (match_operand:GPI 2 "aarch64_plus_immediate")) (sign_extend: (plus:GPI (match_dup 1) (match_dup 2))))) - (set (match_operand:GPI 0 "register_operand" "=r,r") + (set (match_operand:GPI 0 "register_operand") (plus:GPI (match_dup 1) (match_dup 2)))] "" - "@ - adds\\t%0, %1, %2 - subs\\t%0, %1, #%n2" - [(set_attr "type" "alus_imm,alus_imm")] + {@ [ cons: =0 , 1 , 2 ; attrs: type ] + [ r , rk , I ; alus_imm ] adds\t%0, %1, %2 + [ r , rk , J ; alus_imm ] subs\t%0, %1, #%n2 + } ) (define_insn "add3_compareV" @@ -2582,15 +2566,15 @@ (define_insn "*subs__shift_" (define_insn "*add3nr_compare0" [(set (reg:CC_NZ CC_REGNUM) (compare:CC_NZ - (plus:GPI (match_operand:GPI 0 "register_operand" "%r,r,r") - (match_operand:GPI 1 "aarch64_plus_operand" "r,I,J")) + (plus:GPI (match_operand:GPI 0 "register_operand") + (match_operand:GPI 1 "aarch64_plus_operand")) (const_int 0)))] "" - "@ - cmn\\t%0, %1 - cmn\\t%0, %1 - cmp\\t%0, #%n1" - [(set_attr "type" "alus_sreg,alus_imm,alus_imm")] + {@ [ cons: 0 , 1 ; attrs: type ] + [ %r , r ; alus_sreg ] cmn\t%0, %1 + [ r , I ; alus_imm ] cmn\t%0, %1 + [ r , J ; alus_imm ] cmp\t%0, #%n1 + } ) (define_insn "aarch64_sub_compare0" @@ -2902,15 +2886,14 @@ (define_insn "*subsi3_uxtw" ) (define_insn "subdi3" - [(set (match_operand:DI 0 "register_operand" "=rk,w") - (minus:DI (match_operand:DI 1 "register_operand" "rk,w") - (match_operand:DI 2 "register_operand" "r,w")))] + [(set (match_operand:DI 0 "register_operand") + (minus:DI (match_operand:DI 1 "register_operand") + (match_operand:DI 2 "register_operand")))] "" - "@ - sub\\t%x0, %x1, %x2 - sub\\t%d0, %d1, %d2" - [(set_attr "type" "alu_sreg, neon_sub") - (set_attr "arch" "*,simd")] + {@ [ cons: =0 , 1 , 2 ; attrs: type , arch ] + [ rk , rk , r ; alu_sreg , * ] sub\t%x0, %x1, %x2 + [ w , w , w ; neon_sub , simd ] sub\t%d0, %d1, %d2 + } ) (define_expand "subv4" @@ -2950,16 +2933,17 @@ (define_insn "subv_imm" (compare:CC_V (sign_extend: (minus:GPI - (match_operand:GPI 1 "register_operand" "rk,rk") - (match_operand:GPI 2 "aarch64_plus_immediate" "I,J"))) + (match_operand:GPI 1 "register_operand") + (match_operand:GPI 2 "aarch64_plus_immediate"))) (minus: (sign_extend: (match_dup 1)) (match_dup 2)))) - (set (match_operand:GPI 0 "register_operand" "=r,r") + (set (match_operand:GPI 0 "register_operand") (minus:GPI (match_dup 1) (match_dup 2)))] "" - "@ - subs\\t%0, %1, %2 - adds\\t%0, %1, #%n2" + {@ [ cons: =0 , 1 , 2 ] + [ r , rk , I ] subs\t%0, %1, %2 + [ r , rk , J ] adds\t%0, %1, #%n2 + } [(set_attr "type" "alus_sreg")] ) @@ -3004,15 +2988,16 @@ (define_insn "*cmpv_insn" [(set (reg:CC_V CC_REGNUM) (compare:CC_V (sign_extend: - (minus:GPI (match_operand:GPI 0 "register_operand" "r,r,r") - (match_operand:GPI 1 "aarch64_plus_operand" "r,I,J"))) + (minus:GPI (match_operand:GPI 0 "register_operand") + (match_operand:GPI 1 "aarch64_plus_operand"))) (minus: (sign_extend: (match_dup 0)) (sign_extend: (match_dup 1)))))] "" - "@ - cmp\\t%0, %1 - cmp\\t%0, %1 - cmp\\t%0, #%n1" + {@ [ cons: 0 , 1 ] + [ r , r ] cmp\t%0, %1 + [ r , I ] cmp\t%0, %1 + [ r , J ] cmp\t%0, #%n1 + } [(set_attr "type" "alus_sreg")] ) @@ -3159,16 +3144,17 @@ 
(define_insn "*subsi3_compare0_uxtw" (define_insn "sub3_compare1_imm" [(set (reg:CC CC_REGNUM) (compare:CC - (match_operand:GPI 1 "register_operand" "rk,rk") - (match_operand:GPI 2 "aarch64_plus_immediate" "I,J"))) - (set (match_operand:GPI 0 "register_operand" "=r,r") + (match_operand:GPI 1 "register_operand") + (match_operand:GPI 2 "aarch64_plus_immediate"))) + (set (match_operand:GPI 0 "register_operand") (plus:GPI (match_dup 1) - (match_operand:GPI 3 "aarch64_plus_immediate" "J,I")))] + (match_operand:GPI 3 "aarch64_plus_immediate")))] "UINTVAL (operands[2]) == -UINTVAL (operands[3])" - "@ - subs\\t%0, %1, %2 - adds\\t%0, %1, #%n2" + {@ [ cons: =0 , 1 , 2 , 3 ] + [ r , rk , I , J ] subs\t%0, %1, %2 + [ r , rk , J , I ] adds\t%0, %1, #%n2 + } [(set_attr "type" "alus_imm")] ) @@ -3609,14 +3595,13 @@ (define_expand "abs2" ) (define_insn "neg2" - [(set (match_operand:GPI 0 "register_operand" "=r,w") - (neg:GPI (match_operand:GPI 1 "register_operand" "r,w")))] + [(set (match_operand:GPI 0 "register_operand") + (neg:GPI (match_operand:GPI 1 "register_operand")))] "" - "@ - neg\\t%0, %1 - neg\\t%0, %1" - [(set_attr "type" "alu_sreg, neon_neg") - (set_attr "arch" "*,simd")] + {@ [ cons: =0 , 1 ; attrs: type , arch ] + [ r , r ; alu_sreg , * ] neg\t%0, %1 + [ w , w ; neon_neg , simd ] neg\t%0, %1 + } ) ;; zero_extend version of above @@ -3931,35 +3916,37 @@ (define_insn "*divsi3_uxtw" (define_insn "cmp" [(set (reg:CC CC_REGNUM) - (compare:CC (match_operand:GPI 0 "register_operand" "rk,rk,rk") - (match_operand:GPI 1 "aarch64_plus_operand" "r,I,J")))] + (compare:CC (match_operand:GPI 0 "register_operand") + (match_operand:GPI 1 "aarch64_plus_operand")))] "" - "@ - cmp\\t%0, %1 - cmp\\t%0, %1 - cmn\\t%0, #%n1" - [(set_attr "type" "alus_sreg,alus_imm,alus_imm")] + {@ [ cons: 0 , 1 ; attrs: type ] + [ rk , r ; alus_sreg ] cmp\t%0, %1 + [ rk , I ; alus_imm ] cmp\t%0, %1 + [ rk , J ; alus_imm ] cmn\t%0, #%n1 + } ) (define_insn "fcmp" [(set (reg:CCFP CC_REGNUM) - (compare:CCFP (match_operand:GPF 0 "register_operand" "w,w") - (match_operand:GPF 1 "aarch64_fp_compare_operand" "Y,w")))] + (compare:CCFP (match_operand:GPF 0 "register_operand") + (match_operand:GPF 1 "aarch64_fp_compare_operand")))] "TARGET_FLOAT" - "@ - fcmp\\t%0, #0.0 - fcmp\\t%0, %1" + {@ [ cons: 0 , 1 ] + [ w , Y ] fcmp\t%0, #0.0 + [ w , w ] fcmp\t%0, %1 + } [(set_attr "type" "fcmp")] ) (define_insn "fcmpe" [(set (reg:CCFPE CC_REGNUM) - (compare:CCFPE (match_operand:GPF 0 "register_operand" "w,w") - (match_operand:GPF 1 "aarch64_fp_compare_operand" "Y,w")))] + (compare:CCFPE (match_operand:GPF 0 "register_operand") + (match_operand:GPF 1 "aarch64_fp_compare_operand")))] "TARGET_FLOAT" - "@ - fcmpe\\t%0, #0.0 - fcmpe\\t%0, %1" + {@ [ cons: 0 , 1 ] + [ w , Y ] fcmpe\t%0, #0.0 + [ w , w ] fcmpe\t%0, %1 + } [(set_attr "type" "fcmp")] ) @@ -4146,47 +4133,47 @@ (define_expand "cmov6" ) (define_insn "*cmov_insn" - [(set (match_operand:ALLI 0 "register_operand" "=r,r,r,r,r,r,r") + [(set (match_operand:ALLI 0 "register_operand") (if_then_else:ALLI (match_operator 1 "aarch64_comparison_operator" - [(match_operand 2 "cc_register" "") (const_int 0)]) - (match_operand:ALLI 3 "aarch64_reg_zero_or_m1_or_1" "rZ,rZ,UsM,rZ,Ui1,UsM,Ui1") - (match_operand:ALLI 4 "aarch64_reg_zero_or_m1_or_1" "rZ,UsM,rZ,Ui1,rZ,UsM,Ui1")))] + [(match_operand 2 "cc_register") (const_int 0)]) + (match_operand:ALLI 3 "aarch64_reg_zero_or_m1_or_1") + (match_operand:ALLI 4 "aarch64_reg_zero_or_m1_or_1")))] "!((operands[3] == const1_rtx && operands[4] == constm1_rtx) || 
(operands[3] == constm1_rtx && operands[4] == const1_rtx))" ;; Final two alternatives should be unreachable, but included for completeness - "@ - csel\\t%0, %3, %4, %m1 - csinv\\t%0, %3, zr, %m1 - csinv\\t%0, %4, zr, %M1 - csinc\\t%0, %3, zr, %m1 - csinc\\t%0, %4, zr, %M1 - mov\\t%0, -1 - mov\\t%0, 1" - [(set_attr "type" "csel, csel, csel, csel, csel, mov_imm, mov_imm")] + {@ [ cons: =0 , 3 , 4 ; attrs: type ] + [ r , rZ , rZ ; csel ] csel\t%0, %3, %4, %m1 + [ r , rZ , UsM ; csel ] csinv\t%0, %3, zr, %m1 + [ r , UsM , rZ ; csel ] csinv\t%0, %4, zr, %M1 + [ r , rZ , Ui1 ; csel ] csinc\t%0, %3, zr, %m1 + [ r , Ui1 , rZ ; csel ] csinc\t%0, %4, zr, %M1 + [ r , UsM , UsM ; mov_imm ] mov\t%0, -1 + [ r , Ui1 , Ui1 ; mov_imm ] mov\t%0, 1 + } ) ;; zero_extend version of above (define_insn "*cmovsi_insn_uxtw" - [(set (match_operand:DI 0 "register_operand" "=r,r,r,r,r,r,r") + [(set (match_operand:DI 0 "register_operand") (zero_extend:DI (if_then_else:SI (match_operator 1 "aarch64_comparison_operator" - [(match_operand 2 "cc_register" "") (const_int 0)]) - (match_operand:SI 3 "aarch64_reg_zero_or_m1_or_1" "rZ,rZ,UsM,rZ,Ui1,UsM,Ui1") - (match_operand:SI 4 "aarch64_reg_zero_or_m1_or_1" "rZ,UsM,rZ,Ui1,rZ,UsM,Ui1"))))] + [(match_operand 2 "cc_register") (const_int 0)]) + (match_operand:SI 3 "aarch64_reg_zero_or_m1_or_1") + (match_operand:SI 4 "aarch64_reg_zero_or_m1_or_1"))))] "!((operands[3] == const1_rtx && operands[4] == constm1_rtx) || (operands[3] == constm1_rtx && operands[4] == const1_rtx))" ;; Final two alternatives should be unreachable, but included for completeness - "@ - csel\\t%w0, %w3, %w4, %m1 - csinv\\t%w0, %w3, wzr, %m1 - csinv\\t%w0, %w4, wzr, %M1 - csinc\\t%w0, %w3, wzr, %m1 - csinc\\t%w0, %w4, wzr, %M1 - mov\\t%w0, -1 - mov\\t%w0, 1" - [(set_attr "type" "csel, csel, csel, csel, csel, mov_imm, mov_imm")] + {@ [ cons: =0 , 3 , 4 ; attrs: type ] + [ r , rZ , rZ ; csel ] csel\t%w0, %w3, %w4, %m1 + [ r , rZ , UsM ; csel ] csinv\t%w0, %w3, wzr, %m1 + [ r , UsM , rZ ; csel ] csinv\t%w0, %w4, wzr, %M1 + [ r , rZ , Ui1 ; csel ] csinc\t%w0, %w3, wzr, %m1 + [ r , Ui1 , rZ ; csel ] csinc\t%w0, %w4, wzr, %M1 + [ r , UsM , UsM ; mov_imm ] mov\t%w0, -1 + [ r , Ui1 , Ui1 ; mov_imm ] mov\t%w0, 1 + } ) ;; There are two canonical forms for `cmp ? -1 : a`. @@ -4541,60 +4528,59 @@ (define_insn_and_split "*aarch64_and_imm2" ) (define_insn "3" - [(set (match_operand:GPI 0 "register_operand" "=r,rk,w") - (LOGICAL:GPI (match_operand:GPI 1 "register_operand" "%r,r,w") - (match_operand:GPI 2 "aarch64_logical_operand" "r,,w")))] + [(set (match_operand:GPI 0 "register_operand") + (LOGICAL:GPI (match_operand:GPI 1 "register_operand") + (match_operand:GPI 2 "aarch64_logical_operand")))] "" - "@ - \\t%0, %1, %2 - \\t%0, %1, %2 - \\t%0., %1., %2." - [(set_attr "type" "logic_reg,logic_imm,neon_logic") - (set_attr "arch" "*,*,simd")] + {@ [ cons: =0 , 1 , 2 ; attrs: type , arch ] + [ r , %r , r ; logic_reg , * ] \t%0, %1, %2 + [ rk , r , ; logic_imm , * ] \t%0, %1, %2 + [ w , w , w ; neon_logic , simd ] \t%0., %1., %2. 
+ } ) ;; zero_extend version of above (define_insn "*si3_uxtw" - [(set (match_operand:DI 0 "register_operand" "=r,rk") + [(set (match_operand:DI 0 "register_operand") (zero_extend:DI - (LOGICAL:SI (match_operand:SI 1 "register_operand" "%r,r") - (match_operand:SI 2 "aarch64_logical_operand" "r,K"))))] + (LOGICAL:SI (match_operand:SI 1 "register_operand") + (match_operand:SI 2 "aarch64_logical_operand"))))] "" - "@ - \\t%w0, %w1, %w2 - \\t%w0, %w1, %2" - [(set_attr "type" "logic_reg,logic_imm")] + {@ [ cons: =0 , 1 , 2 ; attrs: type ] + [ r , %r , r ; logic_reg ] \t%w0, %w1, %w2 + [ rk , r , K ; logic_imm ] \t%w0, %w1, %2 + } ) (define_insn "*and3_compare0" [(set (reg:CC_NZV CC_REGNUM) (compare:CC_NZV - (and:GPI (match_operand:GPI 1 "register_operand" "%r,r") - (match_operand:GPI 2 "aarch64_logical_operand" "r,")) + (and:GPI (match_operand:GPI 1 "register_operand") + (match_operand:GPI 2 "aarch64_logical_operand")) (const_int 0))) - (set (match_operand:GPI 0 "register_operand" "=r,r") + (set (match_operand:GPI 0 "register_operand") (and:GPI (match_dup 1) (match_dup 2)))] "" - "@ - ands\\t%0, %1, %2 - ands\\t%0, %1, %2" - [(set_attr "type" "logics_reg,logics_imm")] + {@ [ cons: =0 , 1 , 2 ; attrs: type ] + [ r , %r , r ; logics_reg ] ands\t%0, %1, %2 + [ r , r , ; logics_imm ] ands\t%0, %1, %2 + } ) ;; zero_extend version of above (define_insn "*andsi3_compare0_uxtw" [(set (reg:CC_NZV CC_REGNUM) (compare:CC_NZV - (and:SI (match_operand:SI 1 "register_operand" "%r,r") - (match_operand:SI 2 "aarch64_logical_operand" "r,K")) + (and:SI (match_operand:SI 1 "register_operand") + (match_operand:SI 2 "aarch64_logical_operand")) (const_int 0))) - (set (match_operand:DI 0 "register_operand" "=r,r") + (set (match_operand:DI 0 "register_operand") (zero_extend:DI (and:SI (match_dup 1) (match_dup 2))))] "" - "@ - ands\\t%w0, %w1, %w2 - ands\\t%w0, %w1, %2" - [(set_attr "type" "logics_reg,logics_imm")] + {@ [ cons: =0 , 1 , 2 ; attrs: type ] + [ r , %r , r ; logics_reg ] ands\t%w0, %w1, %w2 + [ r , r , K ; logics_imm ] ands\t%w0, %w1, %2 + } ) (define_insn "*and_3_compare0" @@ -4759,14 +4745,13 @@ (define_insn "*_si3_uxtw" ) (define_insn "one_cmpl2" - [(set (match_operand:GPI 0 "register_operand" "=r,w") - (not:GPI (match_operand:GPI 1 "register_operand" "r,w")))] + [(set (match_operand:GPI 0 "register_operand") + (not:GPI (match_operand:GPI 1 "register_operand")))] "" - "@ - mvn\\t%0, %1 - mvn\\t%0.8b, %1.8b" - [(set_attr "type" "logic_reg,neon_logic") - (set_attr "arch" "*,simd")] + {@ [ cons: =0 , 1 ; attrs: type , arch ] + [ r , r ; logic_reg , * ] mvn\t%0, %1 + [ w , w ; neon_logic , simd ] mvn\t%0.8b, %1.8b + } ) (define_insn "*one_cmpl_zero_extend" @@ -4794,15 +4779,14 @@ (define_insn "*one_cmpl_2" ;; Binary logical operators negating one operand, i.e. (a & !b), (a | !b). (define_insn "*_one_cmpl3" - [(set (match_operand:GPI 0 "register_operand" "=r,w") - (NLOGICAL:GPI (not:GPI (match_operand:GPI 1 "register_operand" "r,w")) - (match_operand:GPI 2 "register_operand" "r,w")))] + [(set (match_operand:GPI 0 "register_operand") + (NLOGICAL:GPI (not:GPI (match_operand:GPI 1 "register_operand")) + (match_operand:GPI 2 "register_operand")))] "" - "@ - \\t%0, %2, %1 - \\t%0., %2., %1." - [(set_attr "type" "logic_reg,neon_logic") - (set_attr "arch" "*,simd")] + {@ [ cons: =0 , 1 , 2 ; attrs: type , arch ] + [ r , r , r ; logic_reg , * ] \t%0, %2, %1 + [ w , w , w ; neon_logic , simd ] \t%0., %2., %1. 
+ } ) (define_insn "*_one_cmplsidi3_ze" @@ -5141,14 +5125,14 @@ (define_insn "*ands_compare0" (define_insn "*and3nr_compare0" [(set (reg:CC_NZV CC_REGNUM) (compare:CC_NZV - (and:GPI (match_operand:GPI 0 "register_operand" "%r,r") - (match_operand:GPI 1 "aarch64_logical_operand" "r,")) + (and:GPI (match_operand:GPI 0 "register_operand") + (match_operand:GPI 1 "aarch64_logical_operand")) (const_int 0)))] "" - "@ - tst\\t%0, %1 - tst\\t%0, %1" - [(set_attr "type" "logics_reg,logics_imm")] + {@ [ cons: 0 , 1 ; attrs: type ] + [ %r , r ; logics_reg ] tst\t%0, %1 + [ r , ; logics_imm ] tst\t%0, %1 + } ) (define_split @@ -5431,36 +5415,33 @@ (define_insn_and_split "*aarch64__reg_minus3" ;; Logical left shift using SISD or Integer instruction (define_insn "*aarch64_ashl_sisd_or_int_3" - [(set (match_operand:GPI 0 "register_operand" "=r,r,w,w") + [(set (match_operand:GPI 0 "register_operand") (ashift:GPI - (match_operand:GPI 1 "register_operand" "r,r,w,w") - (match_operand:QI 2 "aarch64_reg_or_shift_imm_" "Us,r,Us,w")))] + (match_operand:GPI 1 "register_operand") + (match_operand:QI 2 "aarch64_reg_or_shift_imm_")))] "" - "@ - lsl\t%0, %1, %2 - lsl\t%0, %1, %2 - shl\t%0, %1, %2 - ushl\t%0, %1, %2" - [(set_attr "type" "bfx,shift_reg,neon_shift_imm, neon_shift_reg") - (set_attr "arch" "*,*,simd,simd")] + {@ [ cons: =0 , 1 , 2 ; attrs: type , arch ] + [ r , r , Us ; bfx , * ] lsl\t%0, %1, %2 + [ r , r , r ; shift_reg , * ] lsl\t%0, %1, %2 + [ w , w , Us ; neon_shift_imm , simd ] shl\t%0, %1, %2 + [ w , w , w ; neon_shift_reg , simd ] ushl\t%0, %1, %2 + } ) ;; Logical right shift using SISD or Integer instruction (define_insn "*aarch64_lshr_sisd_or_int_3" - [(set (match_operand:GPI 0 "register_operand" "=r,r,w,&w,&w") + [(set (match_operand:GPI 0 "register_operand") (lshiftrt:GPI - (match_operand:GPI 1 "register_operand" "r,r,w,w,w") - (match_operand:QI 2 "aarch64_reg_or_shift_imm_" - "Us,r,Us,w,0")))] - "" - "@ - lsr\t%0, %1, %2 - lsr\t%0, %1, %2 - ushr\t%0, %1, %2 - # - #" - [(set_attr "type" "bfx,shift_reg,neon_shift_imm,neon_shift_reg,neon_shift_reg") - (set_attr "arch" "*,*,simd,simd,simd")] + (match_operand:GPI 1 "register_operand") + (match_operand:QI 2 "aarch64_reg_or_shift_imm_")))] + "" + {@ [ cons: =0 , 1 , 2 ; attrs: type , arch ] + [ r , r , Us ; bfx , * ] lsr\t%0, %1, %2 + [ r , r , r ; shift_reg , * ] lsr\t%0, %1, %2 + [ w , w , Us ; neon_shift_imm , simd ] ushr\t%0, %1, %2 + [ &w , w , w ; neon_shift_reg , simd ] # + [ &w , w , 0 ; neon_shift_reg , simd ] # + } ) (define_split @@ -5495,20 +5476,18 @@ (define_split ;; Arithmetic right shift using SISD or Integer instruction (define_insn "*aarch64_ashr_sisd_or_int_3" - [(set (match_operand:GPI 0 "register_operand" "=r,r,w,&w,&w") + [(set (match_operand:GPI 0 "register_operand") (ashiftrt:GPI - (match_operand:GPI 1 "register_operand" "r,r,w,w,w") - (match_operand:QI 2 "aarch64_reg_or_shift_imm_di" - "Us,r,Us,w,0")))] - "" - "@ - asr\t%0, %1, %2 - asr\t%0, %1, %2 - sshr\t%0, %1, %2 - # - #" - [(set_attr "type" "bfx,shift_reg,neon_shift_imm,neon_shift_reg,neon_shift_reg") - (set_attr "arch" "*,*,simd,simd,simd")] + (match_operand:GPI 1 "register_operand") + (match_operand:QI 2 "aarch64_reg_or_shift_imm_di")))] + "" + {@ [ cons: =0 , 1 , 2 ; attrs: type , arch ] + [ r , r , Us ; bfx , * ] asr\t%0, %1, %2 + [ r , r , r ; shift_reg , * ] asr\t%0, %1, %2 + [ w , w , Us ; neon_shift_imm , simd ] sshr\t%0, %1, %2 + [ &w , w , w ; neon_shift_reg , simd ] # + [ &w , w , 0 ; neon_shift_reg , simd ] # + } ) (define_split @@ -5592,15 +5571,15 @@ 
(define_insn "*aarch64_sisd_neg_qi" ;; Rotate right (define_insn "*ror3_insn" - [(set (match_operand:GPI 0 "register_operand" "=r,r") + [(set (match_operand:GPI 0 "register_operand") (rotatert:GPI - (match_operand:GPI 1 "register_operand" "r,r") - (match_operand:QI 2 "aarch64_reg_or_shift_imm_" "Us,r")))] + (match_operand:GPI 1 "register_operand") + (match_operand:QI 2 "aarch64_reg_or_shift_imm_")))] "" - "@ - ror\\t%0, %1, %2 - ror\\t%0, %1, %2" - [(set_attr "type" "rotate_imm,shift_reg")] + {@ [ cons: =0 , 1 , 2 ; attrs: type ] + [ r , r , Us ; rotate_imm ] ror\t%0, %1, %2 + [ r , r , r ; shift_reg ] ror\t%0, %1, %2 + } ) (define_insn "*rol3_insn" @@ -5617,15 +5596,15 @@ (define_insn "*rol3_insn" ;; zero_extend version of shifts (define_insn "*si3_insn_uxtw" - [(set (match_operand:DI 0 "register_operand" "=r,r") + [(set (match_operand:DI 0 "register_operand") (zero_extend:DI (SHIFT_no_rotate:SI - (match_operand:SI 1 "register_operand" "r,r") - (match_operand:QI 2 "aarch64_reg_or_shift_imm_si" "Uss,r"))))] + (match_operand:SI 1 "register_operand") + (match_operand:QI 2 "aarch64_reg_or_shift_imm_si"))))] "" - "@ - \\t%w0, %w1, %2 - \\t%w0, %w1, %w2" - [(set_attr "type" "bfx,shift_reg")] + {@ [ cons: =0 , 1 , 2 ; attrs: type ] + [ r , r , Uss ; bfx ] \t%w0, %w1, %2 + [ r , r , r ; shift_reg ] \t%w0, %w1, %w2 + } ) ;; zero_extend version of rotate right @@ -6490,14 +6469,13 @@ (define_insn "truncdfhf2" ;; and making r = w more expensive (define_insn "_trunc2" - [(set (match_operand:GPI 0 "register_operand" "=w,?r") - (FIXUORS:GPI (match_operand: 1 "register_operand" "w,w")))] + [(set (match_operand:GPI 0 "register_operand") + (FIXUORS:GPI (match_operand: 1 "register_operand")))] "TARGET_FLOAT" - "@ - fcvtz\t%0, %1 - fcvtz\t%0, %1" - [(set_attr "type" "neon_fp_to_int_s,f_cvtf2i") - (set_attr "arch" "simd,fp")] + {@ [ cons: =0 , 1 ; attrs: type , arch ] + [ w , w ; neon_fp_to_int_s , simd ] fcvtz\t%0, %1 + [ ?r , w ; f_cvtf2i , fp ] fcvtz\t%0, %1 + } ) ;; Convert HF -> SI or DI @@ -6570,14 +6548,13 @@ (define_insn "*aarch64_cvtf2_mult" ;; Equal width integer to fp conversion. (define_insn "2" - [(set (match_operand:GPF 0 "register_operand" "=w,w") - (FLOATUORS:GPF (match_operand: 1 "register_operand" "w,?r")))] + [(set (match_operand:GPF 0 "register_operand") + (FLOATUORS:GPF (match_operand: 1 "register_operand")))] "TARGET_FLOAT" - "@ - cvtf\t%0, %1 - cvtf\t%0, %1" - [(set_attr "type" "neon_int_to_fp_,f_cvti2f") - (set_attr "arch" "simd,fp")] + {@ [ cons: =0 , 1 ; attrs: type , arch ] + [ w , w ; neon_int_to_fp_ , simd ] cvtf\t%0, %1 + [ w , ?r ; f_cvti2f , fp ] cvtf\t%0, %1 + } ) ;; Unequal width integer to fp conversions. 
@@ -6654,29 +6631,27 @@ (define_expand "dihf2" ;; Convert between fixed-point and floating-point (scalar modes) (define_insn "3" - [(set (match_operand: 0 "register_operand" "=r, w") - (unspec: [(match_operand:GPF 1 "register_operand" "w, w") - (match_operand:SI 2 "immediate_operand" "i, i")] + [(set (match_operand: 0 "register_operand") + (unspec: [(match_operand:GPF 1 "register_operand") + (match_operand:SI 2 "immediate_operand")] FCVT_F2FIXED))] "" - "@ - \t%0, %1, #%2 - \t%0, %1, #%2" - [(set_attr "type" "f_cvtf2i, neon_fp_to_int_") - (set_attr "arch" "fp,simd")] + {@ [ cons: =0 , 1 , 2 ; attrs: type , arch ] + [ r , w , i ; f_cvtf2i , fp ] \t%0, %1, #%2 + [ w , w , i ; neon_fp_to_int_ , simd ] \t%0, %1, #%2 + } ) (define_insn "3" - [(set (match_operand: 0 "register_operand" "=w, w") - (unspec: [(match_operand:GPI 1 "register_operand" "r, w") - (match_operand:SI 2 "immediate_operand" "i, i")] + [(set (match_operand: 0 "register_operand") + (unspec: [(match_operand:GPI 1 "register_operand") + (match_operand:SI 2 "immediate_operand")] FCVT_FIXED2F))] "" - "@ - \t%0, %1, #%2 - \t%0, %1, #%2" - [(set_attr "type" "f_cvti2f, neon_int_to_fp_") - (set_attr "arch" "fp,simd")] + {@ [ cons: =0 , 1 , 2 ; attrs: type , arch ] + [ w , r , i ; f_cvti2f , fp ] \t%0, %1, #%2 + [ w , w , i ; neon_int_to_fp_ , simd ] \t%0, %1, #%2 + } ) (define_insn "hf3" @@ -6849,14 +6824,14 @@ (define_expand "3" ) (define_insn "*aarch64_3_cssc" - [(set (match_operand:GPI 0 "register_operand" "=r,r") - (MAXMIN:GPI (match_operand:GPI 1 "register_operand" "r,r") - (match_operand:GPI 2 "aarch64_minmax_operand" "r,Um")))] + [(set (match_operand:GPI 0 "register_operand") + (MAXMIN:GPI (match_operand:GPI 1 "register_operand") + (match_operand:GPI 2 "aarch64_minmax_operand")))] "TARGET_CSSC" - "@ - \\t%0, %1, %2 - \\t%0, %1, %2" - [(set_attr "type" "alu_sreg,alu_imm")] + {@ [ cons: =0 , 1 , 2 ; attrs: type ] + [ r , r , r ; alu_sreg ] \t%0, %1, %2 + [ r , r , Um ; alu_imm ] \t%0, %1, %2 + } ) (define_insn "*aarch64_3_zero" @@ -6949,18 +6924,18 @@ (define_expand "copysign3" ) (define_insn "copysign3_insn" - [(set (match_operand:GPF 0 "register_operand" "=w,w,w,r") - (unspec:GPF [(match_operand:GPF 1 "register_operand" "w,0,w,r") - (match_operand:GPF 2 "register_operand" "w,w,0,0") - (match_operand: 3 "register_operand" "0,w,w,X")] + [(set (match_operand:GPF 0 "register_operand") + (unspec:GPF [(match_operand:GPF 1 "register_operand") + (match_operand:GPF 2 "register_operand") + (match_operand: 3 "register_operand")] UNSPEC_COPYSIGN))] "TARGET_SIMD" - "@ - bsl\\t%0., %2., %1. - bit\\t%0., %2., %3. - bif\\t%0., %1., %3. - bfxil\\t%0, %1, #0, " - [(set_attr "type" "neon_bsl,neon_bsl,neon_bsl,bfm")] + {@ [ cons: =0 , 1 , 2 , 3 ; attrs: type ] + [ w , w , w , 0 ; neon_bsl ] bsl\t%0., %2., %1. + [ w , 0 , w , w ; neon_bsl ] bit\t%0., %2., %3. + [ w , w , 0 , w ; neon_bsl ] bif\t%0., %1., %3. + [ r , r , 0 , X ; bfm ] bfxil\t%0, %1, #0, + } ) -- 2.25.1