From: Tejas Belagod <tejas.belagod@arm.com>
To: gcc-patches@gcc.gnu.org
CC: Richard Sandiford
Subject: Re: [PATCH] [PR96339] AArch64: Optimise svlast[ab]
Date: Thu, 4 May 2023 05:43:55 +0000
References: <20230316113927.4967-1-tejas.belagod@arm.com>
In-Reply-To: <20230316113927.4967-1-tejas.belagod@arm.com>
[Ping]

From: Tejas Belagod
Date: Thursday, March 16, 2023 at 5:09 PM
To: gcc-patches@gcc.gnu.org
Cc: Tejas Belagod, Richard Sandiford
Subject: [PATCH] [PR96339] AArch64: Optimise svlast[ab]

From: Tejas Belagod <tejas.belagod@arm.com>

This PR optimizes an SVE intrinsics sequence in which a scalar is
selected from a variable vector based on a constant predicate, such as

  svlasta (svptrue_pat_b8 (SV_VL1), x)

The sequence is optimized to return the corresponding element of a NEON
vector.  For example,

  svlasta (svptrue_pat_b8 (SV_VL1), x)

returns

  umov	w0, v0.b[1]

Likewise,

  svlastb (svptrue_pat_b8 (SV_VL1), x)

returns

  umov	w0, v0.b[0]

This optimization only works provided the constant predicate maps to a
range that is within the bounds of a 128-bit NEON register.

gcc/ChangeLog:

	PR target/96339
	* config/aarch64/aarch64-sve-builtins-base.cc (svlast_impl::fold): Fold
	sve calls that have a constant input predicate vector.
	(svlast_impl::is_lasta): Query to check if intrinsic is svlasta.
	(svlast_impl::is_lastb): Query to check if intrinsic is svlastb.
	(svlast_impl::vect_all_same): Check if all vector elements are equal.

gcc/testsuite/ChangeLog:

	PR target/96339
	* gcc.target/aarch64/sve/acle/general-c/svlast.c: New.
	* gcc.target/aarch64/sve/acle/general-c/svlast128_run.c: New.
	* gcc.target/aarch64/sve/acle/general-c/svlast256_run.c: New.
	* gcc.target/aarch64/sve/pcs/return_4.c (caller_bf16): Fix asm
	to expect optimized code for function body.
	* gcc.target/aarch64/sve/pcs/return_4_128.c (caller_bf16): Likewise.
	* gcc.target/aarch64/sve/pcs/return_4_256.c (caller_bf16): Likewise.
	* gcc.target/aarch64/sve/pcs/return_4_512.c (caller_bf16): Likewise.
	* gcc.target/aarch64/sve/pcs/return_4_1024.c (caller_bf16): Likewise.
	* gcc.target/aarch64/sve/pcs/return_4_2048.c (caller_bf16): Likewise.
	* gcc.target/aarch64/sve/pcs/return_5.c (caller_bf16): Likewise.
	* gcc.target/aarch64/sve/pcs/return_5_128.c (caller_bf16): Likewise.
	* gcc.target/aarch64/sve/pcs/return_5_256.c (caller_bf16): Likewise.
	* gcc.target/aarch64/sve/pcs/return_5_512.c (caller_bf16): Likewise.
	* gcc.target/aarch64/sve/pcs/return_5_1024.c (caller_bf16): Likewise.
	* gcc.target/aarch64/sve/pcs/return_5_2048.c (caller_bf16): Likewise.
---
 .../aarch64/aarch64-sve-builtins-base.cc      | 124 +++++++
 .../aarch64/sve/acle/general-c/svlast.c       |  63 ++++
 .../sve/acle/general-c/svlast128_run.c        | 313 +++++++++++++++++
 .../sve/acle/general-c/svlast256_run.c        | 314 ++++++++++++++++++
 .../gcc.target/aarch64/sve/pcs/return_4.c     |   2 -
 .../aarch64/sve/pcs/return_4_1024.c           |   2 -
 .../gcc.target/aarch64/sve/pcs/return_4_128.c |   2 -
 .../aarch64/sve/pcs/return_4_2048.c           |   2 -
 .../gcc.target/aarch64/sve/pcs/return_4_256.c |   2 -
 .../gcc.target/aarch64/sve/pcs/return_4_512.c |   2 -
 .../gcc.target/aarch64/sve/pcs/return_5.c     |   2 -
 .../aarch64/sve/pcs/return_5_1024.c           |   2 -
 .../gcc.target/aarch64/sve/pcs/return_5_128.c |   2 -
 .../aarch64/sve/pcs/return_5_2048.c           |   2 -
 .../gcc.target/aarch64/sve/pcs/return_5_256.c |   2 -
 .../gcc.target/aarch64/sve/pcs/return_5_512.c |   2 -
 16 files changed, 814 insertions(+), 24 deletions(-)
 create mode 100644 gcc/testsuite/gcc.target/aarch64/sve/acle/general-c/svlast.c
 create mode 100644 gcc/testsuite/gcc.target/aarch64/sve/acle/general-c/svlast128_run.c
 create mode 100644 gcc/testsuite/gcc.target/aarch64/sve/acle/general-c/svlast256_run.c

diff --git a/gcc/config/aarch64/aarch64-sve-builtins-base.cc b/gcc/config/aarch64/aarch64-sve-builtins-base.cc
index cd9cace3c9b..db2b4dcaac9 100644
--- a/gcc/config/aarch64/aarch64-sve-builtins-base.cc
+++ b/gcc/config/aarch64/aarch64-sve-builtins-base.cc
@@ -1056,6 +1056,130 @@ class svlast_impl : public quiet<function_base>
 public:
   CONSTEXPR svlast_impl (int unspec) : m_unspec (unspec) {}
 
+  bool is_lasta () const { return m_unspec == UNSPEC_LASTA; }
+  bool is_lastb () const { return m_unspec == UNSPEC_LASTB; }
+
+  bool vect_all_same (tree v, int step) const
+  {
+    int i;
+    int nelts = vector_cst_encoded_nelts (v);
+    int first_el = 0;
+
+    for (i = first_el; i < nelts; i += step)
+      if (VECTOR_CST_ENCODED_ELT (v, i) != VECTOR_CST_ENCODED_ELT (v, first_el))
+	return false;
+
+    return true;
+  }
+
+  /* Fold a svlast{a/b} call with constant predicate to a BIT_FIELD_REF.
+     BIT_FIELD_REF lowers to a NEON element extract, so we have to make sure
+     the index of the element being accessed is in the range of a NEON vector
+     width.  */
+  gimple *fold (gimple_folder &f) const override
+  {
+    tree pred = gimple_call_arg (f.call, 0);
+    tree val = gimple_call_arg (f.call, 1);
+
+    if (TREE_CODE (pred) == VECTOR_CST)
+      {
+	HOST_WIDE_INT pos;
+	unsigned int const_vg;
+	int i = 0;
+	int step = f.type_suffix (0).element_bytes;
+	int step_1 = gcd (step, VECTOR_CST_NPATTERNS (pred));
+	int npats = VECTOR_CST_NPATTERNS (pred);
+	unsigned HOST_WIDE_INT nelts = vector_cst_encoded_nelts (pred);
+	tree b = NULL_TREE;
+	bool const_vl = aarch64_sve_vg.is_constant (&const_vg);
+
+	/* We can optimize 2 cases common to variable and fixed-length cases
+	   without a linear search of the predicate vector:
+	   1. LASTA if predicate is all true, return element 0.
+	   2. LASTA if predicate all false, return element 0.  */
+	if (is_lasta () && vect_all_same (pred, step_1))
+	  {
+	    b = build3 (BIT_FIELD_REF, TREE_TYPE (f.lhs), val,
+			bitsize_int (step * BITS_PER_UNIT), bitsize_int (0));
+	    return gimple_build_assign (f.lhs, b);
+	  }
+
+	/* Handle the all-false case for LASTB where SVE VL == 128b -
+	   return the highest numbered element.  */
+	if (is_lastb () && known_eq (BYTES_PER_SVE_VECTOR, 16)
+	    && vect_all_same (pred, step_1)
+	    && integer_zerop (VECTOR_CST_ENCODED_ELT (pred, 0)))
+	  {
+	    b = build3 (BIT_FIELD_REF, TREE_TYPE (f.lhs), val,
+			bitsize_int (step * BITS_PER_UNIT),
+			bitsize_int ((16 - step) * BITS_PER_UNIT));
+
+	    return gimple_build_assign (f.lhs, b);
+	  }
+
+	/* If VECTOR_CST_NELTS_PER_PATTERN (pred) == 2 and every multiple of
+	   'step_1' in
+	   [VECTOR_CST_NPATTERNS .. VECTOR_CST_ENCODED_NELTS - 1]
+	   is zero, then we can treat the vector as VECTOR_CST_NPATTERNS
+	   elements followed by all inactive elements.  */
+	if (!const_vl && VECTOR_CST_NELTS_PER_PATTERN (pred) == 2)
+	  for (i = npats; i < nelts; i += step_1)
+	    {
+	      /* If there are active elements in the repeated pattern of
+		 a variable-length vector, then return NULL as there is no way
+		 to be sure statically if this falls within the NEON range.  */
+	      if (!integer_zerop (VECTOR_CST_ENCODED_ELT (pred, i)))
+		return NULL;
+	    }
+
+	/* If we're here, it means either:
+	   1. The vector is variable-length and there's no active element in the
+	      repeated part of the pattern, or
+	   2. The vector is fixed-length.
+	   Fall-through to a linear search.  */
+
+	/* Restrict the scope of search to NPATS if vector is
+	   variable-length.  */
+	if (!VECTOR_CST_NELTS (pred).is_constant (&nelts))
+	  nelts = npats;
+
+	/* Fall through to finding the last active element linearly for
+	   all cases where the last active element is known to be
+	   within a statically-determinable range.  */
+	i = MAX ((int) nelts - step, 0);
+	for (; i >= 0; i -= step)
+	  if (!integer_zerop (VECTOR_CST_ELT (pred, i)))
+	    break;
+
+	if (is_lastb ())
+	  {
+	    /* For LASTB, the element is the last active element.  */
+	    pos = i;
+	  }
+	else
+	  {
+	    /* For LASTA, the element is one after last active element.  */
+	    pos = i + step;
+
+	    /* If last active element is
+	       last element, wrap-around and return first NEON element.  */
+	    if (known_ge (pos, BYTES_PER_SVE_VECTOR))
+	      pos = 0;
+	  }
+
+	/* Out of NEON range.  */
+	if (pos < 0 || pos > 15)
+	  return NULL;
+
+	b = build3 (BIT_FIELD_REF, TREE_TYPE (f.lhs), val,
+		    bitsize_int (step * BITS_PER_UNIT),
+		    bitsize_int (pos * BITS_PER_UNIT));
+
+	return gimple_build_assign (f.lhs, b);
+      }
+    return NULL;
+  }
+
   rtx expand (function_expander &e) const override
   {
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/general-c/svlast.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/general-c/svlast.c
new file mode 100644
index 00000000000..fdbe5e309af
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/general-c/svlast.c
@@ -0,0 +1,63 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -msve-vector-bits=256" } */
+
+#include <stdint.h>
+#include "arm_sve.h"
+
+#define NAME(name, size, pat, sign, ab) \
+  name ## _ ## size ## _ ## pat ## _ ## sign ## _ ## ab
+
+#define NAMEF(name, size, pat, sign, ab) \
+  name ## _ ## size ## _ ## pat ## _ ## sign ## _ ## ab ## _false
+
+#define SVTYPE(size, sign) \
+  sv ## sign ## int ## size ## _t
+
+#define STYPE(size, sign) sign ## int ## size ## _t
+
+#define SVELAST_DEF(size, pat, sign, ab, su) \
+  STYPE (size, sign) __attribute__((noinline)) \
+  NAME (foo, size, pat, sign, ab) (SVTYPE (size, sign) x) \
+  { \
+    return svlast ## ab (svptrue_pat_b ## size (pat), x); \
+  } \
+  STYPE (size, sign) __attribute__((noinline)) \
+  NAMEF (foo, size, pat, sign, ab) (SVTYPE (size, sign) x) \
+  { \
+    return svlast ## ab (svpfalse (), x); \
+  }
+
+#define ALL_PATS(SIZE, SIGN, AB, SU) \
+  SVELAST_DEF (SIZE, SV_VL1, SIGN, AB, SU) \
+  SVELAST_DEF (SIZE, SV_VL2, SIGN, AB, SU) \
+  SVELAST_DEF (SIZE, SV_VL3, SIGN, AB, SU) \
+  SVELAST_DEF (SIZE, SV_VL4, SIGN, AB, SU) \
+  SVELAST_DEF (SIZE, SV_VL5, SIGN, AB, SU) \
+  SVELAST_DEF (SIZE, SV_VL6, SIGN, AB, SU) \
+  SVELAST_DEF (SIZE, SV_VL7, SIGN, AB, SU) \
+  SVELAST_DEF (SIZE, SV_VL8, SIGN, AB, SU) \
+  SVELAST_DEF (SIZE, SV_VL16, SIGN, AB, SU)
+
+#define ALL_SIGN(SIZE, AB) \
+  ALL_PATS (SIZE, , AB, s) \
+  ALL_PATS (SIZE, u, AB, u)
+
+#define ALL_SIZE(AB) \
+  ALL_SIGN (8, AB) \
+  ALL_SIGN (16, AB) \
+  ALL_SIGN (32, AB) \
+  ALL_SIGN (64, AB)
+
+#define ALL_POS() \
+  ALL_SIZE (a) \
+  ALL_SIZE (b)
+
+
+ALL_POS()
+
+/* { dg-final { scan-assembler-times {\tumov\tw[0-9]+, v[0-9]+\.b} 52 } } */
+/* { dg-final { scan-assembler-times {\tumov\tw[0-9]+, v[0-9]+\.h} 50 } } */
+/* { dg-final { scan-assembler-times {\tumov\tw[0-9]+, v[0-9]+\.s} 12 } } */
+/* { dg-final { scan-assembler-times {\tfmov\tw0, s0} 24 } } */
+/* { dg-final { scan-assembler-times {\tumov\tx[0-9]+, v[0-9]+\.d} 4 } } */
+/* { dg-final { scan-assembler-times {\tfmov\tx0, d0} 32 } } */
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/general-c/svlast128_run.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/general-c/svlast128_run.c
new file mode 100644
index 00000000000..5e1e9303d7b
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/general-c/svlast128_run.c
@@ -0,0 +1,313 @@
+/* { dg-do run { target aarch64_sve128_hw } } */
+/* { dg-options "-O3 -msve-vector-bits=128 -std=gnu99" } */
+
+#include "svlast.c"
+
+int
+main (void)
+{
+  int8_t res_8_SV_VL1__a = 1;
+  int8_t res_8_SV_VL2__a = 2;
+  int8_t res_8_SV_VL3__a = 3;
+  int8_t res_8_SV_VL4__a = 4;
+  int8_t res_8_SV_VL5__a = 5;
+  int8_t res_8_SV_VL6__a = 6;
+  int8_t res_8_SV_VL7__a = 7;
+  int8_t res_8_SV_VL8__a = 8;
+  int8_t res_8_SV_VL16__a = 0;
+  uint8_t res_8_SV_VL1_u_a = 1;
+  uint8_t res_8_SV_VL2_u_a = 2;
+  uint8_t res_8_SV_VL3_u_a = 3;
+  uint8_t res_8_SV_VL4_u_a = 4;
+  uint8_t res_8_SV_VL5_u_a = 5;
+  uint8_t res_8_SV_VL6_u_a = 6;
+  uint8_t res_8_SV_VL7_u_a = 7;
+  uint8_t res_8_SV_VL8_u_a = 8;
+  uint8_t res_8_SV_VL16_u_a = 0;
+  int16_t res_16_SV_VL1__a = 1;
+  int16_t res_16_SV_VL2__a = 2;
+  int16_t res_16_SV_VL3__a = 3;
+  int16_t res_16_SV_VL4__a = 4;
+  int16_t res_16_SV_VL5__a = 5;
+  int16_t res_16_SV_VL6__a = 6;
+  int16_t res_16_SV_VL7__a = 7;
+  int16_t res_16_SV_VL8__a = 0;
+  int16_t res_16_SV_VL16__a = 0;
+  uint16_t res_16_SV_VL1_u_a = 1;
+  uint16_t res_16_SV_VL2_u_a = 2;
+  uint16_t res_16_SV_VL3_u_a = 3;
+  uint16_t res_16_SV_VL4_u_a = 4;
+  uint16_t res_16_SV_VL5_u_a = 5;
+  uint16_t res_16_SV_VL6_u_a = 6;
+  uint16_t res_16_SV_VL7_u_a = 7;
+  uint16_t res_16_SV_VL8_u_a = 0;
+  uint16_t res_16_SV_VL16_u_a = 0;
+  int32_t res_32_SV_VL1__a = 1;
+  int32_t res_32_SV_VL2__a = 2;
+  int32_t res_32_SV_VL3__a = 3;
+  int32_t res_32_SV_VL4__a = 0;
+  int32_t res_32_SV_VL5__a = 0;
+  int32_t res_32_SV_VL6__a = 0;
+  int32_t res_32_SV_VL7__a = 0;
+  int32_t res_32_SV_VL8__a = 0;
+  int32_t res_32_SV_VL16__a = 0;
+  uint32_t res_32_SV_VL1_u_a = 1;
+  uint32_t res_32_SV_VL2_u_a = 2;
+  uint32_t res_32_SV_VL3_u_a = 3;
+  uint32_t res_32_SV_VL4_u_a = 0;
+  uint32_t res_32_SV_VL5_u_a = 0;
+  uint32_t res_32_SV_VL6_u_a = 0;
+  uint32_t res_32_SV_VL7_u_a = 0;
+  uint32_t res_32_SV_VL8_u_a = 0;
+  uint32_t res_32_SV_VL16_u_a = 0;
+  int64_t res_64_SV_VL1__a = 1;
+  int64_t res_64_SV_VL2__a = 0;
+  int64_t res_64_SV_VL3__a = 0;
+  int64_t res_64_SV_VL4__a = 0;
+  int64_t res_64_SV_VL5__a = 0;
+  int64_t res_64_SV_VL6__a = 0;
+  int64_t res_64_SV_VL7__a = 0;
+  int64_t res_64_SV_VL8__a = 0;
+  int64_t res_64_SV_VL16__a = 0;
+  uint64_t res_64_SV_VL1_u_a = 1;
+  uint64_t res_64_SV_VL2_u_a = 0;
+  uint64_t res_64_SV_VL3_u_a = 0;
+  uint64_t res_64_SV_VL4_u_a = 0;
+  uint64_t res_64_SV_VL5_u_a = 0;
+  uint64_t res_64_SV_VL6_u_a = 0;
+  uint64_t res_64_SV_VL7_u_a = 0;
+  uint64_t res_64_SV_VL8_u_a = 0;
+  uint64_t res_64_SV_VL16_u_a = 0;
+  int8_t res_8_SV_VL1__b = 0;
+  int8_t res_8_SV_VL2__b = 1;
+  int8_t res_8_SV_VL3__b = 2;
+  int8_t res_8_SV_VL4__b = 3;
+  int8_t res_8_SV_VL5__b = 4;
+  int8_t res_8_SV_VL6__b = 5;
+  int8_t res_8_SV_VL7__b = 6;
+  int8_t res_8_SV_VL8__b = 7;
+  int8_t res_8_SV_VL16__b = 15;
+  uint8_t res_8_SV_VL1_u_b = 0;
+  uint8_t res_8_SV_VL2_u_b = 1;
+  uint8_t res_8_SV_VL3_u_b = 2;
+  uint8_t res_8_SV_VL4_u_b = 3;
+  uint8_t res_8_SV_VL5_u_b = 4;
+  uint8_t res_8_SV_VL6_u_b = 5;
+  uint8_t res_8_SV_VL7_u_b = 6;
+  uint8_t res_8_SV_VL8_u_b = 7;
+  uint8_t res_8_SV_VL16_u_b = 15;
+  int16_t res_16_SV_VL1__b = 0;
+  int16_t res_16_SV_VL2__b = 1;
+  int16_t res_16_SV_VL3__b = 2;
+  int16_t res_16_SV_VL4__b = 3;
+  int16_t res_16_SV_VL5__b = 4;
+  int16_t res_16_SV_VL6__b = 5;
+  int16_t res_16_SV_VL7__b = 6;
+  int16_t res_16_SV_VL8__b = 7;
+  int16_t res_16_SV_VL16__b = 7;
+  uint16_t res_16_SV_VL1_u_b = 0;
+  uint16_t res_16_SV_VL2_u_b = 1;
+  uint16_t res_16_SV_VL3_u_b = 2;
+  uint16_t res_16_SV_VL4_u_b = 3;
+  uint16_t res_16_SV_VL5_u_b = 4;
+  uint16_t res_16_SV_VL6_u_b = 5;
+  uint16_t res_16_SV_VL7_u_b = 6;
+  uint16_t res_16_SV_VL8_u_b = 7;
+  uint16_t res_16_SV_VL16_u_b = 7;
+  int32_t res_32_SV_VL1__b = 0;
+  int32_t res_32_SV_VL2__b = 1;
+  int32_t res_32_SV_VL3__b = 2;
+  int32_t res_32_SV_VL4__b = 3;
+  int32_t res_32_SV_VL5__b = 3;
+  int32_t res_32_SV_VL6__b = 3;
+  int32_t res_32_SV_VL7__b = 3;
+  int32_t res_32_SV_VL8__b = 3;
+  int32_t res_32_SV_VL16__b = 3;
+  uint32_t res_32_SV_VL1_u_b = 0;
+  uint32_t res_32_SV_VL2_u_b = 1;
+  uint32_t res_32_SV_VL3_u_b = 2;
+  uint32_t res_32_SV_VL4_u_b = 3;
+  uint32_t res_32_SV_VL5_u_b = 3;
+  uint32_t res_32_SV_VL6_u_b = 3;
+  uint32_t res_32_SV_VL7_u_b = 3;
+  uint32_t res_32_SV_VL8_u_b = 3;
+  uint32_t res_32_SV_VL16_u_b = 3;
+  int64_t res_64_SV_VL1__b = 0;
+  int64_t res_64_SV_VL2__b = 1;
+  int64_t res_64_SV_VL3__b = 1;
+  int64_t res_64_SV_VL4__b = 1;
+  int64_t res_64_SV_VL5__b = 1;
+  int64_t res_64_SV_VL6__b = 1;
+  int64_t res_64_SV_VL7__b = 1;
+  int64_t res_64_SV_VL8__b = 1;
+  int64_t res_64_SV_VL16__b = 1;
+  uint64_t res_64_SV_VL1_u_b = 0;
+  uint64_t res_64_SV_VL2_u_b = 1;
+  uint64_t res_64_SV_VL3_u_b = 1;
+  uint64_t res_64_SV_VL4_u_b = 1;
+  uint64_t res_64_SV_VL5_u_b = 1;
+  uint64_t res_64_SV_VL6_u_b = 1;
+  uint64_t res_64_SV_VL7_u_b = 1;
+  uint64_t res_64_SV_VL8_u_b = 1;
+  uint64_t res_64_SV_VL16_u_b = 1;
+
+  int8_t res_8_SV_VL1__a_false = 0;
+  int8_t res_8_SV_VL2__a_false = 0;
+  int8_t res_8_SV_VL3__a_false = 0;
+  int8_t res_8_SV_VL4__a_false = 0;
+  int8_t res_8_SV_VL5__a_false = 0;
+  int8_t res_8_SV_VL6__a_false = 0;
+  int8_t res_8_SV_VL7__a_false = 0;
+  int8_t res_8_SV_VL8__a_false = 0;
+  int8_t res_8_SV_VL16__a_false = 0;
+  uint8_t res_8_SV_VL1_u_a_false = 0;
+  uint8_t res_8_SV_VL2_u_a_false = 0;
+  uint8_t res_8_SV_VL3_u_a_false = 0;
+  uint8_t res_8_SV_VL4_u_a_false = 0;
+  uint8_t res_8_SV_VL5_u_a_false = 0;
+  uint8_t res_8_SV_VL6_u_a_false = 0;
+  uint8_t res_8_SV_VL7_u_a_false = 0;
+  uint8_t res_8_SV_VL8_u_a_false = 0;
+  uint8_t res_8_SV_VL16_u_a_false = 0;
+  int16_t res_16_SV_VL1__a_false = 0;
+  int16_t res_16_SV_VL2__a_false = 0;
+  int16_t res_16_SV_VL3__a_false = 0;
+  int16_t res_16_SV_VL4__a_false = 0;
+  int16_t res_16_SV_VL5__a_false = 0;
+  int16_t res_16_SV_VL6__a_false = 0;
+  int16_t res_16_SV_VL7__a_false = 0;
+  int16_t res_16_SV_VL8__a_false = 0;
+  int16_t res_16_SV_VL16__a_false = 0;
+  uint16_t res_16_SV_VL1_u_a_false = 0;
+  uint16_t res_16_SV_VL2_u_a_false = 0;
+  uint16_t res_16_SV_VL3_u_a_false = 0;
+  uint16_t res_16_SV_VL4_u_a_false = 0;
+  uint16_t res_16_SV_VL5_u_a_false = 0;
+  uint16_t res_16_SV_VL6_u_a_false = 0;
+  uint16_t res_16_SV_VL7_u_a_false = 0;
+  uint16_t res_16_SV_VL8_u_a_false = 0;
+  uint16_t res_16_SV_VL16_u_a_false = 0;
+  int32_t res_32_SV_VL1__a_false = 0;
+  int32_t res_32_SV_VL2__a_false = 0;
+  int32_t res_32_SV_VL3__a_false = 0;
+  int32_t res_32_SV_VL4__a_false = 0;
+  int32_t res_32_SV_VL5__a_false = 0;
+  int32_t res_32_SV_VL6__a_false = 0;
+  int32_t res_32_SV_VL7__a_false = 0;
+  int32_t res_32_SV_VL8__a_false = 0;
+  int32_t res_32_SV_VL16__a_false = 0;
+  uint32_t res_32_SV_VL1_u_a_false = 0;
+  uint32_t res_32_SV_VL2_u_a_false = 0;
+  uint32_t res_32_SV_VL3_u_a_false = 0;
+  uint32_t res_32_SV_VL4_u_a_false = 0;
+  uint32_t res_32_SV_VL5_u_a_false = 0;
+  uint32_t res_32_SV_VL6_u_a_false = 0;
+  uint32_t res_32_SV_VL7_u_a_false = 0;
+  uint32_t res_32_SV_VL8_u_a_false = 0;
+  uint32_t res_32_SV_VL16_u_a_false = 0;
+  int64_t res_64_SV_VL1__a_false = 0;
+  int64_t res_64_SV_VL2__a_false = 0;
+  int64_t res_64_SV_VL3__a_false = 0;
+  int64_t res_64_SV_VL4__a_false = 0;
+  int64_t res_64_SV_VL5__a_false = 0;
+  int64_t res_64_SV_VL6__a_false = 0;
+  int64_t res_64_SV_VL7__a_false = 0;
+  int64_t res_64_SV_VL8__a_false = 0;
+  int64_t res_64_SV_VL16__a_false = 0;
+  uint64_t res_64_SV_VL1_u_a_false = 0;
+  uint64_t res_64_SV_VL2_u_a_false = 0;
+  uint64_t res_64_SV_VL3_u_a_false = 0;
+  uint64_t res_64_SV_VL4_u_a_false = 0;
+  uint64_t res_64_SV_VL5_u_a_false = 0;
+  uint64_t res_64_SV_VL6_u_a_false = 0;
+  uint64_t res_64_SV_VL7_u_a_false = 0;
+  uint64_t res_64_SV_VL8_u_a_false = 0;
+  uint64_t res_64_SV_VL16_u_a_false = 0;
+  int8_t res_8_SV_VL1__b_false = 15;
+  int8_t res_8_SV_VL2__b_false = 15;
+  int8_t res_8_SV_VL3__b_false = 15;
+  int8_t res_8_SV_VL4__b_false = 15;
+  int8_t res_8_SV_VL5__b_false = 15;
+  int8_t res_8_SV_VL6__b_false = 15;
+  int8_t res_8_SV_VL7__b_false = 15;
+  int8_t res_8_SV_VL8__b_false = 15;
+  int8_t res_8_SV_VL16__b_false = 15;
+  uint8_t res_8_SV_VL1_u_b_false = 15;
+  uint8_t res_8_SV_VL2_u_b_false = 15;
+  uint8_t res_8_SV_VL3_u_b_false = 15;
+  uint8_t res_8_SV_VL4_u_b_false = 15;
+  uint8_t res_8_SV_VL5_u_b_false = 15;
+  uint8_t res_8_SV_VL6_u_b_false = 15;
+  uint8_t res_8_SV_VL7_u_b_false = 15;
+  uint8_t res_8_SV_VL8_u_b_false = 15;
+  uint8_t res_8_SV_VL16_u_b_false = 15;
+  int16_t res_16_SV_VL1__b_false = 7;
+  int16_t res_16_SV_VL2__b_false = 7;
+  int16_t res_16_SV_VL3__b_false = 7;
+  int16_t res_16_SV_VL4__b_false = 7;
+  int16_t res_16_SV_VL5__b_false = 7;
+  int16_t res_16_SV_VL6__b_false = 7;
+  int16_t res_16_SV_VL7__b_false = 7;
+  int16_t res_16_SV_VL8__b_false = 7;
+  int16_t res_16_SV_VL16__b_false = 7;
+  uint16_t res_16_SV_VL1_u_b_false = 7;
+  uint16_t res_16_SV_VL2_u_b_false = 7;
+  uint16_t res_16_SV_VL3_u_b_false = 7;
+  uint16_t res_16_SV_VL4_u_b_false = 7;
+  uint16_t res_16_SV_VL5_u_b_false = 7;
+  uint16_t res_16_SV_VL6_u_b_false = 7;
+  uint16_t res_16_SV_VL7_u_b_false = 7;
+  uint16_t res_16_SV_VL8_u_b_false = 7;
+  uint16_t res_16_SV_VL16_u_b_false = 7;
+  int32_t res_32_SV_VL1__b_false = 3;
+  int32_t res_32_SV_VL2__b_false = 3;
+  int32_t res_32_SV_VL3__b_false = 3;
+  int32_t res_32_SV_VL4__b_false = 3;
+  int32_t res_32_SV_VL5__b_false = 3;
+  int32_t res_32_SV_VL6__b_false = 3;
+  int32_t res_32_SV_VL7__b_false = 3;
+  int32_t res_32_SV_VL8__b_false = 3;
+  int32_t res_32_SV_VL16__b_false = 3;
+  uint32_t res_32_SV_VL1_u_b_false = 3;
+  uint32_t res_32_SV_VL2_u_b_false = 3;
+  uint32_t res_32_SV_VL3_u_b_false = 3;
+  uint32_t res_32_SV_VL4_u_b_false = 3;
+  uint32_t res_32_SV_VL5_u_b_false = 3;
+  uint32_t res_32_SV_VL6_u_b_false = 3;
+  uint32_t res_32_SV_VL7_u_b_false = 3;
+  uint32_t res_32_SV_VL8_u_b_false = 3;
+  uint32_t res_32_SV_VL16_u_b_false = 3;
+  int64_t res_64_SV_VL1__b_false = 1;
+  int64_t res_64_SV_VL2__b_false = 1;
+  int64_t res_64_SV_VL3__b_false = 1;
+  int64_t res_64_SV_VL4__b_false = 1;
+  int64_t res_64_SV_VL5__b_false = 1;
+  int64_t res_64_SV_VL6__b_false = 1;
+  int64_t res_64_SV_VL7__b_false = 1;
+  int64_t res_64_SV_VL8__b_false = 1;
+  int64_t res_64_SV_VL16__b_false = 1;
+  uint64_t res_64_SV_VL1_u_b_false = 1;
+  uint64_t res_64_SV_VL2_u_b_false = 1;
+  uint64_t res_64_SV_VL3_u_b_false = 1;
+  uint64_t res_64_SV_VL4_u_b_false = 1;
+  uint64_t res_64_SV_VL5_u_b_false = 1;
+  uint64_t res_64_SV_VL6_u_b_false = 1;
+  uint64_t res_64_SV_VL7_u_b_false = 1;
+  uint64_t res_64_SV_VL8_u_b_false = 1;
+  uint64_t res_64_SV_VL16_u_b_false = 1;
+
+#undef SVELAST_DEF
+#define SVELAST_DEF(size, pat, sign, ab, su) \
+  if (NAME (foo, size, pat, sign, ab) \
+	(svindex_ ## su ## size (0, 1)) != \
+	NAME (res, size, pat, sign, ab)) \
+    __builtin_abort (); \
+  if (NAMEF (foo, size, pat, sign, ab) \
+	(svindex_ ## su ## size (0, 1)) != \
+	NAMEF (res, size, pat, sign, ab)) \
+    __builtin_abort ();
+
+  ALL_POS ()
+
+  return 0;
+}
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/general-c/svlast256_run.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/general-c/svlast256_run.c
new file mode 100644
index 00000000000..f6ba7ea7d89
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/general-c/svlast256_run.c
@@ -0,0 +1,314 @@
+/* { dg-do run { target aarch64_sve256_hw } } */
+/* { dg-options "-O3 -msve-vector-bits=256 -std=gnu99" } */
+
+#include "svlast.c"
+
+int
+main (void)
+{
+  int8_t res_8_SV_VL1__a = 1;
+  int8_t res_8_SV_VL2__a = 2;
+  int8_t res_8_SV_VL3__a = 3;
+  int8_t res_8_SV_VL4__a = 4;
+  int8_t res_8_SV_VL5__a = 5;
+  int8_t res_8_SV_VL6__a = 6;
+  int8_t res_8_SV_VL7__a = 7;
+  int8_t res_8_SV_VL8__a = 8;
+  int8_t res_8_SV_VL16__a = 16;
+  uint8_t res_8_SV_VL1_u_a = 1;
+  uint8_t res_8_SV_VL2_u_a = 2;
+  uint8_t res_8_SV_VL3_u_a = 3;
+  uint8_t res_8_SV_VL4_u_a = 4;
+  uint8_t res_8_SV_VL5_u_a = 5;
+  uint8_t res_8_SV_VL6_u_a = 6;
+  uint8_t res_8_SV_VL7_u_a = 7;
+  uint8_t res_8_SV_VL8_u_a = 8;
+  uint8_t res_8_SV_VL16_u_a = 16;
+  int16_t res_16_SV_VL1__a = 1;
+  int16_t res_16_SV_VL2__a = 2;
+  int16_t res_16_SV_VL3__a = 3;
+  int16_t res_16_SV_VL4__a = 4;
+  int16_t res_16_SV_VL5__a = 5;
+  int16_t res_16_SV_VL6__a = 6;
+  int16_t res_16_SV_VL7__a = 7;
+  int16_t res_16_SV_VL8__a = 8;
+  int16_t res_16_SV_VL16__a = 0;
+  uint16_t res_16_SV_VL1_u_a = 1;
+  uint16_t res_16_SV_VL2_u_a = 2;
+  uint16_t res_16_SV_VL3_u_a = 3;
+  uint16_t res_16_SV_VL4_u_a = 4;
+  uint16_t res_16_SV_VL5_u_a = 5;
+  uint16_t res_16_SV_VL6_u_a = 6;
+  uint16_t res_16_SV_VL7_u_a = 7;
+  uint16_t res_16_SV_VL8_u_a = 8;
+  uint16_t res_16_SV_VL16_u_a = 0;
+  int32_t res_32_SV_VL1__a = 1;
+  int32_t res_32_SV_VL2__a = 2;
+  int32_t res_32_SV_VL3__a = 3;
+  int32_t res_32_SV_VL4__a = 4;
+  int32_t res_32_SV_VL5__a = 5;
+  int32_t res_32_SV_VL6__a = 6;
+  int32_t res_32_SV_VL7__a = 7;
+  int32_t res_32_SV_VL8__a = 0;
+  int32_t res_32_SV_VL16__a = 0;
+  uint32_t res_32_SV_VL1_u_a = 1;
+  uint32_t res_32_SV_VL2_u_a = 2;
+  uint32_t res_32_SV_VL3_u_a = 3;
+  uint32_t res_32_SV_VL4_u_a = 4;
+  uint32_t res_32_SV_VL5_u_a = 5;
+  uint32_t res_32_SV_VL6_u_a = 6;
+  uint32_t res_32_SV_VL7_u_a = 7;
+  uint32_t res_32_SV_VL8_u_a = 0;
+  uint32_t res_32_SV_VL16_u_a = 0;
+  int64_t res_64_SV_VL1__a = 1;
+  int64_t res_64_SV_VL2__a = 2;
+  int64_t res_64_SV_VL3__a = 3;
+  int64_t res_64_SV_VL4__a = 0;
+  int64_t res_64_SV_VL5__a = 0;
+  int64_t res_64_SV_VL6__a = 0;
+  int64_t res_64_SV_VL7__a = 0;
+  int64_t res_64_SV_VL8__a = 0;
+  int64_t res_64_SV_VL16__a = 0;
+  uint64_t res_64_SV_VL1_u_a = 1;
+  uint64_t res_64_SV_VL2_u_a = 2;
+  uint64_t res_64_SV_VL3_u_a = 3;
+  uint64_t res_64_SV_VL4_u_a = 0;
+  uint64_t res_64_SV_VL5_u_a = 0;
+  uint64_t res_64_SV_VL6_u_a = 0;
+  uint64_t res_64_SV_VL7_u_a = 0;
+  uint64_t res_64_SV_VL8_u_a = 0;
+  uint64_t res_64_SV_VL16_u_a = 0;
+  int8_t res_8_SV_VL1__b = 0;
+  int8_t res_8_SV_VL2__b = 1;
+  int8_t res_8_SV_VL3__b = 2;
+  int8_t res_8_SV_VL4__b = 3;
+  int8_t res_8_SV_VL5__b = 4;
+  int8_t res_8_SV_VL6__b = 5;
+  int8_t res_8_SV_VL7__b = 6;
+  int8_t res_8_SV_VL8__b = 7;
+  int8_t res_8_SV_VL16__b = 15;
+  uint8_t res_8_SV_VL1_u_b = 0;
+  uint8_t res_8_SV_VL2_u_b = 1;
+  uint8_t res_8_SV_VL3_u_b = 2;
+  uint8_t res_8_SV_VL4_u_b = 3;
+  uint8_t res_8_SV_VL5_u_b = 4;
+  uint8_t res_8_SV_VL6_u_b = 5;
+  uint8_t res_8_SV_VL7_u_b = 6;
+  uint8_t res_8_SV_VL8_u_b = 7;
+  uint8_t res_8_SV_VL16_u_b = 15;
+  int16_t res_16_SV_VL1__b = 0;
+  int16_t res_16_SV_VL2__b = 1;
+  int16_t res_16_SV_VL3__b = 2;
+  int16_t res_16_SV_VL4__b = 3;
+  int16_t res_16_SV_VL5__b = 4;
+  int16_t res_16_SV_VL6__b = 5;
+  int16_t res_16_SV_VL7__b = 6;
+  int16_t res_16_SV_VL8__b = 7;
+  int16_t res_16_SV_VL16__b = 15;
+  uint16_t res_16_SV_VL1_u_b = 0;
+  uint16_t res_16_SV_VL2_u_b = 1;
+  uint16_t res_16_SV_VL3_u_b = 2;
+  uint16_t res_16_SV_VL4_u_b = 3;
+  uint16_t res_16_SV_VL5_u_b = 4;
+  uint16_t res_16_SV_VL6_u_b = 5;
+  uint16_t res_16_SV_VL7_u_b = 6;
+  uint16_t res_16_SV_VL8_u_b = 7;
+  uint16_t res_16_SV_VL16_u_b = 15;
+  int32_t res_32_SV_VL1__b = 0;
+  int32_t res_32_SV_VL2__b = 1;
+  int32_t res_32_SV_VL3__b = 2;
+  int32_t res_32_SV_VL4__b = 3;
+  int32_t res_32_SV_VL5__b = 4;
+  int32_t res_32_SV_VL6__b = 5;
+  int32_t res_32_SV_VL7__b = 6;
+  int32_t res_32_SV_VL8__b = 7;
+  int32_t res_32_SV_VL16__b = 7;
+  uint32_t res_32_SV_VL1_u_b = 0;
+  uint32_t res_32_SV_VL2_u_b = 1;
+  uint32_t res_32_SV_VL3_u_b = 2;
+  uint32_t res_32_SV_VL4_u_b = 3;
+  uint32_t res_32_SV_VL5_u_b = 4;
+  uint32_t res_32_SV_VL6_u_b = 5;
+  uint32_t res_32_SV_VL7_u_b = 6;
+  uint32_t res_32_SV_VL8_u_b = 7;
+  uint32_t res_32_SV_VL16_u_b = 7;
+  int64_t res_64_SV_VL1__b = 0;
+  int64_t res_64_SV_VL2__b = 1;
+  int64_t res_64_SV_VL3__b = 2;
+  int64_t res_64_SV_VL4__b = 3;
+  int64_t res_64_SV_VL5__b = 3;
+  int64_t res_64_SV_VL6__b = 3;
+  int64_t res_64_SV_VL7__b = 3;
+  int64_t res_64_SV_VL8__b = 3;
+  int64_t res_64_SV_VL16__b =
3; + uint64_t res_64_SV_VL1_u_b =3D 0; + uint64_t res_64_SV_VL2_u_b =3D 1; + uint64_t res_64_SV_VL3_u_b =3D 2; + uint64_t res_64_SV_VL4_u_b =3D 3; + uint64_t res_64_SV_VL5_u_b =3D 3; + uint64_t res_64_SV_VL6_u_b =3D 3; + uint64_t res_64_SV_VL7_u_b =3D 3; + uint64_t res_64_SV_VL8_u_b =3D 3; + uint64_t res_64_SV_VL16_u_b =3D 3; + + int8_t res_8_SV_VL1__a_false =3D 0; + int8_t res_8_SV_VL2__a_false =3D 0; + int8_t res_8_SV_VL3__a_false =3D 0; + int8_t res_8_SV_VL4__a_false =3D 0; + int8_t res_8_SV_VL5__a_false =3D 0; + int8_t res_8_SV_VL6__a_false =3D 0; + int8_t res_8_SV_VL7__a_false =3D 0; + int8_t res_8_SV_VL8__a_false =3D 0; + int8_t res_8_SV_VL16__a_false =3D 0; + uint8_t res_8_SV_VL1_u_a_false =3D 0; + uint8_t res_8_SV_VL2_u_a_false =3D 0; + uint8_t res_8_SV_VL3_u_a_false =3D 0; + uint8_t res_8_SV_VL4_u_a_false =3D 0; + uint8_t res_8_SV_VL5_u_a_false =3D 0; + uint8_t res_8_SV_VL6_u_a_false =3D 0; + uint8_t res_8_SV_VL7_u_a_false =3D 0; + uint8_t res_8_SV_VL8_u_a_false =3D 0; + uint8_t res_8_SV_VL16_u_a_false =3D 0; + int16_t res_16_SV_VL1__a_false =3D 0; + int16_t res_16_SV_VL2__a_false =3D 0; + int16_t res_16_SV_VL3__a_false =3D 0; + int16_t res_16_SV_VL4__a_false =3D 0; + int16_t res_16_SV_VL5__a_false =3D 0; + int16_t res_16_SV_VL6__a_false =3D 0; + int16_t res_16_SV_VL7__a_false =3D 0; + int16_t res_16_SV_VL8__a_false =3D 0; + int16_t res_16_SV_VL16__a_false =3D 0; + uint16_t res_16_SV_VL1_u_a_false =3D 0; + uint16_t res_16_SV_VL2_u_a_false =3D 0; + uint16_t res_16_SV_VL3_u_a_false =3D 0; + uint16_t res_16_SV_VL4_u_a_false =3D 0; + uint16_t res_16_SV_VL5_u_a_false =3D 0; + uint16_t res_16_SV_VL6_u_a_false =3D 0; + uint16_t res_16_SV_VL7_u_a_false =3D 0; + uint16_t res_16_SV_VL8_u_a_false =3D 0; + uint16_t res_16_SV_VL16_u_a_false =3D 0; + int32_t res_32_SV_VL1__a_false =3D 0; + int32_t res_32_SV_VL2__a_false =3D 0; + int32_t res_32_SV_VL3__a_false =3D 0; + int32_t res_32_SV_VL4__a_false =3D 0; + int32_t res_32_SV_VL5__a_false =3D 0; + int32_t 
res_32_SV_VL6__a_false =3D 0; + int32_t res_32_SV_VL7__a_false =3D 0; + int32_t res_32_SV_VL8__a_false =3D 0; + int32_t res_32_SV_VL16__a_false =3D 0; + uint32_t res_32_SV_VL1_u_a_false =3D 0; + uint32_t res_32_SV_VL2_u_a_false =3D 0; + uint32_t res_32_SV_VL3_u_a_false =3D 0; + uint32_t res_32_SV_VL4_u_a_false =3D 0; + uint32_t res_32_SV_VL5_u_a_false =3D 0; + uint32_t res_32_SV_VL6_u_a_false =3D 0; + uint32_t res_32_SV_VL7_u_a_false =3D 0; + uint32_t res_32_SV_VL8_u_a_false =3D 0; + uint32_t res_32_SV_VL16_u_a_false =3D 0; + int64_t res_64_SV_VL1__a_false =3D 0; + int64_t res_64_SV_VL2__a_false =3D 0; + int64_t res_64_SV_VL3__a_false =3D 0; + int64_t res_64_SV_VL4__a_false =3D 0; + int64_t res_64_SV_VL5__a_false =3D 0; + int64_t res_64_SV_VL6__a_false =3D 0; + int64_t res_64_SV_VL7__a_false =3D 0; + int64_t res_64_SV_VL8__a_false =3D 0; + int64_t res_64_SV_VL16__a_false =3D 0; + uint64_t res_64_SV_VL1_u_a_false =3D 0; + uint64_t res_64_SV_VL2_u_a_false =3D 0; + uint64_t res_64_SV_VL3_u_a_false =3D 0; + uint64_t res_64_SV_VL4_u_a_false =3D 0; + uint64_t res_64_SV_VL5_u_a_false =3D 0; + uint64_t res_64_SV_VL6_u_a_false =3D 0; + uint64_t res_64_SV_VL7_u_a_false =3D 0; + uint64_t res_64_SV_VL8_u_a_false =3D 0; + uint64_t res_64_SV_VL16_u_a_false =3D 0; + int8_t res_8_SV_VL1__b_false =3D 31; + int8_t res_8_SV_VL2__b_false =3D 31; + int8_t res_8_SV_VL3__b_false =3D 31; + int8_t res_8_SV_VL4__b_false =3D 31; + int8_t res_8_SV_VL5__b_false =3D 31; + int8_t res_8_SV_VL6__b_false =3D 31; + int8_t res_8_SV_VL7__b_false =3D 31; + int8_t res_8_SV_VL8__b_false =3D 31; + int8_t res_8_SV_VL16__b_false =3D 31; + uint8_t res_8_SV_VL1_u_b_false =3D 31; + uint8_t res_8_SV_VL2_u_b_false =3D 31; + uint8_t res_8_SV_VL3_u_b_false =3D 31; + uint8_t res_8_SV_VL4_u_b_false =3D 31; + uint8_t res_8_SV_VL5_u_b_false =3D 31; + uint8_t res_8_SV_VL6_u_b_false =3D 31; + uint8_t res_8_SV_VL7_u_b_false =3D 31; + uint8_t res_8_SV_VL8_u_b_false =3D 31; + uint8_t res_8_SV_VL16_u_b_false =3D 31; + 
int16_t res_16_SV_VL1__b_false =3D 15; + int16_t res_16_SV_VL2__b_false =3D 15; + int16_t res_16_SV_VL3__b_false =3D 15; + int16_t res_16_SV_VL4__b_false =3D 15; + int16_t res_16_SV_VL5__b_false =3D 15; + int16_t res_16_SV_VL6__b_false =3D 15; + int16_t res_16_SV_VL7__b_false =3D 15; + int16_t res_16_SV_VL8__b_false =3D 15; + int16_t res_16_SV_VL16__b_false =3D 15; + uint16_t res_16_SV_VL1_u_b_false =3D 15; + uint16_t res_16_SV_VL2_u_b_false =3D 15; + uint16_t res_16_SV_VL3_u_b_false =3D 15; + uint16_t res_16_SV_VL4_u_b_false =3D 15; + uint16_t res_16_SV_VL5_u_b_false =3D 15; + uint16_t res_16_SV_VL6_u_b_false =3D 15; + uint16_t res_16_SV_VL7_u_b_false =3D 15; + uint16_t res_16_SV_VL8_u_b_false =3D 15; + uint16_t res_16_SV_VL16_u_b_false =3D 15; + int32_t res_32_SV_VL1__b_false =3D 7; + int32_t res_32_SV_VL2__b_false =3D 7; + int32_t res_32_SV_VL3__b_false =3D 7; + int32_t res_32_SV_VL4__b_false =3D 7; + int32_t res_32_SV_VL5__b_false =3D 7; + int32_t res_32_SV_VL6__b_false =3D 7; + int32_t res_32_SV_VL7__b_false =3D 7; + int32_t res_32_SV_VL8__b_false =3D 7; + int32_t res_32_SV_VL16__b_false =3D 7; + uint32_t res_32_SV_VL1_u_b_false =3D 7; + uint32_t res_32_SV_VL2_u_b_false =3D 7; + uint32_t res_32_SV_VL3_u_b_false =3D 7; + uint32_t res_32_SV_VL4_u_b_false =3D 7; + uint32_t res_32_SV_VL5_u_b_false =3D 7; + uint32_t res_32_SV_VL6_u_b_false =3D 7; + uint32_t res_32_SV_VL7_u_b_false =3D 7; + uint32_t res_32_SV_VL8_u_b_false =3D 7; + uint32_t res_32_SV_VL16_u_b_false =3D 7; + int64_t res_64_SV_VL1__b_false =3D 3; + int64_t res_64_SV_VL2__b_false =3D 3; + int64_t res_64_SV_VL3__b_false =3D 3; + int64_t res_64_SV_VL4__b_false =3D 3; + int64_t res_64_SV_VL5__b_false =3D 3; + int64_t res_64_SV_VL6__b_false =3D 3; + int64_t res_64_SV_VL7__b_false =3D 3; + int64_t res_64_SV_VL8__b_false =3D 3; + int64_t res_64_SV_VL16__b_false =3D 3; + uint64_t res_64_SV_VL1_u_b_false =3D 3; + uint64_t res_64_SV_VL2_u_b_false =3D 3; + uint64_t res_64_SV_VL3_u_b_false =3D 3; + uint64_t 
res_64_SV_VL4_u_b_false =3D 3; + uint64_t res_64_SV_VL5_u_b_false =3D 3; + uint64_t res_64_SV_VL6_u_b_false =3D 3; + uint64_t res_64_SV_VL7_u_b_false =3D 3; + uint64_t res_64_SV_VL8_u_b_false =3D 3; + uint64_t res_64_SV_VL16_u_b_false =3D 3; + + +#undef SVELAST_DEF +#define SVELAST_DEF(size, pat, sign, ab, su) \ + if (NAME (foo, size, pat, sign, ab) \ + (svindex_ ## su ## size (0 ,1)) !=3D \ + NAME (res, size, pat, sign, ab)) \ + __builtin_abort (); \ + if (NAMEF (foo, size, pat, sign, ab) \ + (svindex_ ## su ## size (0 ,1)) !=3D \ + NAMEF (res, size, pat, sign, ab)) \ + __builtin_abort (); + + ALL_POS () + + return 0; +} diff --git a/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_4.c b/gcc/test= suite/gcc.target/aarch64/sve/pcs/return_4.c index 1e38371842f..91fdd3c202e 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_4.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_4.c @@ -186,8 +186,6 @@ CALLER (f16, __SVFloat16_t) ** caller_bf16: ** ... ** bl callee_bf16 -** ptrue (p[0-7])\.b, all -** lasta h0, \1, z0\.h ** ldp x29, x30, \[sp\], 16 ** ret */ diff --git a/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_4_1024.c b/gcc= /testsuite/gcc.target/aarch64/sve/pcs/return_4_1024.c index 491c35af221..7d824caae1b 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_4_1024.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_4_1024.c @@ -186,8 +186,6 @@ CALLER (f16, __SVFloat16_t) ** caller_bf16: ** ... ** bl callee_bf16 -** ptrue (p[0-7])\.b, vl128 -** lasta h0, \1, z0\.h ** ldp x29, x30, \[sp\], 16 ** ret */ diff --git a/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_4_128.c b/gcc/= testsuite/gcc.target/aarch64/sve/pcs/return_4_128.c index eebb913273a..e0aa3a5fa68 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_4_128.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_4_128.c @@ -186,8 +186,6 @@ CALLER (f16, __SVFloat16_t) ** caller_bf16: ** ... 
** bl callee_bf16 -** ptrue (p[0-7])\.b, vl16 -** lasta h0, \1, z0\.h ** ldp x29, x30, \[sp\], 16 ** ret */ diff --git a/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_4_2048.c b/gcc= /testsuite/gcc.target/aarch64/sve/pcs/return_4_2048.c index 73c3b2ec045..3238015d9eb 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_4_2048.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_4_2048.c @@ -186,8 +186,6 @@ CALLER (f16, __SVFloat16_t) ** caller_bf16: ** ... ** bl callee_bf16 -** ptrue (p[0-7])\.b, vl256 -** lasta h0, \1, z0\.h ** ldp x29, x30, \[sp\], 16 ** ret */ diff --git a/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_4_256.c b/gcc/= testsuite/gcc.target/aarch64/sve/pcs/return_4_256.c index 29744c81402..50861098934 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_4_256.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_4_256.c @@ -186,8 +186,6 @@ CALLER (f16, __SVFloat16_t) ** caller_bf16: ** ... ** bl callee_bf16 -** ptrue (p[0-7])\.b, vl32 -** lasta h0, \1, z0\.h ** ldp x29, x30, \[sp\], 16 ** ret */ diff --git a/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_4_512.c b/gcc/= testsuite/gcc.target/aarch64/sve/pcs/return_4_512.c index cf25c31bcbf..300dacce955 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_4_512.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_4_512.c @@ -186,8 +186,6 @@ CALLER (f16, __SVFloat16_t) ** caller_bf16: ** ... ** bl callee_bf16 -** ptrue (p[0-7])\.b, vl64 -** lasta h0, \1, z0\.h ** ldp x29, x30, \[sp\], 16 ** ret */ diff --git a/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_5.c b/gcc/test= suite/gcc.target/aarch64/sve/pcs/return_5.c index 9ad3e227654..0a840a38384 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_5.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_5.c @@ -186,8 +186,6 @@ CALLER (f16, svfloat16_t) ** caller_bf16: ** ... 
** bl callee_bf16 -** ptrue (p[0-7])\.b, all -** lasta h0, \1, z0\.h ** ldp x29, x30, \[sp\], 16 ** ret */ diff --git a/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_5_1024.c b/gcc= /testsuite/gcc.target/aarch64/sve/pcs/return_5_1024.c index d573e5fc69c..18cefbff1e6 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_5_1024.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_5_1024.c @@ -186,8 +186,6 @@ CALLER (f16, svfloat16_t) ** caller_bf16: ** ... ** bl callee_bf16 -** ptrue (p[0-7])\.b, vl128 -** lasta h0, \1, z0\.h ** ldp x29, x30, \[sp\], 16 ** ret */ diff --git a/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_5_128.c b/gcc/= testsuite/gcc.target/aarch64/sve/pcs/return_5_128.c index 200b0eb8242..c622ed55674 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_5_128.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_5_128.c @@ -186,8 +186,6 @@ CALLER (f16, svfloat16_t) ** caller_bf16: ** ... ** bl callee_bf16 -** ptrue (p[0-7])\.b, vl16 -** lasta h0, \1, z0\.h ** ldp x29, x30, \[sp\], 16 ** ret */ diff --git a/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_5_2048.c b/gcc= /testsuite/gcc.target/aarch64/sve/pcs/return_5_2048.c index f6f8858fd47..3286280687d 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_5_2048.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_5_2048.c @@ -186,8 +186,6 @@ CALLER (f16, svfloat16_t) ** caller_bf16: ** ... ** bl callee_bf16 -** ptrue (p[0-7])\.b, vl256 -** lasta h0, \1, z0\.h ** ldp x29, x30, \[sp\], 16 ** ret */ diff --git a/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_5_256.c b/gcc/= testsuite/gcc.target/aarch64/sve/pcs/return_5_256.c index e62f59cc885..3c6afa2fdf1 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_5_256.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_5_256.c @@ -186,8 +186,6 @@ CALLER (f16, svfloat16_t) ** caller_bf16: ** ... 
** bl callee_bf16 -** ptrue (p[0-7])\.b, vl32 -** lasta h0, \1, z0\.h ** ldp x29, x30, \[sp\], 16 ** ret */ diff --git a/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_5_512.c b/gcc/= testsuite/gcc.target/aarch64/sve/pcs/return_5_512.c index 483558cb576..bb7d3ebf9d4 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_5_512.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/pcs/return_5_512.c @@ -186,8 +186,6 @@ CALLER (f16, svfloat16_t) ** caller_bf16: ** ... ** bl callee_bf16 -** ptrue (p[0-7])\.b, vl64 -** lasta h0, \1, z0\.h ** ldp x29, x30, \[sp\], 16 ** ret */ -- 2.17.1 --_000_AS8PR08MB7079862977AB8EF6BC84D720EA6D9AS8PR08MB7079eurp_--