From: Kyrylo Tkachov
To: Christophe Lyon, "gcc-patches@gcc.gnu.org", Richard Earnshaw, Richard Sandiford
CC: Christophe Lyon
Subject: RE: [PATCH 17/23] arm: [MVE intrinsics] rework vshrnbq vshrntq vrshrnbq vrshrntq vqshrnbq vqshrntq vqrshrnbq vqrshrntq
Date: Fri, 5 May 2023 11:02:19 +0000
References: <20230505083930.101210-1-christophe.lyon@arm.com> <20230505083930.101210-17-christophe.lyon@arm.com>
In-Reply-To: <20230505083930.101210-17-christophe.lyon@arm.com>
Content-Type: text/plain; charset="us-ascii"
> -----Original Message-----
> From: Christophe Lyon
> Sent: Friday, May 5, 2023 9:39 AM
> To: gcc-patches@gcc.gnu.org; Kyrylo Tkachov; Richard Earnshaw; Richard Sandiford
> Cc: Christophe Lyon
> Subject: [PATCH 17/23] arm: [MVE intrinsics] rework vshrnbq vshrntq vrshrnbq vrshrntq vqshrnbq vqshrntq vqrshrnbq vqrshrntq
>
> Implement vshrnbq, vshrntq, vrshrnbq, vrshrntq, vqshrnbq, vqshrntq,
> vqrshrnbq, vqrshrntq using the new MVE builtins framework.

Ok with a style nit...

>
> 2022-09-08  Christophe Lyon
>
> gcc/
> 	* config/arm/arm-mve-builtins-base.cc (FUNCTION_ONLY_N_NO_F): New.
> 	(vshrnbq, vshrntq, vrshrnbq, vrshrntq, vqshrnbq, vqshrntq)
> 	(vqrshrnbq, vqrshrntq): New.
> 	* config/arm/arm-mve-builtins-base.def (vshrnbq, vshrntq)
> 	(vrshrnbq, vrshrntq, vqshrnbq, vqshrntq, vqrshrnbq, vqrshrntq): New.
> 	* config/arm/arm-mve-builtins-base.h (vshrnbq, vshrntq, vrshrnbq)
> 	(vrshrntq, vqshrnbq, vqshrntq, vqrshrnbq, vqrshrntq): New.
> 	* config/arm/arm-mve-builtins.cc
> 	(function_instance::has_inactive_argument): Handle vshrnbq,
> 	vshrntq, vrshrnbq, vrshrntq, vqshrnbq, vqshrntq, vqrshrnbq,
> 	vqrshrntq.
> 	* config/arm/arm_mve.h (vshrnbq): Remove.
> 	(vshrntq): Remove.
> 	(vshrnbq_m): Remove.
> 	(vshrntq_m): Remove.
> 	(vshrnbq_n_s16): Remove.
> 	(vshrntq_n_s16): Remove.
> 	(vshrnbq_n_u16): Remove.
> 	(vshrntq_n_u16): Remove.
> 	(vshrnbq_n_s32): Remove.
> 	(vshrntq_n_s32): Remove.
> 	(vshrnbq_n_u32): Remove.
> 	(vshrntq_n_u32): Remove.
> 	(vshrnbq_m_n_s32): Remove.
> 	(vshrnbq_m_n_s16): Remove.
> 	(vshrnbq_m_n_u32): Remove.
> 	(vshrnbq_m_n_u16): Remove.
> 	(vshrntq_m_n_s32): Remove.
> 	(vshrntq_m_n_s16): Remove.
> 	(vshrntq_m_n_u32): Remove.
> 	(vshrntq_m_n_u16): Remove.
> 	(__arm_vshrnbq_n_s16): Remove.
> 	(__arm_vshrntq_n_s16): Remove.
> 	(__arm_vshrnbq_n_u16): Remove.
> 	(__arm_vshrntq_n_u16): Remove.
> 	(__arm_vshrnbq_n_s32): Remove.
> 	(__arm_vshrntq_n_s32): Remove.
> 	(__arm_vshrnbq_n_u32): Remove.
> 	(__arm_vshrntq_n_u32): Remove.
> 	(__arm_vshrnbq_m_n_s32): Remove.
> 	(__arm_vshrnbq_m_n_s16): Remove.
> 	(__arm_vshrnbq_m_n_u32): Remove.
> 	(__arm_vshrnbq_m_n_u16): Remove.
> 	(__arm_vshrntq_m_n_s32): Remove.
> 	(__arm_vshrntq_m_n_s16): Remove.
> 	(__arm_vshrntq_m_n_u32): Remove.
> 	(__arm_vshrntq_m_n_u16): Remove.
> 	(__arm_vshrnbq): Remove.
> 	(__arm_vshrntq): Remove.
> 	(__arm_vshrnbq_m): Remove.
> 	(__arm_vshrntq_m): Remove.
> 	(vrshrnbq): Remove.
> 	(vrshrntq): Remove.
> 	(vrshrnbq_m): Remove.
> 	(vrshrntq_m): Remove.
> 	(vrshrnbq_n_s16): Remove.
> 	(vrshrntq_n_s16): Remove.
> 	(vrshrnbq_n_u16): Remove.
> 	(vrshrntq_n_u16): Remove.
> 	(vrshrnbq_n_s32): Remove.
> 	(vrshrntq_n_s32): Remove.
> 	(vrshrnbq_n_u32): Remove.
> 	(vrshrntq_n_u32): Remove.
> 	(vrshrnbq_m_n_s32): Remove.
> 	(vrshrnbq_m_n_s16): Remove.
> 	(vrshrnbq_m_n_u32): Remove.
> 	(vrshrnbq_m_n_u16): Remove.
> 	(vrshrntq_m_n_s32): Remove.
> 	(vrshrntq_m_n_s16): Remove.
> 	(vrshrntq_m_n_u32): Remove.
> 	(vrshrntq_m_n_u16): Remove.
> 	(__arm_vrshrnbq_n_s16): Remove.
> 	(__arm_vrshrntq_n_s16): Remove.
> 	(__arm_vrshrnbq_n_u16): Remove.
> 	(__arm_vrshrntq_n_u16): Remove.
> 	(__arm_vrshrnbq_n_s32): Remove.
> 	(__arm_vrshrntq_n_s32): Remove.
> 	(__arm_vrshrnbq_n_u32): Remove.
> 	(__arm_vrshrntq_n_u32): Remove.
> 	(__arm_vrshrnbq_m_n_s32): Remove.
> 	(__arm_vrshrnbq_m_n_s16): Remove.
> 	(__arm_vrshrnbq_m_n_u32): Remove.
> 	(__arm_vrshrnbq_m_n_u16): Remove.
> 	(__arm_vrshrntq_m_n_s32): Remove.
> 	(__arm_vrshrntq_m_n_s16): Remove.
> 	(__arm_vrshrntq_m_n_u32): Remove.
> 	(__arm_vrshrntq_m_n_u16): Remove.
> 	(__arm_vrshrnbq): Remove.
> 	(__arm_vrshrntq): Remove.
> 	(__arm_vrshrnbq_m): Remove.
> 	(__arm_vrshrntq_m): Remove.
> 	(vqshrnbq): Remove.
> 	(vqshrntq): Remove.
> 	(vqshrnbq_m): Remove.
> 	(vqshrntq_m): Remove.
> 	(vqshrnbq_n_s16): Remove.
> 	(vqshrntq_n_s16): Remove.
> 	(vqshrnbq_n_u16): Remove.
> 	(vqshrntq_n_u16): Remove.
> 	(vqshrnbq_n_s32): Remove.
> 	(vqshrntq_n_s32): Remove.
> 	(vqshrnbq_n_u32): Remove.
> 	(vqshrntq_n_u32): Remove.
> 	(vqshrnbq_m_n_s32): Remove.
> 	(vqshrnbq_m_n_s16): Remove.
> 	(vqshrnbq_m_n_u32): Remove.
> 	(vqshrnbq_m_n_u16): Remove.
> 	(vqshrntq_m_n_s32): Remove.
> 	(vqshrntq_m_n_s16): Remove.
> 	(vqshrntq_m_n_u32): Remove.
> 	(vqshrntq_m_n_u16): Remove.
> 	(__arm_vqshrnbq_n_s16): Remove.
> 	(__arm_vqshrntq_n_s16): Remove.
> 	(__arm_vqshrnbq_n_u16): Remove.
> 	(__arm_vqshrntq_n_u16): Remove.
> 	(__arm_vqshrnbq_n_s32): Remove.
> 	(__arm_vqshrntq_n_s32): Remove.
> 	(__arm_vqshrnbq_n_u32): Remove.
> 	(__arm_vqshrntq_n_u32): Remove.
> 	(__arm_vqshrnbq_m_n_s32): Remove.
> 	(__arm_vqshrnbq_m_n_s16): Remove.
> 	(__arm_vqshrnbq_m_n_u32): Remove.
> 	(__arm_vqshrnbq_m_n_u16): Remove.
> 	(__arm_vqshrntq_m_n_s32): Remove.
> 	(__arm_vqshrntq_m_n_s16): Remove.
> 	(__arm_vqshrntq_m_n_u32): Remove.
> 	(__arm_vqshrntq_m_n_u16): Remove.
> 	(__arm_vqshrnbq): Remove.
> 	(__arm_vqshrntq): Remove.
> 	(__arm_vqshrnbq_m): Remove.
> 	(__arm_vqshrntq_m): Remove.
> 	(vqrshrnbq): Remove.
> 	(vqrshrntq): Remove.
> 	(vqrshrnbq_m): Remove.
> 	(vqrshrntq_m): Remove.
> 	(vqrshrnbq_n_s16): Remove.
> 	(vqrshrnbq_n_u16): Remove.
> 	(vqrshrnbq_n_s32): Remove.
> 	(vqrshrnbq_n_u32): Remove.
> 	(vqrshrntq_n_s16): Remove.
> 	(vqrshrntq_n_u16): Remove.
> 	(vqrshrntq_n_s32): Remove.
> 	(vqrshrntq_n_u32): Remove.
> 	(vqrshrnbq_m_n_s32): Remove.
> 	(vqrshrnbq_m_n_s16): Remove.
> 	(vqrshrnbq_m_n_u32): Remove.
> 	(vqrshrnbq_m_n_u16): Remove.
> 	(vqrshrntq_m_n_s32): Remove.
> 	(vqrshrntq_m_n_s16): Remove.
> 	(vqrshrntq_m_n_u32): Remove.
> 	(vqrshrntq_m_n_u16): Remove.
> 	(__arm_vqrshrnbq_n_s16): Remove.
> 	(__arm_vqrshrnbq_n_u16): Remove.
> 	(__arm_vqrshrnbq_n_s32): Remove.
> 	(__arm_vqrshrnbq_n_u32): Remove.
> 	(__arm_vqrshrntq_n_s16): Remove.
> 	(__arm_vqrshrntq_n_u16): Remove.
> 	(__arm_vqrshrntq_n_s32): Remove.
> 	(__arm_vqrshrntq_n_u32): Remove.
> 	(__arm_vqrshrnbq_m_n_s32): Remove.
> 	(__arm_vqrshrnbq_m_n_s16): Remove.
> 	(__arm_vqrshrnbq_m_n_u32): Remove.
> 	(__arm_vqrshrnbq_m_n_u16): Remove.
> 	(__arm_vqrshrntq_m_n_s32): Remove.
> 	(__arm_vqrshrntq_m_n_s16): Remove.
> 	(__arm_vqrshrntq_m_n_u32): Remove.
> 	(__arm_vqrshrntq_m_n_u16): Remove.
> 	(__arm_vqrshrnbq): Remove.
> 	(__arm_vqrshrntq): Remove.
> 	(__arm_vqrshrnbq_m): Remove.
> 	(__arm_vqrshrntq_m): Remove.
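
As a side note for readers following the series, here is a minimal usage
sketch of the reworked intrinsics. It is not part of the patch; it only
relies on the user-facing prototypes visible in the arm_mve.h hunks below
and assumes an MVE-enabled target (e.g. -march=armv8.1-m.main+mve). The
function name narrow_example and the shift amount 4 are purely
illustrative.

#include <arm_mve.h>

/* Narrow eight 16-bit lanes of X into the byte lanes of ACC.  The
   user-facing signatures are unchanged by the rework; only the
   implementation moves to the new builtins framework.  */
int8x16_t
narrow_example (int8x16_t acc, int16x8_t x, mve_pred16_t p)
{
  /* Saturating rounding shift right by 4, narrowed into the bottom
     (even-numbered) byte lanes of ACC.  */
  acc = vqrshrnbq (acc, x, 4);
  /* Predicated (_m) form writing the top (odd-numbered) byte lanes:
     predicated-false lanes of ACC are left unchanged, which is why
     these intrinsics take no separate "inactive" argument (see the
     has_inactive_argument change below).  */
  acc = vqrshrntq_m (acc, x, 4, p);
  return acc;
}
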
> --- > gcc/config/arm/arm-mve-builtins-base.cc | 17 + > gcc/config/arm/arm-mve-builtins-base.def | 8 + > gcc/config/arm/arm-mve-builtins-base.h | 8 + > gcc/config/arm/arm-mve-builtins.cc | 11 +- > gcc/config/arm/arm_mve.h | 1196 +--------------------- > 5 files changed, 65 insertions(+), 1175 deletions(-) >=20 > diff --git a/gcc/config/arm/arm-mve-builtins-base.cc b/gcc/config/arm/arm= - > mve-builtins-base.cc > index 1839d5cb1a5..c95abe70239 100644 > --- a/gcc/config/arm/arm-mve-builtins-base.cc > +++ b/gcc/config/arm/arm-mve-builtins-base.cc > @@ -175,6 +175,15 @@ namespace arm_mve { > UNSPEC##_M_S, UNSPEC##_M_U, UNSPEC##_M_F, > \ > -1, -1, -1)) >=20 > + /* Helper for builtins with only unspec codes, _m predicated > + overrides, only _n version, no floating-point. */ > +#define FUNCTION_ONLY_N_NO_F(NAME, UNSPEC) FUNCTION > \ > + (NAME, unspec_mve_function_exact_insn, \ > + (-1, -1, -1, \ > + UNSPEC##_N_S, UNSPEC##_N_U, -1, \ > + -1, -1, -1, \ > + UNSPEC##_M_N_S, UNSPEC##_M_N_U, -1)) > + > FUNCTION_WITHOUT_N (vabdq, VABDQ) > FUNCTION_WITH_RTX_M_N (vaddq, PLUS, VADDQ) > FUNCTION_WITH_RTX_M (vandq, AND, VANDQ) > @@ -192,12 +201,20 @@ FUNCTION_WITH_M_N_NO_U_F (vqdmulhq, > VQDMULHQ) > FUNCTION_WITH_M_N_NO_F (vqrshlq, VQRSHLQ) > FUNCTION_WITH_M_N_NO_U_F (vqrdmulhq, VQRDMULHQ) > FUNCTION_WITH_M_N_R (vqshlq, VQSHLQ) > +FUNCTION_ONLY_N_NO_F (vqrshrnbq, VQRSHRNBQ) > +FUNCTION_ONLY_N_NO_F (vqrshrntq, VQRSHRNTQ) > +FUNCTION_ONLY_N_NO_F (vqshrnbq, VQSHRNBQ) > +FUNCTION_ONLY_N_NO_F (vqshrntq, VQSHRNTQ) > FUNCTION_WITH_M_N_NO_F (vqsubq, VQSUBQ) > FUNCTION (vreinterpretq, vreinterpretq_impl,) > FUNCTION_WITHOUT_N_NO_F (vrhaddq, VRHADDQ) > FUNCTION_WITHOUT_N_NO_F (vrmulhq, VRMULHQ) > FUNCTION_WITH_M_N_NO_F (vrshlq, VRSHLQ) > +FUNCTION_ONLY_N_NO_F (vrshrnbq, VRSHRNBQ) > +FUNCTION_ONLY_N_NO_F (vrshrntq, VRSHRNTQ) > FUNCTION_WITH_M_N_R (vshlq, VSHLQ) > +FUNCTION_ONLY_N_NO_F (vshrnbq, VSHRNBQ) > +FUNCTION_ONLY_N_NO_F (vshrntq, VSHRNTQ) > FUNCTION_WITH_RTX_M_N (vsubq, MINUS, VSUBQ) > FUNCTION (vuninitializedq, vuninitializedq_impl,) >=20 > diff --git a/gcc/config/arm/arm-mve-builtins-base.def b/gcc/config/arm/ar= m- > mve-builtins-base.def > index 3b42bf46e81..3dd40086663 100644 > --- a/gcc/config/arm/arm-mve-builtins-base.def > +++ b/gcc/config/arm/arm-mve-builtins-base.def > @@ -34,15 +34,23 @@ DEF_MVE_FUNCTION (vqaddq, binary_opt_n, > all_integer, m_or_none) > DEF_MVE_FUNCTION (vqdmulhq, binary_opt_n, all_signed, m_or_none) > DEF_MVE_FUNCTION (vqrdmulhq, binary_opt_n, all_signed, m_or_none) > DEF_MVE_FUNCTION (vqrshlq, binary_round_lshift, all_integer, m_or_none) > +DEF_MVE_FUNCTION (vqrshrnbq, binary_rshift_narrow, integer_16_32, > m_or_none) > +DEF_MVE_FUNCTION (vqrshrntq, binary_rshift_narrow, integer_16_32, > m_or_none) > DEF_MVE_FUNCTION (vqshlq, binary_lshift, all_integer, m_or_none) > DEF_MVE_FUNCTION (vqshlq, binary_lshift_r, all_integer, m_or_none) > +DEF_MVE_FUNCTION (vqshrnbq, binary_rshift_narrow, integer_16_32, > m_or_none) > +DEF_MVE_FUNCTION (vqshrntq, binary_rshift_narrow, integer_16_32, > m_or_none) > DEF_MVE_FUNCTION (vqsubq, binary_opt_n, all_integer, m_or_none) > DEF_MVE_FUNCTION (vreinterpretq, unary_convert, reinterpret_integer, > none) > DEF_MVE_FUNCTION (vrhaddq, binary, all_integer, mx_or_none) > DEF_MVE_FUNCTION (vrmulhq, binary, all_integer, mx_or_none) > DEF_MVE_FUNCTION (vrshlq, binary_round_lshift, all_integer, mx_or_none) > +DEF_MVE_FUNCTION (vrshrnbq, binary_rshift_narrow, integer_16_32, > m_or_none) > +DEF_MVE_FUNCTION (vrshrntq, binary_rshift_narrow, integer_16_32, > 
m_or_none) > DEF_MVE_FUNCTION (vshlq, binary_lshift, all_integer, mx_or_none) > DEF_MVE_FUNCTION (vshlq, binary_lshift_r, all_integer, m_or_none) // "_r= " > forms do not support the "x" predicate > +DEF_MVE_FUNCTION (vshrnbq, binary_rshift_narrow, integer_16_32, > m_or_none) > +DEF_MVE_FUNCTION (vshrntq, binary_rshift_narrow, integer_16_32, > m_or_none) > DEF_MVE_FUNCTION (vsubq, binary_opt_n, all_integer, mx_or_none) > DEF_MVE_FUNCTION (vuninitializedq, inherent, all_integer_with_64, none) > #undef REQUIRES_FLOAT > diff --git a/gcc/config/arm/arm-mve-builtins-base.h b/gcc/config/arm/arm- > mve-builtins-base.h > index 81d10f4a8f4..9e11ac83681 100644 > --- a/gcc/config/arm/arm-mve-builtins-base.h > +++ b/gcc/config/arm/arm-mve-builtins-base.h > @@ -39,13 +39,21 @@ extern const function_base *const vqaddq; > extern const function_base *const vqdmulhq; > extern const function_base *const vqrdmulhq; > extern const function_base *const vqrshlq; > +extern const function_base *const vqrshrnbq; > +extern const function_base *const vqrshrntq; > extern const function_base *const vqshlq; > +extern const function_base *const vqshrnbq; > +extern const function_base *const vqshrntq; > extern const function_base *const vqsubq; > extern const function_base *const vreinterpretq; > extern const function_base *const vrhaddq; > extern const function_base *const vrmulhq; > extern const function_base *const vrshlq; > +extern const function_base *const vrshrnbq; > +extern const function_base *const vrshrntq; > extern const function_base *const vshlq; > +extern const function_base *const vshrnbq; > +extern const function_base *const vshrntq; > extern const function_base *const vsubq; > extern const function_base *const vuninitializedq; >=20 > diff --git a/gcc/config/arm/arm-mve-builtins.cc b/gcc/config/arm/arm-mve- > builtins.cc > index c25b1be9903..667bbc58483 100644 > --- a/gcc/config/arm/arm-mve-builtins.cc > +++ b/gcc/config/arm/arm-mve-builtins.cc > @@ -672,7 +672,16 @@ function_instance::has_inactive_argument () const > if (mode_suffix_id =3D=3D MODE_r > || (base =3D=3D functions::vorrq && mode_suffix_id =3D=3D MODE_n) > || (base =3D=3D functions::vqrshlq && mode_suffix_id =3D=3D MODE_n= ) > - || (base =3D=3D functions::vrshlq && mode_suffix_id =3D=3D MODE_n)= ) > + || base =3D=3D functions::vqrshrnbq > + || base =3D=3D functions::vqrshrntq > + || base =3D=3D functions::vqshrnbq > + || base =3D=3D functions::vqshrntq > + || (base =3D=3D functions::vrshlq && mode_suffix_id =3D=3D MODE_n) > + || base =3D=3D functions::vrshrnbq > + || base =3D=3D functions::vrshrntq > + || base =3D=3D functions::vshrnbq > + || base =3D=3D functions::vshrntq > + ) ... The ')' should be on the previous line. 
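To spell out the nit, the intent is the hunk above reflowed along these
lines (no functional change):

  if (mode_suffix_id == MODE_r
      || (base == functions::vorrq && mode_suffix_id == MODE_n)
      || (base == functions::vqrshlq && mode_suffix_id == MODE_n)
      || base == functions::vqrshrnbq
      || base == functions::vqrshrntq
      || base == functions::vqshrnbq
      || base == functions::vqshrntq
      || (base == functions::vrshlq && mode_suffix_id == MODE_n)
      || base == functions::vrshrnbq
      || base == functions::vrshrntq
      || base == functions::vshrnbq
      || base == functions::vshrntq)
    return false;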
Thanks, Kyrill > return false; >=20 > return true; > diff --git a/gcc/config/arm/arm_mve.h b/gcc/config/arm/arm_mve.h > index 5fbea52c8ef..ed7852e2460 100644 > --- a/gcc/config/arm/arm_mve.h > +++ b/gcc/config/arm/arm_mve.h > @@ -113,7 +113,6 @@ > #define vrmlaldavhxq(__a, __b) __arm_vrmlaldavhxq(__a, __b) > #define vabavq(__a, __b, __c) __arm_vabavq(__a, __b, __c) > #define vbicq_m_n(__a, __imm, __p) __arm_vbicq_m_n(__a, __imm, __p) > -#define vqrshrnbq(__a, __b, __imm) __arm_vqrshrnbq(__a, __b, __imm) > #define vqrshrunbq(__a, __b, __imm) __arm_vqrshrunbq(__a, __b, __imm) > #define vrmlaldavhaq(__a, __b, __c) __arm_vrmlaldavhaq(__a, __b, __c) > #define vshlcq(__a, __b, __imm) __arm_vshlcq(__a, __b, __imm) > @@ -176,13 +175,6 @@ > #define vrmlaldavhxq_p(__a, __b, __p) __arm_vrmlaldavhxq_p(__a, __b, > __p) > #define vrmlsldavhq_p(__a, __b, __p) __arm_vrmlsldavhq_p(__a, __b, __p) > #define vrmlsldavhxq_p(__a, __b, __p) __arm_vrmlsldavhxq_p(__a, __b, > __p) > -#define vqrshrntq(__a, __b, __imm) __arm_vqrshrntq(__a, __b, __imm) > -#define vqshrnbq(__a, __b, __imm) __arm_vqshrnbq(__a, __b, __imm) > -#define vqshrntq(__a, __b, __imm) __arm_vqshrntq(__a, __b, __imm) > -#define vrshrnbq(__a, __b, __imm) __arm_vrshrnbq(__a, __b, __imm) > -#define vrshrntq(__a, __b, __imm) __arm_vrshrntq(__a, __b, __imm) > -#define vshrnbq(__a, __b, __imm) __arm_vshrnbq(__a, __b, __imm) > -#define vshrntq(__a, __b, __imm) __arm_vshrntq(__a, __b, __imm) > #define vmlaldavaq(__a, __b, __c) __arm_vmlaldavaq(__a, __b, __c) > #define vmlaldavaxq(__a, __b, __c) __arm_vmlaldavaxq(__a, __b, __c) > #define vmlsldavaq(__a, __b, __c) __arm_vmlsldavaq(__a, __b, __c) > @@ -244,24 +236,16 @@ > #define vmulltq_poly_m(__inactive, __a, __b, __p) > __arm_vmulltq_poly_m(__inactive, __a, __b, __p) > #define vqdmullbq_m(__inactive, __a, __b, __p) > __arm_vqdmullbq_m(__inactive, __a, __b, __p) > #define vqdmulltq_m(__inactive, __a, __b, __p) > __arm_vqdmulltq_m(__inactive, __a, __b, __p) > -#define vqrshrnbq_m(__a, __b, __imm, __p) __arm_vqrshrnbq_m(__a, __b, > __imm, __p) > -#define vqrshrntq_m(__a, __b, __imm, __p) __arm_vqrshrntq_m(__a, __b, > __imm, __p) > #define vqrshrunbq_m(__a, __b, __imm, __p) __arm_vqrshrunbq_m(__a, > __b, __imm, __p) > #define vqrshruntq_m(__a, __b, __imm, __p) __arm_vqrshruntq_m(__a, > __b, __imm, __p) > -#define vqshrnbq_m(__a, __b, __imm, __p) __arm_vqshrnbq_m(__a, __b, > __imm, __p) > -#define vqshrntq_m(__a, __b, __imm, __p) __arm_vqshrntq_m(__a, __b, > __imm, __p) > #define vqshrunbq_m(__a, __b, __imm, __p) __arm_vqshrunbq_m(__a, > __b, __imm, __p) > #define vqshruntq_m(__a, __b, __imm, __p) __arm_vqshruntq_m(__a, __b, > __imm, __p) > #define vrmlaldavhaq_p(__a, __b, __c, __p) __arm_vrmlaldavhaq_p(__a, > __b, __c, __p) > #define vrmlaldavhaxq_p(__a, __b, __c, __p) __arm_vrmlaldavhaxq_p(__a, > __b, __c, __p) > #define vrmlsldavhaq_p(__a, __b, __c, __p) __arm_vrmlsldavhaq_p(__a, > __b, __c, __p) > #define vrmlsldavhaxq_p(__a, __b, __c, __p) __arm_vrmlsldavhaxq_p(__a, > __b, __c, __p) > -#define vrshrnbq_m(__a, __b, __imm, __p) __arm_vrshrnbq_m(__a, __b, > __imm, __p) > -#define vrshrntq_m(__a, __b, __imm, __p) __arm_vrshrntq_m(__a, __b, > __imm, __p) > #define vshllbq_m(__inactive, __a, __imm, __p) > __arm_vshllbq_m(__inactive, __a, __imm, __p) > #define vshlltq_m(__inactive, __a, __imm, __p) __arm_vshlltq_m(__inactiv= e, > __a, __imm, __p) > -#define vshrnbq_m(__a, __b, __imm, __p) __arm_vshrnbq_m(__a, __b, > __imm, __p) > -#define vshrntq_m(__a, __b, __imm, __p) __arm_vshrntq_m(__a, __b, > __imm, __p) > 
#define vstrbq_scatter_offset(__base, __offset, __value) > __arm_vstrbq_scatter_offset(__base, __offset, __value) > #define vstrbq(__addr, __value) __arm_vstrbq(__addr, __value) > #define vstrwq_scatter_base(__addr, __offset, __value) > __arm_vstrwq_scatter_base(__addr, __offset, __value) > @@ -905,10 +889,6 @@ > #define vcvtq_m_f16_u16(__inactive, __a, __p) > __arm_vcvtq_m_f16_u16(__inactive, __a, __p) > #define vcvtq_m_f32_s32(__inactive, __a, __p) > __arm_vcvtq_m_f32_s32(__inactive, __a, __p) > #define vcvtq_m_f32_u32(__inactive, __a, __p) > __arm_vcvtq_m_f32_u32(__inactive, __a, __p) > -#define vqrshrnbq_n_s16(__a, __b, __imm) __arm_vqrshrnbq_n_s16(__a, > __b, __imm) > -#define vqrshrnbq_n_u16(__a, __b, __imm) __arm_vqrshrnbq_n_u16(__a, > __b, __imm) > -#define vqrshrnbq_n_s32(__a, __b, __imm) __arm_vqrshrnbq_n_s32(__a, > __b, __imm) > -#define vqrshrnbq_n_u32(__a, __b, __imm) __arm_vqrshrnbq_n_u32(__a, > __b, __imm) > #define vqrshrunbq_n_s16(__a, __b, __imm) > __arm_vqrshrunbq_n_s16(__a, __b, __imm) > #define vqrshrunbq_n_s32(__a, __b, __imm) > __arm_vqrshrunbq_n_s32(__a, __b, __imm) > #define vrmlaldavhaq_s32(__a, __b, __c) __arm_vrmlaldavhaq_s32(__a, > __b, __c) > @@ -1167,13 +1147,6 @@ > #define vrev16q_m_u8(__inactive, __a, __p) > __arm_vrev16q_m_u8(__inactive, __a, __p) > #define vrmlaldavhq_p_u32(__a, __b, __p) __arm_vrmlaldavhq_p_u32(__a, > __b, __p) > #define vmvnq_m_n_s16(__inactive, __imm, __p) > __arm_vmvnq_m_n_s16(__inactive, __imm, __p) > -#define vqrshrntq_n_s16(__a, __b, __imm) __arm_vqrshrntq_n_s16(__a, > __b, __imm) > -#define vqshrnbq_n_s16(__a, __b, __imm) __arm_vqshrnbq_n_s16(__a, > __b, __imm) > -#define vqshrntq_n_s16(__a, __b, __imm) __arm_vqshrntq_n_s16(__a, > __b, __imm) > -#define vrshrnbq_n_s16(__a, __b, __imm) __arm_vrshrnbq_n_s16(__a, > __b, __imm) > -#define vrshrntq_n_s16(__a, __b, __imm) __arm_vrshrntq_n_s16(__a, __b, > __imm) > -#define vshrnbq_n_s16(__a, __b, __imm) __arm_vshrnbq_n_s16(__a, __b, > __imm) > -#define vshrntq_n_s16(__a, __b, __imm) __arm_vshrntq_n_s16(__a, __b, > __imm) > #define vcmlaq_f16(__a, __b, __c) __arm_vcmlaq_f16(__a, __b, __c) > #define vcmlaq_rot180_f16(__a, __b, __c) __arm_vcmlaq_rot180_f16(__a, > __b, __c) > #define vcmlaq_rot270_f16(__a, __b, __c) __arm_vcmlaq_rot270_f16(__a, > __b, __c) > @@ -1239,13 +1212,6 @@ > #define vcvtq_m_u16_f16(__inactive, __a, __p) > __arm_vcvtq_m_u16_f16(__inactive, __a, __p) > #define vqmovunbq_m_s16(__a, __b, __p) __arm_vqmovunbq_m_s16(__a, > __b, __p) > #define vqmovuntq_m_s16(__a, __b, __p) __arm_vqmovuntq_m_s16(__a, > __b, __p) > -#define vqrshrntq_n_u16(__a, __b, __imm) __arm_vqrshrntq_n_u16(__a, > __b, __imm) > -#define vqshrnbq_n_u16(__a, __b, __imm) __arm_vqshrnbq_n_u16(__a, > __b, __imm) > -#define vqshrntq_n_u16(__a, __b, __imm) __arm_vqshrntq_n_u16(__a, > __b, __imm) > -#define vrshrnbq_n_u16(__a, __b, __imm) __arm_vrshrnbq_n_u16(__a, > __b, __imm) > -#define vrshrntq_n_u16(__a, __b, __imm) __arm_vrshrntq_n_u16(__a, __b, > __imm) > -#define vshrnbq_n_u16(__a, __b, __imm) __arm_vshrnbq_n_u16(__a, __b, > __imm) > -#define vshrntq_n_u16(__a, __b, __imm) __arm_vshrntq_n_u16(__a, __b, > __imm) > #define vmlaldavaq_u16(__a, __b, __c) __arm_vmlaldavaq_u16(__a, __b, > __c) > #define vmlaldavq_p_u16(__a, __b, __p) __arm_vmlaldavq_p_u16(__a, __b, > __p) > #define vmovlbq_m_u8(__inactive, __a, __p) > __arm_vmovlbq_m_u8(__inactive, __a, __p) > @@ -1256,13 +1222,6 @@ > #define vqmovntq_m_u16(__a, __b, __p) __arm_vqmovntq_m_u16(__a, > __b, __p) > #define vrev32q_m_u8(__inactive, __a, __p) > 
__arm_vrev32q_m_u8(__inactive, __a, __p) > #define vmvnq_m_n_s32(__inactive, __imm, __p) > __arm_vmvnq_m_n_s32(__inactive, __imm, __p) > -#define vqrshrntq_n_s32(__a, __b, __imm) __arm_vqrshrntq_n_s32(__a, > __b, __imm) > -#define vqshrnbq_n_s32(__a, __b, __imm) __arm_vqshrnbq_n_s32(__a, > __b, __imm) > -#define vqshrntq_n_s32(__a, __b, __imm) __arm_vqshrntq_n_s32(__a, > __b, __imm) > -#define vrshrnbq_n_s32(__a, __b, __imm) __arm_vrshrnbq_n_s32(__a, > __b, __imm) > -#define vrshrntq_n_s32(__a, __b, __imm) __arm_vrshrntq_n_s32(__a, __b, > __imm) > -#define vshrnbq_n_s32(__a, __b, __imm) __arm_vshrnbq_n_s32(__a, __b, > __imm) > -#define vshrntq_n_s32(__a, __b, __imm) __arm_vshrntq_n_s32(__a, __b, > __imm) > #define vcmlaq_f32(__a, __b, __c) __arm_vcmlaq_f32(__a, __b, __c) > #define vcmlaq_rot180_f32(__a, __b, __c) __arm_vcmlaq_rot180_f32(__a, > __b, __c) > #define vcmlaq_rot270_f32(__a, __b, __c) __arm_vcmlaq_rot270_f32(__a, > __b, __c) > @@ -1328,13 +1287,6 @@ > #define vcvtq_m_u32_f32(__inactive, __a, __p) > __arm_vcvtq_m_u32_f32(__inactive, __a, __p) > #define vqmovunbq_m_s32(__a, __b, __p) __arm_vqmovunbq_m_s32(__a, > __b, __p) > #define vqmovuntq_m_s32(__a, __b, __p) __arm_vqmovuntq_m_s32(__a, > __b, __p) > -#define vqrshrntq_n_u32(__a, __b, __imm) __arm_vqrshrntq_n_u32(__a, > __b, __imm) > -#define vqshrnbq_n_u32(__a, __b, __imm) __arm_vqshrnbq_n_u32(__a, > __b, __imm) > -#define vqshrntq_n_u32(__a, __b, __imm) __arm_vqshrntq_n_u32(__a, > __b, __imm) > -#define vrshrnbq_n_u32(__a, __b, __imm) __arm_vrshrnbq_n_u32(__a, > __b, __imm) > -#define vrshrntq_n_u32(__a, __b, __imm) __arm_vrshrntq_n_u32(__a, __b, > __imm) > -#define vshrnbq_n_u32(__a, __b, __imm) __arm_vshrnbq_n_u32(__a, __b, > __imm) > -#define vshrntq_n_u32(__a, __b, __imm) __arm_vshrntq_n_u32(__a, __b, > __imm) > #define vmlaldavaq_u32(__a, __b, __c) __arm_vmlaldavaq_u32(__a, __b, > __c) > #define vmlaldavq_p_u32(__a, __b, __p) __arm_vmlaldavq_p_u32(__a, __b, > __p) > #define vmovlbq_m_u16(__inactive, __a, __p) > __arm_vmovlbq_m_u16(__inactive, __a, __p) > @@ -1514,26 +1466,10 @@ > #define vqdmulltq_m_n_s16(__inactive, __a, __b, __p) > __arm_vqdmulltq_m_n_s16(__inactive, __a, __b, __p) > #define vqdmulltq_m_s32(__inactive, __a, __b, __p) > __arm_vqdmulltq_m_s32(__inactive, __a, __b, __p) > #define vqdmulltq_m_s16(__inactive, __a, __b, __p) > __arm_vqdmulltq_m_s16(__inactive, __a, __b, __p) > -#define vqrshrnbq_m_n_s32(__a, __b, __imm, __p) > __arm_vqrshrnbq_m_n_s32(__a, __b, __imm, __p) > -#define vqrshrnbq_m_n_s16(__a, __b, __imm, __p) > __arm_vqrshrnbq_m_n_s16(__a, __b, __imm, __p) > -#define vqrshrnbq_m_n_u32(__a, __b, __imm, __p) > __arm_vqrshrnbq_m_n_u32(__a, __b, __imm, __p) > -#define vqrshrnbq_m_n_u16(__a, __b, __imm, __p) > __arm_vqrshrnbq_m_n_u16(__a, __b, __imm, __p) > -#define vqrshrntq_m_n_s32(__a, __b, __imm, __p) > __arm_vqrshrntq_m_n_s32(__a, __b, __imm, __p) > -#define vqrshrntq_m_n_s16(__a, __b, __imm, __p) > __arm_vqrshrntq_m_n_s16(__a, __b, __imm, __p) > -#define vqrshrntq_m_n_u32(__a, __b, __imm, __p) > __arm_vqrshrntq_m_n_u32(__a, __b, __imm, __p) > -#define vqrshrntq_m_n_u16(__a, __b, __imm, __p) > __arm_vqrshrntq_m_n_u16(__a, __b, __imm, __p) > #define vqrshrunbq_m_n_s32(__a, __b, __imm, __p) > __arm_vqrshrunbq_m_n_s32(__a, __b, __imm, __p) > #define vqrshrunbq_m_n_s16(__a, __b, __imm, __p) > __arm_vqrshrunbq_m_n_s16(__a, __b, __imm, __p) > #define vqrshruntq_m_n_s32(__a, __b, __imm, __p) > __arm_vqrshruntq_m_n_s32(__a, __b, __imm, __p) > #define vqrshruntq_m_n_s16(__a, __b, __imm, __p) > 
__arm_vqrshruntq_m_n_s16(__a, __b, __imm, __p) > -#define vqshrnbq_m_n_s32(__a, __b, __imm, __p) > __arm_vqshrnbq_m_n_s32(__a, __b, __imm, __p) > -#define vqshrnbq_m_n_s16(__a, __b, __imm, __p) > __arm_vqshrnbq_m_n_s16(__a, __b, __imm, __p) > -#define vqshrnbq_m_n_u32(__a, __b, __imm, __p) > __arm_vqshrnbq_m_n_u32(__a, __b, __imm, __p) > -#define vqshrnbq_m_n_u16(__a, __b, __imm, __p) > __arm_vqshrnbq_m_n_u16(__a, __b, __imm, __p) > -#define vqshrntq_m_n_s32(__a, __b, __imm, __p) > __arm_vqshrntq_m_n_s32(__a, __b, __imm, __p) > -#define vqshrntq_m_n_s16(__a, __b, __imm, __p) > __arm_vqshrntq_m_n_s16(__a, __b, __imm, __p) > -#define vqshrntq_m_n_u32(__a, __b, __imm, __p) > __arm_vqshrntq_m_n_u32(__a, __b, __imm, __p) > -#define vqshrntq_m_n_u16(__a, __b, __imm, __p) > __arm_vqshrntq_m_n_u16(__a, __b, __imm, __p) > #define vqshrunbq_m_n_s32(__a, __b, __imm, __p) > __arm_vqshrunbq_m_n_s32(__a, __b, __imm, __p) > #define vqshrunbq_m_n_s16(__a, __b, __imm, __p) > __arm_vqshrunbq_m_n_s16(__a, __b, __imm, __p) > #define vqshruntq_m_n_s32(__a, __b, __imm, __p) > __arm_vqshruntq_m_n_s32(__a, __b, __imm, __p) > @@ -1543,14 +1479,6 @@ > #define vrmlaldavhaxq_p_s32(__a, __b, __c, __p) > __arm_vrmlaldavhaxq_p_s32(__a, __b, __c, __p) > #define vrmlsldavhaq_p_s32(__a, __b, __c, __p) > __arm_vrmlsldavhaq_p_s32(__a, __b, __c, __p) > #define vrmlsldavhaxq_p_s32(__a, __b, __c, __p) > __arm_vrmlsldavhaxq_p_s32(__a, __b, __c, __p) > -#define vrshrnbq_m_n_s32(__a, __b, __imm, __p) > __arm_vrshrnbq_m_n_s32(__a, __b, __imm, __p) > -#define vrshrnbq_m_n_s16(__a, __b, __imm, __p) > __arm_vrshrnbq_m_n_s16(__a, __b, __imm, __p) > -#define vrshrnbq_m_n_u32(__a, __b, __imm, __p) > __arm_vrshrnbq_m_n_u32(__a, __b, __imm, __p) > -#define vrshrnbq_m_n_u16(__a, __b, __imm, __p) > __arm_vrshrnbq_m_n_u16(__a, __b, __imm, __p) > -#define vrshrntq_m_n_s32(__a, __b, __imm, __p) > __arm_vrshrntq_m_n_s32(__a, __b, __imm, __p) > -#define vrshrntq_m_n_s16(__a, __b, __imm, __p) > __arm_vrshrntq_m_n_s16(__a, __b, __imm, __p) > -#define vrshrntq_m_n_u32(__a, __b, __imm, __p) > __arm_vrshrntq_m_n_u32(__a, __b, __imm, __p) > -#define vrshrntq_m_n_u16(__a, __b, __imm, __p) > __arm_vrshrntq_m_n_u16(__a, __b, __imm, __p) > #define vshllbq_m_n_s8(__inactive, __a, __imm, __p) > __arm_vshllbq_m_n_s8(__inactive, __a, __imm, __p) > #define vshllbq_m_n_s16(__inactive, __a, __imm, __p) > __arm_vshllbq_m_n_s16(__inactive, __a, __imm, __p) > #define vshllbq_m_n_u8(__inactive, __a, __imm, __p) > __arm_vshllbq_m_n_u8(__inactive, __a, __imm, __p) > @@ -1559,14 +1487,6 @@ > #define vshlltq_m_n_s16(__inactive, __a, __imm, __p) > __arm_vshlltq_m_n_s16(__inactive, __a, __imm, __p) > #define vshlltq_m_n_u8(__inactive, __a, __imm, __p) > __arm_vshlltq_m_n_u8(__inactive, __a, __imm, __p) > #define vshlltq_m_n_u16(__inactive, __a, __imm, __p) > __arm_vshlltq_m_n_u16(__inactive, __a, __imm, __p) > -#define vshrnbq_m_n_s32(__a, __b, __imm, __p) > __arm_vshrnbq_m_n_s32(__a, __b, __imm, __p) > -#define vshrnbq_m_n_s16(__a, __b, __imm, __p) > __arm_vshrnbq_m_n_s16(__a, __b, __imm, __p) > -#define vshrnbq_m_n_u32(__a, __b, __imm, __p) > __arm_vshrnbq_m_n_u32(__a, __b, __imm, __p) > -#define vshrnbq_m_n_u16(__a, __b, __imm, __p) > __arm_vshrnbq_m_n_u16(__a, __b, __imm, __p) > -#define vshrntq_m_n_s32(__a, __b, __imm, __p) > __arm_vshrntq_m_n_s32(__a, __b, __imm, __p) > -#define vshrntq_m_n_s16(__a, __b, __imm, __p) > __arm_vshrntq_m_n_s16(__a, __b, __imm, __p) > -#define vshrntq_m_n_u32(__a, __b, __imm, __p) > __arm_vshrntq_m_n_u32(__a, __b, __imm, __p) > -#define 
vshrntq_m_n_u16(__a, __b, __imm, __p) > __arm_vshrntq_m_n_u16(__a, __b, __imm, __p) > #define vbicq_m_f32(__inactive, __a, __b, __p) > __arm_vbicq_m_f32(__inactive, __a, __b, __p) > #define vbicq_m_f16(__inactive, __a, __b, __p) > __arm_vbicq_m_f16(__inactive, __a, __b, __p) > #define vbrsrq_m_n_f32(__inactive, __a, __b, __p) > __arm_vbrsrq_m_n_f32(__inactive, __a, __b, __p) > @@ -4525,34 +4445,6 @@ __arm_vbicq_m_n_u32 (uint32x4_t __a, const int > __imm, mve_pred16_t __p) > return __builtin_mve_vbicq_m_n_uv4si (__a, __imm, __p); > } >=20 > -__extension__ extern __inline int8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqrshrnbq_n_s16 (int8x16_t __a, int16x8_t __b, const int __imm) > -{ > - return __builtin_mve_vqrshrnbq_n_sv8hi (__a, __b, __imm); > -} > - > -__extension__ extern __inline uint8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqrshrnbq_n_u16 (uint8x16_t __a, uint16x8_t __b, const int __imm) > -{ > - return __builtin_mve_vqrshrnbq_n_uv8hi (__a, __b, __imm); > -} > - > -__extension__ extern __inline int16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqrshrnbq_n_s32 (int16x8_t __a, int32x4_t __b, const int __imm) > -{ > - return __builtin_mve_vqrshrnbq_n_sv4si (__a, __b, __imm); > -} > - > -__extension__ extern __inline uint16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqrshrnbq_n_u32 (uint16x8_t __a, uint32x4_t __b, const int __imm) > -{ > - return __builtin_mve_vqrshrnbq_n_uv4si (__a, __b, __imm); > -} > - > __extension__ extern __inline uint8x16_t > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > __arm_vqrshrunbq_n_s16 (uint8x16_t __a, int16x8_t __b, const int __imm) > @@ -6316,55 +6208,6 @@ __arm_vmvnq_m_n_s16 (int16x8_t __inactive, > const int __imm, mve_pred16_t __p) > return __builtin_mve_vmvnq_m_n_sv8hi (__inactive, __imm, __p); > } >=20 > -__extension__ extern __inline int8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqrshrntq_n_s16 (int8x16_t __a, int16x8_t __b, const int __imm) > -{ > - return __builtin_mve_vqrshrntq_n_sv8hi (__a, __b, __imm); > -} > - > -__extension__ extern __inline int8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqshrnbq_n_s16 (int8x16_t __a, int16x8_t __b, const int __imm) > -{ > - return __builtin_mve_vqshrnbq_n_sv8hi (__a, __b, __imm); > -} > - > -__extension__ extern __inline int8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqshrntq_n_s16 (int8x16_t __a, int16x8_t __b, const int __imm) > -{ > - return __builtin_mve_vqshrntq_n_sv8hi (__a, __b, __imm); > -} > - > -__extension__ extern __inline int8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vrshrnbq_n_s16 (int8x16_t __a, int16x8_t __b, const int __imm) > -{ > - return __builtin_mve_vrshrnbq_n_sv8hi (__a, __b, __imm); > -} > - > -__extension__ extern __inline int8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vrshrntq_n_s16 (int8x16_t __a, int16x8_t __b, const int __imm) > -{ > - return __builtin_mve_vrshrntq_n_sv8hi (__a, __b, __imm); > -} > - > -__extension__ extern __inline int8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vshrnbq_n_s16 (int8x16_t __a, int16x8_t __b, const int __imm) > -{ > - return __builtin_mve_vshrnbq_n_sv8hi (__a, __b, __imm); > -} > - > -__extension__ 
extern __inline int8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vshrntq_n_s16 (int8x16_t __a, int16x8_t __b, const int __imm) > -{ > - return __builtin_mve_vshrntq_n_sv8hi (__a, __b, __imm); > -} > - > __extension__ extern __inline int64_t > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > __arm_vmlaldavaq_s16 (int64_t __a, int16x8_t __b, int16x8_t __c) > @@ -6512,55 +6355,6 @@ __arm_vqmovuntq_m_s16 (uint8x16_t __a, > int16x8_t __b, mve_pred16_t __p) > return __builtin_mve_vqmovuntq_m_sv8hi (__a, __b, __p); > } >=20 > -__extension__ extern __inline uint8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqrshrntq_n_u16 (uint8x16_t __a, uint16x8_t __b, const int __imm) > -{ > - return __builtin_mve_vqrshrntq_n_uv8hi (__a, __b, __imm); > -} > - > -__extension__ extern __inline uint8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqshrnbq_n_u16 (uint8x16_t __a, uint16x8_t __b, const int __imm) > -{ > - return __builtin_mve_vqshrnbq_n_uv8hi (__a, __b, __imm); > -} > - > -__extension__ extern __inline uint8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqshrntq_n_u16 (uint8x16_t __a, uint16x8_t __b, const int __imm) > -{ > - return __builtin_mve_vqshrntq_n_uv8hi (__a, __b, __imm); > -} > - > -__extension__ extern __inline uint8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vrshrnbq_n_u16 (uint8x16_t __a, uint16x8_t __b, const int __imm) > -{ > - return __builtin_mve_vrshrnbq_n_uv8hi (__a, __b, __imm); > -} > - > -__extension__ extern __inline uint8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vrshrntq_n_u16 (uint8x16_t __a, uint16x8_t __b, const int __imm) > -{ > - return __builtin_mve_vrshrntq_n_uv8hi (__a, __b, __imm); > -} > - > -__extension__ extern __inline uint8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vshrnbq_n_u16 (uint8x16_t __a, uint16x8_t __b, const int __imm) > -{ > - return __builtin_mve_vshrnbq_n_uv8hi (__a, __b, __imm); > -} > - > -__extension__ extern __inline uint8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vshrntq_n_u16 (uint8x16_t __a, uint16x8_t __b, const int __imm) > -{ > - return __builtin_mve_vshrntq_n_uv8hi (__a, __b, __imm); > -} > - > __extension__ extern __inline uint64_t > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > __arm_vmlaldavaq_u16 (uint64_t __a, uint16x8_t __b, uint16x8_t __c) > @@ -6631,55 +6425,6 @@ __arm_vmvnq_m_n_s32 (int32x4_t __inactive, > const int __imm, mve_pred16_t __p) > return __builtin_mve_vmvnq_m_n_sv4si (__inactive, __imm, __p); > } >=20 > -__extension__ extern __inline int16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqrshrntq_n_s32 (int16x8_t __a, int32x4_t __b, const int __imm) > -{ > - return __builtin_mve_vqrshrntq_n_sv4si (__a, __b, __imm); > -} > - > -__extension__ extern __inline int16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqshrnbq_n_s32 (int16x8_t __a, int32x4_t __b, const int __imm) > -{ > - return __builtin_mve_vqshrnbq_n_sv4si (__a, __b, __imm); > -} > - > -__extension__ extern __inline int16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqshrntq_n_s32 (int16x8_t __a, int32x4_t __b, const int __imm) > -{ > - return __builtin_mve_vqshrntq_n_sv4si (__a, 
__b, __imm); > -} > - > -__extension__ extern __inline int16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vrshrnbq_n_s32 (int16x8_t __a, int32x4_t __b, const int __imm) > -{ > - return __builtin_mve_vrshrnbq_n_sv4si (__a, __b, __imm); > -} > - > -__extension__ extern __inline int16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vrshrntq_n_s32 (int16x8_t __a, int32x4_t __b, const int __imm) > -{ > - return __builtin_mve_vrshrntq_n_sv4si (__a, __b, __imm); > -} > - > -__extension__ extern __inline int16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vshrnbq_n_s32 (int16x8_t __a, int32x4_t __b, const int __imm) > -{ > - return __builtin_mve_vshrnbq_n_sv4si (__a, __b, __imm); > -} > - > -__extension__ extern __inline int16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vshrntq_n_s32 (int16x8_t __a, int32x4_t __b, const int __imm) > -{ > - return __builtin_mve_vshrntq_n_sv4si (__a, __b, __imm); > -} > - > __extension__ extern __inline int64_t > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > __arm_vmlaldavaq_s32 (int64_t __a, int32x4_t __b, int32x4_t __c) > @@ -6827,55 +6572,6 @@ __arm_vqmovuntq_m_s32 (uint16x8_t __a, > int32x4_t __b, mve_pred16_t __p) > return __builtin_mve_vqmovuntq_m_sv4si (__a, __b, __p); > } >=20 > -__extension__ extern __inline uint16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqrshrntq_n_u32 (uint16x8_t __a, uint32x4_t __b, const int __imm) > -{ > - return __builtin_mve_vqrshrntq_n_uv4si (__a, __b, __imm); > -} > - > -__extension__ extern __inline uint16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqshrnbq_n_u32 (uint16x8_t __a, uint32x4_t __b, const int __imm) > -{ > - return __builtin_mve_vqshrnbq_n_uv4si (__a, __b, __imm); > -} > - > -__extension__ extern __inline uint16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqshrntq_n_u32 (uint16x8_t __a, uint32x4_t __b, const int __imm) > -{ > - return __builtin_mve_vqshrntq_n_uv4si (__a, __b, __imm); > -} > - > -__extension__ extern __inline uint16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vrshrnbq_n_u32 (uint16x8_t __a, uint32x4_t __b, const int __imm) > -{ > - return __builtin_mve_vrshrnbq_n_uv4si (__a, __b, __imm); > -} > - > -__extension__ extern __inline uint16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vrshrntq_n_u32 (uint16x8_t __a, uint32x4_t __b, const int __imm) > -{ > - return __builtin_mve_vrshrntq_n_uv4si (__a, __b, __imm); > -} > - > -__extension__ extern __inline uint16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vshrnbq_n_u32 (uint16x8_t __a, uint32x4_t __b, const int __imm) > -{ > - return __builtin_mve_vshrnbq_n_uv4si (__a, __b, __imm); > -} > - > -__extension__ extern __inline uint16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vshrntq_n_u32 (uint16x8_t __a, uint32x4_t __b, const int __imm) > -{ > - return __builtin_mve_vshrntq_n_uv4si (__a, __b, __imm); > -} > - > __extension__ extern __inline uint64_t > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > __arm_vmlaldavaq_u32 (uint64_t __a, uint32x4_t __b, uint32x4_t __c) > @@ -8101,62 +7797,6 @@ __arm_vqdmulltq_m_s16 (int32x4_t __inactive, > int16x8_t __a, int16x8_t __b, mve_p > return 
__builtin_mve_vqdmulltq_m_sv8hi (__inactive, __a, __b, __p); > } >=20 > -__extension__ extern __inline int16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqrshrnbq_m_n_s32 (int16x8_t __a, int32x4_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __builtin_mve_vqrshrnbq_m_n_sv4si (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline int8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqrshrnbq_m_n_s16 (int8x16_t __a, int16x8_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __builtin_mve_vqrshrnbq_m_n_sv8hi (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline uint16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqrshrnbq_m_n_u32 (uint16x8_t __a, uint32x4_t __b, const int > __imm, mve_pred16_t __p) > -{ > - return __builtin_mve_vqrshrnbq_m_n_uv4si (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline uint8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqrshrnbq_m_n_u16 (uint8x16_t __a, uint16x8_t __b, const int > __imm, mve_pred16_t __p) > -{ > - return __builtin_mve_vqrshrnbq_m_n_uv8hi (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline int16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqrshrntq_m_n_s32 (int16x8_t __a, int32x4_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __builtin_mve_vqrshrntq_m_n_sv4si (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline int8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqrshrntq_m_n_s16 (int8x16_t __a, int16x8_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __builtin_mve_vqrshrntq_m_n_sv8hi (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline uint16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqrshrntq_m_n_u32 (uint16x8_t __a, uint32x4_t __b, const int > __imm, mve_pred16_t __p) > -{ > - return __builtin_mve_vqrshrntq_m_n_uv4si (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline uint8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqrshrntq_m_n_u16 (uint8x16_t __a, uint16x8_t __b, const int > __imm, mve_pred16_t __p) > -{ > - return __builtin_mve_vqrshrntq_m_n_uv8hi (__a, __b, __imm, __p); > -} > - > __extension__ extern __inline uint16x8_t > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > __arm_vqrshrunbq_m_n_s32 (uint16x8_t __a, int32x4_t __b, const int > __imm, mve_pred16_t __p) > @@ -8185,62 +7825,6 @@ __arm_vqrshruntq_m_n_s16 (uint8x16_t __a, > int16x8_t __b, const int __imm, mve_pr > return __builtin_mve_vqrshruntq_m_n_sv8hi (__a, __b, __imm, __p); > } >=20 > -__extension__ extern __inline int16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqshrnbq_m_n_s32 (int16x8_t __a, int32x4_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __builtin_mve_vqshrnbq_m_n_sv4si (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline int8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqshrnbq_m_n_s16 (int8x16_t __a, int16x8_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __builtin_mve_vqshrnbq_m_n_sv8hi (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline uint16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqshrnbq_m_n_u32 (uint16x8_t __a, 
uint32x4_t __b, const int > __imm, mve_pred16_t __p) > -{ > - return __builtin_mve_vqshrnbq_m_n_uv4si (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline uint8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqshrnbq_m_n_u16 (uint8x16_t __a, uint16x8_t __b, const int > __imm, mve_pred16_t __p) > -{ > - return __builtin_mve_vqshrnbq_m_n_uv8hi (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline int16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqshrntq_m_n_s32 (int16x8_t __a, int32x4_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __builtin_mve_vqshrntq_m_n_sv4si (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline int8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqshrntq_m_n_s16 (int8x16_t __a, int16x8_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __builtin_mve_vqshrntq_m_n_sv8hi (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline uint16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqshrntq_m_n_u32 (uint16x8_t __a, uint32x4_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __builtin_mve_vqshrntq_m_n_uv4si (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline uint8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqshrntq_m_n_u16 (uint8x16_t __a, uint16x8_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __builtin_mve_vqshrntq_m_n_uv8hi (__a, __b, __imm, __p); > -} > - > __extension__ extern __inline uint16x8_t > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > __arm_vqshrunbq_m_n_s32 (uint16x8_t __a, int32x4_t __b, const int > __imm, mve_pred16_t __p) > @@ -8304,62 +7888,6 @@ __arm_vrmlsldavhaxq_p_s32 (int64_t __a, > int32x4_t __b, int32x4_t __c, mve_pred16 > return __builtin_mve_vrmlsldavhaxq_p_sv4si (__a, __b, __c, __p); > } >=20 > -__extension__ extern __inline int16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vrshrnbq_m_n_s32 (int16x8_t __a, int32x4_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __builtin_mve_vrshrnbq_m_n_sv4si (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline int8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vrshrnbq_m_n_s16 (int8x16_t __a, int16x8_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __builtin_mve_vrshrnbq_m_n_sv8hi (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline uint16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vrshrnbq_m_n_u32 (uint16x8_t __a, uint32x4_t __b, const int > __imm, mve_pred16_t __p) > -{ > - return __builtin_mve_vrshrnbq_m_n_uv4si (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline uint8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vrshrnbq_m_n_u16 (uint8x16_t __a, uint16x8_t __b, const int > __imm, mve_pred16_t __p) > -{ > - return __builtin_mve_vrshrnbq_m_n_uv8hi (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline int16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vrshrntq_m_n_s32 (int16x8_t __a, int32x4_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __builtin_mve_vrshrntq_m_n_sv4si (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline int8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, 
__artificial__)) > -__arm_vrshrntq_m_n_s16 (int8x16_t __a, int16x8_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __builtin_mve_vrshrntq_m_n_sv8hi (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline uint16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vrshrntq_m_n_u32 (uint16x8_t __a, uint32x4_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __builtin_mve_vrshrntq_m_n_uv4si (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline uint8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vrshrntq_m_n_u16 (uint8x16_t __a, uint16x8_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __builtin_mve_vrshrntq_m_n_uv8hi (__a, __b, __imm, __p); > -} > - > __extension__ extern __inline int16x8_t > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > __arm_vshllbq_m_n_s8 (int16x8_t __inactive, int8x16_t __a, const int > __imm, mve_pred16_t __p) > @@ -8416,62 +7944,6 @@ __arm_vshlltq_m_n_u16 (uint32x4_t __inactive, > uint16x8_t __a, const int __imm, m > return __builtin_mve_vshlltq_m_n_uv8hi (__inactive, __a, __imm, __p); > } >=20 > -__extension__ extern __inline int16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vshrnbq_m_n_s32 (int16x8_t __a, int32x4_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __builtin_mve_vshrnbq_m_n_sv4si (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline int8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vshrnbq_m_n_s16 (int8x16_t __a, int16x8_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __builtin_mve_vshrnbq_m_n_sv8hi (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline uint16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vshrnbq_m_n_u32 (uint16x8_t __a, uint32x4_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __builtin_mve_vshrnbq_m_n_uv4si (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline uint8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vshrnbq_m_n_u16 (uint8x16_t __a, uint16x8_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __builtin_mve_vshrnbq_m_n_uv8hi (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline int16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vshrntq_m_n_s32 (int16x8_t __a, int32x4_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __builtin_mve_vshrntq_m_n_sv4si (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline int8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vshrntq_m_n_s16 (int8x16_t __a, int16x8_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __builtin_mve_vshrntq_m_n_sv8hi (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline uint16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vshrntq_m_n_u32 (uint16x8_t __a, uint32x4_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __builtin_mve_vshrntq_m_n_uv4si (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline uint8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vshrntq_m_n_u16 (uint8x16_t __a, uint16x8_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __builtin_mve_vshrntq_m_n_uv8hi (__a, __b, __imm, __p); > -} > - > __extension__ extern __inline void > __attribute__ 
((__always_inline__, __gnu_inline__, __artificial__)) > __arm_vstrbq_scatter_offset_s8 (int8_t * __base, uint8x16_t __offset, > int8x16_t __value) > @@ -16926,34 +16398,6 @@ __arm_vbicq_m_n (uint32x4_t __a, const int > __imm, mve_pred16_t __p) > return __arm_vbicq_m_n_u32 (__a, __imm, __p); > } >=20 > -__extension__ extern __inline int8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqrshrnbq (int8x16_t __a, int16x8_t __b, const int __imm) > -{ > - return __arm_vqrshrnbq_n_s16 (__a, __b, __imm); > -} > - > -__extension__ extern __inline uint8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqrshrnbq (uint8x16_t __a, uint16x8_t __b, const int __imm) > -{ > - return __arm_vqrshrnbq_n_u16 (__a, __b, __imm); > -} > - > -__extension__ extern __inline int16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqrshrnbq (int16x8_t __a, int32x4_t __b, const int __imm) > -{ > - return __arm_vqrshrnbq_n_s32 (__a, __b, __imm); > -} > - > -__extension__ extern __inline uint16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqrshrnbq (uint16x8_t __a, uint32x4_t __b, const int __imm) > -{ > - return __arm_vqrshrnbq_n_u32 (__a, __b, __imm); > -} > - > __extension__ extern __inline uint8x16_t > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > __arm_vqrshrunbq (uint8x16_t __a, int16x8_t __b, const int __imm) > @@ -18704,55 +18148,6 @@ __arm_vmvnq_m (int16x8_t __inactive, const > int __imm, mve_pred16_t __p) > return __arm_vmvnq_m_n_s16 (__inactive, __imm, __p); > } >=20 > -__extension__ extern __inline int8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqrshrntq (int8x16_t __a, int16x8_t __b, const int __imm) > -{ > - return __arm_vqrshrntq_n_s16 (__a, __b, __imm); > -} > - > -__extension__ extern __inline int8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqshrnbq (int8x16_t __a, int16x8_t __b, const int __imm) > -{ > - return __arm_vqshrnbq_n_s16 (__a, __b, __imm); > -} > - > -__extension__ extern __inline int8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqshrntq (int8x16_t __a, int16x8_t __b, const int __imm) > -{ > - return __arm_vqshrntq_n_s16 (__a, __b, __imm); > -} > - > -__extension__ extern __inline int8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vrshrnbq (int8x16_t __a, int16x8_t __b, const int __imm) > -{ > - return __arm_vrshrnbq_n_s16 (__a, __b, __imm); > -} > - > -__extension__ extern __inline int8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vrshrntq (int8x16_t __a, int16x8_t __b, const int __imm) > -{ > - return __arm_vrshrntq_n_s16 (__a, __b, __imm); > -} > - > -__extension__ extern __inline int8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vshrnbq (int8x16_t __a, int16x8_t __b, const int __imm) > -{ > - return __arm_vshrnbq_n_s16 (__a, __b, __imm); > -} > - > -__extension__ extern __inline int8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vshrntq (int8x16_t __a, int16x8_t __b, const int __imm) > -{ > - return __arm_vshrntq_n_s16 (__a, __b, __imm); > -} > - > __extension__ extern __inline int64_t > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > __arm_vmlaldavaq (int64_t __a, int16x8_t __b, int16x8_t __c) > @@ -18900,55 +18295,6 @@ 
__arm_vqmovuntq_m (uint8x16_t __a, > int16x8_t __b, mve_pred16_t __p) > return __arm_vqmovuntq_m_s16 (__a, __b, __p); > } >=20 > -__extension__ extern __inline uint8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqrshrntq (uint8x16_t __a, uint16x8_t __b, const int __imm) > -{ > - return __arm_vqrshrntq_n_u16 (__a, __b, __imm); > -} > - > -__extension__ extern __inline uint8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqshrnbq (uint8x16_t __a, uint16x8_t __b, const int __imm) > -{ > - return __arm_vqshrnbq_n_u16 (__a, __b, __imm); > -} > - > -__extension__ extern __inline uint8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqshrntq (uint8x16_t __a, uint16x8_t __b, const int __imm) > -{ > - return __arm_vqshrntq_n_u16 (__a, __b, __imm); > -} > - > -__extension__ extern __inline uint8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vrshrnbq (uint8x16_t __a, uint16x8_t __b, const int __imm) > -{ > - return __arm_vrshrnbq_n_u16 (__a, __b, __imm); > -} > - > -__extension__ extern __inline uint8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vrshrntq (uint8x16_t __a, uint16x8_t __b, const int __imm) > -{ > - return __arm_vrshrntq_n_u16 (__a, __b, __imm); > -} > - > -__extension__ extern __inline uint8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vshrnbq (uint8x16_t __a, uint16x8_t __b, const int __imm) > -{ > - return __arm_vshrnbq_n_u16 (__a, __b, __imm); > -} > - > -__extension__ extern __inline uint8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vshrntq (uint8x16_t __a, uint16x8_t __b, const int __imm) > -{ > - return __arm_vshrntq_n_u16 (__a, __b, __imm); > -} > - > __extension__ extern __inline uint64_t > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > __arm_vmlaldavaq (uint64_t __a, uint16x8_t __b, uint16x8_t __c) > @@ -19019,55 +18365,6 @@ __arm_vmvnq_m (int32x4_t __inactive, const > int __imm, mve_pred16_t __p) > return __arm_vmvnq_m_n_s32 (__inactive, __imm, __p); > } >=20 > -__extension__ extern __inline int16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqrshrntq (int16x8_t __a, int32x4_t __b, const int __imm) > -{ > - return __arm_vqrshrntq_n_s32 (__a, __b, __imm); > -} > - > -__extension__ extern __inline int16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqshrnbq (int16x8_t __a, int32x4_t __b, const int __imm) > -{ > - return __arm_vqshrnbq_n_s32 (__a, __b, __imm); > -} > - > -__extension__ extern __inline int16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqshrntq (int16x8_t __a, int32x4_t __b, const int __imm) > -{ > - return __arm_vqshrntq_n_s32 (__a, __b, __imm); > -} > - > -__extension__ extern __inline int16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vrshrnbq (int16x8_t __a, int32x4_t __b, const int __imm) > -{ > - return __arm_vrshrnbq_n_s32 (__a, __b, __imm); > -} > - > -__extension__ extern __inline int16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vrshrntq (int16x8_t __a, int32x4_t __b, const int __imm) > -{ > - return __arm_vrshrntq_n_s32 (__a, __b, __imm); > -} > - > -__extension__ extern __inline int16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > 
-__arm_vshrnbq (int16x8_t __a, int32x4_t __b, const int __imm) > -{ > - return __arm_vshrnbq_n_s32 (__a, __b, __imm); > -} > - > -__extension__ extern __inline int16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vshrntq (int16x8_t __a, int32x4_t __b, const int __imm) > -{ > - return __arm_vshrntq_n_s32 (__a, __b, __imm); > -} > - > __extension__ extern __inline int64_t > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > __arm_vmlaldavaq (int64_t __a, int32x4_t __b, int32x4_t __c) > @@ -19152,116 +18449,67 @@ __arm_vmovntq_m (int16x8_t __a, int32x4_t > __b, mve_pred16_t __p) > return __arm_vmovntq_m_s32 (__a, __b, __p); > } >=20 > -__extension__ extern __inline int16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqmovnbq_m (int16x8_t __a, int32x4_t __b, mve_pred16_t __p) > -{ > - return __arm_vqmovnbq_m_s32 (__a, __b, __p); > -} > - > -__extension__ extern __inline int16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqmovntq_m (int16x8_t __a, int32x4_t __b, mve_pred16_t __p) > -{ > - return __arm_vqmovntq_m_s32 (__a, __b, __p); > -} > - > -__extension__ extern __inline int16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vrev32q_m (int16x8_t __inactive, int16x8_t __a, mve_pred16_t __p) > -{ > - return __arm_vrev32q_m_s16 (__inactive, __a, __p); > -} > - > -__extension__ extern __inline uint32x4_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vmvnq_m (uint32x4_t __inactive, const int __imm, mve_pred16_t > __p) > -{ > - return __arm_vmvnq_m_n_u32 (__inactive, __imm, __p); > -} > - > -__extension__ extern __inline uint16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqrshruntq (uint16x8_t __a, int32x4_t __b, const int __imm) > -{ > - return __arm_vqrshruntq_n_s32 (__a, __b, __imm); > -} > - > -__extension__ extern __inline uint16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqshrunbq (uint16x8_t __a, int32x4_t __b, const int __imm) > -{ > - return __arm_vqshrunbq_n_s32 (__a, __b, __imm); > -} > - > -__extension__ extern __inline uint16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqshruntq (uint16x8_t __a, int32x4_t __b, const int __imm) > -{ > - return __arm_vqshruntq_n_s32 (__a, __b, __imm); > -} > - > -__extension__ extern __inline uint16x8_t > +__extension__ extern __inline int16x8_t > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqmovunbq_m (uint16x8_t __a, int32x4_t __b, mve_pred16_t __p) > +__arm_vqmovnbq_m (int16x8_t __a, int32x4_t __b, mve_pred16_t __p) > { > - return __arm_vqmovunbq_m_s32 (__a, __b, __p); > + return __arm_vqmovnbq_m_s32 (__a, __b, __p); > } >=20 > -__extension__ extern __inline uint16x8_t > +__extension__ extern __inline int16x8_t > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqmovuntq_m (uint16x8_t __a, int32x4_t __b, mve_pred16_t __p) > +__arm_vqmovntq_m (int16x8_t __a, int32x4_t __b, mve_pred16_t __p) > { > - return __arm_vqmovuntq_m_s32 (__a, __b, __p); > + return __arm_vqmovntq_m_s32 (__a, __b, __p); > } >=20 > -__extension__ extern __inline uint16x8_t > +__extension__ extern __inline int16x8_t > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqrshrntq (uint16x8_t __a, uint32x4_t __b, const int __imm) > +__arm_vrev32q_m (int16x8_t __inactive, int16x8_t 
__a, mve_pred16_t __p) > { > - return __arm_vqrshrntq_n_u32 (__a, __b, __imm); > + return __arm_vrev32q_m_s16 (__inactive, __a, __p); > } >=20 > -__extension__ extern __inline uint16x8_t > +__extension__ extern __inline uint32x4_t > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqshrnbq (uint16x8_t __a, uint32x4_t __b, const int __imm) > +__arm_vmvnq_m (uint32x4_t __inactive, const int __imm, mve_pred16_t > __p) > { > - return __arm_vqshrnbq_n_u32 (__a, __b, __imm); > + return __arm_vmvnq_m_n_u32 (__inactive, __imm, __p); > } >=20 > __extension__ extern __inline uint16x8_t > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqshrntq (uint16x8_t __a, uint32x4_t __b, const int __imm) > +__arm_vqrshruntq (uint16x8_t __a, int32x4_t __b, const int __imm) > { > - return __arm_vqshrntq_n_u32 (__a, __b, __imm); > + return __arm_vqrshruntq_n_s32 (__a, __b, __imm); > } >=20 > __extension__ extern __inline uint16x8_t > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vrshrnbq (uint16x8_t __a, uint32x4_t __b, const int __imm) > +__arm_vqshrunbq (uint16x8_t __a, int32x4_t __b, const int __imm) > { > - return __arm_vrshrnbq_n_u32 (__a, __b, __imm); > + return __arm_vqshrunbq_n_s32 (__a, __b, __imm); > } >=20 > __extension__ extern __inline uint16x8_t > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vrshrntq (uint16x8_t __a, uint32x4_t __b, const int __imm) > +__arm_vqshruntq (uint16x8_t __a, int32x4_t __b, const int __imm) > { > - return __arm_vrshrntq_n_u32 (__a, __b, __imm); > + return __arm_vqshruntq_n_s32 (__a, __b, __imm); > } >=20 > __extension__ extern __inline uint16x8_t > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vshrnbq (uint16x8_t __a, uint32x4_t __b, const int __imm) > +__arm_vqmovunbq_m (uint16x8_t __a, int32x4_t __b, mve_pred16_t __p) > { > - return __arm_vshrnbq_n_u32 (__a, __b, __imm); > + return __arm_vqmovunbq_m_s32 (__a, __b, __p); > } >=20 > __extension__ extern __inline uint16x8_t > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vshrntq (uint16x8_t __a, uint32x4_t __b, const int __imm) > +__arm_vqmovuntq_m (uint16x8_t __a, int32x4_t __b, mve_pred16_t __p) > { > - return __arm_vshrntq_n_u32 (__a, __b, __imm); > + return __arm_vqmovuntq_m_s32 (__a, __b, __p); > } >=20 > __extension__ extern __inline uint64_t > @@ -20489,62 +19737,6 @@ __arm_vqdmulltq_m (int32x4_t __inactive, > int16x8_t __a, int16x8_t __b, mve_pred1 > return __arm_vqdmulltq_m_s16 (__inactive, __a, __b, __p); > } >=20 > -__extension__ extern __inline int16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqrshrnbq_m (int16x8_t __a, int32x4_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __arm_vqrshrnbq_m_n_s32 (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline int8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqrshrnbq_m (int8x16_t __a, int16x8_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __arm_vqrshrnbq_m_n_s16 (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline uint16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqrshrnbq_m (uint16x8_t __a, uint32x4_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __arm_vqrshrnbq_m_n_u32 (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline uint8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, 
__artificial__)) > -__arm_vqrshrnbq_m (uint8x16_t __a, uint16x8_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __arm_vqrshrnbq_m_n_u16 (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline int16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqrshrntq_m (int16x8_t __a, int32x4_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __arm_vqrshrntq_m_n_s32 (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline int8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqrshrntq_m (int8x16_t __a, int16x8_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __arm_vqrshrntq_m_n_s16 (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline uint16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqrshrntq_m (uint16x8_t __a, uint32x4_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __arm_vqrshrntq_m_n_u32 (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline uint8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqrshrntq_m (uint8x16_t __a, uint16x8_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __arm_vqrshrntq_m_n_u16 (__a, __b, __imm, __p); > -} > - > __extension__ extern __inline uint16x8_t > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > __arm_vqrshrunbq_m (uint16x8_t __a, int32x4_t __b, const int __imm, > mve_pred16_t __p) > @@ -20573,62 +19765,6 @@ __arm_vqrshruntq_m (uint8x16_t __a, > int16x8_t __b, const int __imm, mve_pred16_t > return __arm_vqrshruntq_m_n_s16 (__a, __b, __imm, __p); > } >=20 > -__extension__ extern __inline int16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqshrnbq_m (int16x8_t __a, int32x4_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __arm_vqshrnbq_m_n_s32 (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline int8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqshrnbq_m (int8x16_t __a, int16x8_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __arm_vqshrnbq_m_n_s16 (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline uint16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqshrnbq_m (uint16x8_t __a, uint32x4_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __arm_vqshrnbq_m_n_u32 (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline uint8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqshrnbq_m (uint8x16_t __a, uint16x8_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __arm_vqshrnbq_m_n_u16 (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline int16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqshrntq_m (int16x8_t __a, int32x4_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __arm_vqshrntq_m_n_s32 (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline int8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqshrntq_m (int8x16_t __a, int16x8_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __arm_vqshrntq_m_n_s16 (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline uint16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqshrntq_m (uint16x8_t __a, uint32x4_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return 
__arm_vqshrntq_m_n_u32 (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline uint8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vqshrntq_m (uint8x16_t __a, uint16x8_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __arm_vqshrntq_m_n_u16 (__a, __b, __imm, __p); > -} > - > __extension__ extern __inline uint16x8_t > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > __arm_vqshrunbq_m (uint16x8_t __a, int32x4_t __b, const int __imm, > mve_pred16_t __p) > @@ -20692,62 +19828,6 @@ __arm_vrmlsldavhaxq_p (int64_t __a, int32x4_t > __b, int32x4_t __c, mve_pred16_t _ > return __arm_vrmlsldavhaxq_p_s32 (__a, __b, __c, __p); > } >=20 > -__extension__ extern __inline int16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vrshrnbq_m (int16x8_t __a, int32x4_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __arm_vrshrnbq_m_n_s32 (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline int8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vrshrnbq_m (int8x16_t __a, int16x8_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __arm_vrshrnbq_m_n_s16 (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline uint16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vrshrnbq_m (uint16x8_t __a, uint32x4_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __arm_vrshrnbq_m_n_u32 (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline uint8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vrshrnbq_m (uint8x16_t __a, uint16x8_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __arm_vrshrnbq_m_n_u16 (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline int16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vrshrntq_m (int16x8_t __a, int32x4_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __arm_vrshrntq_m_n_s32 (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline int8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vrshrntq_m (int8x16_t __a, int16x8_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __arm_vrshrntq_m_n_s16 (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline uint16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vrshrntq_m (uint16x8_t __a, uint32x4_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __arm_vrshrntq_m_n_u32 (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline uint8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vrshrntq_m (uint8x16_t __a, uint16x8_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __arm_vrshrntq_m_n_u16 (__a, __b, __imm, __p); > -} > - > __extension__ extern __inline int16x8_t > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > __arm_vshllbq_m (int16x8_t __inactive, int8x16_t __a, const int __imm, > mve_pred16_t __p) > @@ -20804,62 +19884,6 @@ __arm_vshlltq_m (uint32x4_t __inactive, > uint16x8_t __a, const int __imm, mve_pre > return __arm_vshlltq_m_n_u16 (__inactive, __a, __imm, __p); > } >=20 > -__extension__ extern __inline int16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vshrnbq_m (int16x8_t __a, int32x4_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __arm_vshrnbq_m_n_s32 (__a, __b, 
__imm, __p); > -} > - > -__extension__ extern __inline int8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vshrnbq_m (int8x16_t __a, int16x8_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __arm_vshrnbq_m_n_s16 (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline uint16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vshrnbq_m (uint16x8_t __a, uint32x4_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __arm_vshrnbq_m_n_u32 (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline uint8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vshrnbq_m (uint8x16_t __a, uint16x8_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __arm_vshrnbq_m_n_u16 (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline int16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vshrntq_m (int16x8_t __a, int32x4_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __arm_vshrntq_m_n_s32 (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline int8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vshrntq_m (int8x16_t __a, int16x8_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __arm_vshrntq_m_n_s16 (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline uint16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vshrntq_m (uint16x8_t __a, uint32x4_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __arm_vshrntq_m_n_u32 (__a, __b, __imm, __p); > -} > - > -__extension__ extern __inline uint8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vshrntq_m (uint8x16_t __a, uint16x8_t __b, const int __imm, > mve_pred16_t __p) > -{ > - return __arm_vshrntq_m_n_u16 (__a, __b, __imm, __p); > -} > - > __extension__ extern __inline void > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > __arm_vstrbq_scatter_offset (int8_t * __base, uint8x16_t __offset, int8x= 16_t > __value) > @@ -26775,14 +25799,6 @@ extern void *__ARM_undef; > int (*)[__ARM_mve_type_uint16x8_t]: __arm_vbicq_m_n_u16 > (__ARM_mve_coerce(__p0, uint16x8_t), p1, p2), \ > int (*)[__ARM_mve_type_uint32x4_t]: __arm_vbicq_m_n_u32 > (__ARM_mve_coerce(__p0, uint32x4_t), p1, p2));}) >=20 > -#define __arm_vqrshrnbq(p0,p1,p2) ({ __typeof(p0) __p0 =3D (p0); \ > - __typeof(p1) __p1 =3D (p1); \ > - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, > \ > - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]: > __arm_vqrshrnbq_n_s16 (__ARM_mve_coerce(__p0, int8x16_t), > __ARM_mve_coerce(__p1, int16x8_t), p2), \ > - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]: > __arm_vqrshrnbq_n_s32 (__ARM_mve_coerce(__p0, int16x8_t), > __ARM_mve_coerce(__p1, int32x4_t), p2), \ > - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]: > __arm_vqrshrnbq_n_u16 (__ARM_mve_coerce(__p0, uint8x16_t), > __ARM_mve_coerce(__p1, uint16x8_t), p2), \ > - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]: > __arm_vqrshrnbq_n_u32 (__ARM_mve_coerce(__p0, uint16x8_t), > __ARM_mve_coerce(__p1, uint32x4_t), p2));}) > - > #define __arm_vqrshrunbq(p0,p1,p2) ({ __typeof(p0) __p0 =3D (p0); \ > __typeof(p1) __p1 =3D (p1); \ > _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, > \ > @@ -27006,14 +26022,6 @@ extern void *__ARM_undef; > int 
(*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint8x16_t]: > __arm_vmovltq_m_u8 (__ARM_mve_coerce(__p0, uint16x8_t), > __ARM_mve_coerce(__p1, uint8x16_t), p2), \ > int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint16x8_t]: > __arm_vmovltq_m_u16 (__ARM_mve_coerce(__p0, uint32x4_t), > __ARM_mve_coerce(__p1, uint16x8_t), p2));}) >=20 > -#define __arm_vshrnbq(p0,p1,p2) ({ __typeof(p0) __p0 =3D (p0); \ > - __typeof(p1) __p1 =3D (p1); \ > - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, > \ > - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]: > __arm_vshrnbq_n_s16 (__ARM_mve_coerce(__p0, int8x16_t), > __ARM_mve_coerce(__p1, int16x8_t), p2), \ > - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]: > __arm_vshrnbq_n_s32 (__ARM_mve_coerce(__p0, int16x8_t), > __ARM_mve_coerce(__p1, int32x4_t), p2), \ > - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]: > __arm_vshrnbq_n_u16 (__ARM_mve_coerce(__p0, uint8x16_t), > __ARM_mve_coerce(__p1, uint16x8_t), p2), \ > - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]: > __arm_vshrnbq_n_u32 (__ARM_mve_coerce(__p0, uint16x8_t), > __ARM_mve_coerce(__p1, uint32x4_t), p2));}) > - > #define __arm_vcvtaq_m(p0,p1,p2) ({ __typeof(p0) __p0 =3D (p0); \ > __typeof(p1) __p1 =3D (p1); \ > _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, > \ > @@ -27350,14 +26358,6 @@ extern void *__ARM_undef; > int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_fp_n]: > __arm_vcmpgeq_n_f16 (__ARM_mve_coerce(__p0, float16x8_t), > __ARM_mve_coerce2(p1, double)), \ > int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_fp_n]: > __arm_vcmpgeq_n_f32 (__ARM_mve_coerce(__p0, float32x4_t), > __ARM_mve_coerce2(p1, double)));}) >=20 > -#define __arm_vrshrnbq(p0,p1,p2) ({ __typeof(p0) __p0 =3D (p0); \ > - __typeof(p1) __p1 =3D (p1); \ > - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, > \ > - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]: > __arm_vrshrnbq_n_s16 (__ARM_mve_coerce(__p0, int8x16_t), > __ARM_mve_coerce(__p1, int16x8_t), p2), \ > - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]: > __arm_vrshrnbq_n_s32 (__ARM_mve_coerce(__p0, int16x8_t), > __ARM_mve_coerce(__p1, int32x4_t), p2), \ > - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]: > __arm_vrshrnbq_n_u16 (__ARM_mve_coerce(__p0, uint8x16_t), > __ARM_mve_coerce(__p1, uint16x8_t), p2), \ > - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]: > __arm_vrshrnbq_n_u32 (__ARM_mve_coerce(__p0, uint16x8_t), > __ARM_mve_coerce(__p1, uint32x4_t), p2));}) > - > #define __arm_vrev16q_m(p0,p1,p2) ({ __typeof(p0) __p0 =3D (p0); \ > __typeof(p1) __p1 =3D (p1); \ > _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, > \ > @@ -27370,22 +26370,6 @@ extern void *__ARM_undef; > int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_int16x8_t]: > __arm_vqshruntq_n_s16 (__ARM_mve_coerce(__p0, uint8x16_t), > __ARM_mve_coerce(__p1, int16x8_t), p2), \ > int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int32x4_t]: > __arm_vqshruntq_n_s32 (__ARM_mve_coerce(__p0, uint16x8_t), > __ARM_mve_coerce(__p1, int32x4_t), p2));}) >=20 > -#define __arm_vqshrnbq(p0,p1,p2) ({ __typeof(p0) __p0 =3D (p0); \ > - __typeof(p1) __p1 =3D (p1); \ > - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, > \ > - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]: > __arm_vqshrnbq_n_s16 (__ARM_mve_coerce(__p0, int8x16_t), > __ARM_mve_coerce(__p1, int16x8_t), p2), \ > - int 
(*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]: > __arm_vqshrnbq_n_s32 (__ARM_mve_coerce(__p0, int16x8_t), > __ARM_mve_coerce(__p1, int32x4_t), p2), \ > - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]: > __arm_vqshrnbq_n_u16 (__ARM_mve_coerce(__p0, uint8x16_t), > __ARM_mve_coerce(__p1, uint16x8_t), p2), \ > - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]: > __arm_vqshrnbq_n_u32 (__ARM_mve_coerce(__p0, uint16x8_t), > __ARM_mve_coerce(__p1, uint32x4_t), p2));}) > - > -#define __arm_vqshrntq(p0,p1,p2) ({ __typeof(p0) __p0 =3D (p0); \ > - __typeof(p1) __p1 =3D (p1); \ > - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, > \ > - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]: > __arm_vqshrntq_n_s16 (__ARM_mve_coerce(__p0, int8x16_t), > __ARM_mve_coerce(__p1, int16x8_t), p2), \ > - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]: > __arm_vqshrntq_n_s32 (__ARM_mve_coerce(__p0, int16x8_t), > __ARM_mve_coerce(__p1, int32x4_t), p2), \ > - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]: > __arm_vqshrntq_n_u16 (__ARM_mve_coerce(__p0, uint8x16_t), > __ARM_mve_coerce(__p1, uint16x8_t), p2), \ > - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]: > __arm_vqshrntq_n_u32 (__ARM_mve_coerce(__p0, uint16x8_t), > __ARM_mve_coerce(__p1, uint32x4_t), p2));}) > - > #define __arm_vqrshruntq(p0,p1,p2) ({ __typeof(p0) __p0 =3D (p0); \ > __typeof(p1) __p1 =3D (p1); \ > _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, > \ > @@ -27420,14 +26404,6 @@ extern void *__ARM_undef; > int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_int16x8_t]: > __arm_vqmovuntq_m_s16 (__ARM_mve_coerce(__p0, uint8x16_t), > __ARM_mve_coerce(__p1, int16x8_t), p2), \ > int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int32x4_t]: > __arm_vqmovuntq_m_s32 (__ARM_mve_coerce(__p0, uint16x8_t), > __ARM_mve_coerce(__p1, int32x4_t), p2));}) >=20 > -#define __arm_vqrshrntq(p0,p1,p2) ({ __typeof(p0) __p0 =3D (p0); \ > - __typeof(p1) __p1 =3D (p1); \ > - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, > \ > - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]: > __arm_vqrshrntq_n_s16 (__ARM_mve_coerce(__p0, int8x16_t), > __ARM_mve_coerce(__p1, int16x8_t), p2), \ > - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]: > __arm_vqrshrntq_n_s32 (__ARM_mve_coerce(__p0, int16x8_t), > __ARM_mve_coerce(__p1, int32x4_t), p2), \ > - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]: > __arm_vqrshrntq_n_u16 (__ARM_mve_coerce(__p0, uint8x16_t), > __ARM_mve_coerce(__p1, uint16x8_t), p2), \ > - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]: > __arm_vqrshrntq_n_u32 (__ARM_mve_coerce(__p0, uint16x8_t), > __ARM_mve_coerce(__p1, uint32x4_t), p2));}) > - > #define __arm_vqrshruntq(p0,p1,p2) ({ __typeof(p0) __p0 =3D (p0); \ > __typeof(p1) __p1 =3D (p1); \ > _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, > \ > @@ -28568,14 +27544,6 @@ extern void *__ARM_undef; > int (*)[__ARM_mve_type_uint16x8_t]: __arm_vbicq_m_n_u16 > (__ARM_mve_coerce(__p0, uint16x8_t), p1, p2), \ > int (*)[__ARM_mve_type_uint32x4_t]: __arm_vbicq_m_n_u32 > (__ARM_mve_coerce(__p0, uint32x4_t), p1, p2));}) >=20 > -#define __arm_vqrshrnbq(p0,p1,p2) ({ __typeof(p0) __p0 =3D (p0); \ > - __typeof(p1) __p1 =3D (p1); \ > - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, > \ > - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]: > __arm_vqrshrnbq_n_s16 
(__ARM_mve_coerce(__p0, int8x16_t), > __ARM_mve_coerce(__p1, int16x8_t), p2), \ > - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]: > __arm_vqrshrnbq_n_s32 (__ARM_mve_coerce(__p0, int16x8_t), > __ARM_mve_coerce(__p1, int32x4_t), p2), \ > - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]: > __arm_vqrshrnbq_n_u16 (__ARM_mve_coerce(__p0, uint8x16_t), > __ARM_mve_coerce(__p1, uint16x8_t), p2), \ > - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]: > __arm_vqrshrnbq_n_u32 (__ARM_mve_coerce(__p0, uint16x8_t), > __ARM_mve_coerce(__p1, uint32x4_t), p2));}) > - > #define __arm_vqrshrunbq(p0,p1,p2) ({ __typeof(p0) __p0 =3D (p0); \ > __typeof(p1) __p1 =3D (p1); \ > _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, > \ > @@ -28885,22 +27853,6 @@ extern void *__ARM_undef; > int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]: > __arm_vmovntq_m_u16 (__ARM_mve_coerce(__p0, uint8x16_t), > __ARM_mve_coerce(__p1, uint16x8_t), p2), \ > int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]: > __arm_vmovntq_m_u32 (__ARM_mve_coerce(__p0, uint16x8_t), > __ARM_mve_coerce(__p1, uint32x4_t), p2));}) >=20 > -#define __arm_vshrnbq(p0,p1,p2) ({ __typeof(p0) __p0 =3D (p0); \ > - __typeof(p1) __p1 =3D (p1); \ > - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, > \ > - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]: > __arm_vshrnbq_n_s16 (__ARM_mve_coerce(__p0, int8x16_t), > __ARM_mve_coerce(__p1, int16x8_t), p2), \ > - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]: > __arm_vshrnbq_n_s32 (__ARM_mve_coerce(__p0, int16x8_t), > __ARM_mve_coerce(__p1, int32x4_t), p2), \ > - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]: > __arm_vshrnbq_n_u16 (__ARM_mve_coerce(__p0, uint8x16_t), > __ARM_mve_coerce(__p1, uint16x8_t), p2), \ > - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]: > __arm_vshrnbq_n_u32 (__ARM_mve_coerce(__p0, uint16x8_t), > __ARM_mve_coerce(__p1, uint32x4_t), p2));}) > - > -#define __arm_vrshrnbq(p0,p1,p2) ({ __typeof(p0) __p0 =3D (p0); \ > - __typeof(p1) __p1 =3D (p1); \ > - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, > \ > - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]: > __arm_vrshrnbq_n_s16 (__ARM_mve_coerce(__p0, int8x16_t), > __ARM_mve_coerce(__p1, int16x8_t), p2), \ > - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]: > __arm_vrshrnbq_n_s32 (__ARM_mve_coerce(__p0, int16x8_t), > __ARM_mve_coerce(__p1, int32x4_t), p2), \ > - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]: > __arm_vrshrnbq_n_u16 (__ARM_mve_coerce(__p0, uint8x16_t), > __ARM_mve_coerce(__p1, uint16x8_t), p2), \ > - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]: > __arm_vrshrnbq_n_u32 (__ARM_mve_coerce(__p0, uint16x8_t), > __ARM_mve_coerce(__p1, uint32x4_t), p2));}) > - > #define __arm_vrev32q_m(p0,p1,p2) ({ __typeof(p0) __p0 =3D (p0); \ > __typeof(p1) __p1 =3D (p1); \ > _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, > \ > @@ -28921,36 +27873,12 @@ extern void *__ARM_undef; > int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: > __arm_vrev16q_m_s8 (__ARM_mve_coerce(__p0, int8x16_t), > __ARM_mve_coerce(__p1, int8x16_t), p2), \ > int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: > __arm_vrev16q_m_u8 (__ARM_mve_coerce(__p0, uint8x16_t), > __ARM_mve_coerce(__p1, uint8x16_t), p2));}) >=20 > -#define __arm_vqshrntq(p0,p1,p2) ({ __typeof(p0) __p0 =3D (p0); \ > - 
__typeof(p1) __p1 =3D (p1); \ > - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, > \ > - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]: > __arm_vqshrntq_n_s16 (__ARM_mve_coerce(__p0, int8x16_t), > __ARM_mve_coerce(__p1, int16x8_t), p2), \ > - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]: > __arm_vqshrntq_n_s32 (__ARM_mve_coerce(__p0, int16x8_t), > __ARM_mve_coerce(__p1, int32x4_t), p2), \ > - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]: > __arm_vqshrntq_n_u16 (__ARM_mve_coerce(__p0, uint8x16_t), > __ARM_mve_coerce(__p1, uint16x8_t), p2), \ > - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]: > __arm_vqshrntq_n_u32 (__ARM_mve_coerce(__p0, uint16x8_t), > __ARM_mve_coerce(__p1, uint32x4_t), p2));}) > - > #define __arm_vqrshruntq(p0,p1,p2) ({ __typeof(p0) __p0 =3D (p0); \ > __typeof(p1) __p1 =3D (p1); \ > _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, > \ > int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_int16x8_t]: > __arm_vqrshruntq_n_s16 (__ARM_mve_coerce(__p0, uint8x16_t), > __ARM_mve_coerce(__p1, int16x8_t), p2), \ > int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int32x4_t]: > __arm_vqrshruntq_n_s32 (__ARM_mve_coerce(__p0, uint16x8_t), > __ARM_mve_coerce(__p1, int32x4_t), p2));}) >=20 > -#define __arm_vqrshrntq(p0,p1,p2) ({ __typeof(p0) __p0 =3D (p0); \ > - __typeof(p1) __p1 =3D (p1); \ > - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, > \ > - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]: > __arm_vqrshrntq_n_s16 (__ARM_mve_coerce(__p0, int8x16_t), > __ARM_mve_coerce(__p1, int16x8_t), p2), \ > - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]: > __arm_vqrshrntq_n_s32 (__ARM_mve_coerce(__p0, int16x8_t), > __ARM_mve_coerce(__p1, int32x4_t), p2), \ > - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]: > __arm_vqrshrntq_n_u16 (__ARM_mve_coerce(__p0, uint8x16_t), > __ARM_mve_coerce(__p1, uint16x8_t), p2), \ > - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]: > __arm_vqrshrntq_n_u32 (__ARM_mve_coerce(__p0, uint16x8_t), > __ARM_mve_coerce(__p1, uint32x4_t), p2));}) > - > -#define __arm_vqshrnbq(p0,p1,p2) ({ __typeof(p0) __p0 =3D (p0); \ > - __typeof(p1) __p1 =3D (p1); \ > - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, > \ > - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]: > __arm_vqshrnbq_n_s16 (__ARM_mve_coerce(__p0, int8x16_t), > __ARM_mve_coerce(__p1, int16x8_t), p2), \ > - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]: > __arm_vqshrnbq_n_s32 (__ARM_mve_coerce(__p0, int16x8_t), > __ARM_mve_coerce(__p1, int32x4_t), p2), \ > - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]: > __arm_vqshrnbq_n_u16 (__ARM_mve_coerce(__p0, uint8x16_t), > __ARM_mve_coerce(__p1, uint16x8_t), p2), \ > - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]: > __arm_vqshrnbq_n_u32 (__ARM_mve_coerce(__p0, uint16x8_t), > __ARM_mve_coerce(__p1, uint32x4_t), p2));}) > - > #define __arm_vqmovuntq_m(p0,p1,p2) ({ __typeof(p0) __p0 =3D (p0); \ > __typeof(p1) __p1 =3D (p1); \ > _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, > \ > @@ -29474,22 +28402,6 @@ extern void *__ARM_undef; >=20 > #endif /* MVE Integer. 
*/ >=20 > -#define __arm_vshrntq(p0,p1,p2) ({ __typeof(p0) __p0 =3D (p0); \ > - __typeof(p1) __p1 =3D (p1); \ > - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, > \ > - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]: > __arm_vshrntq_n_s16 (__ARM_mve_coerce(__p0, int8x16_t), > __ARM_mve_coerce(__p1, int16x8_t), p2), \ > - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]: > __arm_vshrntq_n_s32 (__ARM_mve_coerce(__p0, int16x8_t), > __ARM_mve_coerce(__p1, int32x4_t), p2), \ > - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]: > __arm_vshrntq_n_u16 (__ARM_mve_coerce(__p0, uint8x16_t), > __ARM_mve_coerce(__p1, uint16x8_t), p2), \ > - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]: > __arm_vshrntq_n_u32 (__ARM_mve_coerce(__p0, uint16x8_t), > __ARM_mve_coerce(__p1, uint32x4_t), p2));}) > - > - > -#define __arm_vrshrntq(p0,p1,p2) ({ __typeof(p0) __p0 =3D (p0); \ > - __typeof(p1) __p1 =3D (p1); \ > - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, > \ > - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]: > __arm_vrshrntq_n_s16 (__ARM_mve_coerce(__p0, int8x16_t), > __ARM_mve_coerce(__p1, int16x8_t), p2), \ > - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]: > __arm_vrshrntq_n_s32 (__ARM_mve_coerce(__p0, int16x8_t), > __ARM_mve_coerce(__p1, int32x4_t), p2), \ > - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]: > __arm_vrshrntq_n_u16 (__ARM_mve_coerce(__p0, uint8x16_t), > __ARM_mve_coerce(__p1, uint16x8_t), p2), \ > - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]: > __arm_vrshrntq_n_u32 (__ARM_mve_coerce(__p0, uint16x8_t), > __ARM_mve_coerce(__p1, uint32x4_t), p2));}) >=20 >=20 > #define __arm_vmvnq_x(p1,p2) ({ __typeof(p1) __p1 =3D (p1); \ > @@ -29798,22 +28710,6 @@ extern void *__ARM_undef; > int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint8x16_t]: > __arm_vshllbq_m_n_u8 (__ARM_mve_coerce(__p0, uint16x8_t), > __ARM_mve_coerce(__p1, uint8x16_t), p2, p3), \ > int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint16x8_t]: > __arm_vshllbq_m_n_u16 (__ARM_mve_coerce(__p0, uint32x4_t), > __ARM_mve_coerce(__p1, uint16x8_t), p2, p3));}) >=20 > -#define __arm_vshrntq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 =3D (p0); \ > - __typeof(p1) __p1 =3D (p1); \ > - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, > \ > - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]: > __arm_vshrntq_m_n_s16 (__ARM_mve_coerce(__p0, int8x16_t), > __ARM_mve_coerce(__p1, int16x8_t), p2, p3), \ > - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]: > __arm_vshrntq_m_n_s32 (__ARM_mve_coerce(__p0, int16x8_t), > __ARM_mve_coerce(__p1, int32x4_t), p2, p3), \ > - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]: > __arm_vshrntq_m_n_u16 (__ARM_mve_coerce(__p0, uint8x16_t), > __ARM_mve_coerce(__p1, uint16x8_t), p2, p3), \ > - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]: > __arm_vshrntq_m_n_u32 (__ARM_mve_coerce(__p0, uint16x8_t), > __ARM_mve_coerce(__p1, uint32x4_t), p2, p3));}) > - > -#define __arm_vshrnbq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 =3D (p0); \ > - __typeof(p1) __p1 =3D (p1); \ > - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, > \ > - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]: > __arm_vshrnbq_m_n_s16 (__ARM_mve_coerce(__p0, int8x16_t), > __ARM_mve_coerce(__p1, int16x8_t), p2, p3), \ > - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]: > __arm_vshrnbq_m_n_s32 
(__ARM_mve_coerce(__p0, int16x8_t), > __ARM_mve_coerce(__p1, int32x4_t), p2, p3), \ > - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]: > __arm_vshrnbq_m_n_u16 (__ARM_mve_coerce(__p0, uint8x16_t), > __ARM_mve_coerce(__p1, uint16x8_t), p2, p3), \ > - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]: > __arm_vshrnbq_m_n_u32 (__ARM_mve_coerce(__p0, uint16x8_t), > __ARM_mve_coerce(__p1, uint32x4_t), p2, p3));}) > - > #define __arm_vshlltq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 =3D (p0); \ > __typeof(p1) __p1 =3D (p1); \ > _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, > \ > @@ -29822,14 +28718,6 @@ extern void *__ARM_undef; > int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint8x16_t]: > __arm_vshlltq_m_n_u8 (__ARM_mve_coerce(__p0, uint16x8_t), > __ARM_mve_coerce(__p1, uint8x16_t), p2, p3), \ > int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint16x8_t]: > __arm_vshlltq_m_n_u16 (__ARM_mve_coerce(__p0, uint32x4_t), > __ARM_mve_coerce(__p1, uint16x8_t), p2, p3));}) >=20 > -#define __arm_vrshrntq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 =3D (p0); \ > - __typeof(p1) __p1 =3D (p1); \ > - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, > \ > - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]: > __arm_vrshrntq_m_n_s16 (__ARM_mve_coerce(__p0, int8x16_t), > __ARM_mve_coerce(__p1, int16x8_t), p2, p3), \ > - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]: > __arm_vrshrntq_m_n_s32 (__ARM_mve_coerce(__p0, int16x8_t), > __ARM_mve_coerce(__p1, int32x4_t), p2, p3), \ > - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]: > __arm_vrshrntq_m_n_u16 (__ARM_mve_coerce(__p0, uint8x16_t), > __ARM_mve_coerce(__p1, uint16x8_t), p2, p3), \ > - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]: > __arm_vrshrntq_m_n_u32 (__ARM_mve_coerce(__p0, uint16x8_t), > __ARM_mve_coerce(__p1, uint32x4_t), p2, p3));}) > - > #define __arm_vqshruntq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 =3D (p0); \ > __typeof(p1) __p1 =3D (p1); \ > _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, > \ > @@ -29842,22 +28730,6 @@ extern void *__ARM_undef; > int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_int16x8_t]: > __arm_vqshrunbq_m_n_s16 (__ARM_mve_coerce(__p0, uint8x16_t), > __ARM_mve_coerce(__p1, int16x8_t), p2, p3), \ > int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int32x4_t]: > __arm_vqshrunbq_m_n_s32 (__ARM_mve_coerce(__p0, uint16x8_t), > __ARM_mve_coerce(__p1, int32x4_t), p2, p3));}) >=20 > -#define __arm_vqrshrnbq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 =3D (p0); \ > - __typeof(p1) __p1 =3D (p1); \ > - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, > \ > - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]: > __arm_vqrshrnbq_m_n_s16 (__ARM_mve_coerce(__p0, int8x16_t), > __ARM_mve_coerce(__p1, int16x8_t), p2, p3), \ > - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]: > __arm_vqrshrnbq_m_n_s32 (__ARM_mve_coerce(__p0, int16x8_t), > __ARM_mve_coerce(__p1, int32x4_t), p2, p3), \ > - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]: > __arm_vqrshrnbq_m_n_u16 (__ARM_mve_coerce(__p0, uint8x16_t), > __ARM_mve_coerce(__p1, uint16x8_t), p2, p3), \ > - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]: > __arm_vqrshrnbq_m_n_u32 (__ARM_mve_coerce(__p0, uint16x8_t), > __ARM_mve_coerce(__p1, uint32x4_t), p2, p3));}) > - > -#define __arm_vqrshrntq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 =3D (p0); \ > - __typeof(p1) __p1 =3D (p1); \ > - _Generic( (int 
(*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, > \ > - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]: > __arm_vqrshrntq_m_n_s16 (__ARM_mve_coerce(__p0, int8x16_t), > __ARM_mve_coerce(__p1, int16x8_t), p2, p3), \ > - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]: > __arm_vqrshrntq_m_n_s32 (__ARM_mve_coerce(__p0, int16x8_t), > __ARM_mve_coerce(__p1, int32x4_t), p2, p3), \ > - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]: > __arm_vqrshrntq_m_n_u16 (__ARM_mve_coerce(__p0, uint8x16_t), > __ARM_mve_coerce(__p1, uint16x8_t), p2, p3), \ > - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]: > __arm_vqrshrntq_m_n_u32 (__ARM_mve_coerce(__p0, uint16x8_t), > __ARM_mve_coerce(__p1, uint32x4_t), p2, p3));}) > - > #define __arm_vqrshrunbq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 =3D (p0); \ > __typeof(p1) __p1 =3D (p1); \ > _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, > \ > @@ -29870,30 +28742,6 @@ extern void *__ARM_undef; > int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_int16x8_t]: > __arm_vqrshruntq_m_n_s16 (__ARM_mve_coerce(__p0, uint8x16_t), > __ARM_mve_coerce(__p1, int16x8_t), p2, p3), \ > int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int32x4_t]: > __arm_vqrshruntq_m_n_s32 (__ARM_mve_coerce(__p0, uint16x8_t), > __ARM_mve_coerce(__p1, int32x4_t), p2, p3));}) >=20 > -#define __arm_vqshrnbq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 =3D (p0); \ > - __typeof(p1) __p1 =3D (p1); \ > - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, > \ > - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]: > __arm_vqshrnbq_m_n_s16 (__ARM_mve_coerce(__p0, int8x16_t), > __ARM_mve_coerce(__p1, int16x8_t), p2, p3), \ > - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]: > __arm_vqshrnbq_m_n_s32 (__ARM_mve_coerce(__p0, int16x8_t), > __ARM_mve_coerce(__p1, int32x4_t), p2, p3), \ > - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]: > __arm_vqshrnbq_m_n_u16 (__ARM_mve_coerce(__p0, uint8x16_t), > __ARM_mve_coerce(__p1, uint16x8_t), p2, p3), \ > - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]: > __arm_vqshrnbq_m_n_u32 (__ARM_mve_coerce(__p0, uint16x8_t), > __ARM_mve_coerce(__p1, uint32x4_t), p2, p3));}) > - > -#define __arm_vqshrntq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 =3D (p0); \ > - __typeof(p1) __p1 =3D (p1); \ > - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, > \ > - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]: > __arm_vqshrntq_m_n_s16 (__ARM_mve_coerce(__p0, int8x16_t), > __ARM_mve_coerce(__p1, int16x8_t), p2, p3), \ > - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]: > __arm_vqshrntq_m_n_s32 (__ARM_mve_coerce(__p0, int16x8_t), > __ARM_mve_coerce(__p1, int32x4_t), p2, p3), \ > - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]: > __arm_vqshrntq_m_n_u16 (__ARM_mve_coerce(__p0, uint8x16_t), > __ARM_mve_coerce(__p1, uint16x8_t), p2, p3), \ > - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]: > __arm_vqshrntq_m_n_u32 (__ARM_mve_coerce(__p0, uint16x8_t), > __ARM_mve_coerce(__p1, uint32x4_t), p2, p3));}) > - > -#define __arm_vrshrnbq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 =3D (p0); \ > - __typeof(p1) __p1 =3D (p1); \ > - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, > \ > - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int16x8_t]: > __arm_vrshrnbq_m_n_s16 (__ARM_mve_coerce(__p0, int8x16_t), > __ARM_mve_coerce(__p1, int16x8_t), p2, p3), \ > - int 
(*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int32x4_t]: > __arm_vrshrnbq_m_n_s32 (__ARM_mve_coerce(__p0, int16x8_t), > __ARM_mve_coerce(__p1, int32x4_t), p2, p3), \ > - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint16x8_t]: > __arm_vrshrnbq_m_n_u16 (__ARM_mve_coerce(__p0, uint8x16_t), > __ARM_mve_coerce(__p1, uint16x8_t), p2, p3), \ > - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint32x4_t]: > __arm_vrshrnbq_m_n_u32 (__ARM_mve_coerce(__p0, uint16x8_t), > __ARM_mve_coerce(__p1, uint32x4_t), p2, p3));}) > - > #define __arm_vmlaldavaq_p(p0,p1,p2,p3) ({ __typeof(p0) __p0 =3D (p0); \ > __typeof(p1) __p1 =3D (p1); \ > __typeof(p2) __p2 =3D (p2); \ > -- > 2.34.1
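
For reference, the user-visible behaviour should stay the same once these hand-written wrappers and _Generic tables are gone from arm_mve.h: a polymorphic call like vshrnbq or vqrshrntq still resolves to the corresponding type-suffixed intrinsic, just through the new framework instead of the removed macros. A minimal sketch of the kind of caller I have in mind (my own example, not part of the patch; assuming the usual MVE options such as -march=armv8.1-m.main+mve -mfloat-abi=hard):

  #include <arm_mve.h>

  int8x16_t
  narrow_halves (int8x16_t dst, int16x8_t a, int16x8_t b)
  {
    /* vshrnbq shifts each 16-bit element of a right by the immediate and
       writes the narrowed results into the bottom (even) byte lanes of dst;
       vqrshrntq does a rounding, saturating shift and writes the top (odd)
       lanes.  The immediate must be in 1..8 for the 16->8 forms.  */
    dst = vshrnbq (dst, a, 3);      /* should resolve to vshrnbq_n_s16    */
    dst = vqrshrntq (dst, b, 3);    /* should resolve to vqrshrntq_n_s16  */
    return dst;
  }

If something like the above (and the existing vshrn*/vqshrn*/vrshrn* tests) still compiles to the same VSHRNB/VQRSHRNT instructions, that should cover the resolution paths removed above.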