From: Kyrylo Tkachov
To: Christophe Lyon, gcc-patches@gcc.gnu.org, Richard Earnshaw, Richard Sandiford
CC: Christophe Lyon
Subject: RE: [PATCH 14/23] arm: [MVE intrinsics] rework vmaxq vminq
Date: Fri, 5 May 2023 10:59:33 +0000
References: <20230505083930.101210-1-christophe.lyon@arm.com> <20230505083930.101210-14-christophe.lyon@arm.com>
In-Reply-To: <20230505083930.101210-14-christophe.lyon@arm.com>
> -----Original Message-----
> From: Christophe Lyon
> Sent: Friday, May 5, 2023 9:39 AM
> To: gcc-patches@gcc.gnu.org; Kyrylo Tkachov; Richard Earnshaw; Richard Sandiford
> Cc: Christophe Lyon
> Subject: [PATCH 14/23] arm: [MVE intrinsics] rework vmaxq vminq
>
> Implement vmaxq and vminq using the new MVE builtins framework.

Ok.
Thanks,
Kyrill

>
> 2022-09-08  Christophe Lyon
>
> gcc/
> 	* config/arm/arm-mve-builtins-base.cc (FUNCTION_WITH_RTX_M_NO_F): New.
> 	(vmaxq, vminq): New.
> 	* config/arm/arm-mve-builtins-base.def (vmaxq, vminq): New.
> 	* config/arm/arm-mve-builtins-base.h (vmaxq, vminq): New.
> 	* config/arm/arm_mve.h (vminq): Remove.
> 	(vmaxq): Remove.
> 	(vmaxq_m): Remove.
> 	(vminq_m): Remove.
> 	(vminq_x): Remove.
> 	(vmaxq_x): Remove.
> 	(vminq_u8): Remove.
> 	(vmaxq_u8): Remove.
> 	(vminq_s8): Remove.
> 	(vmaxq_s8): Remove.
> 	(vminq_u16): Remove.
> 	(vmaxq_u16): Remove.
> 	(vminq_s16): Remove.
> 	(vmaxq_s16): Remove.
> 	(vminq_u32): Remove.
> 	(vmaxq_u32): Remove.
> 	(vminq_s32): Remove.
> 	(vmaxq_s32): Remove.
> 	(vmaxq_m_s8): Remove.
> 	(vmaxq_m_s32): Remove.
> 	(vmaxq_m_s16): Remove.
> 	(vmaxq_m_u8): Remove.
> 	(vmaxq_m_u32): Remove.
> 	(vmaxq_m_u16): Remove.
> 	(vminq_m_s8): Remove.
> 	(vminq_m_s32): Remove.
> 	(vminq_m_s16): Remove.
> 	(vminq_m_u8): Remove.
> 	(vminq_m_u32): Remove.
> 	(vminq_m_u16): Remove.
> 	(vminq_x_s8): Remove.
> 	(vminq_x_s16): Remove.
> 	(vminq_x_s32): Remove.
> 	(vminq_x_u8): Remove.
> 	(vminq_x_u16): Remove.
> 	(vminq_x_u32): Remove.
> 	(vmaxq_x_s8): Remove.
> 	(vmaxq_x_s16): Remove.
> 	(vmaxq_x_s32): Remove.
> 	(vmaxq_x_u8): Remove.
> 	(vmaxq_x_u16): Remove.
> 	(vmaxq_x_u32): Remove.
> 	(__arm_vminq_u8): Remove.
> 	(__arm_vmaxq_u8): Remove.
> 	(__arm_vminq_s8): Remove.
> 	(__arm_vmaxq_s8): Remove.
> 	(__arm_vminq_u16): Remove.
> 	(__arm_vmaxq_u16): Remove.
> 	(__arm_vminq_s16): Remove.
> 	(__arm_vmaxq_s16): Remove.
> 	(__arm_vminq_u32): Remove.
> 	(__arm_vmaxq_u32): Remove.
> 	(__arm_vminq_s32): Remove.
> 	(__arm_vmaxq_s32): Remove.
> 	(__arm_vmaxq_m_s8): Remove.
> 	(__arm_vmaxq_m_s32): Remove.
> 	(__arm_vmaxq_m_s16): Remove.
> 	(__arm_vmaxq_m_u8): Remove.
> 	(__arm_vmaxq_m_u32): Remove.
> 	(__arm_vmaxq_m_u16): Remove.
> 	(__arm_vminq_m_s8): Remove.
> 	(__arm_vminq_m_s32): Remove.
> 	(__arm_vminq_m_s16): Remove.
> 	(__arm_vminq_m_u8): Remove.
> 	(__arm_vminq_m_u32): Remove.
> 	(__arm_vminq_m_u16): Remove.
> 	(__arm_vminq_x_s8): Remove.
> 	(__arm_vminq_x_s16): Remove.
> 	(__arm_vminq_x_s32): Remove.
> 	(__arm_vminq_x_u8): Remove.
> 	(__arm_vminq_x_u16): Remove.
> 	(__arm_vminq_x_u32): Remove.
> 	(__arm_vmaxq_x_s8): Remove.
> 	(__arm_vmaxq_x_s16): Remove.
> 	(__arm_vmaxq_x_s32): Remove.
> 	(__arm_vmaxq_x_u8): Remove.
> 	(__arm_vmaxq_x_u16): Remove.
> 	(__arm_vmaxq_x_u32): Remove.
> 	(__arm_vminq): Remove.
> 	(__arm_vmaxq): Remove.
> 	(__arm_vmaxq_m): Remove.
> 	(__arm_vminq_m): Remove.
> 	(__arm_vminq_x): Remove.
> 	(__arm_vmaxq_x): Remove.
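For readers following along, a minimal usage sketch of the intrinsics being
reworked (not part of the patch; it assumes arm_mve.h on a target with MVE
enabled, e.g. -march=armv8.1-m.main+mve):

  #include <arm_mve.h>

  /* Lane-wise max/min, plus the predicated _m form (inactive lanes come
     from the __inactive argument) and the _x form (inactive lanes are
     undefined).  The rework keeps all of these entry points; only their
     implementation moves to the new builtins framework.  */
  int8x16_t
  demo (int8x16_t a, int8x16_t b, mve_pred16_t p)
  {
    int8x16_t hi = vmaxq (a, b);
    int8x16_t lo = vminq (a, b);
    int8x16_t m  = vmaxq_m (a, a, b, p);  /* predicated; inactive lanes = a */
    int8x16_t x  = vminq_x (a, b, p);     /* predicated; inactive lanes undefined */
    return veorq (veorq (hi, lo), veorq (m, x));
  }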
> --- > gcc/config/arm/arm-mve-builtins-base.cc | 11 + > gcc/config/arm/arm-mve-builtins-base.def | 2 + > gcc/config/arm/arm-mve-builtins-base.h | 2 + > gcc/config/arm/arm_mve.h | 628 ----------------------- > 4 files changed, 15 insertions(+), 628 deletions(-) >=20 > diff --git a/gcc/config/arm/arm-mve-builtins-base.cc b/gcc/config/arm/arm= - > mve-builtins-base.cc > index 4bebf86f784..1839d5cb1a5 100644 > --- a/gcc/config/arm/arm-mve-builtins-base.cc > +++ b/gcc/config/arm/arm-mve-builtins-base.cc > @@ -110,6 +110,15 @@ namespace arm_mve { > UNSPEC##_M_S, UNSPEC##_M_U, UNSPEC##_M_F, > \ > UNSPEC##_M_N_S, UNSPEC##_M_N_U, -1)) >=20 > + /* Helper for builtins with RTX codes, _m predicated override, but > + no floating-point versions. */ > +#define FUNCTION_WITH_RTX_M_NO_F(NAME, RTX_S, RTX_U, UNSPEC) > FUNCTION \ > + (NAME, unspec_based_mve_function_exact_insn, > \ > + (RTX_S, RTX_U, UNKNOWN, \ > + -1, -1, -1, \ > + UNSPEC##_M_S, UNSPEC##_M_U, -1, > \ > + -1, -1, -1)) > + > /* Helper for builtins without RTX codes, no _m predicated and no _n > overrides. */ > #define FUNCTION_WITHOUT_M_N(NAME, UNSPEC) FUNCTION > \ > @@ -173,6 +182,8 @@ FUNCTION_WITHOUT_M_N (vcreateq, VCREATEQ) > FUNCTION_WITH_RTX_M (veorq, XOR, VEORQ) > FUNCTION_WITH_M_N_NO_F (vhaddq, VHADDQ) > FUNCTION_WITH_M_N_NO_F (vhsubq, VHSUBQ) > +FUNCTION_WITH_RTX_M_NO_F (vmaxq, SMAX, UMAX, VMAXQ) > +FUNCTION_WITH_RTX_M_NO_F (vminq, SMIN, UMIN, VMINQ) > FUNCTION_WITHOUT_N_NO_F (vmulhq, VMULHQ) > FUNCTION_WITH_RTX_M_N (vmulq, MULT, VMULQ) > FUNCTION_WITH_RTX_M_N_NO_N_F (vorrq, IOR, VORRQ) > diff --git a/gcc/config/arm/arm-mve-builtins-base.def b/gcc/config/arm/ar= m- > mve-builtins-base.def > index f2e40cda2af..3b42bf46e81 100644 > --- a/gcc/config/arm/arm-mve-builtins-base.def > +++ b/gcc/config/arm/arm-mve-builtins-base.def > @@ -25,6 +25,8 @@ DEF_MVE_FUNCTION (vcreateq, create, > all_integer_with_64, none) > DEF_MVE_FUNCTION (veorq, binary, all_integer, mx_or_none) > DEF_MVE_FUNCTION (vhaddq, binary_opt_n, all_integer, mx_or_none) > DEF_MVE_FUNCTION (vhsubq, binary_opt_n, all_integer, mx_or_none) > +DEF_MVE_FUNCTION (vmaxq, binary, all_integer, mx_or_none) > +DEF_MVE_FUNCTION (vminq, binary, all_integer, mx_or_none) > DEF_MVE_FUNCTION (vmulhq, binary, all_integer, mx_or_none) > DEF_MVE_FUNCTION (vmulq, binary_opt_n, all_integer, mx_or_none) > DEF_MVE_FUNCTION (vorrq, binary_orrq, all_integer, mx_or_none) > diff --git a/gcc/config/arm/arm-mve-builtins-base.h b/gcc/config/arm/arm- > mve-builtins-base.h > index 5b62de6a922..81d10f4a8f4 100644 > --- a/gcc/config/arm/arm-mve-builtins-base.h > +++ b/gcc/config/arm/arm-mve-builtins-base.h > @@ -30,6 +30,8 @@ extern const function_base *const vcreateq; > extern const function_base *const veorq; > extern const function_base *const vhaddq; > extern const function_base *const vhsubq; > +extern const function_base *const vmaxq; > +extern const function_base *const vminq; > extern const function_base *const vmulhq; > extern const function_base *const vmulq; > extern const function_base *const vorrq; > diff --git a/gcc/config/arm/arm_mve.h b/gcc/config/arm/arm_mve.h > index ad67dcfd024..5fbea52c8ef 100644 > --- a/gcc/config/arm/arm_mve.h > +++ b/gcc/config/arm/arm_mve.h > @@ -65,9 +65,7 @@ > #define vmullbq_int(__a, __b) __arm_vmullbq_int(__a, __b) > #define vmladavq(__a, __b) __arm_vmladavq(__a, __b) > #define vminvq(__a, __b) __arm_vminvq(__a, __b) > -#define vminq(__a, __b) __arm_vminq(__a, __b) > #define vmaxvq(__a, __b) __arm_vmaxvq(__a, __b) > -#define vmaxq(__a, __b) __arm_vmaxq(__a, __b) 
> #define vcmphiq(__a, __b) __arm_vcmphiq(__a, __b) > #define vcmpeqq(__a, __b) __arm_vcmpeqq(__a, __b) > #define vcmpcsq(__a, __b) __arm_vcmpcsq(__a, __b) > @@ -214,8 +212,6 @@ > #define vcaddq_rot90_m(__inactive, __a, __b, __p) > __arm_vcaddq_rot90_m(__inactive, __a, __b, __p) > #define vhcaddq_rot270_m(__inactive, __a, __b, __p) > __arm_vhcaddq_rot270_m(__inactive, __a, __b, __p) > #define vhcaddq_rot90_m(__inactive, __a, __b, __p) > __arm_vhcaddq_rot90_m(__inactive, __a, __b, __p) > -#define vmaxq_m(__inactive, __a, __b, __p) __arm_vmaxq_m(__inactive, > __a, __b, __p) > -#define vminq_m(__inactive, __a, __b, __p) __arm_vminq_m(__inactive, > __a, __b, __p) > #define vmladavaq_p(__a, __b, __c, __p) __arm_vmladavaq_p(__a, __b, __c, > __p) > #define vmladavaxq_p(__a, __b, __c, __p) __arm_vmladavaxq_p(__a, __b, > __c, __p) > #define vmlaq_m(__a, __b, __c, __p) __arm_vmlaq_m(__a, __b, __c, __p) > @@ -339,8 +335,6 @@ > #define viwdupq_x_u8(__a, __b, __imm, __p) __arm_viwdupq_x_u8(__a, > __b, __imm, __p) > #define viwdupq_x_u16(__a, __b, __imm, __p) __arm_viwdupq_x_u16(__a, > __b, __imm, __p) > #define viwdupq_x_u32(__a, __b, __imm, __p) __arm_viwdupq_x_u32(__a, > __b, __imm, __p) > -#define vminq_x(__a, __b, __p) __arm_vminq_x(__a, __b, __p) > -#define vmaxq_x(__a, __b, __p) __arm_vmaxq_x(__a, __b, __p) > #define vabsq_x(__a, __p) __arm_vabsq_x(__a, __p) > #define vclsq_x(__a, __p) __arm_vclsq_x(__a, __p) > #define vclzq_x(__a, __p) __arm_vclzq_x(__a, __p) > @@ -614,9 +608,7 @@ > #define vmullbq_int_u8(__a, __b) __arm_vmullbq_int_u8(__a, __b) > #define vmladavq_u8(__a, __b) __arm_vmladavq_u8(__a, __b) > #define vminvq_u8(__a, __b) __arm_vminvq_u8(__a, __b) > -#define vminq_u8(__a, __b) __arm_vminq_u8(__a, __b) > #define vmaxvq_u8(__a, __b) __arm_vmaxvq_u8(__a, __b) > -#define vmaxq_u8(__a, __b) __arm_vmaxq_u8(__a, __b) > #define vcmpneq_n_u8(__a, __b) __arm_vcmpneq_n_u8(__a, __b) > #define vcmphiq_u8(__a, __b) __arm_vcmphiq_u8(__a, __b) > #define vcmphiq_n_u8(__a, __b) __arm_vcmphiq_n_u8(__a, __b) > @@ -656,9 +648,7 @@ > #define vmladavxq_s8(__a, __b) __arm_vmladavxq_s8(__a, __b) > #define vmladavq_s8(__a, __b) __arm_vmladavq_s8(__a, __b) > #define vminvq_s8(__a, __b) __arm_vminvq_s8(__a, __b) > -#define vminq_s8(__a, __b) __arm_vminq_s8(__a, __b) > #define vmaxvq_s8(__a, __b) __arm_vmaxvq_s8(__a, __b) > -#define vmaxq_s8(__a, __b) __arm_vmaxq_s8(__a, __b) > #define vhcaddq_rot90_s8(__a, __b) __arm_vhcaddq_rot90_s8(__a, __b) > #define vhcaddq_rot270_s8(__a, __b) __arm_vhcaddq_rot270_s8(__a, __b) > #define vcaddq_rot90_s8(__a, __b) __arm_vcaddq_rot90_s8(__a, __b) > @@ -672,9 +662,7 @@ > #define vmullbq_int_u16(__a, __b) __arm_vmullbq_int_u16(__a, __b) > #define vmladavq_u16(__a, __b) __arm_vmladavq_u16(__a, __b) > #define vminvq_u16(__a, __b) __arm_vminvq_u16(__a, __b) > -#define vminq_u16(__a, __b) __arm_vminq_u16(__a, __b) > #define vmaxvq_u16(__a, __b) __arm_vmaxvq_u16(__a, __b) > -#define vmaxq_u16(__a, __b) __arm_vmaxq_u16(__a, __b) > #define vcmpneq_n_u16(__a, __b) __arm_vcmpneq_n_u16(__a, __b) > #define vcmphiq_u16(__a, __b) __arm_vcmphiq_u16(__a, __b) > #define vcmphiq_n_u16(__a, __b) __arm_vcmphiq_n_u16(__a, __b) > @@ -714,9 +702,7 @@ > #define vmladavxq_s16(__a, __b) __arm_vmladavxq_s16(__a, __b) > #define vmladavq_s16(__a, __b) __arm_vmladavq_s16(__a, __b) > #define vminvq_s16(__a, __b) __arm_vminvq_s16(__a, __b) > -#define vminq_s16(__a, __b) __arm_vminq_s16(__a, __b) > #define vmaxvq_s16(__a, __b) __arm_vmaxvq_s16(__a, __b) > -#define vmaxq_s16(__a, __b) __arm_vmaxq_s16(__a, __b) 
> #define vhcaddq_rot90_s16(__a, __b) __arm_vhcaddq_rot90_s16(__a, __b) > #define vhcaddq_rot270_s16(__a, __b) __arm_vhcaddq_rot270_s16(__a, > __b) > #define vcaddq_rot90_s16(__a, __b) __arm_vcaddq_rot90_s16(__a, __b) > @@ -730,9 +716,7 @@ > #define vmullbq_int_u32(__a, __b) __arm_vmullbq_int_u32(__a, __b) > #define vmladavq_u32(__a, __b) __arm_vmladavq_u32(__a, __b) > #define vminvq_u32(__a, __b) __arm_vminvq_u32(__a, __b) > -#define vminq_u32(__a, __b) __arm_vminq_u32(__a, __b) > #define vmaxvq_u32(__a, __b) __arm_vmaxvq_u32(__a, __b) > -#define vmaxq_u32(__a, __b) __arm_vmaxq_u32(__a, __b) > #define vcmpneq_n_u32(__a, __b) __arm_vcmpneq_n_u32(__a, __b) > #define vcmphiq_u32(__a, __b) __arm_vcmphiq_u32(__a, __b) > #define vcmphiq_n_u32(__a, __b) __arm_vcmphiq_n_u32(__a, __b) > @@ -772,9 +756,7 @@ > #define vmladavxq_s32(__a, __b) __arm_vmladavxq_s32(__a, __b) > #define vmladavq_s32(__a, __b) __arm_vmladavq_s32(__a, __b) > #define vminvq_s32(__a, __b) __arm_vminvq_s32(__a, __b) > -#define vminq_s32(__a, __b) __arm_vminq_s32(__a, __b) > #define vmaxvq_s32(__a, __b) __arm_vmaxvq_s32(__a, __b) > -#define vmaxq_s32(__a, __b) __arm_vmaxq_s32(__a, __b) > #define vhcaddq_rot90_s32(__a, __b) __arm_vhcaddq_rot90_s32(__a, __b) > #define vhcaddq_rot270_s32(__a, __b) __arm_vhcaddq_rot270_s32(__a, > __b) > #define vcaddq_rot90_s32(__a, __b) __arm_vcaddq_rot90_s32(__a, __b) > @@ -1411,18 +1393,6 @@ > #define vhcaddq_rot90_m_s8(__inactive, __a, __b, __p) > __arm_vhcaddq_rot90_m_s8(__inactive, __a, __b, __p) > #define vhcaddq_rot90_m_s32(__inactive, __a, __b, __p) > __arm_vhcaddq_rot90_m_s32(__inactive, __a, __b, __p) > #define vhcaddq_rot90_m_s16(__inactive, __a, __b, __p) > __arm_vhcaddq_rot90_m_s16(__inactive, __a, __b, __p) > -#define vmaxq_m_s8(__inactive, __a, __b, __p) > __arm_vmaxq_m_s8(__inactive, __a, __b, __p) > -#define vmaxq_m_s32(__inactive, __a, __b, __p) > __arm_vmaxq_m_s32(__inactive, __a, __b, __p) > -#define vmaxq_m_s16(__inactive, __a, __b, __p) > __arm_vmaxq_m_s16(__inactive, __a, __b, __p) > -#define vmaxq_m_u8(__inactive, __a, __b, __p) > __arm_vmaxq_m_u8(__inactive, __a, __b, __p) > -#define vmaxq_m_u32(__inactive, __a, __b, __p) > __arm_vmaxq_m_u32(__inactive, __a, __b, __p) > -#define vmaxq_m_u16(__inactive, __a, __b, __p) > __arm_vmaxq_m_u16(__inactive, __a, __b, __p) > -#define vminq_m_s8(__inactive, __a, __b, __p) > __arm_vminq_m_s8(__inactive, __a, __b, __p) > -#define vminq_m_s32(__inactive, __a, __b, __p) > __arm_vminq_m_s32(__inactive, __a, __b, __p) > -#define vminq_m_s16(__inactive, __a, __b, __p) > __arm_vminq_m_s16(__inactive, __a, __b, __p) > -#define vminq_m_u8(__inactive, __a, __b, __p) > __arm_vminq_m_u8(__inactive, __a, __b, __p) > -#define vminq_m_u32(__inactive, __a, __b, __p) > __arm_vminq_m_u32(__inactive, __a, __b, __p) > -#define vminq_m_u16(__inactive, __a, __b, __p) > __arm_vminq_m_u16(__inactive, __a, __b, __p) > #define vmladavaq_p_s8(__a, __b, __c, __p) __arm_vmladavaq_p_s8(__a, > __b, __c, __p) > #define vmladavaq_p_s32(__a, __b, __c, __p) __arm_vmladavaq_p_s32(__a, > __b, __c, __p) > #define vmladavaq_p_s16(__a, __b, __c, __p) __arm_vmladavaq_p_s16(__a, > __b, __c, __p) > @@ -1943,18 +1913,6 @@ > #define vdupq_x_n_u8(__a, __p) __arm_vdupq_x_n_u8(__a, __p) > #define vdupq_x_n_u16(__a, __p) __arm_vdupq_x_n_u16(__a, __p) > #define vdupq_x_n_u32(__a, __p) __arm_vdupq_x_n_u32(__a, __p) > -#define vminq_x_s8(__a, __b, __p) __arm_vminq_x_s8(__a, __b, __p) > -#define vminq_x_s16(__a, __b, __p) __arm_vminq_x_s16(__a, __b, __p) > -#define vminq_x_s32(__a, __b, 
__p) __arm_vminq_x_s32(__a, __b, __p) > -#define vminq_x_u8(__a, __b, __p) __arm_vminq_x_u8(__a, __b, __p) > -#define vminq_x_u16(__a, __b, __p) __arm_vminq_x_u16(__a, __b, __p) > -#define vminq_x_u32(__a, __b, __p) __arm_vminq_x_u32(__a, __b, __p) > -#define vmaxq_x_s8(__a, __b, __p) __arm_vmaxq_x_s8(__a, __b, __p) > -#define vmaxq_x_s16(__a, __b, __p) __arm_vmaxq_x_s16(__a, __b, __p) > -#define vmaxq_x_s32(__a, __b, __p) __arm_vmaxq_x_s32(__a, __b, __p) > -#define vmaxq_x_u8(__a, __b, __p) __arm_vmaxq_x_u8(__a, __b, __p) > -#define vmaxq_x_u16(__a, __b, __p) __arm_vmaxq_x_u16(__a, __b, __p) > -#define vmaxq_x_u32(__a, __b, __p) __arm_vmaxq_x_u32(__a, __b, __p) > #define vabsq_x_s8(__a, __p) __arm_vabsq_x_s8(__a, __p) > #define vabsq_x_s16(__a, __p) __arm_vabsq_x_s16(__a, __p) > #define vabsq_x_s32(__a, __p) __arm_vabsq_x_s32(__a, __p) > @@ -2937,13 +2895,6 @@ __arm_vminvq_u8 (uint8_t __a, uint8x16_t __b) > return __builtin_mve_vminvq_uv16qi (__a, __b); > } >=20 > -__extension__ extern __inline uint8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vminq_u8 (uint8x16_t __a, uint8x16_t __b) > -{ > - return __builtin_mve_vminq_uv16qi (__a, __b); > -} > - > __extension__ extern __inline uint8_t > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > __arm_vmaxvq_u8 (uint8_t __a, uint8x16_t __b) > @@ -2951,13 +2902,6 @@ __arm_vmaxvq_u8 (uint8_t __a, uint8x16_t __b) > return __builtin_mve_vmaxvq_uv16qi (__a, __b); > } >=20 > -__extension__ extern __inline uint8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vmaxq_u8 (uint8x16_t __a, uint8x16_t __b) > -{ > - return __builtin_mve_vmaxq_uv16qi (__a, __b); > -} > - > __extension__ extern __inline mve_pred16_t > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > __arm_vcmpneq_n_u8 (uint8x16_t __a, uint8_t __b) > @@ -3233,13 +3177,6 @@ __arm_vminvq_s8 (int8_t __a, int8x16_t __b) > return __builtin_mve_vminvq_sv16qi (__a, __b); > } >=20 > -__extension__ extern __inline int8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vminq_s8 (int8x16_t __a, int8x16_t __b) > -{ > - return __builtin_mve_vminq_sv16qi (__a, __b); > -} > - > __extension__ extern __inline int8_t > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > __arm_vmaxvq_s8 (int8_t __a, int8x16_t __b) > @@ -3247,13 +3184,6 @@ __arm_vmaxvq_s8 (int8_t __a, int8x16_t __b) > return __builtin_mve_vmaxvq_sv16qi (__a, __b); > } >=20 > -__extension__ extern __inline int8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vmaxq_s8 (int8x16_t __a, int8x16_t __b) > -{ > - return __builtin_mve_vmaxq_sv16qi (__a, __b); > -} > - > __extension__ extern __inline int8x16_t > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > __arm_vhcaddq_rot90_s8 (int8x16_t __a, int8x16_t __b) > @@ -3345,13 +3275,6 @@ __arm_vminvq_u16 (uint16_t __a, uint16x8_t > __b) > return __builtin_mve_vminvq_uv8hi (__a, __b); > } >=20 > -__extension__ extern __inline uint16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vminq_u16 (uint16x8_t __a, uint16x8_t __b) > -{ > - return __builtin_mve_vminq_uv8hi (__a, __b); > -} > - > __extension__ extern __inline uint16_t > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > __arm_vmaxvq_u16 (uint16_t __a, uint16x8_t __b) > @@ -3359,13 +3282,6 @@ __arm_vmaxvq_u16 (uint16_t __a, uint16x8_t > __b) > return 
__builtin_mve_vmaxvq_uv8hi (__a, __b); > } >=20 > -__extension__ extern __inline uint16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vmaxq_u16 (uint16x8_t __a, uint16x8_t __b) > -{ > - return __builtin_mve_vmaxq_uv8hi (__a, __b); > -} > - > __extension__ extern __inline mve_pred16_t > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > __arm_vcmpneq_n_u16 (uint16x8_t __a, uint16_t __b) > @@ -3641,13 +3557,6 @@ __arm_vminvq_s16 (int16_t __a, int16x8_t __b) > return __builtin_mve_vminvq_sv8hi (__a, __b); > } >=20 > -__extension__ extern __inline int16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vminq_s16 (int16x8_t __a, int16x8_t __b) > -{ > - return __builtin_mve_vminq_sv8hi (__a, __b); > -} > - > __extension__ extern __inline int16_t > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > __arm_vmaxvq_s16 (int16_t __a, int16x8_t __b) > @@ -3655,13 +3564,6 @@ __arm_vmaxvq_s16 (int16_t __a, int16x8_t __b) > return __builtin_mve_vmaxvq_sv8hi (__a, __b); > } >=20 > -__extension__ extern __inline int16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vmaxq_s16 (int16x8_t __a, int16x8_t __b) > -{ > - return __builtin_mve_vmaxq_sv8hi (__a, __b); > -} > - > __extension__ extern __inline int16x8_t > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > __arm_vhcaddq_rot90_s16 (int16x8_t __a, int16x8_t __b) > @@ -3753,13 +3655,6 @@ __arm_vminvq_u32 (uint32_t __a, uint32x4_t > __b) > return __builtin_mve_vminvq_uv4si (__a, __b); > } >=20 > -__extension__ extern __inline uint32x4_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vminq_u32 (uint32x4_t __a, uint32x4_t __b) > -{ > - return __builtin_mve_vminq_uv4si (__a, __b); > -} > - > __extension__ extern __inline uint32_t > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > __arm_vmaxvq_u32 (uint32_t __a, uint32x4_t __b) > @@ -3767,13 +3662,6 @@ __arm_vmaxvq_u32 (uint32_t __a, uint32x4_t > __b) > return __builtin_mve_vmaxvq_uv4si (__a, __b); > } >=20 > -__extension__ extern __inline uint32x4_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vmaxq_u32 (uint32x4_t __a, uint32x4_t __b) > -{ > - return __builtin_mve_vmaxq_uv4si (__a, __b); > -} > - > __extension__ extern __inline mve_pred16_t > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > __arm_vcmpneq_n_u32 (uint32x4_t __a, uint32_t __b) > @@ -4049,13 +3937,6 @@ __arm_vminvq_s32 (int32_t __a, int32x4_t __b) > return __builtin_mve_vminvq_sv4si (__a, __b); > } >=20 > -__extension__ extern __inline int32x4_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vminq_s32 (int32x4_t __a, int32x4_t __b) > -{ > - return __builtin_mve_vminq_sv4si (__a, __b); > -} > - > __extension__ extern __inline int32_t > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > __arm_vmaxvq_s32 (int32_t __a, int32x4_t __b) > @@ -4063,13 +3944,6 @@ __arm_vmaxvq_s32 (int32_t __a, int32x4_t __b) > return __builtin_mve_vmaxvq_sv4si (__a, __b); > } >=20 > -__extension__ extern __inline int32x4_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vmaxq_s32 (int32x4_t __a, int32x4_t __b) > -{ > - return __builtin_mve_vmaxq_sv4si (__a, __b); > -} > - > __extension__ extern __inline int32x4_t > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > __arm_vhcaddq_rot90_s32 (int32x4_t 
__a, int32x4_t __b) > @@ -7380,90 +7254,6 @@ __arm_vhcaddq_rot90_m_s16 (int16x8_t > __inactive, int16x8_t __a, int16x8_t __b, m > return __builtin_mve_vhcaddq_rot90_m_sv8hi (__inactive, __a, __b, __p)= ; > } >=20 > -__extension__ extern __inline int8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vmaxq_m_s8 (int8x16_t __inactive, int8x16_t __a, int8x16_t __b, > mve_pred16_t __p) > -{ > - return __builtin_mve_vmaxq_m_sv16qi (__inactive, __a, __b, __p); > -} > - > -__extension__ extern __inline int32x4_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vmaxq_m_s32 (int32x4_t __inactive, int32x4_t __a, int32x4_t __b, > mve_pred16_t __p) > -{ > - return __builtin_mve_vmaxq_m_sv4si (__inactive, __a, __b, __p); > -} > - > -__extension__ extern __inline int16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vmaxq_m_s16 (int16x8_t __inactive, int16x8_t __a, int16x8_t __b, > mve_pred16_t __p) > -{ > - return __builtin_mve_vmaxq_m_sv8hi (__inactive, __a, __b, __p); > -} > - > -__extension__ extern __inline uint8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vmaxq_m_u8 (uint8x16_t __inactive, uint8x16_t __a, uint8x16_t __b, > mve_pred16_t __p) > -{ > - return __builtin_mve_vmaxq_m_uv16qi (__inactive, __a, __b, __p); > -} > - > -__extension__ extern __inline uint32x4_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vmaxq_m_u32 (uint32x4_t __inactive, uint32x4_t __a, uint32x4_t > __b, mve_pred16_t __p) > -{ > - return __builtin_mve_vmaxq_m_uv4si (__inactive, __a, __b, __p); > -} > - > -__extension__ extern __inline uint16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vmaxq_m_u16 (uint16x8_t __inactive, uint16x8_t __a, uint16x8_t > __b, mve_pred16_t __p) > -{ > - return __builtin_mve_vmaxq_m_uv8hi (__inactive, __a, __b, __p); > -} > - > -__extension__ extern __inline int8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vminq_m_s8 (int8x16_t __inactive, int8x16_t __a, int8x16_t __b, > mve_pred16_t __p) > -{ > - return __builtin_mve_vminq_m_sv16qi (__inactive, __a, __b, __p); > -} > - > -__extension__ extern __inline int32x4_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vminq_m_s32 (int32x4_t __inactive, int32x4_t __a, int32x4_t __b, > mve_pred16_t __p) > -{ > - return __builtin_mve_vminq_m_sv4si (__inactive, __a, __b, __p); > -} > - > -__extension__ extern __inline int16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vminq_m_s16 (int16x8_t __inactive, int16x8_t __a, int16x8_t __b, > mve_pred16_t __p) > -{ > - return __builtin_mve_vminq_m_sv8hi (__inactive, __a, __b, __p); > -} > - > -__extension__ extern __inline uint8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vminq_m_u8 (uint8x16_t __inactive, uint8x16_t __a, uint8x16_t __b, > mve_pred16_t __p) > -{ > - return __builtin_mve_vminq_m_uv16qi (__inactive, __a, __b, __p); > -} > - > -__extension__ extern __inline uint32x4_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vminq_m_u32 (uint32x4_t __inactive, uint32x4_t __a, uint32x4_t > __b, mve_pred16_t __p) > -{ > - return __builtin_mve_vminq_m_uv4si (__inactive, __a, __b, __p); > -} > - > -__extension__ extern __inline uint16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > 
-__arm_vminq_m_u16 (uint16x8_t __inactive, uint16x8_t __a, uint16x8_t > __b, mve_pred16_t __p) > -{ > - return __builtin_mve_vminq_m_uv8hi (__inactive, __a, __b, __p); > -} > - > __extension__ extern __inline int32_t > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > __arm_vmladavaq_p_s8 (int32_t __a, int8x16_t __b, int8x16_t __c, > mve_pred16_t __p) > @@ -10635,90 +10425,6 @@ __arm_vdupq_x_n_u32 (uint32_t __a, > mve_pred16_t __p) > return __builtin_mve_vdupq_m_n_uv4si (__arm_vuninitializedq_u32 (), > __a, __p); > } >=20 > -__extension__ extern __inline int8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vminq_x_s8 (int8x16_t __a, int8x16_t __b, mve_pred16_t __p) > -{ > - return __builtin_mve_vminq_m_sv16qi (__arm_vuninitializedq_s8 (), __a, > __b, __p); > -} > - > -__extension__ extern __inline int16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vminq_x_s16 (int16x8_t __a, int16x8_t __b, mve_pred16_t __p) > -{ > - return __builtin_mve_vminq_m_sv8hi (__arm_vuninitializedq_s16 (), __a, > __b, __p); > -} > - > -__extension__ extern __inline int32x4_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vminq_x_s32 (int32x4_t __a, int32x4_t __b, mve_pred16_t __p) > -{ > - return __builtin_mve_vminq_m_sv4si (__arm_vuninitializedq_s32 (), __a, > __b, __p); > -} > - > -__extension__ extern __inline uint8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vminq_x_u8 (uint8x16_t __a, uint8x16_t __b, mve_pred16_t __p) > -{ > - return __builtin_mve_vminq_m_uv16qi (__arm_vuninitializedq_u8 (), __a, > __b, __p); > -} > - > -__extension__ extern __inline uint16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vminq_x_u16 (uint16x8_t __a, uint16x8_t __b, mve_pred16_t __p) > -{ > - return __builtin_mve_vminq_m_uv8hi (__arm_vuninitializedq_u16 (), __a, > __b, __p); > -} > - > -__extension__ extern __inline uint32x4_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vminq_x_u32 (uint32x4_t __a, uint32x4_t __b, mve_pred16_t __p) > -{ > - return __builtin_mve_vminq_m_uv4si (__arm_vuninitializedq_u32 (), __a, > __b, __p); > -} > - > -__extension__ extern __inline int8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vmaxq_x_s8 (int8x16_t __a, int8x16_t __b, mve_pred16_t __p) > -{ > - return __builtin_mve_vmaxq_m_sv16qi (__arm_vuninitializedq_s8 (), __a, > __b, __p); > -} > - > -__extension__ extern __inline int16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vmaxq_x_s16 (int16x8_t __a, int16x8_t __b, mve_pred16_t __p) > -{ > - return __builtin_mve_vmaxq_m_sv8hi (__arm_vuninitializedq_s16 (), __a, > __b, __p); > -} > - > -__extension__ extern __inline int32x4_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vmaxq_x_s32 (int32x4_t __a, int32x4_t __b, mve_pred16_t __p) > -{ > - return __builtin_mve_vmaxq_m_sv4si (__arm_vuninitializedq_s32 (), __a, > __b, __p); > -} > - > -__extension__ extern __inline uint8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vmaxq_x_u8 (uint8x16_t __a, uint8x16_t __b, mve_pred16_t __p) > -{ > - return __builtin_mve_vmaxq_m_uv16qi (__arm_vuninitializedq_u8 (), __a, > __b, __p); > -} > - > -__extension__ extern __inline uint16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vmaxq_x_u16 
(uint16x8_t __a, uint16x8_t __b, mve_pred16_t __p) > -{ > - return __builtin_mve_vmaxq_m_uv8hi (__arm_vuninitializedq_u16 (), __a, > __b, __p); > -} > - > -__extension__ extern __inline uint32x4_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vmaxq_x_u32 (uint32x4_t __a, uint32x4_t __b, mve_pred16_t __p) > -{ > - return __builtin_mve_vmaxq_m_uv4si (__arm_vuninitializedq_u32 (), __a, > __b, __p); > -} > - > __extension__ extern __inline int8x16_t > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > __arm_vabsq_x_s8 (int8x16_t __a, mve_pred16_t __p) > @@ -15624,13 +15330,6 @@ __arm_vminvq (uint8_t __a, uint8x16_t __b) > return __arm_vminvq_u8 (__a, __b); > } >=20 > -__extension__ extern __inline uint8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vminq (uint8x16_t __a, uint8x16_t __b) > -{ > - return __arm_vminq_u8 (__a, __b); > -} > - > __extension__ extern __inline uint8_t > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > __arm_vmaxvq (uint8_t __a, uint8x16_t __b) > @@ -15638,13 +15337,6 @@ __arm_vmaxvq (uint8_t __a, uint8x16_t __b) > return __arm_vmaxvq_u8 (__a, __b); > } >=20 > -__extension__ extern __inline uint8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vmaxq (uint8x16_t __a, uint8x16_t __b) > -{ > - return __arm_vmaxq_u8 (__a, __b); > -} > - > __extension__ extern __inline mve_pred16_t > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > __arm_vcmpneq (uint8x16_t __a, uint8_t __b) > @@ -15918,13 +15610,6 @@ __arm_vminvq (int8_t __a, int8x16_t __b) > return __arm_vminvq_s8 (__a, __b); > } >=20 > -__extension__ extern __inline int8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vminq (int8x16_t __a, int8x16_t __b) > -{ > - return __arm_vminq_s8 (__a, __b); > -} > - > __extension__ extern __inline int8_t > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > __arm_vmaxvq (int8_t __a, int8x16_t __b) > @@ -15932,13 +15617,6 @@ __arm_vmaxvq (int8_t __a, int8x16_t __b) > return __arm_vmaxvq_s8 (__a, __b); > } >=20 > -__extension__ extern __inline int8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vmaxq (int8x16_t __a, int8x16_t __b) > -{ > - return __arm_vmaxq_s8 (__a, __b); > -} > - > __extension__ extern __inline int8x16_t > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > __arm_vhcaddq_rot90 (int8x16_t __a, int8x16_t __b) > @@ -16030,13 +15708,6 @@ __arm_vminvq (uint16_t __a, uint16x8_t __b) > return __arm_vminvq_u16 (__a, __b); > } >=20 > -__extension__ extern __inline uint16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vminq (uint16x8_t __a, uint16x8_t __b) > -{ > - return __arm_vminq_u16 (__a, __b); > -} > - > __extension__ extern __inline uint16_t > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > __arm_vmaxvq (uint16_t __a, uint16x8_t __b) > @@ -16044,13 +15715,6 @@ __arm_vmaxvq (uint16_t __a, uint16x8_t __b) > return __arm_vmaxvq_u16 (__a, __b); > } >=20 > -__extension__ extern __inline uint16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vmaxq (uint16x8_t __a, uint16x8_t __b) > -{ > - return __arm_vmaxq_u16 (__a, __b); > -} > - > __extension__ extern __inline mve_pred16_t > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > __arm_vcmpneq (uint16x8_t __a, uint16_t __b) > @@ 
-16324,13 +15988,6 @@ __arm_vminvq (int16_t __a, int16x8_t __b) > return __arm_vminvq_s16 (__a, __b); > } >=20 > -__extension__ extern __inline int16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vminq (int16x8_t __a, int16x8_t __b) > -{ > - return __arm_vminq_s16 (__a, __b); > -} > - > __extension__ extern __inline int16_t > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > __arm_vmaxvq (int16_t __a, int16x8_t __b) > @@ -16338,13 +15995,6 @@ __arm_vmaxvq (int16_t __a, int16x8_t __b) > return __arm_vmaxvq_s16 (__a, __b); > } >=20 > -__extension__ extern __inline int16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vmaxq (int16x8_t __a, int16x8_t __b) > -{ > - return __arm_vmaxq_s16 (__a, __b); > -} > - > __extension__ extern __inline int16x8_t > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > __arm_vhcaddq_rot90 (int16x8_t __a, int16x8_t __b) > @@ -16436,13 +16086,6 @@ __arm_vminvq (uint32_t __a, uint32x4_t __b) > return __arm_vminvq_u32 (__a, __b); > } >=20 > -__extension__ extern __inline uint32x4_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vminq (uint32x4_t __a, uint32x4_t __b) > -{ > - return __arm_vminq_u32 (__a, __b); > -} > - > __extension__ extern __inline uint32_t > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > __arm_vmaxvq (uint32_t __a, uint32x4_t __b) > @@ -16450,13 +16093,6 @@ __arm_vmaxvq (uint32_t __a, uint32x4_t __b) > return __arm_vmaxvq_u32 (__a, __b); > } >=20 > -__extension__ extern __inline uint32x4_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vmaxq (uint32x4_t __a, uint32x4_t __b) > -{ > - return __arm_vmaxq_u32 (__a, __b); > -} > - > __extension__ extern __inline mve_pred16_t > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > __arm_vcmpneq (uint32x4_t __a, uint32_t __b) > @@ -16730,13 +16366,6 @@ __arm_vminvq (int32_t __a, int32x4_t __b) > return __arm_vminvq_s32 (__a, __b); > } >=20 > -__extension__ extern __inline int32x4_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vminq (int32x4_t __a, int32x4_t __b) > -{ > - return __arm_vminq_s32 (__a, __b); > -} > - > __extension__ extern __inline int32_t > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > __arm_vmaxvq (int32_t __a, int32x4_t __b) > @@ -16744,13 +16373,6 @@ __arm_vmaxvq (int32_t __a, int32x4_t __b) > return __arm_vmaxvq_s32 (__a, __b); > } >=20 > -__extension__ extern __inline int32x4_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vmaxq (int32x4_t __a, int32x4_t __b) > -{ > - return __arm_vmaxq_s32 (__a, __b); > -} > - > __extension__ extern __inline int32x4_t > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > __arm_vhcaddq_rot90 (int32x4_t __a, int32x4_t __b) > @@ -20020,90 +19642,6 @@ __arm_vhcaddq_rot90_m (int16x8_t __inactive, > int16x8_t __a, int16x8_t __b, mve_p > return __arm_vhcaddq_rot90_m_s16 (__inactive, __a, __b, __p); > } >=20 > -__extension__ extern __inline int8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vmaxq_m (int8x16_t __inactive, int8x16_t __a, int8x16_t __b, > mve_pred16_t __p) > -{ > - return __arm_vmaxq_m_s8 (__inactive, __a, __b, __p); > -} > - > -__extension__ extern __inline int32x4_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vmaxq_m (int32x4_t __inactive, 
int32x4_t __a, int32x4_t __b, > mve_pred16_t __p) > -{ > - return __arm_vmaxq_m_s32 (__inactive, __a, __b, __p); > -} > - > -__extension__ extern __inline int16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vmaxq_m (int16x8_t __inactive, int16x8_t __a, int16x8_t __b, > mve_pred16_t __p) > -{ > - return __arm_vmaxq_m_s16 (__inactive, __a, __b, __p); > -} > - > -__extension__ extern __inline uint8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vmaxq_m (uint8x16_t __inactive, uint8x16_t __a, uint8x16_t __b, > mve_pred16_t __p) > -{ > - return __arm_vmaxq_m_u8 (__inactive, __a, __b, __p); > -} > - > -__extension__ extern __inline uint32x4_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vmaxq_m (uint32x4_t __inactive, uint32x4_t __a, uint32x4_t __b, > mve_pred16_t __p) > -{ > - return __arm_vmaxq_m_u32 (__inactive, __a, __b, __p); > -} > - > -__extension__ extern __inline uint16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vmaxq_m (uint16x8_t __inactive, uint16x8_t __a, uint16x8_t __b, > mve_pred16_t __p) > -{ > - return __arm_vmaxq_m_u16 (__inactive, __a, __b, __p); > -} > - > -__extension__ extern __inline int8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vminq_m (int8x16_t __inactive, int8x16_t __a, int8x16_t __b, > mve_pred16_t __p) > -{ > - return __arm_vminq_m_s8 (__inactive, __a, __b, __p); > -} > - > -__extension__ extern __inline int32x4_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vminq_m (int32x4_t __inactive, int32x4_t __a, int32x4_t __b, > mve_pred16_t __p) > -{ > - return __arm_vminq_m_s32 (__inactive, __a, __b, __p); > -} > - > -__extension__ extern __inline int16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vminq_m (int16x8_t __inactive, int16x8_t __a, int16x8_t __b, > mve_pred16_t __p) > -{ > - return __arm_vminq_m_s16 (__inactive, __a, __b, __p); > -} > - > -__extension__ extern __inline uint8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vminq_m (uint8x16_t __inactive, uint8x16_t __a, uint8x16_t __b, > mve_pred16_t __p) > -{ > - return __arm_vminq_m_u8 (__inactive, __a, __b, __p); > -} > - > -__extension__ extern __inline uint32x4_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vminq_m (uint32x4_t __inactive, uint32x4_t __a, uint32x4_t __b, > mve_pred16_t __p) > -{ > - return __arm_vminq_m_u32 (__inactive, __a, __b, __p); > -} > - > -__extension__ extern __inline uint16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vminq_m (uint16x8_t __inactive, uint16x8_t __a, uint16x8_t __b, > mve_pred16_t __p) > -{ > - return __arm_vminq_m_u16 (__inactive, __a, __b, __p); > -} > - > __extension__ extern __inline int32_t > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > __arm_vmladavaq_p (int32_t __a, int8x16_t __b, int8x16_t __c, > mve_pred16_t __p) > @@ -22806,90 +22344,6 @@ __arm_viwdupq_x_u32 (uint32_t *__a, > uint32_t __b, const int __imm, mve_pred16_t > return __arm_viwdupq_x_wb_u32 (__a, __b, __imm, __p); > } >=20 > -__extension__ extern __inline int8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vminq_x (int8x16_t __a, int8x16_t __b, mve_pred16_t __p) > -{ > - return __arm_vminq_x_s8 (__a, __b, __p); > -} > - > -__extension__ extern __inline int16x8_t > 
-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vminq_x (int16x8_t __a, int16x8_t __b, mve_pred16_t __p) > -{ > - return __arm_vminq_x_s16 (__a, __b, __p); > -} > - > -__extension__ extern __inline int32x4_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vminq_x (int32x4_t __a, int32x4_t __b, mve_pred16_t __p) > -{ > - return __arm_vminq_x_s32 (__a, __b, __p); > -} > - > -__extension__ extern __inline uint8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vminq_x (uint8x16_t __a, uint8x16_t __b, mve_pred16_t __p) > -{ > - return __arm_vminq_x_u8 (__a, __b, __p); > -} > - > -__extension__ extern __inline uint16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vminq_x (uint16x8_t __a, uint16x8_t __b, mve_pred16_t __p) > -{ > - return __arm_vminq_x_u16 (__a, __b, __p); > -} > - > -__extension__ extern __inline uint32x4_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vminq_x (uint32x4_t __a, uint32x4_t __b, mve_pred16_t __p) > -{ > - return __arm_vminq_x_u32 (__a, __b, __p); > -} > - > -__extension__ extern __inline int8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vmaxq_x (int8x16_t __a, int8x16_t __b, mve_pred16_t __p) > -{ > - return __arm_vmaxq_x_s8 (__a, __b, __p); > -} > - > -__extension__ extern __inline int16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vmaxq_x (int16x8_t __a, int16x8_t __b, mve_pred16_t __p) > -{ > - return __arm_vmaxq_x_s16 (__a, __b, __p); > -} > - > -__extension__ extern __inline int32x4_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vmaxq_x (int32x4_t __a, int32x4_t __b, mve_pred16_t __p) > -{ > - return __arm_vmaxq_x_s32 (__a, __b, __p); > -} > - > -__extension__ extern __inline uint8x16_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vmaxq_x (uint8x16_t __a, uint8x16_t __b, mve_pred16_t __p) > -{ > - return __arm_vmaxq_x_u8 (__a, __b, __p); > -} > - > -__extension__ extern __inline uint16x8_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vmaxq_x (uint16x8_t __a, uint16x8_t __b, mve_pred16_t __p) > -{ > - return __arm_vmaxq_x_u16 (__a, __b, __p); > -} > - > -__extension__ extern __inline uint32x4_t > -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > -__arm_vmaxq_x (uint32x4_t __a, uint32x4_t __b, mve_pred16_t __p) > -{ > - return __arm_vmaxq_x_u32 (__a, __b, __p); > -} > - > __extension__ extern __inline int8x16_t > __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) > __arm_vabsq_x (int8x16_t __a, mve_pred16_t __p) > @@ -27274,16 +26728,6 @@ extern void *__ARM_undef; > int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: > __arm_vhcaddq_rot90_s16 (__ARM_mve_coerce(__p0, int16x8_t), > __ARM_mve_coerce(__p1, int16x8_t)), \ > int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: > __arm_vhcaddq_rot90_s32 (__ARM_mve_coerce(__p0, int32x4_t), > __ARM_mve_coerce(__p1, int32x4_t)));}) >=20 > -#define __arm_vminq(p0,p1) ({ __typeof(p0) __p0 =3D (p0); \ > - __typeof(p1) __p1 =3D (p1); \ > - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, > \ > - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: > __arm_vminq_s8 (__ARM_mve_coerce(__p0, int8x16_t), > __ARM_mve_coerce(__p1, int8x16_t)), \ > - int 
(*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: > __arm_vminq_s16 (__ARM_mve_coerce(__p0, int16x8_t), > __ARM_mve_coerce(__p1, int16x8_t)), \ > - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: > __arm_vminq_s32 (__ARM_mve_coerce(__p0, int32x4_t), > __ARM_mve_coerce(__p1, int32x4_t)), \ > - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: > __arm_vminq_u8 (__ARM_mve_coerce(__p0, uint8x16_t), > __ARM_mve_coerce(__p1, uint8x16_t)), \ > - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: > __arm_vminq_u16 (__ARM_mve_coerce(__p0, uint16x8_t), > __ARM_mve_coerce(__p1, uint16x8_t)), \ > - int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: > __arm_vminq_u32 (__ARM_mve_coerce(__p0, uint32x4_t), > __ARM_mve_coerce(__p1, uint32x4_t)));}) > - > #define __arm_vminaq(p0,p1) ({ __typeof(p0) __p0 =3D (p0); \ > __typeof(p1) __p1 =3D (p1); \ > _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, > \ > @@ -27291,16 +26735,6 @@ extern void *__ARM_undef; > int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int16x8_t]: > __arm_vminaq_s16 (__ARM_mve_coerce(__p0, uint16x8_t), > __ARM_mve_coerce(__p1, int16x8_t)), \ > int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_int32x4_t]: > __arm_vminaq_s32 (__ARM_mve_coerce(__p0, uint32x4_t), > __ARM_mve_coerce(__p1, int32x4_t)));}) >=20 > -#define __arm_vmaxq(p0,p1) ({ __typeof(p0) __p0 =3D (p0); \ > - __typeof(p1) __p1 =3D (p1); \ > - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, > \ > - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: > __arm_vmaxq_s8 (__ARM_mve_coerce(__p0, int8x16_t), > __ARM_mve_coerce(__p1, int8x16_t)), \ > - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: > __arm_vmaxq_s16 (__ARM_mve_coerce(__p0, int16x8_t), > __ARM_mve_coerce(__p1, int16x8_t)), \ > - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: > __arm_vmaxq_s32 (__ARM_mve_coerce(__p0, int32x4_t), > __ARM_mve_coerce(__p1, int32x4_t)), \ > - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: > __arm_vmaxq_u8 (__ARM_mve_coerce(__p0, uint8x16_t), > __ARM_mve_coerce(__p1, uint8x16_t)), \ > - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: > __arm_vmaxq_u16 (__ARM_mve_coerce(__p0, uint16x8_t), > __ARM_mve_coerce(__p1, uint16x8_t)), \ > - int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: > __arm_vmaxq_u32 (__ARM_mve_coerce(__p0, uint32x4_t), > __ARM_mve_coerce(__p1, uint32x4_t)));}) > - > #define __arm_vmaxaq(p0,p1) ({ __typeof(p0) __p0 =3D (p0); \ > __typeof(p1) __p1 =3D (p1); \ > _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, > \ > @@ -28867,16 +28301,6 @@ extern void *__ARM_undef; > int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: > __arm_vmullbq_int_u16 (__ARM_mve_coerce(__p0, uint16x8_t), > __ARM_mve_coerce(__p1, uint16x8_t)), \ > int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: > __arm_vmullbq_int_u32 (__ARM_mve_coerce(__p0, uint32x4_t), > __ARM_mve_coerce(__p1, uint32x4_t)));}) >=20 > -#define __arm_vminq(p0,p1) ({ __typeof(p0) __p0 =3D (p0); \ > - __typeof(p1) __p1 =3D (p1); \ > - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, > \ > - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: > __arm_vminq_s8 (__ARM_mve_coerce(__p0, int8x16_t), > __ARM_mve_coerce(__p1, int8x16_t)), \ > - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: > __arm_vminq_s16 (__ARM_mve_coerce(__p0, int16x8_t), > __ARM_mve_coerce(__p1, int16x8_t)), \ > 
- int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: > __arm_vminq_s32 (__ARM_mve_coerce(__p0, int32x4_t), > __ARM_mve_coerce(__p1, int32x4_t)), \ > - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: > __arm_vminq_u8 (__ARM_mve_coerce(__p0, uint8x16_t), > __ARM_mve_coerce(__p1, uint8x16_t)), \ > - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: > __arm_vminq_u16 (__ARM_mve_coerce(__p0, uint16x8_t), > __ARM_mve_coerce(__p1, uint16x8_t)), \ > - int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: > __arm_vminq_u32 (__ARM_mve_coerce(__p0, uint32x4_t), > __ARM_mve_coerce(__p1, uint32x4_t)));}) > - > #define __arm_vminaq(p0,p1) ({ __typeof(p0) __p0 =3D (p0); \ > __typeof(p1) __p1 =3D (p1); \ > _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, > \ > @@ -28884,16 +28308,6 @@ extern void *__ARM_undef; > int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int16x8_t]: > __arm_vminaq_s16 (__ARM_mve_coerce(__p0, uint16x8_t), > __ARM_mve_coerce(__p1, int16x8_t)), \ > int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_int32x4_t]: > __arm_vminaq_s32 (__ARM_mve_coerce(__p0, uint32x4_t), > __ARM_mve_coerce(__p1, int32x4_t)));}) >=20 > -#define __arm_vmaxq(p0,p1) ({ __typeof(p0) __p0 =3D (p0); \ > - __typeof(p1) __p1 =3D (p1); \ > - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, > \ > - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: > __arm_vmaxq_s8 (__ARM_mve_coerce(__p0, int8x16_t), > __ARM_mve_coerce(__p1, int8x16_t)), \ > - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: > __arm_vmaxq_s16 (__ARM_mve_coerce(__p0, int16x8_t), > __ARM_mve_coerce(__p1, int16x8_t)), \ > - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: > __arm_vmaxq_s32 (__ARM_mve_coerce(__p0, int32x4_t), > __ARM_mve_coerce(__p1, int32x4_t)), \ > - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: > __arm_vmaxq_u8 (__ARM_mve_coerce(__p0, uint8x16_t), > __ARM_mve_coerce(__p1, uint8x16_t)), \ > - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: > __arm_vmaxq_u16 (__ARM_mve_coerce(__p0, uint16x8_t), > __ARM_mve_coerce(__p1, uint16x8_t)), \ > - int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: > __arm_vmaxq_u32 (__ARM_mve_coerce(__p0, uint32x4_t), > __ARM_mve_coerce(__p1, uint32x4_t)));}) > - > #define __arm_vmaxaq(p0,p1) ({ __typeof(p0) __p0 =3D (p0); \ > __typeof(p1) __p1 =3D (p1); \ > _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, > \ > @@ -30608,28 +30022,6 @@ extern void *__ARM_undef; > int > (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t][__ARM_mve > _type_int16x8_t]: __arm_vhcaddq_rot90_m_s16 (__ARM_mve_coerce(__p0, > int16x8_t), __ARM_mve_coerce(__p1, int16x8_t), __ARM_mve_coerce(__p2, > int16x8_t), p3), \ > int > (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t][__ARM_mve > _type_int32x4_t]: __arm_vhcaddq_rot90_m_s32 (__ARM_mve_coerce(__p0, > int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), __ARM_mve_coerce(__p2, > int32x4_t), p3));}) >=20 > -#define __arm_vmaxq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 =3D (p0); \ > - __typeof(p1) __p1 =3D (p1); \ > - __typeof(p2) __p2 =3D (p2); \ > - _Generic( (int > (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)][__ARM_mve_typ > eid(__p2)])0, \ > - int > (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t][__ARM_mve > _type_int8x16_t]: __arm_vmaxq_m_s8 (__ARM_mve_coerce(__p0, > int8x16_t), __ARM_mve_coerce(__p1, int8x16_t), __ARM_mve_coerce(__p2, > int8x16_t), p3), \ > - int > 
> -  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vmaxq_m_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t), __ARM_mve_coerce(__p2, int16x8_t), p3), \
> -  int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vmaxq_m_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), __ARM_mve_coerce(__p2, int32x4_t), p3), \
> -  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vmaxq_m_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t), __ARM_mve_coerce(__p2, uint8x16_t), p3), \
> -  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vmaxq_m_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t), __ARM_mve_coerce(__p2, uint16x8_t), p3), \
> -  int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vmaxq_m_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t), __ARM_mve_coerce(__p2, uint32x4_t), p3));})
> -
> -#define __arm_vminq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 = (p0); \
> -  __typeof(p1) __p1 = (p1); \
> -  __typeof(p2) __p2 = (p2); \
> -  _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)][__ARM_mve_typeid(__p2)])0, \
> -  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vminq_m_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t), __ARM_mve_coerce(__p2, int8x16_t), p3), \
> -  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vminq_m_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t), __ARM_mve_coerce(__p2, int16x8_t), p3), \
> -  int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vminq_m_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), __ARM_mve_coerce(__p2, int32x4_t), p3), \
> -  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vminq_m_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t), __ARM_mve_coerce(__p2, uint8x16_t), p3), \
> -  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vminq_m_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t), __ARM_mve_coerce(__p2, uint16x8_t), p3), \
> -  int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vminq_m_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t), __ARM_mve_coerce(__p2, uint32x4_t), p3));})
> -
>  #define __arm_vmlaq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 = (p0); \
>    __typeof(p1) __p1 = (p1); \
>    __typeof(p2) __p2 = (p2); \
> @@ -31068,26 +30460,6 @@ extern void *__ARM_undef;
>    int (*)[__ARM_mve_type_int_n][__ARM_mve_type_int16x8_t]: __arm_vminavq_p_s16 (__p0, __ARM_mve_coerce(__p1, int16x8_t), p2), \
>    int (*)[__ARM_mve_type_int_n][__ARM_mve_type_int32x4_t]: __arm_vminavq_p_s32 (__p0, __ARM_mve_coerce(__p1, int32x4_t), p2));})
> 
> -#define __arm_vmaxq_x(p1,p2,p3) ({ __typeof(p1) __p1 = (p1); \
> -  __typeof(p2) __p2 = (p2); \
> -  _Generic( (int (*)[__ARM_mve_typeid(__p1)][__ARM_mve_typeid(__p2)])0, \
> -  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vmaxq_x_s8 (__ARM_mve_coerce(__p1, int8x16_t), __ARM_mve_coerce(__p2, int8x16_t), p3), \
> -  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vmaxq_x_s16 (__ARM_mve_coerce(__p1, int16x8_t), __ARM_mve_coerce(__p2, int16x8_t), p3), \
> -  int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vmaxq_x_s32 (__ARM_mve_coerce(__p1, int32x4_t), __ARM_mve_coerce(__p2, int32x4_t), p3), \
> -  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vmaxq_x_u8 (__ARM_mve_coerce(__p1, uint8x16_t), __ARM_mve_coerce(__p2, uint8x16_t), p3), \
> -  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vmaxq_x_u16 (__ARM_mve_coerce(__p1, uint16x8_t), __ARM_mve_coerce(__p2, uint16x8_t), p3), \
> -  int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vmaxq_x_u32 (__ARM_mve_coerce(__p1, uint32x4_t), __ARM_mve_coerce(__p2, uint32x4_t), p3));})
> -
> -#define __arm_vminq_x(p1,p2,p3) ({ __typeof(p1) __p1 = (p1); \
> -  __typeof(p2) __p2 = (p2); \
> -  _Generic( (int (*)[__ARM_mve_typeid(__p1)][__ARM_mve_typeid(__p2)])0, \
> -  int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vminq_x_s8 (__ARM_mve_coerce(__p1, int8x16_t), __ARM_mve_coerce(__p2, int8x16_t), p3), \
> -  int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vminq_x_s16 (__ARM_mve_coerce(__p1, int16x8_t), __ARM_mve_coerce(__p2, int16x8_t), p3), \
> -  int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vminq_x_s32 (__ARM_mve_coerce(__p1, int32x4_t), __ARM_mve_coerce(__p2, int32x4_t), p3), \
> -  int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vminq_x_u8 (__ARM_mve_coerce(__p1, uint8x16_t), __ARM_mve_coerce(__p2, uint8x16_t), p3), \
> -  int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vminq_x_u16 (__ARM_mve_coerce(__p1, uint16x8_t), __ARM_mve_coerce(__p2, uint16x8_t), p3), \
> -  int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vminq_x_u32 (__ARM_mve_coerce(__p1, uint32x4_t), __ARM_mve_coerce(__p2, uint32x4_t), p3));})
> -
>  #define __arm_vminvq(p0,p1) ({ __typeof(p0) __p0 = (p0); \
>    __typeof(p1) __p1 = (p1); \
>    _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \
> --
> 2.34.1
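
For anyone reading along: all the hunks above delete instances of the same
_Generic dispatch idiom, where the typeids of two (or three) arguments are
folded into the dimensions of a pointer-to-array type so that a single
controlling expression can select the right type-suffixed intrinsic.  Here
is a minimal, self-contained sketch of that idiom; the names (my_typeid,
my_add, my_add_ii, my_add_ff) are made up for illustration and are not part
of arm_mve.h:

#include <stdio.h>

enum { my_type_int = 1, my_type_float = 2 };

/* Map an argument's type to a small integer tag, in the spirit of
   __ARM_mve_typeid.  */
#define my_typeid(x) _Generic ((x), int: my_type_int, float: my_type_float)

static int   my_add_ii (int a, int b)     { return a + b; }
static float my_add_ff (float a, float b) { return a + b; }

/* _Generic inspects only one controlling expression, so both tags are
   encoded as the dimensions of an array type; the null pointer
   (int (*)[tag0][tag1])0 carries them, and each association names one
   supported combination.  An unsupported combination fails to compile
   (arm_mve.h maps those to __ARM_undef instead).  */
#define my_add(a, b)						\
  _Generic ((int (*)[my_typeid (a)][my_typeid (b)]) 0,		\
	    int (*)[my_type_int][my_type_int]:     my_add_ii,	\
	    int (*)[my_type_float][my_type_float]: my_add_ff) (a, b)

int main (void)
{
  printf ("%d\n", my_add (1, 2));	/* selects my_add_ii */
  printf ("%f\n", my_add (1.0f, 2.0f));	/* selects my_add_ff */
  return 0;
}

The controlling expression of _Generic is unevaluated, so the typeid probes
cost nothing at run time; the real macros additionally bind p0/p1/p2 to
__typeof temporaries inside a statement expression to avoid double
evaluation, and coerce them with __ARM_mve_coerce.  With overload
resolution now handled by the builtins framework, these header-level
dispatch tables can simply go.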