From: Christophe Lyon <christophe.lyon@arm.com>
Subject: [PATCH 14/23] arm: [MVE intrinsics] rework vmaxq vminq
Date: Fri, 5 May 2023 10:39:21 +0200
Message-ID: <20230505083930.101210-14-christophe.lyon@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230505083930.101210-1-christophe.lyon@arm.com>
References: <20230505083930.101210-1-christophe.lyon@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Implement vmaxq and vminq using the new MVE builtins framework.

2022-09-08  Christophe Lyon  <christophe.lyon@arm.com>

	gcc/
	* config/arm/arm-mve-builtins-base.cc (FUNCTION_WITH_RTX_M_NO_F): New.
	(vmaxq, vminq): New.
	* config/arm/arm-mve-builtins-base.def (vmaxq, vminq): New.
	* config/arm/arm-mve-builtins-base.h (vmaxq, vminq): New.
	* config/arm/arm_mve.h (vminq): Remove.
	(vmaxq): Remove.
	(vmaxq_m): Remove.
	(vminq_m): Remove.
	(vminq_x): Remove.
	(vmaxq_x): Remove.
	(vminq_u8): Remove.
	(vmaxq_u8): Remove.
	(vminq_s8): Remove.
	(vmaxq_s8): Remove.
	(vminq_u16): Remove.
	(vmaxq_u16): Remove.
	(vminq_s16): Remove.
	(vmaxq_s16): Remove.
	(vminq_u32): Remove.
	(vmaxq_u32): Remove.
	(vminq_s32): Remove.
	(vmaxq_s32): Remove.
	(vmaxq_m_s8): Remove.
	(vmaxq_m_s32): Remove.
	(vmaxq_m_s16): Remove.
	(vmaxq_m_u8): Remove.
	(vmaxq_m_u32): Remove.
	(vmaxq_m_u16): Remove.
	(vminq_m_s8): Remove.
	(vminq_m_s32): Remove.
	(vminq_m_s16): Remove.
	(vminq_m_u8): Remove.
	(vminq_m_u32): Remove.
	(vminq_m_u16): Remove.
	(vminq_x_s8): Remove.
	(vminq_x_s16): Remove.
	(vminq_x_s32): Remove.
	(vminq_x_u8): Remove.
	(vminq_x_u16): Remove.
	(vminq_x_u32): Remove.
	(vmaxq_x_s8): Remove.
	(vmaxq_x_s16): Remove.
	(vmaxq_x_s32): Remove.
	(vmaxq_x_u8): Remove.
	(vmaxq_x_u16): Remove.
	(vmaxq_x_u32): Remove.
	(__arm_vminq_u8): Remove.
	(__arm_vmaxq_u8): Remove.
	(__arm_vminq_s8): Remove.
	(__arm_vmaxq_s8): Remove.
	(__arm_vminq_u16): Remove.
	(__arm_vmaxq_u16): Remove.
	(__arm_vminq_s16): Remove.
	(__arm_vmaxq_s16): Remove.
	(__arm_vminq_u32): Remove.
	(__arm_vmaxq_u32): Remove.
	(__arm_vminq_s32): Remove.
	(__arm_vmaxq_s32): Remove.
	(__arm_vmaxq_m_s8): Remove.
	(__arm_vmaxq_m_s32): Remove.
	(__arm_vmaxq_m_s16): Remove.
	(__arm_vmaxq_m_u8): Remove.
	(__arm_vmaxq_m_u32): Remove.
	(__arm_vmaxq_m_u16): Remove.
	(__arm_vminq_m_s8): Remove.
	(__arm_vminq_m_s32): Remove.
	(__arm_vminq_m_s16): Remove.
	(__arm_vminq_m_u8): Remove.
	(__arm_vminq_m_u32): Remove.
	(__arm_vminq_m_u16): Remove.
	(__arm_vminq_x_s8): Remove.
	(__arm_vminq_x_s16): Remove.
	(__arm_vminq_x_s32): Remove.
	(__arm_vminq_x_u8): Remove.
	(__arm_vminq_x_u16): Remove.
	(__arm_vminq_x_u32): Remove.
	(__arm_vmaxq_x_s8): Remove.
	(__arm_vmaxq_x_s16): Remove.
	(__arm_vmaxq_x_s32): Remove.
	(__arm_vmaxq_x_u8): Remove.
	(__arm_vmaxq_x_u16): Remove.
	(__arm_vmaxq_x_u32): Remove.
	(__arm_vminq): Remove.
	(__arm_vmaxq): Remove.
	(__arm_vmaxq_m): Remove.
	(__arm_vminq_m): Remove.
	(__arm_vminq_x): Remove.
	(__arm_vmaxq_x): Remove.
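For reference, below is a minimal usage sketch of the user-facing intrinsics
affected by this rework (plain vmaxq/vminq plus their _m and _x predicated
forms).  The calls follow the arm_mve.h polymorphic API shown in the diff;
the snippet itself is illustrative only, is not part of the patch, and
assumes an MVE-enabled target (e.g. -march=armv8.1-m.main+mve).

#include <arm_mve.h>

/* Clamp each uint8 lane of A into [LO, HI]; illustrative only.  */
uint8x16_t
clamp_u8 (uint8x16_t a, uint8x16_t lo, uint8x16_t hi, mve_pred16_t p)
{
  uint8x16_t t = vmaxq (a, lo);   /* Lane-wise unsigned maximum (VMAX).  */
  t = vminq (t, hi);              /* Lane-wise unsigned minimum (VMIN).  */
  /* Predicated forms: _m takes the result lane from the first (inactive)
     operand where P is false; _x leaves such lanes unspecified.  */
  t = vmaxq_m (t, t, lo, p);
  t = vminq_x (t, hi, p);
  return t;
}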
---
 gcc/config/arm/arm-mve-builtins-base.cc  |  11 +
 gcc/config/arm/arm-mve-builtins-base.def |   2 +
 gcc/config/arm/arm-mve-builtins-base.h   |   2 +
 gcc/config/arm/arm_mve.h                 | 628 -----------------------
 4 files changed, 15 insertions(+), 628 deletions(-)

diff --git a/gcc/config/arm/arm-mve-builtins-base.cc b/gcc/config/arm/arm-mve-builtins-base.cc
index 4bebf86f784..1839d5cb1a5 100644
--- a/gcc/config/arm/arm-mve-builtins-base.cc
+++ b/gcc/config/arm/arm-mve-builtins-base.cc
@@ -110,6 +110,15 @@ namespace arm_mve {
 	UNSPEC##_M_S, UNSPEC##_M_U, UNSPEC##_M_F,	\
 	UNSPEC##_M_N_S, UNSPEC##_M_N_U, -1))
 
+/* Helper for builtins with RTX codes, _m predicated override, but
+   no floating-point versions.  */
+#define FUNCTION_WITH_RTX_M_NO_F(NAME, RTX_S, RTX_U, UNSPEC) FUNCTION	\
+  (NAME, unspec_based_mve_function_exact_insn,				\
+   (RTX_S, RTX_U, UNKNOWN,						\
+    -1, -1, -1,								\
+    UNSPEC##_M_S, UNSPEC##_M_U, -1,					\
+    -1, -1, -1))
+
 /* Helper for builtins without RTX codes, no _m predicated and no _n
    overrides.  */
 #define FUNCTION_WITHOUT_M_N(NAME, UNSPEC) FUNCTION			\
@@ -173,6 +182,8 @@ FUNCTION_WITHOUT_M_N (vcreateq, VCREATEQ)
 FUNCTION_WITH_RTX_M (veorq, XOR, VEORQ)
 FUNCTION_WITH_M_N_NO_F (vhaddq, VHADDQ)
 FUNCTION_WITH_M_N_NO_F (vhsubq, VHSUBQ)
+FUNCTION_WITH_RTX_M_NO_F (vmaxq, SMAX, UMAX, VMAXQ)
+FUNCTION_WITH_RTX_M_NO_F (vminq, SMIN, UMIN, VMINQ)
 FUNCTION_WITHOUT_N_NO_F (vmulhq, VMULHQ)
 FUNCTION_WITH_RTX_M_N (vmulq, MULT, VMULQ)
 FUNCTION_WITH_RTX_M_N_NO_N_F (vorrq, IOR, VORRQ)
diff --git a/gcc/config/arm/arm-mve-builtins-base.def b/gcc/config/arm/arm-mve-builtins-base.def
index f2e40cda2af..3b42bf46e81 100644
--- a/gcc/config/arm/arm-mve-builtins-base.def
+++ b/gcc/config/arm/arm-mve-builtins-base.def
@@ -25,6 +25,8 @@ DEF_MVE_FUNCTION (vcreateq, create, all_integer_with_64, none)
 DEF_MVE_FUNCTION (veorq, binary, all_integer, mx_or_none)
 DEF_MVE_FUNCTION (vhaddq, binary_opt_n, all_integer, mx_or_none)
 DEF_MVE_FUNCTION (vhsubq, binary_opt_n, all_integer, mx_or_none)
+DEF_MVE_FUNCTION (vmaxq, binary, all_integer, mx_or_none)
+DEF_MVE_FUNCTION (vminq, binary, all_integer, mx_or_none)
 DEF_MVE_FUNCTION (vmulhq, binary, all_integer, mx_or_none)
 DEF_MVE_FUNCTION (vmulq, binary_opt_n, all_integer, mx_or_none)
 DEF_MVE_FUNCTION (vorrq, binary_orrq, all_integer, mx_or_none)
diff --git a/gcc/config/arm/arm-mve-builtins-base.h b/gcc/config/arm/arm-mve-builtins-base.h
index 5b62de6a922..81d10f4a8f4 100644
--- a/gcc/config/arm/arm-mve-builtins-base.h
+++ b/gcc/config/arm/arm-mve-builtins-base.h
@@ -30,6 +30,8 @@ extern const function_base *const vcreateq;
 extern const function_base *const veorq;
 extern const function_base *const vhaddq;
 extern const function_base *const vhsubq;
+extern const function_base *const vmaxq;
+extern const function_base *const vminq;
 extern const function_base *const vmulhq;
 extern const function_base *const vmulq;
 extern const function_base *const vorrq;
diff --git a/gcc/config/arm/arm_mve.h b/gcc/config/arm/arm_mve.h
index ad67dcfd024..5fbea52c8ef 100644
--- a/gcc/config/arm/arm_mve.h
+++ b/gcc/config/arm/arm_mve.h
@@ -65,9 +65,7 @@
 #define vmullbq_int(__a, __b) __arm_vmullbq_int(__a, __b)
 #define vmladavq(__a, __b) __arm_vmladavq(__a, __b)
 #define vminvq(__a, __b) __arm_vminvq(__a, __b)
-#define vminq(__a, __b) __arm_vminq(__a, __b)
 #define vmaxvq(__a, __b) __arm_vmaxvq(__a, __b)
-#define vmaxq(__a, __b) __arm_vmaxq(__a, __b)
 #define vcmphiq(__a, __b) __arm_vcmphiq(__a, __b)
 #define vcmpeqq(__a, __b) __arm_vcmpeqq(__a, __b)
 #define vcmpcsq(__a, __b) __arm_vcmpcsq(__a, __b)
@@ -214,8 +212,6 @@
 #define 
vcaddq_rot90_m(__inactive, __a, __b, __p) __arm_vcaddq_rot90_m(__inactive, __a, __b, __p) #define vhcaddq_rot270_m(__inactive, __a, __b, __p) __arm_vhcaddq_rot270_m(__inactive, __a, __b, __p) #define vhcaddq_rot90_m(__inactive, __a, __b, __p) __arm_vhcaddq_rot90_m(__inactive, __a, __b, __p) -#define vmaxq_m(__inactive, __a, __b, __p) __arm_vmaxq_m(__inactive, __a, __b, __p) -#define vminq_m(__inactive, __a, __b, __p) __arm_vminq_m(__inactive, __a, __b, __p) #define vmladavaq_p(__a, __b, __c, __p) __arm_vmladavaq_p(__a, __b, __c, __p) #define vmladavaxq_p(__a, __b, __c, __p) __arm_vmladavaxq_p(__a, __b, __c, __p) #define vmlaq_m(__a, __b, __c, __p) __arm_vmlaq_m(__a, __b, __c, __p) @@ -339,8 +335,6 @@ #define viwdupq_x_u8(__a, __b, __imm, __p) __arm_viwdupq_x_u8(__a, __b, __imm, __p) #define viwdupq_x_u16(__a, __b, __imm, __p) __arm_viwdupq_x_u16(__a, __b, __imm, __p) #define viwdupq_x_u32(__a, __b, __imm, __p) __arm_viwdupq_x_u32(__a, __b, __imm, __p) -#define vminq_x(__a, __b, __p) __arm_vminq_x(__a, __b, __p) -#define vmaxq_x(__a, __b, __p) __arm_vmaxq_x(__a, __b, __p) #define vabsq_x(__a, __p) __arm_vabsq_x(__a, __p) #define vclsq_x(__a, __p) __arm_vclsq_x(__a, __p) #define vclzq_x(__a, __p) __arm_vclzq_x(__a, __p) @@ -614,9 +608,7 @@ #define vmullbq_int_u8(__a, __b) __arm_vmullbq_int_u8(__a, __b) #define vmladavq_u8(__a, __b) __arm_vmladavq_u8(__a, __b) #define vminvq_u8(__a, __b) __arm_vminvq_u8(__a, __b) -#define vminq_u8(__a, __b) __arm_vminq_u8(__a, __b) #define vmaxvq_u8(__a, __b) __arm_vmaxvq_u8(__a, __b) -#define vmaxq_u8(__a, __b) __arm_vmaxq_u8(__a, __b) #define vcmpneq_n_u8(__a, __b) __arm_vcmpneq_n_u8(__a, __b) #define vcmphiq_u8(__a, __b) __arm_vcmphiq_u8(__a, __b) #define vcmphiq_n_u8(__a, __b) __arm_vcmphiq_n_u8(__a, __b) @@ -656,9 +648,7 @@ #define vmladavxq_s8(__a, __b) __arm_vmladavxq_s8(__a, __b) #define vmladavq_s8(__a, __b) __arm_vmladavq_s8(__a, __b) #define vminvq_s8(__a, __b) __arm_vminvq_s8(__a, __b) -#define vminq_s8(__a, __b) __arm_vminq_s8(__a, __b) #define vmaxvq_s8(__a, __b) __arm_vmaxvq_s8(__a, __b) -#define vmaxq_s8(__a, __b) __arm_vmaxq_s8(__a, __b) #define vhcaddq_rot90_s8(__a, __b) __arm_vhcaddq_rot90_s8(__a, __b) #define vhcaddq_rot270_s8(__a, __b) __arm_vhcaddq_rot270_s8(__a, __b) #define vcaddq_rot90_s8(__a, __b) __arm_vcaddq_rot90_s8(__a, __b) @@ -672,9 +662,7 @@ #define vmullbq_int_u16(__a, __b) __arm_vmullbq_int_u16(__a, __b) #define vmladavq_u16(__a, __b) __arm_vmladavq_u16(__a, __b) #define vminvq_u16(__a, __b) __arm_vminvq_u16(__a, __b) -#define vminq_u16(__a, __b) __arm_vminq_u16(__a, __b) #define vmaxvq_u16(__a, __b) __arm_vmaxvq_u16(__a, __b) -#define vmaxq_u16(__a, __b) __arm_vmaxq_u16(__a, __b) #define vcmpneq_n_u16(__a, __b) __arm_vcmpneq_n_u16(__a, __b) #define vcmphiq_u16(__a, __b) __arm_vcmphiq_u16(__a, __b) #define vcmphiq_n_u16(__a, __b) __arm_vcmphiq_n_u16(__a, __b) @@ -714,9 +702,7 @@ #define vmladavxq_s16(__a, __b) __arm_vmladavxq_s16(__a, __b) #define vmladavq_s16(__a, __b) __arm_vmladavq_s16(__a, __b) #define vminvq_s16(__a, __b) __arm_vminvq_s16(__a, __b) -#define vminq_s16(__a, __b) __arm_vminq_s16(__a, __b) #define vmaxvq_s16(__a, __b) __arm_vmaxvq_s16(__a, __b) -#define vmaxq_s16(__a, __b) __arm_vmaxq_s16(__a, __b) #define vhcaddq_rot90_s16(__a, __b) __arm_vhcaddq_rot90_s16(__a, __b) #define vhcaddq_rot270_s16(__a, __b) __arm_vhcaddq_rot270_s16(__a, __b) #define vcaddq_rot90_s16(__a, __b) __arm_vcaddq_rot90_s16(__a, __b) @@ -730,9 +716,7 @@ #define vmullbq_int_u32(__a, __b) __arm_vmullbq_int_u32(__a, __b) #define 
vmladavq_u32(__a, __b) __arm_vmladavq_u32(__a, __b) #define vminvq_u32(__a, __b) __arm_vminvq_u32(__a, __b) -#define vminq_u32(__a, __b) __arm_vminq_u32(__a, __b) #define vmaxvq_u32(__a, __b) __arm_vmaxvq_u32(__a, __b) -#define vmaxq_u32(__a, __b) __arm_vmaxq_u32(__a, __b) #define vcmpneq_n_u32(__a, __b) __arm_vcmpneq_n_u32(__a, __b) #define vcmphiq_u32(__a, __b) __arm_vcmphiq_u32(__a, __b) #define vcmphiq_n_u32(__a, __b) __arm_vcmphiq_n_u32(__a, __b) @@ -772,9 +756,7 @@ #define vmladavxq_s32(__a, __b) __arm_vmladavxq_s32(__a, __b) #define vmladavq_s32(__a, __b) __arm_vmladavq_s32(__a, __b) #define vminvq_s32(__a, __b) __arm_vminvq_s32(__a, __b) -#define vminq_s32(__a, __b) __arm_vminq_s32(__a, __b) #define vmaxvq_s32(__a, __b) __arm_vmaxvq_s32(__a, __b) -#define vmaxq_s32(__a, __b) __arm_vmaxq_s32(__a, __b) #define vhcaddq_rot90_s32(__a, __b) __arm_vhcaddq_rot90_s32(__a, __b) #define vhcaddq_rot270_s32(__a, __b) __arm_vhcaddq_rot270_s32(__a, __b) #define vcaddq_rot90_s32(__a, __b) __arm_vcaddq_rot90_s32(__a, __b) @@ -1411,18 +1393,6 @@ #define vhcaddq_rot90_m_s8(__inactive, __a, __b, __p) __arm_vhcaddq_rot90_m_s8(__inactive, __a, __b, __p) #define vhcaddq_rot90_m_s32(__inactive, __a, __b, __p) __arm_vhcaddq_rot90_m_s32(__inactive, __a, __b, __p) #define vhcaddq_rot90_m_s16(__inactive, __a, __b, __p) __arm_vhcaddq_rot90_m_s16(__inactive, __a, __b, __p) -#define vmaxq_m_s8(__inactive, __a, __b, __p) __arm_vmaxq_m_s8(__inactive, __a, __b, __p) -#define vmaxq_m_s32(__inactive, __a, __b, __p) __arm_vmaxq_m_s32(__inactive, __a, __b, __p) -#define vmaxq_m_s16(__inactive, __a, __b, __p) __arm_vmaxq_m_s16(__inactive, __a, __b, __p) -#define vmaxq_m_u8(__inactive, __a, __b, __p) __arm_vmaxq_m_u8(__inactive, __a, __b, __p) -#define vmaxq_m_u32(__inactive, __a, __b, __p) __arm_vmaxq_m_u32(__inactive, __a, __b, __p) -#define vmaxq_m_u16(__inactive, __a, __b, __p) __arm_vmaxq_m_u16(__inactive, __a, __b, __p) -#define vminq_m_s8(__inactive, __a, __b, __p) __arm_vminq_m_s8(__inactive, __a, __b, __p) -#define vminq_m_s32(__inactive, __a, __b, __p) __arm_vminq_m_s32(__inactive, __a, __b, __p) -#define vminq_m_s16(__inactive, __a, __b, __p) __arm_vminq_m_s16(__inactive, __a, __b, __p) -#define vminq_m_u8(__inactive, __a, __b, __p) __arm_vminq_m_u8(__inactive, __a, __b, __p) -#define vminq_m_u32(__inactive, __a, __b, __p) __arm_vminq_m_u32(__inactive, __a, __b, __p) -#define vminq_m_u16(__inactive, __a, __b, __p) __arm_vminq_m_u16(__inactive, __a, __b, __p) #define vmladavaq_p_s8(__a, __b, __c, __p) __arm_vmladavaq_p_s8(__a, __b, __c, __p) #define vmladavaq_p_s32(__a, __b, __c, __p) __arm_vmladavaq_p_s32(__a, __b, __c, __p) #define vmladavaq_p_s16(__a, __b, __c, __p) __arm_vmladavaq_p_s16(__a, __b, __c, __p) @@ -1943,18 +1913,6 @@ #define vdupq_x_n_u8(__a, __p) __arm_vdupq_x_n_u8(__a, __p) #define vdupq_x_n_u16(__a, __p) __arm_vdupq_x_n_u16(__a, __p) #define vdupq_x_n_u32(__a, __p) __arm_vdupq_x_n_u32(__a, __p) -#define vminq_x_s8(__a, __b, __p) __arm_vminq_x_s8(__a, __b, __p) -#define vminq_x_s16(__a, __b, __p) __arm_vminq_x_s16(__a, __b, __p) -#define vminq_x_s32(__a, __b, __p) __arm_vminq_x_s32(__a, __b, __p) -#define vminq_x_u8(__a, __b, __p) __arm_vminq_x_u8(__a, __b, __p) -#define vminq_x_u16(__a, __b, __p) __arm_vminq_x_u16(__a, __b, __p) -#define vminq_x_u32(__a, __b, __p) __arm_vminq_x_u32(__a, __b, __p) -#define vmaxq_x_s8(__a, __b, __p) __arm_vmaxq_x_s8(__a, __b, __p) -#define vmaxq_x_s16(__a, __b, __p) __arm_vmaxq_x_s16(__a, __b, __p) -#define vmaxq_x_s32(__a, __b, __p) __arm_vmaxq_x_s32(__a, 
__b, __p) -#define vmaxq_x_u8(__a, __b, __p) __arm_vmaxq_x_u8(__a, __b, __p) -#define vmaxq_x_u16(__a, __b, __p) __arm_vmaxq_x_u16(__a, __b, __p) -#define vmaxq_x_u32(__a, __b, __p) __arm_vmaxq_x_u32(__a, __b, __p) #define vabsq_x_s8(__a, __p) __arm_vabsq_x_s8(__a, __p) #define vabsq_x_s16(__a, __p) __arm_vabsq_x_s16(__a, __p) #define vabsq_x_s32(__a, __p) __arm_vabsq_x_s32(__a, __p) @@ -2937,13 +2895,6 @@ __arm_vminvq_u8 (uint8_t __a, uint8x16_t __b) return __builtin_mve_vminvq_uv16qi (__a, __b); } -__extension__ extern __inline uint8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vminq_u8 (uint8x16_t __a, uint8x16_t __b) -{ - return __builtin_mve_vminq_uv16qi (__a, __b); -} - __extension__ extern __inline uint8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vmaxvq_u8 (uint8_t __a, uint8x16_t __b) @@ -2951,13 +2902,6 @@ __arm_vmaxvq_u8 (uint8_t __a, uint8x16_t __b) return __builtin_mve_vmaxvq_uv16qi (__a, __b); } -__extension__ extern __inline uint8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmaxq_u8 (uint8x16_t __a, uint8x16_t __b) -{ - return __builtin_mve_vmaxq_uv16qi (__a, __b); -} - __extension__ extern __inline mve_pred16_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vcmpneq_n_u8 (uint8x16_t __a, uint8_t __b) @@ -3233,13 +3177,6 @@ __arm_vminvq_s8 (int8_t __a, int8x16_t __b) return __builtin_mve_vminvq_sv16qi (__a, __b); } -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vminq_s8 (int8x16_t __a, int8x16_t __b) -{ - return __builtin_mve_vminq_sv16qi (__a, __b); -} - __extension__ extern __inline int8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vmaxvq_s8 (int8_t __a, int8x16_t __b) @@ -3247,13 +3184,6 @@ __arm_vmaxvq_s8 (int8_t __a, int8x16_t __b) return __builtin_mve_vmaxvq_sv16qi (__a, __b); } -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmaxq_s8 (int8x16_t __a, int8x16_t __b) -{ - return __builtin_mve_vmaxq_sv16qi (__a, __b); -} - __extension__ extern __inline int8x16_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vhcaddq_rot90_s8 (int8x16_t __a, int8x16_t __b) @@ -3345,13 +3275,6 @@ __arm_vminvq_u16 (uint16_t __a, uint16x8_t __b) return __builtin_mve_vminvq_uv8hi (__a, __b); } -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vminq_u16 (uint16x8_t __a, uint16x8_t __b) -{ - return __builtin_mve_vminq_uv8hi (__a, __b); -} - __extension__ extern __inline uint16_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vmaxvq_u16 (uint16_t __a, uint16x8_t __b) @@ -3359,13 +3282,6 @@ __arm_vmaxvq_u16 (uint16_t __a, uint16x8_t __b) return __builtin_mve_vmaxvq_uv8hi (__a, __b); } -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmaxq_u16 (uint16x8_t __a, uint16x8_t __b) -{ - return __builtin_mve_vmaxq_uv8hi (__a, __b); -} - __extension__ extern __inline mve_pred16_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vcmpneq_n_u16 (uint16x8_t __a, uint16_t __b) @@ -3641,13 +3557,6 @@ __arm_vminvq_s16 (int16_t __a, int16x8_t __b) return __builtin_mve_vminvq_sv8hi (__a, __b); } -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) 
-__arm_vminq_s16 (int16x8_t __a, int16x8_t __b) -{ - return __builtin_mve_vminq_sv8hi (__a, __b); -} - __extension__ extern __inline int16_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vmaxvq_s16 (int16_t __a, int16x8_t __b) @@ -3655,13 +3564,6 @@ __arm_vmaxvq_s16 (int16_t __a, int16x8_t __b) return __builtin_mve_vmaxvq_sv8hi (__a, __b); } -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmaxq_s16 (int16x8_t __a, int16x8_t __b) -{ - return __builtin_mve_vmaxq_sv8hi (__a, __b); -} - __extension__ extern __inline int16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vhcaddq_rot90_s16 (int16x8_t __a, int16x8_t __b) @@ -3753,13 +3655,6 @@ __arm_vminvq_u32 (uint32_t __a, uint32x4_t __b) return __builtin_mve_vminvq_uv4si (__a, __b); } -__extension__ extern __inline uint32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vminq_u32 (uint32x4_t __a, uint32x4_t __b) -{ - return __builtin_mve_vminq_uv4si (__a, __b); -} - __extension__ extern __inline uint32_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vmaxvq_u32 (uint32_t __a, uint32x4_t __b) @@ -3767,13 +3662,6 @@ __arm_vmaxvq_u32 (uint32_t __a, uint32x4_t __b) return __builtin_mve_vmaxvq_uv4si (__a, __b); } -__extension__ extern __inline uint32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmaxq_u32 (uint32x4_t __a, uint32x4_t __b) -{ - return __builtin_mve_vmaxq_uv4si (__a, __b); -} - __extension__ extern __inline mve_pred16_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vcmpneq_n_u32 (uint32x4_t __a, uint32_t __b) @@ -4049,13 +3937,6 @@ __arm_vminvq_s32 (int32_t __a, int32x4_t __b) return __builtin_mve_vminvq_sv4si (__a, __b); } -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vminq_s32 (int32x4_t __a, int32x4_t __b) -{ - return __builtin_mve_vminq_sv4si (__a, __b); -} - __extension__ extern __inline int32_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vmaxvq_s32 (int32_t __a, int32x4_t __b) @@ -4063,13 +3944,6 @@ __arm_vmaxvq_s32 (int32_t __a, int32x4_t __b) return __builtin_mve_vmaxvq_sv4si (__a, __b); } -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmaxq_s32 (int32x4_t __a, int32x4_t __b) -{ - return __builtin_mve_vmaxq_sv4si (__a, __b); -} - __extension__ extern __inline int32x4_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vhcaddq_rot90_s32 (int32x4_t __a, int32x4_t __b) @@ -7380,90 +7254,6 @@ __arm_vhcaddq_rot90_m_s16 (int16x8_t __inactive, int16x8_t __a, int16x8_t __b, m return __builtin_mve_vhcaddq_rot90_m_sv8hi (__inactive, __a, __b, __p); } -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmaxq_m_s8 (int8x16_t __inactive, int8x16_t __a, int8x16_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vmaxq_m_sv16qi (__inactive, __a, __b, __p); -} - -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmaxq_m_s32 (int32x4_t __inactive, int32x4_t __a, int32x4_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vmaxq_m_sv4si (__inactive, __a, __b, __p); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) 
-__arm_vmaxq_m_s16 (int16x8_t __inactive, int16x8_t __a, int16x8_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vmaxq_m_sv8hi (__inactive, __a, __b, __p); -} - -__extension__ extern __inline uint8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmaxq_m_u8 (uint8x16_t __inactive, uint8x16_t __a, uint8x16_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vmaxq_m_uv16qi (__inactive, __a, __b, __p); -} - -__extension__ extern __inline uint32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmaxq_m_u32 (uint32x4_t __inactive, uint32x4_t __a, uint32x4_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vmaxq_m_uv4si (__inactive, __a, __b, __p); -} - -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmaxq_m_u16 (uint16x8_t __inactive, uint16x8_t __a, uint16x8_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vmaxq_m_uv8hi (__inactive, __a, __b, __p); -} - -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vminq_m_s8 (int8x16_t __inactive, int8x16_t __a, int8x16_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vminq_m_sv16qi (__inactive, __a, __b, __p); -} - -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vminq_m_s32 (int32x4_t __inactive, int32x4_t __a, int32x4_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vminq_m_sv4si (__inactive, __a, __b, __p); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vminq_m_s16 (int16x8_t __inactive, int16x8_t __a, int16x8_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vminq_m_sv8hi (__inactive, __a, __b, __p); -} - -__extension__ extern __inline uint8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vminq_m_u8 (uint8x16_t __inactive, uint8x16_t __a, uint8x16_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vminq_m_uv16qi (__inactive, __a, __b, __p); -} - -__extension__ extern __inline uint32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vminq_m_u32 (uint32x4_t __inactive, uint32x4_t __a, uint32x4_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vminq_m_uv4si (__inactive, __a, __b, __p); -} - -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vminq_m_u16 (uint16x8_t __inactive, uint16x8_t __a, uint16x8_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vminq_m_uv8hi (__inactive, __a, __b, __p); -} - __extension__ extern __inline int32_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vmladavaq_p_s8 (int32_t __a, int8x16_t __b, int8x16_t __c, mve_pred16_t __p) @@ -10635,90 +10425,6 @@ __arm_vdupq_x_n_u32 (uint32_t __a, mve_pred16_t __p) return __builtin_mve_vdupq_m_n_uv4si (__arm_vuninitializedq_u32 (), __a, __p); } -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vminq_x_s8 (int8x16_t __a, int8x16_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vminq_m_sv16qi (__arm_vuninitializedq_s8 (), __a, __b, __p); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vminq_x_s16 (int16x8_t __a, int16x8_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vminq_m_sv8hi (__arm_vuninitializedq_s16 (), __a, __b, __p); -} - 
-__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vminq_x_s32 (int32x4_t __a, int32x4_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vminq_m_sv4si (__arm_vuninitializedq_s32 (), __a, __b, __p); -} - -__extension__ extern __inline uint8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vminq_x_u8 (uint8x16_t __a, uint8x16_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vminq_m_uv16qi (__arm_vuninitializedq_u8 (), __a, __b, __p); -} - -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vminq_x_u16 (uint16x8_t __a, uint16x8_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vminq_m_uv8hi (__arm_vuninitializedq_u16 (), __a, __b, __p); -} - -__extension__ extern __inline uint32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vminq_x_u32 (uint32x4_t __a, uint32x4_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vminq_m_uv4si (__arm_vuninitializedq_u32 (), __a, __b, __p); -} - -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmaxq_x_s8 (int8x16_t __a, int8x16_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vmaxq_m_sv16qi (__arm_vuninitializedq_s8 (), __a, __b, __p); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmaxq_x_s16 (int16x8_t __a, int16x8_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vmaxq_m_sv8hi (__arm_vuninitializedq_s16 (), __a, __b, __p); -} - -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmaxq_x_s32 (int32x4_t __a, int32x4_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vmaxq_m_sv4si (__arm_vuninitializedq_s32 (), __a, __b, __p); -} - -__extension__ extern __inline uint8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmaxq_x_u8 (uint8x16_t __a, uint8x16_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vmaxq_m_uv16qi (__arm_vuninitializedq_u8 (), __a, __b, __p); -} - -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmaxq_x_u16 (uint16x8_t __a, uint16x8_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vmaxq_m_uv8hi (__arm_vuninitializedq_u16 (), __a, __b, __p); -} - -__extension__ extern __inline uint32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmaxq_x_u32 (uint32x4_t __a, uint32x4_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vmaxq_m_uv4si (__arm_vuninitializedq_u32 (), __a, __b, __p); -} - __extension__ extern __inline int8x16_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vabsq_x_s8 (int8x16_t __a, mve_pred16_t __p) @@ -15624,13 +15330,6 @@ __arm_vminvq (uint8_t __a, uint8x16_t __b) return __arm_vminvq_u8 (__a, __b); } -__extension__ extern __inline uint8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vminq (uint8x16_t __a, uint8x16_t __b) -{ - return __arm_vminq_u8 (__a, __b); -} - __extension__ extern __inline uint8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vmaxvq (uint8_t __a, uint8x16_t __b) @@ -15638,13 +15337,6 @@ __arm_vmaxvq (uint8_t __a, uint8x16_t __b) return __arm_vmaxvq_u8 (__a, __b); } -__extension__ extern __inline uint8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) 
-__arm_vmaxq (uint8x16_t __a, uint8x16_t __b) -{ - return __arm_vmaxq_u8 (__a, __b); -} - __extension__ extern __inline mve_pred16_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vcmpneq (uint8x16_t __a, uint8_t __b) @@ -15918,13 +15610,6 @@ __arm_vminvq (int8_t __a, int8x16_t __b) return __arm_vminvq_s8 (__a, __b); } -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vminq (int8x16_t __a, int8x16_t __b) -{ - return __arm_vminq_s8 (__a, __b); -} - __extension__ extern __inline int8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vmaxvq (int8_t __a, int8x16_t __b) @@ -15932,13 +15617,6 @@ __arm_vmaxvq (int8_t __a, int8x16_t __b) return __arm_vmaxvq_s8 (__a, __b); } -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmaxq (int8x16_t __a, int8x16_t __b) -{ - return __arm_vmaxq_s8 (__a, __b); -} - __extension__ extern __inline int8x16_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vhcaddq_rot90 (int8x16_t __a, int8x16_t __b) @@ -16030,13 +15708,6 @@ __arm_vminvq (uint16_t __a, uint16x8_t __b) return __arm_vminvq_u16 (__a, __b); } -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vminq (uint16x8_t __a, uint16x8_t __b) -{ - return __arm_vminq_u16 (__a, __b); -} - __extension__ extern __inline uint16_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vmaxvq (uint16_t __a, uint16x8_t __b) @@ -16044,13 +15715,6 @@ __arm_vmaxvq (uint16_t __a, uint16x8_t __b) return __arm_vmaxvq_u16 (__a, __b); } -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmaxq (uint16x8_t __a, uint16x8_t __b) -{ - return __arm_vmaxq_u16 (__a, __b); -} - __extension__ extern __inline mve_pred16_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vcmpneq (uint16x8_t __a, uint16_t __b) @@ -16324,13 +15988,6 @@ __arm_vminvq (int16_t __a, int16x8_t __b) return __arm_vminvq_s16 (__a, __b); } -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vminq (int16x8_t __a, int16x8_t __b) -{ - return __arm_vminq_s16 (__a, __b); -} - __extension__ extern __inline int16_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vmaxvq (int16_t __a, int16x8_t __b) @@ -16338,13 +15995,6 @@ __arm_vmaxvq (int16_t __a, int16x8_t __b) return __arm_vmaxvq_s16 (__a, __b); } -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmaxq (int16x8_t __a, int16x8_t __b) -{ - return __arm_vmaxq_s16 (__a, __b); -} - __extension__ extern __inline int16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vhcaddq_rot90 (int16x8_t __a, int16x8_t __b) @@ -16436,13 +16086,6 @@ __arm_vminvq (uint32_t __a, uint32x4_t __b) return __arm_vminvq_u32 (__a, __b); } -__extension__ extern __inline uint32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vminq (uint32x4_t __a, uint32x4_t __b) -{ - return __arm_vminq_u32 (__a, __b); -} - __extension__ extern __inline uint32_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vmaxvq (uint32_t __a, uint32x4_t __b) @@ -16450,13 +16093,6 @@ __arm_vmaxvq (uint32_t __a, uint32x4_t __b) return __arm_vmaxvq_u32 (__a, 
__b); } -__extension__ extern __inline uint32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmaxq (uint32x4_t __a, uint32x4_t __b) -{ - return __arm_vmaxq_u32 (__a, __b); -} - __extension__ extern __inline mve_pred16_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vcmpneq (uint32x4_t __a, uint32_t __b) @@ -16730,13 +16366,6 @@ __arm_vminvq (int32_t __a, int32x4_t __b) return __arm_vminvq_s32 (__a, __b); } -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vminq (int32x4_t __a, int32x4_t __b) -{ - return __arm_vminq_s32 (__a, __b); -} - __extension__ extern __inline int32_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vmaxvq (int32_t __a, int32x4_t __b) @@ -16744,13 +16373,6 @@ __arm_vmaxvq (int32_t __a, int32x4_t __b) return __arm_vmaxvq_s32 (__a, __b); } -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmaxq (int32x4_t __a, int32x4_t __b) -{ - return __arm_vmaxq_s32 (__a, __b); -} - __extension__ extern __inline int32x4_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vhcaddq_rot90 (int32x4_t __a, int32x4_t __b) @@ -20020,90 +19642,6 @@ __arm_vhcaddq_rot90_m (int16x8_t __inactive, int16x8_t __a, int16x8_t __b, mve_p return __arm_vhcaddq_rot90_m_s16 (__inactive, __a, __b, __p); } -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmaxq_m (int8x16_t __inactive, int8x16_t __a, int8x16_t __b, mve_pred16_t __p) -{ - return __arm_vmaxq_m_s8 (__inactive, __a, __b, __p); -} - -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmaxq_m (int32x4_t __inactive, int32x4_t __a, int32x4_t __b, mve_pred16_t __p) -{ - return __arm_vmaxq_m_s32 (__inactive, __a, __b, __p); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmaxq_m (int16x8_t __inactive, int16x8_t __a, int16x8_t __b, mve_pred16_t __p) -{ - return __arm_vmaxq_m_s16 (__inactive, __a, __b, __p); -} - -__extension__ extern __inline uint8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmaxq_m (uint8x16_t __inactive, uint8x16_t __a, uint8x16_t __b, mve_pred16_t __p) -{ - return __arm_vmaxq_m_u8 (__inactive, __a, __b, __p); -} - -__extension__ extern __inline uint32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmaxq_m (uint32x4_t __inactive, uint32x4_t __a, uint32x4_t __b, mve_pred16_t __p) -{ - return __arm_vmaxq_m_u32 (__inactive, __a, __b, __p); -} - -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmaxq_m (uint16x8_t __inactive, uint16x8_t __a, uint16x8_t __b, mve_pred16_t __p) -{ - return __arm_vmaxq_m_u16 (__inactive, __a, __b, __p); -} - -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vminq_m (int8x16_t __inactive, int8x16_t __a, int8x16_t __b, mve_pred16_t __p) -{ - return __arm_vminq_m_s8 (__inactive, __a, __b, __p); -} - -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vminq_m (int32x4_t __inactive, int32x4_t __a, int32x4_t __b, mve_pred16_t __p) -{ - return __arm_vminq_m_s32 (__inactive, __a, __b, __p); -} - -__extension__ 
extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vminq_m (int16x8_t __inactive, int16x8_t __a, int16x8_t __b, mve_pred16_t __p) -{ - return __arm_vminq_m_s16 (__inactive, __a, __b, __p); -} - -__extension__ extern __inline uint8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vminq_m (uint8x16_t __inactive, uint8x16_t __a, uint8x16_t __b, mve_pred16_t __p) -{ - return __arm_vminq_m_u8 (__inactive, __a, __b, __p); -} - -__extension__ extern __inline uint32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vminq_m (uint32x4_t __inactive, uint32x4_t __a, uint32x4_t __b, mve_pred16_t __p) -{ - return __arm_vminq_m_u32 (__inactive, __a, __b, __p); -} - -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vminq_m (uint16x8_t __inactive, uint16x8_t __a, uint16x8_t __b, mve_pred16_t __p) -{ - return __arm_vminq_m_u16 (__inactive, __a, __b, __p); -} - __extension__ extern __inline int32_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vmladavaq_p (int32_t __a, int8x16_t __b, int8x16_t __c, mve_pred16_t __p) @@ -22806,90 +22344,6 @@ __arm_viwdupq_x_u32 (uint32_t *__a, uint32_t __b, const int __imm, mve_pred16_t return __arm_viwdupq_x_wb_u32 (__a, __b, __imm, __p); } -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vminq_x (int8x16_t __a, int8x16_t __b, mve_pred16_t __p) -{ - return __arm_vminq_x_s8 (__a, __b, __p); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vminq_x (int16x8_t __a, int16x8_t __b, mve_pred16_t __p) -{ - return __arm_vminq_x_s16 (__a, __b, __p); -} - -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vminq_x (int32x4_t __a, int32x4_t __b, mve_pred16_t __p) -{ - return __arm_vminq_x_s32 (__a, __b, __p); -} - -__extension__ extern __inline uint8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vminq_x (uint8x16_t __a, uint8x16_t __b, mve_pred16_t __p) -{ - return __arm_vminq_x_u8 (__a, __b, __p); -} - -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vminq_x (uint16x8_t __a, uint16x8_t __b, mve_pred16_t __p) -{ - return __arm_vminq_x_u16 (__a, __b, __p); -} - -__extension__ extern __inline uint32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vminq_x (uint32x4_t __a, uint32x4_t __b, mve_pred16_t __p) -{ - return __arm_vminq_x_u32 (__a, __b, __p); -} - -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmaxq_x (int8x16_t __a, int8x16_t __b, mve_pred16_t __p) -{ - return __arm_vmaxq_x_s8 (__a, __b, __p); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmaxq_x (int16x8_t __a, int16x8_t __b, mve_pred16_t __p) -{ - return __arm_vmaxq_x_s16 (__a, __b, __p); -} - -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmaxq_x (int32x4_t __a, int32x4_t __b, mve_pred16_t __p) -{ - return __arm_vmaxq_x_s32 (__a, __b, __p); -} - -__extension__ extern __inline uint8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmaxq_x 
(uint8x16_t __a, uint8x16_t __b, mve_pred16_t __p) -{ - return __arm_vmaxq_x_u8 (__a, __b, __p); -} - -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmaxq_x (uint16x8_t __a, uint16x8_t __b, mve_pred16_t __p) -{ - return __arm_vmaxq_x_u16 (__a, __b, __p); -} - -__extension__ extern __inline uint32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vmaxq_x (uint32x4_t __a, uint32x4_t __b, mve_pred16_t __p) -{ - return __arm_vmaxq_x_u32 (__a, __b, __p); -} - __extension__ extern __inline int8x16_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vabsq_x (int8x16_t __a, mve_pred16_t __p) @@ -27274,16 +26728,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vhcaddq_rot90_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t)), \ int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vhcaddq_rot90_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t)));}) -#define __arm_vminq(p0,p1) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vminq_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t)), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vminq_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t)), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vminq_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t)), \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vminq_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t)), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vminq_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t)), \ - int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vminq_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t)));}) - #define __arm_vminaq(p0,p1) ({ __typeof(p0) __p0 = (p0); \ __typeof(p1) __p1 = (p1); \ _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ @@ -27291,16 +26735,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int16x8_t]: __arm_vminaq_s16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, int16x8_t)), \ int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_int32x4_t]: __arm_vminaq_s32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, int32x4_t)));}) -#define __arm_vmaxq(p0,p1) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vmaxq_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t)), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vmaxq_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t)), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vmaxq_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t)), \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vmaxq_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t)), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: 
__arm_vmaxq_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t)), \ - int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vmaxq_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t)));}) - #define __arm_vmaxaq(p0,p1) ({ __typeof(p0) __p0 = (p0); \ __typeof(p1) __p1 = (p1); \ _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ @@ -28867,16 +28301,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vmullbq_int_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t)), \ int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vmullbq_int_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t)));}) -#define __arm_vminq(p0,p1) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vminq_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t)), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vminq_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t)), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vminq_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t)), \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vminq_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t)), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vminq_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t)), \ - int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vminq_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t)));}) - #define __arm_vminaq(p0,p1) ({ __typeof(p0) __p0 = (p0); \ __typeof(p1) __p1 = (p1); \ _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ @@ -28884,16 +28308,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_int16x8_t]: __arm_vminaq_s16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, int16x8_t)), \ int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_int32x4_t]: __arm_vminaq_s32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, int32x4_t)));}) -#define __arm_vmaxq(p0,p1) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vmaxq_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t)), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vmaxq_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t)), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vmaxq_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t)), \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vmaxq_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t)), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vmaxq_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t)), \ - int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vmaxq_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t)));}) - #define __arm_vmaxaq(p0,p1) ({ __typeof(p0) __p0 = (p0); \ 
__typeof(p1) __p1 = (p1); \ _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ @@ -30608,28 +30022,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vhcaddq_rot90_m_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t), __ARM_mve_coerce(__p2, int16x8_t), p3), \ int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vhcaddq_rot90_m_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), __ARM_mve_coerce(__p2, int32x4_t), p3));}) -#define __arm_vmaxq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - __typeof(p2) __p2 = (p2); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)][__ARM_mve_typeid(__p2)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vmaxq_m_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t), __ARM_mve_coerce(__p2, int8x16_t), p3), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vmaxq_m_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t), __ARM_mve_coerce(__p2, int16x8_t), p3), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vmaxq_m_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), __ARM_mve_coerce(__p2, int32x4_t), p3), \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vmaxq_m_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t), __ARM_mve_coerce(__p2, uint8x16_t), p3), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vmaxq_m_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t), __ARM_mve_coerce(__p2, uint16x8_t), p3), \ - int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vmaxq_m_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t), __ARM_mve_coerce(__p2, uint32x4_t), p3));}) - -#define __arm_vminq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - __typeof(p2) __p2 = (p2); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)][__ARM_mve_typeid(__p2)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vminq_m_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t), __ARM_mve_coerce(__p2, int8x16_t), p3), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vminq_m_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t), __ARM_mve_coerce(__p2, int16x8_t), p3), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vminq_m_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), __ARM_mve_coerce(__p2, int32x4_t), p3), \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vminq_m_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t), __ARM_mve_coerce(__p2, uint8x16_t), p3), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vminq_m_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t), __ARM_mve_coerce(__p2, uint16x8_t), p3), \ - int 
(*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vminq_m_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t), __ARM_mve_coerce(__p2, uint32x4_t), p3));}) - #define __arm_vmlaq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 = (p0); \ __typeof(p1) __p1 = (p1); \ __typeof(p2) __p2 = (p2); \ @@ -31068,26 +30460,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_int_n][__ARM_mve_type_int16x8_t]: __arm_vminavq_p_s16 (__p0, __ARM_mve_coerce(__p1, int16x8_t), p2), \ int (*)[__ARM_mve_type_int_n][__ARM_mve_type_int32x4_t]: __arm_vminavq_p_s32 (__p0, __ARM_mve_coerce(__p1, int32x4_t), p2));}) -#define __arm_vmaxq_x(p1,p2,p3) ({ __typeof(p1) __p1 = (p1); \ - __typeof(p2) __p2 = (p2); \ - _Generic( (int (*)[__ARM_mve_typeid(__p1)][__ARM_mve_typeid(__p2)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vmaxq_x_s8 (__ARM_mve_coerce(__p1, int8x16_t), __ARM_mve_coerce(__p2, int8x16_t), p3), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vmaxq_x_s16 (__ARM_mve_coerce(__p1, int16x8_t), __ARM_mve_coerce(__p2, int16x8_t), p3), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vmaxq_x_s32 (__ARM_mve_coerce(__p1, int32x4_t), __ARM_mve_coerce(__p2, int32x4_t), p3), \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vmaxq_x_u8 (__ARM_mve_coerce(__p1, uint8x16_t), __ARM_mve_coerce(__p2, uint8x16_t), p3), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vmaxq_x_u16 (__ARM_mve_coerce(__p1, uint16x8_t), __ARM_mve_coerce(__p2, uint16x8_t), p3), \ - int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vmaxq_x_u32 (__ARM_mve_coerce(__p1, uint32x4_t), __ARM_mve_coerce(__p2, uint32x4_t), p3));}) - -#define __arm_vminq_x(p1,p2,p3) ({ __typeof(p1) __p1 = (p1); \ - __typeof(p2) __p2 = (p2); \ - _Generic( (int (*)[__ARM_mve_typeid(__p1)][__ARM_mve_typeid(__p2)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vminq_x_s8 (__ARM_mve_coerce(__p1, int8x16_t), __ARM_mve_coerce(__p2, int8x16_t), p3), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vminq_x_s16 (__ARM_mve_coerce(__p1, int16x8_t), __ARM_mve_coerce(__p2, int16x8_t), p3), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vminq_x_s32 (__ARM_mve_coerce(__p1, int32x4_t), __ARM_mve_coerce(__p2, int32x4_t), p3), \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vminq_x_u8 (__ARM_mve_coerce(__p1, uint8x16_t), __ARM_mve_coerce(__p2, uint8x16_t), p3), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vminq_x_u16 (__ARM_mve_coerce(__p1, uint16x8_t), __ARM_mve_coerce(__p2, uint16x8_t), p3), \ - int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vminq_x_u32 (__ARM_mve_coerce(__p1, uint32x4_t), __ARM_mve_coerce(__p2, uint32x4_t), p3));}) - #define __arm_vminvq(p0,p1) ({ __typeof(p0) __p0 = (p0); \ __typeof(p1) __p1 = (p1); \ _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ -- 2.34.1