From mboxrd@z Thu Jan 1 00:00:00 1970
From: Christophe Lyon
To: , , ,
CC: Christophe Lyon
Subject: [PATCH 07/23] arm: [MVE intrinsics] rework vabdq
Date: Fri, 5 May 2023 10:39:14 +0200
Message-ID: <20230505083930.101210-7-christophe.lyon@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230505083930.101210-1-christophe.lyon@arm.com>
References: <20230505083930.101210-1-christophe.lyon@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Implement vabdq using the new MVE builtins framework.

2022-09-08  Christophe Lyon

gcc/
	* config/arm/arm-mve-builtins-base.cc (FUNCTION_WITHOUT_N): New.
	(vabdq): New.
	* config/arm/arm-mve-builtins-base.def (vabdq): New.
	* config/arm/arm-mve-builtins-base.h (vabdq): New.
	* config/arm/arm_mve.h (vabdq): Remove.
	(vabdq_m): Remove.
	(vabdq_x): Remove.
	(vabdq_u8): Remove.
	(vabdq_s8): Remove.
	(vabdq_u16): Remove.
	(vabdq_s16): Remove.
	(vabdq_u32): Remove.
	(vabdq_s32): Remove.
	(vabdq_f16): Remove.
	(vabdq_f32): Remove.
	(vabdq_m_s8): Remove.
	(vabdq_m_s32): Remove.
	(vabdq_m_s16): Remove.
	(vabdq_m_u8): Remove.
	(vabdq_m_u32): Remove.
	(vabdq_m_u16): Remove.
	(vabdq_m_f32): Remove.
	(vabdq_m_f16): Remove.
	(vabdq_x_s8): Remove.
	(vabdq_x_s16): Remove.
	(vabdq_x_s32): Remove.
	(vabdq_x_u8): Remove.
	(vabdq_x_u16): Remove.
	(vabdq_x_u32): Remove.
	(vabdq_x_f16): Remove.
	(vabdq_x_f32): Remove.
	(__arm_vabdq_u8): Remove.
	(__arm_vabdq_s8): Remove.
	(__arm_vabdq_u16): Remove.
	(__arm_vabdq_s16): Remove.
	(__arm_vabdq_u32): Remove.
	(__arm_vabdq_s32): Remove.
	(__arm_vabdq_m_s8): Remove.
	(__arm_vabdq_m_s32): Remove.
	(__arm_vabdq_m_s16): Remove.
	(__arm_vabdq_m_u8): Remove.
	(__arm_vabdq_m_u32): Remove.
	(__arm_vabdq_m_u16): Remove.
	(__arm_vabdq_x_s8): Remove.
	(__arm_vabdq_x_s16): Remove.
	(__arm_vabdq_x_s32): Remove.
	(__arm_vabdq_x_u8): Remove.
	(__arm_vabdq_x_u16): Remove.
	(__arm_vabdq_x_u32): Remove.
	(__arm_vabdq_f16): Remove.
	(__arm_vabdq_f32): Remove.
	(__arm_vabdq_m_f32): Remove.
	(__arm_vabdq_m_f16): Remove.
	(__arm_vabdq_x_f16): Remove.
	(__arm_vabdq_x_f32): Remove.
	(__arm_vabdq): Remove.
	(__arm_vabdq_m): Remove.
	(__arm_vabdq_x): Remove.
---
 gcc/config/arm/arm-mve-builtins-base.cc  |  10 +
 gcc/config/arm/arm-mve-builtins-base.def |   2 +
 gcc/config/arm/arm-mve-builtins-base.h   |   1 +
 gcc/config/arm/arm_mve.h                 | 431 -----------------------
 4 files changed, 13 insertions(+), 431 deletions(-)

diff --git a/gcc/config/arm/arm-mve-builtins-base.cc b/gcc/config/arm/arm-mve-builtins-base.cc
index 8c125657c67..a74119db917 100644
--- a/gcc/config/arm/arm-mve-builtins-base.cc
+++ b/gcc/config/arm/arm-mve-builtins-base.cc
@@ -146,6 +146,16 @@ namespace arm_mve {
 						UNSPEC##_M_S, -1, -1,		\
 						UNSPEC##_M_N_S, -1, -1))
 
+  /* Helper for builtins with only unspec codes, _m predicated
+     overrides, but no _n version.  */
+#define FUNCTION_WITHOUT_N(NAME, UNSPEC) FUNCTION			\
+  (NAME, unspec_mve_function_exact_insn,				\
+   (UNSPEC##_S, UNSPEC##_U, UNSPEC##_F,					\
+    -1, -1, -1,								\
+    UNSPEC##_M_S, UNSPEC##_M_U, UNSPEC##_M_F,				\
+    -1, -1, -1))
+
+FUNCTION_WITHOUT_N (vabdq, VABDQ)
 FUNCTION_WITH_RTX_M_N (vaddq, PLUS, VADDQ)
 FUNCTION_WITH_RTX_M (vandq, AND, VANDQ)
 FUNCTION_WITHOUT_M_N (vcreateq, VCREATEQ)
diff --git a/gcc/config/arm/arm-mve-builtins-base.def b/gcc/config/arm/arm-mve-builtins-base.def
index 5b9966341ce..9230837fd43 100644
--- a/gcc/config/arm/arm-mve-builtins-base.def
+++ b/gcc/config/arm/arm-mve-builtins-base.def
@@ -18,6 +18,7 @@
    <http://www.gnu.org/licenses/>.  */
 
 #define REQUIRES_FLOAT false
+DEF_MVE_FUNCTION (vabdq, binary, all_integer, mx_or_none)
 DEF_MVE_FUNCTION (vaddq, binary_opt_n, all_integer, mx_or_none)
 DEF_MVE_FUNCTION (vandq, binary, all_integer, mx_or_none)
 DEF_MVE_FUNCTION (vcreateq, create, all_integer_with_64, none)
@@ -41,6 +42,7 @@ DEF_MVE_FUNCTION (vuninitializedq, inherent, all_integer_with_64, none)
 
 #undef REQUIRES_FLOAT
 #define REQUIRES_FLOAT true
+DEF_MVE_FUNCTION (vabdq, binary, all_float, mx_or_none)
 DEF_MVE_FUNCTION (vaddq, binary_opt_n, all_float, mx_or_none)
 DEF_MVE_FUNCTION (vandq, binary, all_float, mx_or_none)
 DEF_MVE_FUNCTION (vcreateq, create, all_float, none)
diff --git a/gcc/config/arm/arm-mve-builtins-base.h b/gcc/config/arm/arm-mve-builtins-base.h
index eeb747d52ad..d9d45d1925a 100644
--- a/gcc/config/arm/arm-mve-builtins-base.h
+++ b/gcc/config/arm/arm-mve-builtins-base.h
@@ -23,6 +23,7 @@
 namespace arm_mve {
 namespace functions {
 
+extern const function_base *const vabdq;
 extern const function_base *const vaddq;
 extern const function_base *const vandq;
 extern const function_base *const vcreateq;
diff --git a/gcc/config/arm/arm_mve.h b/gcc/config/arm/arm_mve.h
index 44b383dbe08..175d9955c33 100644
--- a/gcc/config/arm/arm_mve.h
+++ b/gcc/config/arm/arm_mve.h
@@ -77,7 +77,6 @@
 #define vbicq(__a, __b) __arm_vbicq(__a, __b)
 #define vaddvq_p(__a, __p) __arm_vaddvq_p(__a, __p)
 #define vaddvaq(__a, __b) __arm_vaddvaq(__a, __b)
-#define vabdq(__a, __b) __arm_vabdq(__a, __b)
 #define vshlq_r(__a, __b) __arm_vshlq_r(__a, __b)
 #define vqshlq(__a, __b) __arm_vqshlq(__a, __b)
 #define vqshlq_r(__a, __b) __arm_vqshlq_r(__a, __b)
@@ -218,7 +217,6 @@
 #define vqshluq_m(__inactive, __a, __imm, __p) __arm_vqshluq_m(__inactive, __a, __imm, __p)
 #define vabavq_p(__a, __b, __c, __p) __arm_vabavq_p(__a, __b, __c, __p)
 #define vshlq_m(__inactive, __a, __b, __p) __arm_vshlq_m(__inactive, __a, __b, __p)
-#define vabdq_m(__inactive, __a, __b, __p) __arm_vabdq_m(__inactive, __a, __b, __p)
 #define vbicq_m(__inactive, __a, __b, __p) __arm_vbicq_m(__inactive, __a, __b, __p)
 #define vbrsrq_m(__inactive, __a, __b, __p) __arm_vbrsrq_m(__inactive, __a, __b, __p)
 #define vcaddq_rot270_m(__inactive, __a, __b, __p) __arm_vcaddq_rot270_m(__inactive, __a, __b, __p)
@@ -355,7 +353,6 @@
 #define viwdupq_x_u32(__a, __b, __imm, __p) __arm_viwdupq_x_u32(__a, __b, __imm, __p)
 #define vminq_x(__a, __b, __p) __arm_vminq_x(__a, __b, __p)
 #define vmaxq_x(__a, __b, __p) __arm_vmaxq_x(__a, __b, __p)
-#define vabdq_x(__a, __b, __p) __arm_vabdq_x(__a, __b, __p)
 #define vabsq_x(__a, __p) __arm_vabsq_x(__a, __p)
 #define vclsq_x(__a, __p) __arm_vclsq_x(__a, __p)
 #define vclzq_x(__a, __p) __arm_vclzq_x(__a, __p)
@@ -652,7 +649,6 @@
 #define vbicq_u8(__a, __b) __arm_vbicq_u8(__a, __b)
 #define vaddvq_p_u8(__a, __p) __arm_vaddvq_p_u8(__a, __p)
 #define vaddvaq_u8(__a, __b) __arm_vaddvaq_u8(__a, __b)
-#define vabdq_u8(__a, __b) __arm_vabdq_u8(__a, __b)
#define vshlq_r_u8(__a, __b) __arm_vshlq_r_u8(__a, __b) #define vqshlq_u8(__a, __b) __arm_vqshlq_u8(__a, __b) #define vqshlq_r_u8(__a, __b) __arm_vqshlq_r_u8(__a, __b) @@ -698,7 +694,6 @@ #define vbrsrq_n_s8(__a, __b) __arm_vbrsrq_n_s8(__a, __b) #define vbicq_s8(__a, __b) __arm_vbicq_s8(__a, __b) #define vaddvaq_s8(__a, __b) __arm_vaddvaq_s8(__a, __b) -#define vabdq_s8(__a, __b) __arm_vabdq_s8(__a, __b) #define vshlq_n_s8(__a, __imm) __arm_vshlq_n_s8(__a, __imm) #define vrshrq_n_s8(__a, __imm) __arm_vrshrq_n_s8(__a, __imm) #define vqshlq_n_s8(__a, __imm) __arm_vqshlq_n_s8(__a, __imm) @@ -722,7 +717,6 @@ #define vbicq_u16(__a, __b) __arm_vbicq_u16(__a, __b) #define vaddvq_p_u16(__a, __p) __arm_vaddvq_p_u16(__a, __p) #define vaddvaq_u16(__a, __b) __arm_vaddvaq_u16(__a, __b) -#define vabdq_u16(__a, __b) __arm_vabdq_u16(__a, __b) #define vshlq_r_u16(__a, __b) __arm_vshlq_r_u16(__a, __b) #define vqshlq_u16(__a, __b) __arm_vqshlq_u16(__a, __b) #define vqshlq_r_u16(__a, __b) __arm_vqshlq_r_u16(__a, __b) @@ -768,7 +762,6 @@ #define vbrsrq_n_s16(__a, __b) __arm_vbrsrq_n_s16(__a, __b) #define vbicq_s16(__a, __b) __arm_vbicq_s16(__a, __b) #define vaddvaq_s16(__a, __b) __arm_vaddvaq_s16(__a, __b) -#define vabdq_s16(__a, __b) __arm_vabdq_s16(__a, __b) #define vshlq_n_s16(__a, __imm) __arm_vshlq_n_s16(__a, __imm) #define vrshrq_n_s16(__a, __imm) __arm_vrshrq_n_s16(__a, __imm) #define vqshlq_n_s16(__a, __imm) __arm_vqshlq_n_s16(__a, __imm) @@ -792,7 +785,6 @@ #define vbicq_u32(__a, __b) __arm_vbicq_u32(__a, __b) #define vaddvq_p_u32(__a, __p) __arm_vaddvq_p_u32(__a, __p) #define vaddvaq_u32(__a, __b) __arm_vaddvaq_u32(__a, __b) -#define vabdq_u32(__a, __b) __arm_vabdq_u32(__a, __b) #define vshlq_r_u32(__a, __b) __arm_vshlq_r_u32(__a, __b) #define vqshlq_u32(__a, __b) __arm_vqshlq_u32(__a, __b) #define vqshlq_r_u32(__a, __b) __arm_vqshlq_r_u32(__a, __b) @@ -838,7 +830,6 @@ #define vbrsrq_n_s32(__a, __b) __arm_vbrsrq_n_s32(__a, __b) #define vbicq_s32(__a, __b) __arm_vbicq_s32(__a, __b) #define vaddvaq_s32(__a, __b) __arm_vaddvaq_s32(__a, __b) -#define vabdq_s32(__a, __b) __arm_vabdq_s32(__a, __b) #define vshlq_n_s32(__a, __imm) __arm_vshlq_n_s32(__a, __imm) #define vrshrq_n_s32(__a, __imm) __arm_vrshrq_n_s32(__a, __imm) #define vqshlq_n_s32(__a, __imm) __arm_vqshlq_n_s32(__a, __imm) @@ -894,7 +885,6 @@ #define vcaddq_rot90_f16(__a, __b) __arm_vcaddq_rot90_f16(__a, __b) #define vcaddq_rot270_f16(__a, __b) __arm_vcaddq_rot270_f16(__a, __b) #define vbicq_f16(__a, __b) __arm_vbicq_f16(__a, __b) -#define vabdq_f16(__a, __b) __arm_vabdq_f16(__a, __b) #define vshlltq_n_s8(__a, __imm) __arm_vshlltq_n_s8(__a, __imm) #define vshllbq_n_s8(__a, __imm) __arm_vshllbq_n_s8(__a, __imm) #define vbicq_n_s16(__a, __imm) __arm_vbicq_n_s16(__a, __imm) @@ -950,7 +940,6 @@ #define vcaddq_rot90_f32(__a, __b) __arm_vcaddq_rot90_f32(__a, __b) #define vcaddq_rot270_f32(__a, __b) __arm_vcaddq_rot270_f32(__a, __b) #define vbicq_f32(__a, __b) __arm_vbicq_f32(__a, __b) -#define vabdq_f32(__a, __b) __arm_vabdq_f32(__a, __b) #define vshlltq_n_s16(__a, __imm) __arm_vshlltq_n_s16(__a, __imm) #define vshllbq_n_s16(__a, __imm) __arm_vshllbq_n_s16(__a, __imm) #define vbicq_n_s32(__a, __imm) __arm_vbicq_n_s32(__a, __imm) @@ -1460,12 +1449,6 @@ #define vshlq_m_u32(__inactive, __a, __b, __p) __arm_vshlq_m_u32(__inactive, __a, __b, __p) #define vabavq_p_u32(__a, __b, __c, __p) __arm_vabavq_p_u32(__a, __b, __c, __p) #define vshlq_m_s32(__inactive, __a, __b, __p) __arm_vshlq_m_s32(__inactive, __a, __b, __p) -#define vabdq_m_s8(__inactive, __a, __b, 
__p) __arm_vabdq_m_s8(__inactive, __a, __b, __p) -#define vabdq_m_s32(__inactive, __a, __b, __p) __arm_vabdq_m_s32(__inactive, __a, __b, __p) -#define vabdq_m_s16(__inactive, __a, __b, __p) __arm_vabdq_m_s16(__inactive, __a, __b, __p) -#define vabdq_m_u8(__inactive, __a, __b, __p) __arm_vabdq_m_u8(__inactive, __a, __b, __p) -#define vabdq_m_u32(__inactive, __a, __b, __p) __arm_vabdq_m_u32(__inactive, __a, __b, __p) -#define vabdq_m_u16(__inactive, __a, __b, __p) __arm_vabdq_m_u16(__inactive, __a, __b, __p) #define vbicq_m_s8(__inactive, __a, __b, __p) __arm_vbicq_m_s8(__inactive, __a, __b, __p) #define vbicq_m_s32(__inactive, __a, __b, __p) __arm_vbicq_m_s32(__inactive, __a, __b, __p) #define vbicq_m_s16(__inactive, __a, __b, __p) __arm_vbicq_m_s16(__inactive, __a, __b, __p) @@ -1700,8 +1683,6 @@ #define vshrntq_m_n_s16(__a, __b, __imm, __p) __arm_vshrntq_m_n_s16(__a, __b, __imm, __p) #define vshrntq_m_n_u32(__a, __b, __imm, __p) __arm_vshrntq_m_n_u32(__a, __b, __imm, __p) #define vshrntq_m_n_u16(__a, __b, __imm, __p) __arm_vshrntq_m_n_u16(__a, __b, __imm, __p) -#define vabdq_m_f32(__inactive, __a, __b, __p) __arm_vabdq_m_f32(__inactive, __a, __b, __p) -#define vabdq_m_f16(__inactive, __a, __b, __p) __arm_vabdq_m_f16(__inactive, __a, __b, __p) #define vbicq_m_f32(__inactive, __a, __b, __p) __arm_vbicq_m_f32(__inactive, __a, __b, __p) #define vbicq_m_f16(__inactive, __a, __b, __p) __arm_vbicq_m_f16(__inactive, __a, __b, __p) #define vbrsrq_m_n_f32(__inactive, __a, __b, __p) __arm_vbrsrq_m_n_f32(__inactive, __a, __b, __p) @@ -2060,12 +2041,6 @@ #define vmaxq_x_u8(__a, __b, __p) __arm_vmaxq_x_u8(__a, __b, __p) #define vmaxq_x_u16(__a, __b, __p) __arm_vmaxq_x_u16(__a, __b, __p) #define vmaxq_x_u32(__a, __b, __p) __arm_vmaxq_x_u32(__a, __b, __p) -#define vabdq_x_s8(__a, __b, __p) __arm_vabdq_x_s8(__a, __b, __p) -#define vabdq_x_s16(__a, __b, __p) __arm_vabdq_x_s16(__a, __b, __p) -#define vabdq_x_s32(__a, __b, __p) __arm_vabdq_x_s32(__a, __b, __p) -#define vabdq_x_u8(__a, __b, __p) __arm_vabdq_x_u8(__a, __b, __p) -#define vabdq_x_u16(__a, __b, __p) __arm_vabdq_x_u16(__a, __b, __p) -#define vabdq_x_u32(__a, __b, __p) __arm_vabdq_x_u32(__a, __b, __p) #define vabsq_x_s8(__a, __p) __arm_vabsq_x_s8(__a, __p) #define vabsq_x_s16(__a, __p) __arm_vabsq_x_s16(__a, __p) #define vabsq_x_s32(__a, __p) __arm_vabsq_x_s32(__a, __p) @@ -2201,8 +2176,6 @@ #define vminnmq_x_f32(__a, __b, __p) __arm_vminnmq_x_f32(__a, __b, __p) #define vmaxnmq_x_f16(__a, __b, __p) __arm_vmaxnmq_x_f16(__a, __b, __p) #define vmaxnmq_x_f32(__a, __b, __p) __arm_vmaxnmq_x_f32(__a, __b, __p) -#define vabdq_x_f16(__a, __b, __p) __arm_vabdq_x_f16(__a, __b, __p) -#define vabdq_x_f32(__a, __b, __p) __arm_vabdq_x_f32(__a, __b, __p) #define vabsq_x_f16(__a, __p) __arm_vabsq_x_f16(__a, __p) #define vabsq_x_f32(__a, __p) __arm_vabsq_x_f32(__a, __p) #define vnegq_x_f16(__a, __p) __arm_vnegq_x_f16(__a, __p) @@ -3211,13 +3184,6 @@ __arm_vaddvaq_u8 (uint32_t __a, uint8x16_t __b) return __builtin_mve_vaddvaq_uv16qi (__a, __b); } -__extension__ extern __inline uint8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabdq_u8 (uint8x16_t __a, uint8x16_t __b) -{ - return __builtin_mve_vabdq_uv16qi (__a, __b); -} - __extension__ extern __inline uint8x16_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vshlq_r_u8 (uint8x16_t __a, int32_t __b) @@ -3533,13 +3499,6 @@ __arm_vaddvaq_s8 (int32_t __a, int8x16_t __b) return __builtin_mve_vaddvaq_sv16qi (__a, __b); } -__extension__ extern __inline 
int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabdq_s8 (int8x16_t __a, int8x16_t __b) -{ - return __builtin_mve_vabdq_sv16qi (__a, __b); -} - __extension__ extern __inline int8x16_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vshlq_n_s8 (int8x16_t __a, const int __imm) @@ -3703,13 +3662,6 @@ __arm_vaddvaq_u16 (uint32_t __a, uint16x8_t __b) return __builtin_mve_vaddvaq_uv8hi (__a, __b); } -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabdq_u16 (uint16x8_t __a, uint16x8_t __b) -{ - return __builtin_mve_vabdq_uv8hi (__a, __b); -} - __extension__ extern __inline uint16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vshlq_r_u16 (uint16x8_t __a, int32_t __b) @@ -4025,13 +3977,6 @@ __arm_vaddvaq_s16 (int32_t __a, int16x8_t __b) return __builtin_mve_vaddvaq_sv8hi (__a, __b); } -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabdq_s16 (int16x8_t __a, int16x8_t __b) -{ - return __builtin_mve_vabdq_sv8hi (__a, __b); -} - __extension__ extern __inline int16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vshlq_n_s16 (int16x8_t __a, const int __imm) @@ -4195,13 +4140,6 @@ __arm_vaddvaq_u32 (uint32_t __a, uint32x4_t __b) return __builtin_mve_vaddvaq_uv4si (__a, __b); } -__extension__ extern __inline uint32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabdq_u32 (uint32x4_t __a, uint32x4_t __b) -{ - return __builtin_mve_vabdq_uv4si (__a, __b); -} - __extension__ extern __inline uint32x4_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vshlq_r_u32 (uint32x4_t __a, int32_t __b) @@ -4517,13 +4455,6 @@ __arm_vaddvaq_s32 (int32_t __a, int32x4_t __b) return __builtin_mve_vaddvaq_sv4si (__a, __b); } -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabdq_s32 (int32x4_t __a, int32x4_t __b) -{ - return __builtin_mve_vabdq_sv4si (__a, __b); -} - __extension__ extern __inline int32x4_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vshlq_n_s32 (int32x4_t __a, const int __imm) @@ -7715,48 +7646,6 @@ __arm_vshlq_m_s32 (int32x4_t __inactive, int32x4_t __a, int32x4_t __b, mve_pred1 return __builtin_mve_vshlq_m_sv4si (__inactive, __a, __b, __p); } -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabdq_m_s8 (int8x16_t __inactive, int8x16_t __a, int8x16_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vabdq_m_sv16qi (__inactive, __a, __b, __p); -} - -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabdq_m_s32 (int32x4_t __inactive, int32x4_t __a, int32x4_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vabdq_m_sv4si (__inactive, __a, __b, __p); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabdq_m_s16 (int16x8_t __inactive, int16x8_t __a, int16x8_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vabdq_m_sv8hi (__inactive, __a, __b, __p); -} - -__extension__ extern __inline uint8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabdq_m_u8 (uint8x16_t __inactive, uint8x16_t __a, uint8x16_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vabdq_m_uv16qi 
(__inactive, __a, __b, __p); -} - -__extension__ extern __inline uint32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabdq_m_u32 (uint32x4_t __inactive, uint32x4_t __a, uint32x4_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vabdq_m_uv4si (__inactive, __a, __b, __p); -} - -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabdq_m_u16 (uint16x8_t __inactive, uint16x8_t __a, uint16x8_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vabdq_m_uv8hi (__inactive, __a, __b, __p); -} - __extension__ extern __inline int8x16_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vbicq_m_s8 (int8x16_t __inactive, int8x16_t __a, int8x16_t __b, mve_pred16_t __p) @@ -11432,48 +11321,6 @@ __arm_vmaxq_x_u32 (uint32x4_t __a, uint32x4_t __b, mve_pred16_t __p) return __builtin_mve_vmaxq_m_uv4si (__arm_vuninitializedq_u32 (), __a, __b, __p); } -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabdq_x_s8 (int8x16_t __a, int8x16_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vabdq_m_sv16qi (__arm_vuninitializedq_s8 (), __a, __b, __p); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabdq_x_s16 (int16x8_t __a, int16x8_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vabdq_m_sv8hi (__arm_vuninitializedq_s16 (), __a, __b, __p); -} - -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabdq_x_s32 (int32x4_t __a, int32x4_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vabdq_m_sv4si (__arm_vuninitializedq_s32 (), __a, __b, __p); -} - -__extension__ extern __inline uint8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabdq_x_u8 (uint8x16_t __a, uint8x16_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vabdq_m_uv16qi (__arm_vuninitializedq_u8 (), __a, __b, __p); -} - -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabdq_x_u16 (uint16x8_t __a, uint16x8_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vabdq_m_uv8hi (__arm_vuninitializedq_u16 (), __a, __b, __p); -} - -__extension__ extern __inline uint32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabdq_x_u32 (uint32x4_t __a, uint32x4_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vabdq_m_uv4si (__arm_vuninitializedq_u32 (), __a, __b, __p); -} - __extension__ extern __inline int8x16_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vabsq_x_s8 (int8x16_t __a, mve_pred16_t __p) @@ -13692,13 +13539,6 @@ __arm_vbicq_f16 (float16x8_t __a, float16x8_t __b) return __builtin_mve_vbicq_fv8hf (__a, __b); } -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabdq_f16 (float16x8_t __a, float16x8_t __b) -{ - return __builtin_mve_vabdq_fv8hf (__a, __b); -} - __extension__ extern __inline mve_pred16_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vcmpneq_n_f32 (float32x4_t __a, float32_t __b) @@ -13895,13 +13735,6 @@ __arm_vbicq_f32 (float32x4_t __a, float32x4_t __b) return __builtin_mve_vbicq_fv4sf (__a, __b); } -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabdq_f32 (float32x4_t __a, float32x4_t __b) -{ - return 
__builtin_mve_vabdq_fv4sf (__a, __b); -} - __extension__ extern __inline float16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vcvttq_f16_f32 (float16x8_t __a, float32x4_t __b) @@ -14666,20 +14499,6 @@ __arm_vcvtq_m_n_f32_s32 (float32x4_t __inactive, int32x4_t __a, const int __imm6 return __builtin_mve_vcvtq_m_n_to_f_sv4sf (__inactive, __a, __imm6, __p); } -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabdq_m_f32 (float32x4_t __inactive, float32x4_t __a, float32x4_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vabdq_m_fv4sf (__inactive, __a, __b, __p); -} - -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabdq_m_f16 (float16x8_t __inactive, float16x8_t __a, float16x8_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vabdq_m_fv8hf (__inactive, __a, __b, __p); -} - __extension__ extern __inline float32x4_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vbicq_m_f32 (float32x4_t __inactive, float32x4_t __a, float32x4_t __b, mve_pred16_t __p) @@ -15274,20 +15093,6 @@ __arm_vmaxnmq_x_f32 (float32x4_t __a, float32x4_t __b, mve_pred16_t __p) return __builtin_mve_vmaxnmq_m_fv4sf (__arm_vuninitializedq_f32 (), __a, __b, __p); } -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabdq_x_f16 (float16x8_t __a, float16x8_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vabdq_m_fv8hf (__arm_vuninitializedq_f16 (), __a, __b, __p); -} - -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabdq_x_f32 (float32x4_t __a, float32x4_t __b, mve_pred16_t __p) -{ - return __builtin_mve_vabdq_m_fv4sf (__arm_vuninitializedq_f32 (), __a, __b, __p); -} - __extension__ extern __inline float16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vabsq_x_f16 (float16x8_t __a, mve_pred16_t __p) @@ -16652,13 +16457,6 @@ __arm_vaddvaq (uint32_t __a, uint8x16_t __b) return __arm_vaddvaq_u8 (__a, __b); } -__extension__ extern __inline uint8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabdq (uint8x16_t __a, uint8x16_t __b) -{ - return __arm_vabdq_u8 (__a, __b); -} - __extension__ extern __inline uint8x16_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vshlq_r (uint8x16_t __a, int32_t __b) @@ -16974,13 +16772,6 @@ __arm_vaddvaq (int32_t __a, int8x16_t __b) return __arm_vaddvaq_s8 (__a, __b); } -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabdq (int8x16_t __a, int8x16_t __b) -{ - return __arm_vabdq_s8 (__a, __b); -} - __extension__ extern __inline int8x16_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vshlq_n (int8x16_t __a, const int __imm) @@ -17142,13 +16933,6 @@ __arm_vaddvaq (uint32_t __a, uint16x8_t __b) return __arm_vaddvaq_u16 (__a, __b); } -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabdq (uint16x8_t __a, uint16x8_t __b) -{ - return __arm_vabdq_u16 (__a, __b); -} - __extension__ extern __inline uint16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vshlq_r (uint16x8_t __a, int32_t __b) @@ -17464,13 +17248,6 @@ __arm_vaddvaq (int32_t __a, int16x8_t __b) return __arm_vaddvaq_s16 (__a, __b); } 
-__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabdq (int16x8_t __a, int16x8_t __b) -{ - return __arm_vabdq_s16 (__a, __b); -} - __extension__ extern __inline int16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vshlq_n (int16x8_t __a, const int __imm) @@ -17632,13 +17409,6 @@ __arm_vaddvaq (uint32_t __a, uint32x4_t __b) return __arm_vaddvaq_u32 (__a, __b); } -__extension__ extern __inline uint32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabdq (uint32x4_t __a, uint32x4_t __b) -{ - return __arm_vabdq_u32 (__a, __b); -} - __extension__ extern __inline uint32x4_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vshlq_r (uint32x4_t __a, int32_t __b) @@ -17954,13 +17724,6 @@ __arm_vaddvaq (int32_t __a, int32x4_t __b) return __arm_vaddvaq_s32 (__a, __b); } -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabdq (int32x4_t __a, int32x4_t __b) -{ - return __arm_vabdq_s32 (__a, __b); -} - __extension__ extern __inline int32x4_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vshlq_n (int32x4_t __a, const int __imm) @@ -21111,48 +20874,6 @@ __arm_vshlq_m (int32x4_t __inactive, int32x4_t __a, int32x4_t __b, mve_pred16_t return __arm_vshlq_m_s32 (__inactive, __a, __b, __p); } -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabdq_m (int8x16_t __inactive, int8x16_t __a, int8x16_t __b, mve_pred16_t __p) -{ - return __arm_vabdq_m_s8 (__inactive, __a, __b, __p); -} - -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabdq_m (int32x4_t __inactive, int32x4_t __a, int32x4_t __b, mve_pred16_t __p) -{ - return __arm_vabdq_m_s32 (__inactive, __a, __b, __p); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabdq_m (int16x8_t __inactive, int16x8_t __a, int16x8_t __b, mve_pred16_t __p) -{ - return __arm_vabdq_m_s16 (__inactive, __a, __b, __p); -} - -__extension__ extern __inline uint8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabdq_m (uint8x16_t __inactive, uint8x16_t __a, uint8x16_t __b, mve_pred16_t __p) -{ - return __arm_vabdq_m_u8 (__inactive, __a, __b, __p); -} - -__extension__ extern __inline uint32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabdq_m (uint32x4_t __inactive, uint32x4_t __a, uint32x4_t __b, mve_pred16_t __p) -{ - return __arm_vabdq_m_u32 (__inactive, __a, __b, __p); -} - -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabdq_m (uint16x8_t __inactive, uint16x8_t __a, uint16x8_t __b, mve_pred16_t __p) -{ - return __arm_vabdq_m_u16 (__inactive, __a, __b, __p); -} - __extension__ extern __inline int8x16_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vbicq_m (int8x16_t __inactive, int8x16_t __a, int8x16_t __b, mve_pred16_t __p) @@ -24359,48 +24080,6 @@ __arm_vmaxq_x (uint32x4_t __a, uint32x4_t __b, mve_pred16_t __p) return __arm_vmaxq_x_u32 (__a, __b, __p); } -__extension__ extern __inline int8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabdq_x (int8x16_t __a, int8x16_t __b, mve_pred16_t __p) -{ - return __arm_vabdq_x_s8 (__a, __b, 
__p); -} - -__extension__ extern __inline int16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabdq_x (int16x8_t __a, int16x8_t __b, mve_pred16_t __p) -{ - return __arm_vabdq_x_s16 (__a, __b, __p); -} - -__extension__ extern __inline int32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabdq_x (int32x4_t __a, int32x4_t __b, mve_pred16_t __p) -{ - return __arm_vabdq_x_s32 (__a, __b, __p); -} - -__extension__ extern __inline uint8x16_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabdq_x (uint8x16_t __a, uint8x16_t __b, mve_pred16_t __p) -{ - return __arm_vabdq_x_u8 (__a, __b, __p); -} - -__extension__ extern __inline uint16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabdq_x (uint16x8_t __a, uint16x8_t __b, mve_pred16_t __p) -{ - return __arm_vabdq_x_u16 (__a, __b, __p); -} - -__extension__ extern __inline uint32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabdq_x (uint32x4_t __a, uint32x4_t __b, mve_pred16_t __p) -{ - return __arm_vabdq_x_u32 (__a, __b, __p); -} - __extension__ extern __inline int8x16_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vabsq_x (int8x16_t __a, mve_pred16_t __p) @@ -26195,13 +25874,6 @@ __arm_vbicq (float16x8_t __a, float16x8_t __b) return __arm_vbicq_f16 (__a, __b); } -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabdq (float16x8_t __a, float16x8_t __b) -{ - return __arm_vabdq_f16 (__a, __b); -} - __extension__ extern __inline mve_pred16_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vcmpneq (float32x4_t __a, float32_t __b) @@ -26398,13 +26070,6 @@ __arm_vbicq (float32x4_t __a, float32x4_t __b) return __arm_vbicq_f32 (__a, __b); } -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabdq (float32x4_t __a, float32x4_t __b) -{ - return __arm_vabdq_f32 (__a, __b); -} - __extension__ extern __inline mve_pred16_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vcmpeqq_m (float16x8_t __a, float16x8_t __b, mve_pred16_t __p) @@ -27154,20 +26819,6 @@ __arm_vcvtq_m_n (float32x4_t __inactive, int32x4_t __a, const int __imm6, mve_pr return __arm_vcvtq_m_n_f32_s32 (__inactive, __a, __imm6, __p); } -__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabdq_m (float32x4_t __inactive, float32x4_t __a, float32x4_t __b, mve_pred16_t __p) -{ - return __arm_vabdq_m_f32 (__inactive, __a, __b, __p); -} - -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabdq_m (float16x8_t __inactive, float16x8_t __a, float16x8_t __b, mve_pred16_t __p) -{ - return __arm_vabdq_m_f16 (__inactive, __a, __b, __p); -} - __extension__ extern __inline float32x4_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vbicq_m (float32x4_t __inactive, float32x4_t __a, float32x4_t __b, mve_pred16_t __p) @@ -27686,20 +27337,6 @@ __arm_vmaxnmq_x (float32x4_t __a, float32x4_t __b, mve_pred16_t __p) return __arm_vmaxnmq_x_f32 (__a, __b, __p); } -__extension__ extern __inline float16x8_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabdq_x (float16x8_t __a, float16x8_t __b, mve_pred16_t __p) -{ - return __arm_vabdq_x_f16 (__a, __b, __p); -} - 
-__extension__ extern __inline float32x4_t -__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) -__arm_vabdq_x (float32x4_t __a, float32x4_t __b, mve_pred16_t __p) -{ - return __arm_vabdq_x_f32 (__a, __b, __p); -} - __extension__ extern __inline float16x8_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) __arm_vabsq_x (float16x8_t __a, mve_pred16_t __p) @@ -28554,18 +28191,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_uint16x8_t]: __arm_vcvtq_n_f16_u16 (__ARM_mve_coerce(__p0, uint16x8_t), p1), \ int (*)[__ARM_mve_type_uint32x4_t]: __arm_vcvtq_n_f32_u32 (__ARM_mve_coerce(__p0, uint32x4_t), p1));}) -#define __arm_vabdq(p0,p1) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vabdq_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t)), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vabdq_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t)), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vabdq_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t)), \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vabdq_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t)), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vabdq_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t)), \ - int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vabdq_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t)), \ - int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_float16x8_t]: __arm_vabdq_f16 (__ARM_mve_coerce(__p0, float16x8_t), __ARM_mve_coerce(__p1, float16x8_t)), \ - int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_float32x4_t]: __arm_vabdq_f32 (__ARM_mve_coerce(__p0, float32x4_t), __ARM_mve_coerce(__p1, float32x4_t)));}) - #define __arm_vbicq(p0,p1) ({ __typeof(p0) __p0 = (p0); \ __typeof(p1) __p1 = (p1); \ _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ @@ -29746,19 +29371,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_float16x8_t]: __arm_vcmpgeq_m_f16 (__ARM_mve_coerce(__p0, float16x8_t), __ARM_mve_coerce(__p1, float16x8_t), p2), \ int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_float32x4_t]: __arm_vcmpgeq_m_f32 (__ARM_mve_coerce(__p0, float32x4_t), __ARM_mve_coerce(__p1, float32x4_t), p2));}) -#define __arm_vabdq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - __typeof(p2) __p2 = (p2); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)][__ARM_mve_typeid(__p2)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vabdq_m_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t), __ARM_mve_coerce(__p2, int8x16_t), p3), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vabdq_m_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t), __ARM_mve_coerce(__p2, int16x8_t), p3), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vabdq_m_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), __ARM_mve_coerce(__p2, int32x4_t), p3), \ - int 
(*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vabdq_m_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t), __ARM_mve_coerce(__p2, uint8x16_t), p3), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vabdq_m_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t), __ARM_mve_coerce(__p2, uint16x8_t), p3), \ - int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vabdq_m_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t), __ARM_mve_coerce(__p2, uint32x4_t), p3), \ - int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_float16x8_t][__ARM_mve_type_float16x8_t]: __arm_vabdq_m_f16 (__ARM_mve_coerce(__p0, float16x8_t), __ARM_mve_coerce(__p1, float16x8_t), __ARM_mve_coerce(__p2, float16x8_t), p3), \ - int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_float32x4_t][__ARM_mve_type_float32x4_t]: __arm_vabdq_m_f32 (__ARM_mve_coerce(__p0, float32x4_t), __ARM_mve_coerce(__p1, float32x4_t), __ARM_mve_coerce(__p2, float32x4_t), p3));}) - #define __arm_vbicq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 = (p0); \ __typeof(p1) __p1 = (p1); \ __typeof(p2) __p2 = (p2); \ @@ -30228,18 +29840,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_uint32x4_t]: __arm_vstrwq_scatter_base_wb_p_u32 (p0, p1, __ARM_mve_coerce(__p2, uint32x4_t), p3), \ int (*)[__ARM_mve_type_float32x4_t]: __arm_vstrwq_scatter_base_wb_p_f32 (p0, p1, __ARM_mve_coerce(__p2, float32x4_t), p3));}) -#define __arm_vabdq_x(p1,p2,p3) ({ __typeof(p1) __p1 = (p1); \ - __typeof(p2) __p2 = (p2); \ - _Generic( (int (*)[__ARM_mve_typeid(__p1)][__ARM_mve_typeid(__p2)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vabdq_x_s8 (__ARM_mve_coerce(__p1, int8x16_t), __ARM_mve_coerce(__p2, int8x16_t), p3), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vabdq_x_s16 (__ARM_mve_coerce(__p1, int16x8_t), __ARM_mve_coerce(__p2, int16x8_t), p3), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vabdq_x_s32 (__ARM_mve_coerce(__p1, int32x4_t), __ARM_mve_coerce(__p2, int32x4_t), p3), \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vabdq_x_u8 (__ARM_mve_coerce(__p1, uint8x16_t), __ARM_mve_coerce(__p2, uint8x16_t), p3), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vabdq_x_u16 (__ARM_mve_coerce(__p1, uint16x8_t), __ARM_mve_coerce(__p2, uint16x8_t), p3), \ - int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vabdq_x_u32 (__ARM_mve_coerce(__p1, uint32x4_t), __ARM_mve_coerce(__p2, uint32x4_t), p3), \ - int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_float16x8_t]: __arm_vabdq_x_f16 (__ARM_mve_coerce(__p1, float16x8_t), __ARM_mve_coerce(__p2, float16x8_t), p3), \ - int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_float32x4_t]: __arm_vabdq_x_f32 (__ARM_mve_coerce(__p1, float32x4_t), __ARM_mve_coerce(__p2, float32x4_t), p3));}) - #define __arm_vabsq_x(p1,p2) ({ __typeof(p1) __p1 = (p1); \ _Generic( (int (*)[__ARM_mve_typeid(__p1)])0, \ int (*)[__ARM_mve_type_int8x16_t]: __arm_vabsq_x_s8 (__ARM_mve_coerce(__p1, int8x16_t), p2), \ @@ -30762,16 +30362,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vbicq_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t)), \ int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vbicq_u32 (__ARM_mve_coerce(__p0, uint32x4_t), 
__ARM_mve_coerce(__p1, uint32x4_t)));}) -#define __arm_vabdq(p0,p1) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vabdq_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t)), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vabdq_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t)), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vabdq_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t)), \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vabdq_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t)), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vabdq_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t)), \ - int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vabdq_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t)));}) - #define __arm_vcmpeqq(p0,p1) ({ __typeof(p0) __p0 = (p0); \ __typeof(p1) __p1 = (p1); \ _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)])0, \ @@ -31416,17 +31006,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vabavq_p_u16(__p0, __ARM_mve_coerce(__p1, uint16x8_t), __ARM_mve_coerce(__p2, uint16x8_t), p3), \ int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vabavq_p_u32(__p0, __ARM_mve_coerce(__p1, uint32x4_t), __ARM_mve_coerce(__p2, uint32x4_t), p3));}) -#define __arm_vabdq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 = (p0); \ - __typeof(p1) __p1 = (p1); \ - __typeof(p2) __p2 = (p2); \ - _Generic( (int (*)[__ARM_mve_typeid(__p0)][__ARM_mve_typeid(__p1)][__ARM_mve_typeid(__p2)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vabdq_m_s8 (__ARM_mve_coerce(__p0, int8x16_t), __ARM_mve_coerce(__p1, int8x16_t), __ARM_mve_coerce(__p2, int8x16_t), p3), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vabdq_m_s16 (__ARM_mve_coerce(__p0, int16x8_t), __ARM_mve_coerce(__p1, int16x8_t), __ARM_mve_coerce(__p2, int16x8_t), p3), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vabdq_m_s32 (__ARM_mve_coerce(__p0, int32x4_t), __ARM_mve_coerce(__p1, int32x4_t), __ARM_mve_coerce(__p2, int32x4_t), p3), \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vabdq_m_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t), __ARM_mve_coerce(__p2, uint8x16_t), p3), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vabdq_m_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t), __ARM_mve_coerce(__p2, uint16x8_t), p3), \ - int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vabdq_m_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t), __ARM_mve_coerce(__p2, uint32x4_t), p3));}) - #define __arm_vbicq_m(p0,p1,p2,p3) ({ __typeof(p0) __p0 = (p0); \ __typeof(p1) __p1 = (p1); \ __typeof(p2) __p2 = (p2); \ @@ -31834,16 +31413,6 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_uint16x8_t]: __arm_vrev64q_x_u16 (__ARM_mve_coerce(__p1, uint16x8_t), p2), \ int (*)[__ARM_mve_type_uint32x4_t]: 
__arm_vrev64q_x_u32 (__ARM_mve_coerce(__p1, uint32x4_t), p2));}) -#define __arm_vabdq_x(p1,p2,p3) ({ __typeof(p1) __p1 = (p1); \ - __typeof(p2) __p2 = (p2); \ - _Generic( (int (*)[__ARM_mve_typeid(__p1)][__ARM_mve_typeid(__p2)])0, \ - int (*)[__ARM_mve_type_int8x16_t][__ARM_mve_type_int8x16_t]: __arm_vabdq_x_s8 (__ARM_mve_coerce(__p1, int8x16_t), __ARM_mve_coerce(__p2, int8x16_t), p3), \ - int (*)[__ARM_mve_type_int16x8_t][__ARM_mve_type_int16x8_t]: __arm_vabdq_x_s16 (__ARM_mve_coerce(__p1, int16x8_t), __ARM_mve_coerce(__p2, int16x8_t), p3), \ - int (*)[__ARM_mve_type_int32x4_t][__ARM_mve_type_int32x4_t]: __arm_vabdq_x_s32 (__ARM_mve_coerce(__p1, int32x4_t), __ARM_mve_coerce(__p2, int32x4_t), p3), \ - int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vabdq_x_u8 (__ARM_mve_coerce(__p1, uint8x16_t), __ARM_mve_coerce(__p2, uint8x16_t), p3), \ - int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vabdq_x_u16 (__ARM_mve_coerce(__p1, uint16x8_t), __ARM_mve_coerce(__p2, uint16x8_t), p3), \ - int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vabdq_x_u32 (__ARM_mve_coerce(__p1, uint32x4_t), __ARM_mve_coerce(__p2, uint32x4_t), p3));}) - #define __arm_vbicq_x(p1,p2,p3) ({ __typeof(p1) __p1 = (p1); \ __typeof(p2) __p2 = (p2); \ _Generic( (int (*)[__ARM_mve_typeid(__p1)][__ARM_mve_typeid(__p2)])0, \ -- 2.34.1
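
For reference (not part of the patch): a minimal sketch of how the reworked
vabdq is exercised from user code, assuming a target with MVE enabled
(e.g. -march=armv8.1-m.main+mve.fp -mfloat-abi=hard); the wrapper function
names below are made up for illustration.

#include <arm_mve.h>

/* Overloaded form; resolves to vabdq_f32 (per-lane |a - b|).  */
float32x4_t
diff_f32 (float32x4_t a, float32x4_t b)
{
  return vabdq (a, b);
}

/* Predicated (_m) form; inactive lanes are taken from `inactive'.  */
uint8x16_t
diff_m_u8 (uint8x16_t inactive, uint8x16_t a, uint8x16_t b, mve_pred16_t p)
{
  return vabdq_m (inactive, a, b, p);
}

/* Don't-care (_x) form; inactive lanes have an unspecified value.  */
int16x8_t
diff_x_s16 (int16x8_t a, int16x8_t b, mve_pred16_t p)
{
  return vabdq_x (a, b, p);
}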