From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: by sourceware.org (Postfix, from userid 1816)
	id AB79D3858C39; Mon, 24 Apr 2023 08:29:17 +0000 (GMT)
DKIM-Filter: OpenDKIM Filter v2.11.0 sourceware.org AB79D3858C39
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gcc.gnu.org;
	s=default; t=1682324957;
	bh=NNhcHjQVCBL2/sJaER20C2LSchDaqP6P7fU3k/NuLeY=;
	h=From:To:Subject:Date:From;
	b=F2pEoTqXrV4ueKrQZfTWems95ttLPNzfkmqR6gz0vMwHCigxVPq7DxzUKNmZYcd6D
	 MlUyUhdBapnrs22Kfi25lbHdUH5RIYNvF6rWWNXfJpGh+HCvc/+neVI97SrFKVr/OJ
	 nY3oxifxdh37NEgCS9fYpZ2ian3SY8tqk3EU3pX8=
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="utf-8"
From: Kyrylo Tkachov
To: gcc-cvs@gcc.gnu.org
Subject: [gcc r14-188] aarch64: Add pattern to match zero-extending scalar result of ADDLV
X-Act-Checkin: gcc
X-Git-Author: Kyrylo Tkachov
X-Git-Refname: refs/heads/master
X-Git-Oldrev: 60bf26a412a9ec2b467c04fac1dfacef2ef09c6d
X-Git-Newrev: 6ec565d8755afe1c187cda69fb8e478e669cfd02
Message-Id: <20230424082917.AB79D3858C39@sourceware.org>
Date: Mon, 24 Apr 2023 08:29:17 +0000 (GMT)
List-Id:

https://gcc.gnu.org/g:6ec565d8755afe1c187cda69fb8e478e669cfd02

commit r14-188-g6ec565d8755afe1c187cda69fb8e478e669cfd02
Author: Kyrylo Tkachov
Date:   Mon Apr 24 09:28:35 2023 +0100

    aarch64: Add pattern to match zero-extending scalar result of ADDLV

    The vaddlv_u8 and vaddlv_u16 intrinsics produce a widened scalar result
    (uint16_t and uint32_t).  The ADDLV instructions themselves zero the rest
    of the V register, which gives us a free zero-extension to 32 and 64 bits,
    similar to how it works on the GP reg side.  Because we don't model that
    zero-extension in the machine description this can cause GCC to move the
    results of these instructions to the GP regs just to do a (superfluous)
    zero-extension.

    This patch just adds a pattern to catch these cases.  For the testcases we
    can now generate no zero-extends or GP<->FP reg moves, whereas before we
    generated stuff like:

    foo_8_32:
            uaddlv  h0, v0.8b
            umov    w1, v0.h[0] // FP<->GP move with zero-extension!
            str     w1, [x0]
            ret

    Bootstrapped and tested on aarch64-none-linux-gnu.

    gcc/ChangeLog:

            * config/aarch64/aarch64-simd.md
            (*aarch64_<su>addlv<VDQV_L:mode>_ze<GPI:mode>): New pattern.

    gcc/testsuite/ChangeLog:

            * gcc.target/aarch64/simd/addlv_zext.c: New test.

Diff:
---
 gcc/config/aarch64/aarch64-simd.md                 | 16 +++++
 gcc/testsuite/gcc.target/aarch64/simd/addlv_zext.c | 84 ++++++++++++++++++++++
 2 files changed, 100 insertions(+)

diff --git a/gcc/config/aarch64/aarch64-simd.md b/gcc/config/aarch64/aarch64-simd.md
index 7bd4362318b..d1e74a6704a 100644
--- a/gcc/config/aarch64/aarch64-simd.md
+++ b/gcc/config/aarch64/aarch64-simd.md
@@ -3521,6 +3521,22 @@
   [(set_attr "type" "neon_reduc_add")]
 )
 
+;; Zero-extending version of the above.  As these intrinsics produce a scalar
+;; value that may be used by further intrinsics we want to avoid moving the
+;; result into GP regs to do a zero-extension that ADDLV/ADDLP gives for free.
+
+(define_insn "*aarch64_<su>addlv<VDQV_L:mode>_ze<GPI:mode>"
+  [(set (match_operand:GPI 0 "register_operand" "=w")
+	(zero_extend:GPI
+	  (unspec:<VDQV_L:VWIDE_S>
+	    [(match_operand:VDQV_L 1 "register_operand" "w")]
+	    USADDLV)))]
+  "TARGET_SIMD
+   && (GET_MODE_SIZE (<GPI:MODE>mode) > GET_MODE_SIZE (<VDQV_L:VWIDE_S>mode))"
+  "<su>addl<vp>\\t%<VDQV_L:Vwstype>0<VDQV_L:Vwsuf>, %1.<VDQV_L:Vtype>"
+  [(set_attr "type" "neon_reduc_add")]
+)
+
 (define_insn "aarch64_<su>addlp<mode>"
   [(set (match_operand:<VDBLW> 0 "register_operand" "=w")
 	(unspec:<VDBLW> [(match_operand:VDQV_L 1 "register_operand" "w")]
diff --git a/gcc/testsuite/gcc.target/aarch64/simd/addlv_zext.c b/gcc/testsuite/gcc.target/aarch64/simd/addlv_zext.c
new file mode 100644
index 00000000000..1bd3c303743
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/simd/addlv_zext.c
@@ -0,0 +1,84 @@
+/* { dg-do compile } */
+/* { dg-additional-options "--save-temps -O1" } */
+/* { dg-final { check-function-bodies "**" "" "" } } */
+
+#include <arm_neon.h>
+
+/*
+** foo_8_32:
+**	uaddlv	h0, v0.8b
+**	str	s0, \[x0\]
+**	ret
+*/
+
+void
+foo_8_32 (uint8x8_t a, uint32_t *res)
+{
+  *res = vaddlv_u8 (a);
+}
+
+/*
+** foo_8_64:
+**	uaddlv	h0, v0.8b
+**	str	d0, \[x0\]
+**	ret
+*/
+
+void
+foo_8_64 (uint8x8_t a, uint64_t *res)
+{
+  *res = vaddlv_u8 (a);
+}
+
+/*
+** foo_16_64:
+**	uaddlv	s0, v0.4h
+**	str	d0, \[x0\]
+**	ret
+*/
+
+void
+foo_16_64 (uint16x4_t a, uint64_t *res)
+{
+  *res = vaddlv_u16 (a);
+}
+
+/*
+** fooq_8_32:
+**	uaddlv	h0, v0.16b
+**	str	s0, \[x0\]
+**	ret
+*/
+
+void
+fooq_8_32 (uint8x16_t a, uint32_t *res)
+{
+  *res = vaddlvq_u8 (a);
+}
+
+/*
+** fooq_8_64:
+**	uaddlv	h0, v0.16b
+**	str	d0, \[x0\]
+**	ret
+*/
+
+void
+fooq_8_64 (uint8x16_t a, uint64_t *res)
+{
+  *res = vaddlvq_u8 (a);
+}
+
+/*
+** fooq_16_64:
+**	uaddlv	s0, v0.8h
+**	str	d0, \[x0\]
+**	ret
+*/
+
+void
+fooq_16_64 (uint16x8_t a, uint64_t *res)
+{
+  *res = vaddlvq_u16 (a);
+}
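
The redundant move that the new pattern removes can also be reproduced outside
the testsuite.  The sketch below is illustrative only (the file name repro.c
and the function name sum_u8_to_u32 are not part of the patch); build it with
an aarch64 compiler at -O1 and compare the assembly before and after this
commit:

    /* repro.c -- compile with: gcc -O1 -S repro.c (aarch64 target assumed).
       Before this commit the UADDLV result was bounced through a GP register
       (umov w1, v0.h[0]) purely to zero-extend it; with the new pattern the
       zero-extended value is stored straight from the SIMD register
       (str s0, [x0]).  */
    #include <arm_neon.h>
    #include <stdint.h>

    void
    sum_u8_to_u32 (uint8x8_t a, uint32_t *res)
    {
      /* vaddlv_u8 returns uint16_t; the implicit conversion to uint32_t is
         the zero_extend that the new define_insn now matches for free.  */
      *res = vaddlv_u8 (a);
    }

This mirrors foo_8_32 from the new test, where check-function-bodies verifies
that only the uaddlv and the str remain.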