From: Kyrylo Tkachov
To: gcc-cvs@gcc.gnu.org
Subject: [gcc r14-1886] aarch64: [US]Q(R)SHR(U)N scalar forms refactoring
Date: Fri, 16 Jun 2023 13:07:55 +0000 (GMT)
X-Git-Author: Kyrylo Tkachov
X-Git-Refname: refs/heads/master
X-Git-Oldrev: ffb87344dd343df60eafb10d510ac704f37417ca
X-Git-Newrev: d20b2ad845876eec0ee80a3933ad49f9f6c4ee30

https://gcc.gnu.org/g:d20b2ad845876eec0ee80a3933ad49f9f6c4ee30

commit r14-1886-gd20b2ad845876eec0ee80a3933ad49f9f6c4ee30
Author: Kyrylo Tkachov
Date:   Tue Jun 6 23:35:52 2023 +0100

    aarch64: [US]Q(R)SHR(U)N scalar forms refactoring
    
    Some instructions from the previous patch have scalar forms:
    SQSHRN, SQRSHRN, UQSHRN, UQRSHRN, SQSHRUN, SQRSHRUN.
    This patch converts the patterns for these to use standard RTL codes.
    Their MD patterns deviate slightly from the vector forms, mostly because
    the operands are scalars rather than vectors.
    
    One nuance is in the SQSHRUN, SQRSHRUN patterns.  These end in a truncate
    to the scalar narrow mode, e.g. SI -> QI.  The RTL passes simplify that
    truncate to a subreg rather than keeping it as a truncate, so we represent
    these patterns without the truncate and instead read the narrow subreg in
    the expander, in order to comply with the expected width of the intrinsic.
    
    Bootstrapped and tested on aarch64-none-linux-gnu and aarch64_be-none-elf.
    
    gcc/ChangeLog:
    
            * config/aarch64/aarch64-simd.md (aarch64_qshrn_n): Rename to...
            (aarch64_shrn_n): ... This.  Reimplement with RTL codes.
            (*aarch64_rshrn_n_insn): New define_insn.
            (aarch64_sqrshrun_n_insn): Likewise.
            (aarch64_sqshrun_n_insn): Likewise.
            (aarch64_rshrn_n): New define_expand.
            (aarch64_sqshrun_n): Likewise.
            (aarch64_sqrshrun_n): Likewise.
            * config/aarch64/iterators.md (V2XWIDE): Add HI and SI modes.
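For illustration only (this note and the C sketch are not part of the commit):
the operation the new SQRSHRUN RTL spells out is "sign-extend to a wider mode,
add the rounding constant 1 << (shift - 1), arithmetic-shift right, then clamp
between 0 and the unsigned narrow-mode maximum".  A minimal sketch of the
DI -> SI case follows; the function name and the use of __int128 as the wider
type are assumptions made for the example, not taken from the patch.

#include <stdint.h>

uint32_t
sqrshrun_di_to_si (int64_t x, int shift)       /* assumes 1 <= shift <= 32 */
{
  /* Do the arithmetic in a wider type so the rounding addition cannot
     overflow; the MD patterns sign-extend into the doubled-width mode
     (see the V2XWIDE additions) and build the constant with wide_int
     for the same reason.  */
  __int128 v = (__int128) x + ((__int128) 1 << (shift - 1)); /* plus rnd */
  v >>= shift;                                               /* ashiftrt */
  if (v < 0)
    return 0;                   /* smax against 0: clamp negative results */
  if (v > UINT32_MAX)
    return UINT32_MAX;          /* smin against the unsigned SImode maximum */
  return (uint32_t) v;
}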
Diff:
---
 gcc/config/aarch64/aarch64-simd.md | 111 ++++++++++++++++++++++++++++++++++---
 gcc/config/aarch64/iterators.md    |   3 +-
 2 files changed, 106 insertions(+), 8 deletions(-)

diff --git a/gcc/config/aarch64/aarch64-simd.md b/gcc/config/aarch64/aarch64-simd.md
index 8b92981bebb..bbb54344eb7 100644
--- a/gcc/config/aarch64/aarch64-simd.md
+++ b/gcc/config/aarch64/aarch64-simd.md
@@ -6654,15 +6654,15 @@
 
 ;; vq(r)shr(u)n_n
 
-(define_insn "aarch64_qshrn_n"
+(define_insn "aarch64_shrn_n"
   [(set (match_operand: 0 "register_operand" "=w")
-        (unspec: [(match_operand:SD_HSDI 1 "register_operand" "w")
-                  (match_operand:SI 2
-                    "aarch64_simd_shift_imm_offset_" "i")]
-                 VQSHRN_N))]
+        (SAT_TRUNC:
+          (:SD_HSDI
+            (match_operand:SD_HSDI 1 "register_operand" "w")
+            (match_operand:SI 2 "aarch64_simd_shift_imm_offset_"))))]
   "TARGET_SIMD"
-  "qshrn\\t%0, %1, %2"
-  [(set_attr "type" "neon_sat_shift_imm_narrow_q")]
+  "shrn\t%0, %1, %2"
+  [(set_attr "type" "neon_shift_imm_narrow_q")]
 )
 
 (define_insn "*aarch64_shrn_n_insn"
@@ -6704,6 +6704,41 @@
   [(set_attr "type" "neon_shift_imm_narrow_q")]
 )
 
+(define_insn "*aarch64_rshrn_n_insn"
+  [(set (match_operand: 0 "register_operand" "=w")
+        (SAT_TRUNC:
+          (:
+            (plus:
+              (:
+                (match_operand:SD_HSDI 1 "register_operand" "w"))
+              (match_operand: 3 "aarch64_simd_rsra_rnd_imm_vec"))
+            (match_operand:SI 2 "aarch64_simd_shift_imm_offset_"))))]
+  "TARGET_SIMD
+   && aarch64_const_vec_rnd_cst_p (operands[3], operands[2])"
+  "rshrn\t%0, %1, %2"
+  [(set_attr "type" "neon_shift_imm_narrow_q")]
+)
+
+(define_expand "aarch64_rshrn_n"
+  [(set (match_operand: 0 "register_operand")
+        (SAT_TRUNC:
+          (:
+            (plus:
+              (:
+                (match_operand:SD_HSDI 1 "register_operand"))
+              (match_dup 3))
+            (match_operand:SI 2 "aarch64_simd_shift_imm_offset_"))))]
+  "TARGET_SIMD"
+  {
+    /* Use this expander to create the rounding constant vector, which is
+       1 << (shift - 1).  Use wide_int here to ensure that the right TImode
+       RTL is generated when handling the DImode expanders.  */
+    int prec = GET_MODE_UNIT_PRECISION (mode);
+    wide_int rnd_wi = wi::set_bit_in_zero (INTVAL (operands[2]) - 1, prec);
+    operands[3] = immed_wide_int_const (rnd_wi, GET_MODE_INNER (mode));
+  }
+)
+
 (define_expand "aarch64_rshrn_n"
   [(set (match_operand: 0 "register_operand")
         (ALL_TRUNC:
@@ -6748,6 +6783,34 @@
   [(set_attr "type" "neon_shift_imm_narrow_q")]
 )
 
+(define_insn "aarch64_sqshrun_n_insn"
+  [(set (match_operand:SD_HSDI 0 "register_operand" "=w")
+        (smin:SD_HSDI
+          (smax:SD_HSDI
+            (ashiftrt:SD_HSDI
+              (match_operand:SD_HSDI 1 "register_operand" "w")
+              (match_operand:SI 2 "aarch64_simd_shift_imm_offset_"))
+            (const_int 0))
+          (const_int )))]
+  "TARGET_SIMD"
+  "sqshrun\t%0, %1, %2"
+  [(set_attr "type" "neon_shift_imm_narrow_q")]
+)
+
+(define_expand "aarch64_sqshrun_n"
+  [(match_operand: 0 "register_operand")
+   (match_operand:SD_HSDI 1 "register_operand")
+   (match_operand:SI 2 "aarch64_simd_shift_imm_offset_")]
+  "TARGET_SIMD"
+  {
+    rtx dst = gen_reg_rtx (mode);
+    emit_insn (gen_aarch64_sqshrun_n_insn (dst, operands[1],
+                                           operands[2]));
+    emit_move_insn (operands[0], gen_lowpart (mode, dst));
+    DONE;
+  }
+)
+
 (define_expand "aarch64_sqshrun_n"
   [(set (match_operand: 0 "register_operand")
         (truncate:
@@ -6788,6 +6851,40 @@
   [(set_attr "type" "neon_shift_imm_narrow_q")]
 )
 
+(define_insn "aarch64_sqrshrun_n_insn"
+  [(set (match_operand: 0 "register_operand" "=w")
+        (smin:
+          (smax:
+            (ashiftrt:
+              (plus:
+                (sign_extend:
+                  (match_operand:SD_HSDI 1 "register_operand" "w"))
+                (match_operand: 3 "aarch64_simd_rsra_rnd_imm_vec"))
+              (match_operand:SI 2 "aarch64_simd_shift_imm_offset_"))
+            (const_int 0))
+          (const_int )))]
+  "TARGET_SIMD
+   && aarch64_const_vec_rnd_cst_p (operands[3], operands[2])"
+  "sqrshrun\t%0, %1, %2"
+  [(set_attr "type" "neon_shift_imm_narrow_q")]
+)
+
+(define_expand "aarch64_sqrshrun_n"
+  [(match_operand: 0 "register_operand")
+   (match_operand:SD_HSDI 1 "register_operand")
+   (match_operand:SI 2 "aarch64_simd_shift_imm_offset_")]
+  "TARGET_SIMD"
+  {
+    int prec = GET_MODE_UNIT_PRECISION (mode);
+    wide_int rnd_wi = wi::set_bit_in_zero (INTVAL (operands[2]) - 1, prec);
+    rtx rnd = immed_wide_int_const (rnd_wi, mode);
+    rtx dst = gen_reg_rtx (mode);
+    emit_insn (gen_aarch64_sqrshrun_n_insn (dst, operands[1], operands[2], rnd));
+    emit_move_insn (operands[0], gen_lowpart (mode, dst));
+    DONE;
+  }
+)
+
 (define_expand "aarch64_sqrshrun_n"
   [(set (match_operand: 0 "register_operand")
         (truncate:
diff --git a/gcc/config/aarch64/iterators.md b/gcc/config/aarch64/iterators.md
index e8c62c88b14..acc7a3ec46e 100644
--- a/gcc/config/aarch64/iterators.md
+++ b/gcc/config/aarch64/iterators.md
@@ -1532,7 +1532,8 @@
 (define_mode_attr V2XWIDE [(V8QI "V8HI") (V4HI "V4SI")
                            (V16QI "V16HI") (V8HI "V8SI")
                            (V2SI "V2DI") (V4SI "V4DI")
-                           (V2DI "V2TI") (DI "TI")])
+                           (V2DI "V2TI") (DI "TI")
+                           (HI "SI") (SI "DI")])
 
 ;; Predicate mode associated with VWIDE.
 (define_mode_attr VWIDE_PRED [(VNx8HF "VNx4BI") (VNx4SF "VNx2BI")])
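Usage sketch (likewise not part of the commit): a scalar narrowing intrinsic
such as vqrshrund_n_s64 from arm_neon.h is expected to be routed through the
refactored SQRSHRUN expander, so on an aarch64 compiler containing this patch
the function below should compile to a single SQRSHRUN instruction.  The
function name is illustrative, and the exact expander route is an assumption
based on the intrinsic's builtin mapping rather than something stated in the
patch.

#include <arm_neon.h>

uint32_t
narrow_with_rounding (int64_t x)
{
  /* Saturating rounding shift right by 16, narrowing the signed 64-bit
     input to an unsigned 32-bit result.  */
  return vqrshrund_n_s64 (x, 16);
}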