From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <gcc-cvs-bounces@gcc.gnu.org>
Received: by sourceware.org (Postfix, from userid 7835)
	id B6A9C3939C1D; Wed, 16 Jun 2021 13:23:20 +0000 (GMT)
DKIM-Filter: OpenDKIM Filter v2.11.0 sourceware.org B6A9C3939C1D
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="utf-8"
From: Jonathan Wright
To: gcc-cvs@gcc.gnu.org
Subject: [gcc r12-1531] aarch64: Model zero-high-half semantics of XTN instruction in RTL
X-Act-Checkin: gcc
X-Git-Author: Jonathan Wright <jonathan.wright@arm.com>
X-Git-Refname: refs/heads/master
X-Git-Oldrev: ac6c858d072016ad2c409f1593fa290ad0d87e11
X-Git-Newrev: d8a88cdae9c0c42ab7c5c65a5043c4f8bad349d2
Message-Id: <20210616132320.B6A9C3939C1D@sourceware.org>
Date: Wed, 16 Jun 2021 13:23:20 +0000 (GMT)
X-BeenThere: gcc-cvs@gcc.gnu.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Gcc-cvs mailing list <gcc-cvs.gcc.gnu.org>
List-Unsubscribe: <https://gcc.gnu.org/mailman/options/gcc-cvs>,
	<mailto:gcc-cvs-request@gcc.gnu.org?subject=unsubscribe>
List-Archive: <https://gcc.gnu.org/pipermail/gcc-cvs/>
List-Help: <mailto:gcc-cvs-request@gcc.gnu.org?subject=help>
List-Subscribe: <https://gcc.gnu.org/mailman/listinfo/gcc-cvs>,
	<mailto:gcc-cvs-request@gcc.gnu.org?subject=subscribe>
X-List-Received-Date: Wed, 16 Jun 2021 13:23:20 -0000

https://gcc.gnu.org/g:d8a88cdae9c0c42ab7c5c65a5043c4f8bad349d2

commit r12-1531-gd8a88cdae9c0c42ab7c5c65a5043c4f8bad349d2
Author: Jonathan Wright <jonathan.wright@arm.com>
Date:   Fri Jun 11 15:48:51 2021 +0100

    aarch64: Model zero-high-half semantics of XTN instruction in RTL
    
    Modeling the zero-high-half semantics of the XTN narrowing instruction
    in RTL indicates to the compiler that this is a totally destructive
    operation. This enables more RTL simplifications and also prevents
    some register allocation issues.
    
    Add new tests to narrow_zero_high_half.c to verify the benefit of this
    change.
    
    gcc/ChangeLog:
    
    2021-06-11  Jonathan Wright  <jonathan.wright@arm.com>
    
            * config/aarch64/aarch64-simd.md (aarch64_xtn<mode>_insn_le):
            Define - modeling zero-high-half semantics.
            (aarch64_xtn<mode>): Change to an expander that emits the
            appropriate instruction depending on endianness.
            (aarch64_xtn<mode>_insn_be): Define - modeling zero-high-half
            semantics.
            (aarch64_xtn2<mode>_le): Rename to...
            (aarch64_xtn2<mode>_insn_le): This.
            (aarch64_xtn2<mode>_be): Rename to...
            (aarch64_xtn2<mode>_insn_be): This.
            (vec_pack_trunc_<mode>): Emit truncation instruction instead
            of aarch64_xtn<mode>.
            * config/aarch64/iterators.md (Vnarrowd): Add Vnarrowd mode
            attribute iterator.
    
    gcc/testsuite/ChangeLog:
    
            * gcc.target/aarch64/narrow_zero_high_half.c: Add new tests.

Diff:
---
 gcc/config/aarch64/aarch64-simd.md                 | 105 ++++++++++++++-------
 gcc/config/aarch64/iterators.md                    |   2 +
 .../gcc.target/aarch64/narrow_zero_high_half.c     |  16 ++++
 3 files changed, 88 insertions(+), 35 deletions(-)

diff --git a/gcc/config/aarch64/aarch64-simd.md b/gcc/config/aarch64/aarch64-simd.md
index e750faed1db..b23556b551c 100644
--- a/gcc/config/aarch64/aarch64-simd.md
+++ b/gcc/config/aarch64/aarch64-simd.md
@@ -1690,17 +1690,48 @@
 
 ;; Narrowing operations.
 
-;; For doubles.
+(define_insn "aarch64_xtn<mode>_insn_le"
+  [(set (match_operand:<VNARROWQ2> 0 "register_operand" "=w")
+	(vec_concat:<VNARROWQ2>
+	  (truncate:<VNARROWQ> (match_operand:VQN 1 "register_operand" "w"))
+	  (match_operand:<VNARROWQ> 2 "aarch64_simd_or_scalar_imm_zero")))]
+  "TARGET_SIMD && !BYTES_BIG_ENDIAN"
+  "xtn\\t%0.<Vntype>, %1.<Vtype>"
+  [(set_attr "type" "neon_move_narrow_q")]
+)
 
-(define_insn "aarch64_xtn<mode>"
-  [(set (match_operand:<VNARROWQ> 0 "register_operand" "=w")
-	(truncate:<VNARROWQ> (match_operand:VQN 1 "register_operand" "w")))]
-  "TARGET_SIMD"
+(define_insn "aarch64_xtn<mode>_insn_be"
+  [(set (match_operand:<VNARROWQ2> 0 "register_operand" "=w")
+	(vec_concat:<VNARROWQ2>
+	  (match_operand:<VNARROWQ> 2 "aarch64_simd_or_scalar_imm_zero")
+	  (truncate:<VNARROWQ> (match_operand:VQN 1 "register_operand" "w"))))]
+  "TARGET_SIMD && BYTES_BIG_ENDIAN"
   "xtn\\t%0.<Vntype>, %1.<Vtype>"
   [(set_attr "type" "neon_move_narrow_q")]
 )
 
-(define_insn "aarch64_xtn2<mode>_le"
+(define_expand "aarch64_xtn<mode>"
+  [(set (match_operand:<VNARROWQ> 0 "register_operand")
+	(truncate:<VNARROWQ> (match_operand:VQN 1 "register_operand")))]
+  "TARGET_SIMD"
+  {
+    rtx tmp = gen_reg_rtx (<VNARROWQ2>mode);
+    if (BYTES_BIG_ENDIAN)
+      emit_insn (gen_aarch64_xtn<mode>_insn_be (tmp, operands[1],
+						CONST0_RTX (<VNARROWQ>mode)));
+    else
+      emit_insn (gen_aarch64_xtn<mode>_insn_le (tmp, operands[1],
+						CONST0_RTX (<VNARROWQ>mode)));
+
+    /* The intrinsic expects a narrow result, so emit a subreg that will get
+       optimized away as appropriate.  */
+    emit_move_insn (operands[0], lowpart_subreg (<VNARROWQ>mode, tmp,
+						 <VNARROWQ2>mode));
+    DONE;
+  }
+)
+
+(define_insn "aarch64_xtn2<mode>_insn_le"
   [(set (match_operand:<VNARROWQ2> 0 "register_operand" "=w")
 	(vec_concat:<VNARROWQ2>
 	  (match_operand:<VNARROWQ> 1 "register_operand" "0")
@@ -1710,7 +1741,7 @@
   [(set_attr "type" "neon_move_narrow_q")]
 )
 
-(define_insn "aarch64_xtn2<mode>_be"
+(define_insn "aarch64_xtn2<mode>_insn_be"
   [(set (match_operand:<VNARROWQ2> 0 "register_operand" "=w")
 	(vec_concat:<VNARROWQ2>
 	  (truncate:<VNARROWQ> (match_operand:VQN 2 "register_operand" "w"))
@@ -1727,15 +1758,17 @@
   "TARGET_SIMD"
   {
     if (BYTES_BIG_ENDIAN)
-      emit_insn (gen_aarch64_xtn2<mode>_be (operands[0], operands[1],
-					    operands[2]));
+      emit_insn (gen_aarch64_xtn2<mode>_insn_be (operands[0], operands[1],
+						 operands[2]));
     else
-      emit_insn (gen_aarch64_xtn2<mode>_le (operands[0], operands[1],
-					    operands[2]));
+      emit_insn (gen_aarch64_xtn2<mode>_insn_le (operands[0], operands[1],
+						 operands[2]));
     DONE;
   }
 )
 
+;; Packing doubles.
+
 (define_expand "vec_pack_trunc_<mode>"
  [(match_operand:<VNARROWD> 0 "register_operand")
   (match_operand:VDN 1 "register_operand")
@@ -1748,10 +1781,35 @@
 
   emit_insn (gen_move_lo_quad_<Vdbl> (tempreg, operands[lo]));
   emit_insn (gen_move_hi_quad_<Vdbl> (tempreg, operands[hi]));
-  emit_insn (gen_aarch64_xtn<Vdbl> (operands[0], tempreg));
+  emit_insn (gen_trunc<Vdbl><Vnarrowd>2 (operands[0], tempreg));
   DONE;
 })
 
+;; Packing quads.
+
+(define_expand "vec_pack_trunc_<mode>"
+ [(set (match_operand:<VNARROWQ2> 0 "register_operand")
+       (vec_concat:<VNARROWQ2>
+	 (truncate:<VNARROWQ> (match_operand:VQN 1 "register_operand"))
+	 (truncate:<VNARROWQ> (match_operand:VQN 2 "register_operand"))))]
+ "TARGET_SIMD"
+ {
+   rtx tmpreg = gen_reg_rtx (<VNARROWQ>mode);
+   int lo = BYTES_BIG_ENDIAN ? 2 : 1;
+   int hi = BYTES_BIG_ENDIAN ? 1 : 2;
+
+   emit_insn (gen_trunc<mode><Vnarrowq>2 (tmpreg, operands[lo]));
+
+   if (BYTES_BIG_ENDIAN)
+     emit_insn (gen_aarch64_xtn2<mode>_insn_be (operands[0], tmpreg,
+						operands[hi]));
+   else
+     emit_insn (gen_aarch64_xtn2<mode>_insn_le (operands[0], tmpreg,
+						operands[hi]));
+   DONE;
+ }
+)
+
 (define_insn "aarch64_shrn<mode>_insn_le"
   [(set (match_operand:<VNARROWQ2> 0 "register_operand" "=w")
 	(vec_concat:<VNARROWQ2>
@@ -1936,29 +1994,6 @@
 }
 )
 
-;; For quads.
-
-(define_expand "vec_pack_trunc_<mode>"
- [(set (match_operand:<VNARROWQ2> 0 "register_operand")
-       (vec_concat:<VNARROWQ2>
-	 (truncate:<VNARROWQ> (match_operand:VQN 1 "register_operand"))
-	 (truncate:<VNARROWQ> (match_operand:VQN 2 "register_operand"))))]
- "TARGET_SIMD"
- {
-   rtx tmpreg = gen_reg_rtx (<VNARROWQ>mode);
-   int lo = BYTES_BIG_ENDIAN ? 2 : 1;
-   int hi = BYTES_BIG_ENDIAN ? 1 : 2;
-
-   emit_insn (gen_aarch64_xtn<mode> (tmpreg, operands[lo]));
-
-   if (BYTES_BIG_ENDIAN)
-     emit_insn (gen_aarch64_xtn2<mode>_be (operands[0], tmpreg, operands[hi]));
-   else
-     emit_insn (gen_aarch64_xtn2<mode>_le (operands[0], tmpreg, operands[hi]));
-   DONE;
- }
-)
-
 ;; Widening operations.
 
 (define_insn "aarch64_simd_vec_unpack_lo_<mode>"
diff --git a/gcc/config/aarch64/iterators.md b/gcc/config/aarch64/iterators.md
index e9047d00d97..caa42f8f169 100644
--- a/gcc/config/aarch64/iterators.md
+++ b/gcc/config/aarch64/iterators.md
@@ -1257,6 +1257,8 @@
 
 ;; Narrowed modes for VDN.
 (define_mode_attr VNARROWD [(V4HI "V8QI") (V2SI "V4HI")
			     (DI "V2SI")])
+(define_mode_attr Vnarrowd [(V4HI "v8qi") (V2SI "v4hi")
+			    (DI "v2si")])
 
 ;; Narrowed double-modes for VQN (Used for XTN).
 (define_mode_attr VNARROWQ [(V8HI "V8QI") (V4SI "V4HI")
diff --git a/gcc/testsuite/gcc.target/aarch64/narrow_zero_high_half.c b/gcc/testsuite/gcc.target/aarch64/narrow_zero_high_half.c
index a79a4c33dab..451b0116e5e 100644
--- a/gcc/testsuite/gcc.target/aarch64/narrow_zero_high_half.c
+++ b/gcc/testsuite/gcc.target/aarch64/narrow_zero_high_half.c
@@ -48,6 +48,21 @@ TEST_SHIFT (vqrshrun_n, uint8x16_t, int16x8_t, s16, u8)
 TEST_SHIFT (vqrshrun_n, uint16x8_t, int32x4_t, s32, u16)
 TEST_SHIFT (vqrshrun_n, uint32x4_t, int64x2_t, s64, u32)
 
+#define TEST_UNARY(name, rettype, intype, fs, rs) \
+  rettype test_ ## name ## _ ## fs ## _zero_high \
+		(intype a) \
+  { \
+    return vcombine_ ## rs (name ## _ ## fs (a), \
+			    vdup_n_ ## rs (0)); \
+  }
+
+TEST_UNARY (vmovn, int8x16_t, int16x8_t, s16, s8)
+TEST_UNARY (vmovn, int16x8_t, int32x4_t, s32, s16)
+TEST_UNARY (vmovn, int32x4_t, int64x2_t, s64, s32)
+TEST_UNARY (vmovn, uint8x16_t, uint16x8_t, u16, u8)
+TEST_UNARY (vmovn, uint16x8_t, uint32x4_t, u32, u16)
+TEST_UNARY (vmovn, uint32x4_t, uint64x2_t, u64, u32)
+
 /* { dg-final { scan-assembler-not "dup\\t" } } */
 
 /* { dg-final { scan-assembler-times "\\tshrn\\tv" 6} } */
@@ -58,3 +73,4 @@ TEST_SHIFT (vqrshrun_n, uint32x4_t, int64x2_t, s64, u32)
 /* { dg-final { scan-assembler-times "\\tuqrshrn\\tv" 3} } */
 /* { dg-final { scan-assembler-times "\\tsqshrun\\tv" 3} } */
 /* { dg-final { scan-assembler-times "\\tsqrshrun\\tv" 3} } */
+/* { dg-final { scan-assembler-times "\\txtn\\tv" 6} } */
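
For illustration (not part of the commit): the zero-high-half pattern that
the new TEST_UNARY tests exercise can be written as a standalone intrinsics
function. This is a minimal sketch; the function name is made up, and it
assumes an AArch64 compiler at -O2:

  #include <arm_neon.h>

  /* With the zero-high-half semantics of XTN modeled in RTL, the compiler
     knows that XTN already writes zeroes to the upper 64 bits of the
     destination register, so the vcombine_s8/vdup_n_s8 pair below folds
     away and the function can compile to a single instruction such as
     "xtn v0.8b, v0.8h", with no separate DUP/MOVI to zero the high half.  */
  int8x16_t narrow_zero_high_s16 (int16x8_t a)
  {
    return vcombine_s8 (vmovn_s16 (a), vdup_n_s8 (0));
  }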