On Wed, 18 Jan 2023 at 19:59, Richard Sandiford wrote:
>
> Prathamesh Kulkarni writes:
> > On Tue, 17 Jan 2023 at 18:29, Richard Sandiford wrote:
> >>
> >> Prathamesh Kulkarni writes:
> >> > Hi Richard,
> >> > For the following (contrived) test:
> >> >
> >> > int32x4_t foo(int32x4_t v)
> >> > {
> >> >   v[3] = 0;
> >> >   return v;
> >> > }
> >> >
> >> > -O2 code-gen:
> >> > foo:
> >> >         fmov    s1, wzr
> >> >         ins     v0.s[3], v1.s[0]
> >> >         ret
> >> >
> >> > I suppose we can instead emit the following code-gen ?
> >> > foo:
> >> >         ins     v0.s[3], wzr
> >> >         ret
> >> >
> >> > combine produces:
> >> > Failed to match this instruction:
> >> > (set (reg:V4SI 95 [ v ])
> >> >     (vec_merge:V4SI (const_vector:V4SI [
> >> >                 (const_int 0 [0]) repeated x4
> >> >             ])
> >> >         (reg:V4SI 97)
> >> >         (const_int 8 [0x8])))
> >> >
> >> > So, I wrote the following pattern to match the above insn:
> >> > (define_insn "aarch64_simd_vec_set_zero<mode>"
> >> >   [(set (match_operand:VALL_F16 0 "register_operand" "=w")
> >> >     (vec_merge:VALL_F16
> >> >         (match_operand:VALL_F16 1 "const_dup0_operand" "w")
> >> >         (match_operand:VALL_F16 3 "register_operand" "0")
> >> >         (match_operand:SI 2 "immediate_operand" "i")))]
> >> >   "TARGET_SIMD"
> >> > {
> >> >   int elt = ENDIAN_LANE_N (<nunits>, exact_log2 (INTVAL (operands[2])));
> >> >   operands[2] = GEN_INT ((HOST_WIDE_INT) 1 << elt);
> >> >   return "ins\\t%0.<Vetype>[%p2], wzr";
> >> > }
> >> > )
> >> >
> >> > which now matches the above insn produced by combine.
> >> > However, in the reload dump, it creates a new insn for assigning
> >> > a register to (const_vector (const_int 0)),
> >> > which results in:
> >> > (insn 19 8 13 2 (set (reg:V4SI 33 v1 [99])
> >> >         (const_vector:V4SI [
> >> >                 (const_int 0 [0]) repeated x4
> >> >             ])) "wzr-test.c":8:1 1269 {*aarch64_simd_movv4si}
> >> >      (nil))
> >> > (insn 13 19 14 2 (set (reg/i:V4SI 32 v0)
> >> >         (vec_merge:V4SI (reg:V4SI 33 v1 [99])
> >> >             (reg:V4SI 32 v0 [97])
> >> >             (const_int 8 [0x8]))) "wzr-test.c":8:1 1808
> >> >      {aarch64_simd_vec_set_zerov4si}
> >> >      (nil))
> >> >
> >> > and eventually the code-gen:
> >> > foo:
> >> >         movi    v1.4s, 0
> >> >         ins     v0.s[3], wzr
> >> >         ret
> >> >
> >> > To get rid of the redundant assignment of 0 to v1, I tried to split
> >> > the above pattern as in the attached patch. This works to emit code-gen:
> >> > foo:
> >> >         ins     v0.s[3], wzr
> >> >         ret
> >> >
> >> > However, I am not sure if this is the right approach. Could you suggest
> >> > if it'd be possible to get rid of UNSPEC_SETZERO in the patch ?
> >>
> >> The problem is with the "w" constraint on operand 1, which tells LRA
> >> to force the zero into an FPR.  It should work if you remove the
> >> constraint.
> > Ah indeed, sorry about that, changing the constraint works.
>
> "i" isn't right though, because that's for scalar integers.
> There's no need for any constraint here -- the predicate does
> all of the work.
>
> > Does the attached patch look OK after bootstrap+test ?
> > Since we're in stage-4, shall it be OK to commit now, or queue it
> > for stage-1 ?
>
> It needs tests as well. :-)
>
> Also:
>
> > Thanks,
> > Prathamesh
> >
> >>
> >> Also, I think you'll need to use <vwcore>zr for the zero, so that
> >> it uses xzr for 64-bit elements.
> >>
> >> I think this and the existing patterns ought to test
> >> exact_log2 (INTVAL (operands[2])) >= 0 in the insn condition,
> >> since there's no guarantee that RTL optimisations won't form
> >> vec_merges that have other masks.
> >>
> >> Thanks,
> >> Richard
> >
> > [aarch64] Use wzr/xzr for assigning 0 to vector element.
> >
> > gcc/ChangeLog:
> >       * config/aarch64/aarch64-simd.md (aarch64_simd_vec_set_zero<mode>):
> >       New pattern.
> >       * config/aarch64/predicates.md (const_dup0_operand): New.
> >
> > diff --git a/gcc/config/aarch64/aarch64-simd.md b/gcc/config/aarch64/aarch64-simd.md
> > index 104088f67d2..8e54ee4e886 100644
> > --- a/gcc/config/aarch64/aarch64-simd.md
> > +++ b/gcc/config/aarch64/aarch64-simd.md
> > @@ -1083,6 +1083,20 @@
> >    [(set_attr "type" "neon_ins, neon_from_gp, neon_load1_one_lane")]
> >  )
> >
> > +(define_insn "aarch64_simd_vec_set_zero<mode>"
> > +  [(set (match_operand:VALL_F16 0 "register_operand" "=w")
> > +    (vec_merge:VALL_F16
> > +        (match_operand:VALL_F16 1 "const_dup0_operand" "i")
> > +        (match_operand:VALL_F16 3 "register_operand" "0")
> > +        (match_operand:SI 2 "immediate_operand" "i")))]
> > +  "TARGET_SIMD && exact_log2 (INTVAL (operands[2])) >= 0"
> > +  {
> > +    int elt = ENDIAN_LANE_N (<nunits>, exact_log2 (INTVAL (operands[2])));
> > +    operands[2] = GEN_INT ((HOST_WIDE_INT) 1 << elt);
> > +    return "ins\\t%0.<Vetype>[%p2], <vwcore>zr";
> > +  }
> > +)
> > +
> >  (define_insn "@aarch64_simd_vec_copy_lane<mode>"
> >    [(set (match_operand:VALL_F16 0 "register_operand" "=w")
> >     (vec_merge:VALL_F16
> > diff --git a/gcc/config/aarch64/predicates.md b/gcc/config/aarch64/predicates.md
> > index ff7f73d3f30..901fa1bd7f9 100644
> > --- a/gcc/config/aarch64/predicates.md
> > +++ b/gcc/config/aarch64/predicates.md
> > @@ -49,6 +49,13 @@
> >    return CONST_INT_P (op) && IN_RANGE (INTVAL (op), 1, 3);
> > })
> >
> > +(define_predicate "const_dup0_operand"
> > +  (match_code "const_vector")
> > +{
> > +  op = unwrap_const_vec_duplicate (op);
> > +  return CONST_INT_P (op) && rtx_equal_p (op, const0_rtx);
> > +})
> > +
>
> We already have aarch64_simd_imm_zero for this.  aarch64_simd_imm_zero
> is actually more general, because it works for floating-point modes too.
> I think the tests should cover all modes included in VALL_F16, since
> that should have picked up this and the xzr thing.

Hi Richard,
Thanks for the suggestions. Does the attached patch look OK ?
I am not sure how to test for v4bf and v8bf, since it seems the compiler
refuses conversions to/from bfloat16_t ?

Thanks,
Prathamesh

> Thanks,
> Richard
>
> > (define_predicate "subreg_lowpart_operator"
> >   (ior (match_code "truncate")
> >        (and (match_code "subreg")