On Mon, 13 Mar 2023 at 13:03, Richard Biener wrote:
>
> On Fri, 10 Mar 2023, Richard Sandiford wrote:
>
> > Sorry for the slow reply.
> >
> > Prathamesh Kulkarni writes:
> > > Unfortunately it regresses code-gen for the following case:
> > >
> > > svint32_t f(int32x4_t x)
> > > {
> > >   return svdupq_s32 (x[0], x[1], x[2], x[3]);
> > > }
> > >
> > > -O2 code-gen with trunk:
> > > f:
> > >         dup     z0.q, z0.q[0]
> > >         ret
> > >
> > > -O2 code-gen with patch:
> > > f:
> > >         dup     s1, v0.s[1]
> > >         mov     v2.8b, v0.8b
> > >         ins     v1.s[1], v0.s[3]
> > >         ins     v2.s[1], v0.s[2]
> > >         zip1    v0.4s, v2.4s, v1.4s
> > >         dup     z0.q, z0.q[0]
> > >         ret
> > >
> > > IIUC, svdupq_impl::expand uses aarch64_expand_vector_init
> > > to initialize the "base 128-bit vector" and then use dupq to replicate it.
> > >
> > > Without patch, aarch64_expand_vector_init generates fallback code, and then
> > > combine optimizes a sequence of vec_merge/vec_select pairs into an assignment:
> > >
> > > (insn 7 3 8 2 (set (reg:SI 99)
> > >         (vec_select:SI (reg/v:V4SI 97 [ x ])
> > >             (parallel [
> > >                     (const_int 1 [0x1])
> > >                 ]))) "bar.c":6:10 2592 {aarch64_get_lanev4si}
> > >      (nil))
> > >
> > > (insn 13 9 15 2 (set (reg:V4SI 102)
> > >         (vec_merge:V4SI (vec_duplicate:V4SI (reg:SI 99))
> > >             (reg/v:V4SI 97 [ x ])
> > >             (const_int 2 [0x2]))) "bar.c":6:10 1794 {aarch64_simd_vec_setv4si}
> > >      (expr_list:REG_DEAD (reg:SI 99)
> > >         (expr_list:REG_DEAD (reg/v:V4SI 97 [ x ])
> > >             (nil))))
> > >
> > > into:
> > > Trying 7 -> 13:
> > >     7: r99:SI=vec_select(r97:V4SI,parallel)
> > >    13: r102:V4SI=vec_merge(vec_duplicate(r99:SI),r97:V4SI,0x2)
> > >       REG_DEAD r99:SI
> > >       REG_DEAD r97:V4SI
> > > Successfully matched this instruction:
> > > (set (reg:V4SI 102)
> > >     (reg/v:V4SI 97 [ x ]))
> > >
> > > which eventually results into:
> > > (note 2 25 3 2 NOTE_INSN_DELETED)
> > > (note 3 2 7 2 NOTE_INSN_FUNCTION_BEG)
> > > (note 7 3 8 2 NOTE_INSN_DELETED)
> > > (note 8 7 9 2 NOTE_INSN_DELETED)
> > > (note 9 8 13 2 NOTE_INSN_DELETED)
> > > (note 13 9 15 2 NOTE_INSN_DELETED)
> > > (note 15 13 17 2 NOTE_INSN_DELETED)
> > > (note 17 15 18 2 NOTE_INSN_DELETED)
> > > (note 18 17 22 2 NOTE_INSN_DELETED)
> > > (insn 22 18 23 2 (parallel [
> > >             (set (reg/i:VNx4SI 32 v0)
> > >                 (vec_duplicate:VNx4SI (reg:V4SI 108)))
> > >             (clobber (scratch:VNx16BI))
> > >         ]) "bar.c":7:1 5202 {aarch64_vec_duplicate_vqvnx4si_le}
> > >      (expr_list:REG_DEAD (reg:V4SI 108)
> > >         (nil)))
> > > (insn 23 22 0 2 (use (reg/i:VNx4SI 32 v0)) "bar.c":7:1 -1
> > >      (nil))
> > >
> > > I was wondering if we should add the above special case, of assigning
> > > target = vec in aarch64_expand_vector_init, if initializer is
> > > { vec[0], vec[1], ... } ?
> >
> > I'm not sure it will be easy to detect that.  Won't the inputs to
> > aarch64_expand_vector_init just be plain registers?  It's not a
> > good idea in general to search for definitions of registers
> > during expansion.
> >
> > It would be nice to fix this by lowering svdupq into:
> >
> > (a) a constructor for a 128-bit vector
> > (b) a duplication of the 128-bit vector to fill an SVE vector
> >
> > But I'm not sure what the best way of doing (b) would be.
> > In RTL we can use vec_duplicate, but I don't think gimple
> > has an equivalent construct.  Maybe Richi has some ideas.
>
> On GIMPLE it would be
>
>  _1 = { a, ... }; // (a)
>  _2 = { _1, ... }; // (b)
>
> but I'm not sure if (b), a VL CTOR of fixed len(?) sub-vectors is
> possible?  But at least a CTOR of vectors is what we use to
> concat vectors.
>
> With the recent relaxing of VEC_PERM inputs it's also possible to
> express (b) with a VEC_PERM:
>
>  _2 = VEC_PERM <_1, _1, { 0, 1, 2, 3, 0, 1, 2, 3, ... }>
>
> but again I'm not sure if that repeating 0, 1, 2, 3 is expressible
> for VL vectors (maybe we'd allow "wrapping" here, I'm not sure).
>
Hi,
Thanks for the suggestions, and sorry for the late response in turn.
The attached patch tries to fix the issue by explicitly constructing a CTOR
from svdupq's arguments and then using a VEC_PERM_EXPR with a VL mask
whose encoded elements are {0, 1, ... nargs-1}, with npatterns == nargs and
nelts_per_pattern == 1, to replicate the base vector.  (A rough sketch of
this approach follows after the quoted text at the end of this mail.)

So for example, for the above case,

svint32_t f_32(int32x4_t x)
{
  return svdupq_s32 (x[0], x[1], x[2], x[3]);
}

forwprop1 lowers it to:

  svint32_t _6;
  vector(4) int _8;

  <bb 2> :
  _1 = BIT_FIELD_REF ;
  _2 = BIT_FIELD_REF ;
  _3 = BIT_FIELD_REF ;
  _4 = BIT_FIELD_REF ;
  _8 = {_1, _2, _3, _4};
  _6 = VEC_PERM_EXPR <_8, _8, { 0, 1, 2, 3, ... }>;
  return _6;

which is then eventually optimized to:

  svint32_t _6;

  <bb 2> [local count: 1073741824]:
  _6 = VEC_PERM_EXPR ;
  return _6;

code-gen:

f_32:
        dup     z0.q, z0.q[0]
        ret

Does it look OK?

Thanks,
Prathamesh

> Richard.
>
> > We're planning to implement the ACLE's Neon-SVE bridge:
> > https://github.com/ARM-software/acle/blob/main/main/acle.md#neon-sve-bridge
> > and so we'll need (b) to implement the svdup_neonq functions.
> >
> > Thanks,
> > Richard
> >
>
> --
> Richard Biener
> SUSE Software Solutions Germany GmbH, Frankenstrasse 146, 90461 Nuernberg,
> Germany; GF: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman;
> HRB 36809 (AG Nuernberg)
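
P.S. For reference, here is a rough sketch of the shape of such a fold.  This
is not the attached patch itself: the helper name fold_dupq_to_vec_perm is
made up for illustration, and it uses build_vector_type for the 128-bit base
vector type rather than looking up the backend's own Advanced SIMD mode.  It
only shows the CTOR-plus-VEC_PERM_EXPR idea described above, using the usual
internal APIs (tree.h, gimple.h, gimple-iterator.h, fold-const.h,
vec-perm-indices.h assumed to be included as in any GCC source file):

/* Sketch: lower a call like svdupq_s32 (a0, a1, a2, a3) into
     base = { a0, a1, a2, a3 };              // (a) 128-bit CTOR
     lhs  = VEC_PERM_EXPR <base, base, sel>; // (b) replicate to VL vector
   where SEL is the VL mask { 0, 1, ..., nargs-1, 0, 1, ... }.  */

static gimple *
fold_dupq_to_vec_perm (gimple_stmt_iterator *gsi, gcall *call)
{
  tree lhs = gimple_call_lhs (call);
  tree lhs_type = TREE_TYPE (lhs);              /* VL SVE vector type.  */
  tree elt_type = TREE_TYPE (lhs_type);
  unsigned nargs = gimple_call_num_args (call);

  /* (a) Build the fixed-length base vector from the scalar arguments.  */
  tree base_type = build_vector_type (elt_type, nargs);
  vec<constructor_elt, va_gc> *v;
  vec_alloc (v, nargs);
  for (unsigned i = 0; i < nargs; i++)
    CONSTRUCTOR_APPEND_ELT (v, NULL_TREE, gimple_call_arg (call, i));
  tree base = make_ssa_name (base_type);
  gimple *ctor_stmt
    = gimple_build_assign (base, build_constructor (base_type, v));

  /* (b) Encode the mask with nargs patterns of one element each, so the
     initial { 0, 1, ..., nargs-1 } repeats for any runtime vector length.  */
  poly_uint64 lhs_len = TYPE_VECTOR_SUBPARTS (lhs_type);
  vec_perm_builder sel (lhs_len, nargs, 1);
  for (unsigned i = 0; i < nargs; i++)
    sel.quick_push (i);
  vec_perm_indices indices (sel, 1, nargs);     /* one input, nargs elts.  */
  tree mask_type = build_vector_type (ssizetype, lhs_len);
  tree mask = vec_perm_indices_to_tree (mask_type, indices);

  gimple *perm_stmt
    = gimple_build_assign (lhs, VEC_PERM_EXPR, base, base, mask);

  /* Replace the builtin call with the two new statements.  */
  gimple_seq stmts = NULL;
  gimple_seq_add_stmt_without_update (&stmts, ctor_stmt);
  gimple_seq_add_stmt_without_update (&stmts, perm_stmt);
  gsi_replace_with_seq (gsi, stmts, false);
  return perm_stmt;
}

Because npatterns == nargs and nelts_per_pattern == 1, only the first nargs
mask elements are stored explicitly; the encoding repeats 0, 1, ..., nargs-1
for the rest of the runtime-sized vector, which is what lets the permute be
matched back to a single "dup z0.q, z0.q[0]".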