On Thu, 2 Feb 2023 at 20:50, Richard Sandiford wrote:
>
> Prathamesh Kulkarni writes:
> >> >> > I have attached a patch that extends the transform if one half is dup
> >> >> > and the other is a set of constants.
> >> >> > For eg:
> >> >> > int8x16_t f(int8_t x)
> >> >> > {
> >> >> >   return (int8x16_t) { x, 1, x, 2, x, 3, x, 4, x, 5, x, 6, x, 7, x, 8 };
> >> >> > }
> >> >> >
> >> >> > code-gen trunk:
> >> >> > f:
> >> >> >         adrp    x1, .LC0
> >> >> >         ldr     q0, [x1, #:lo12:.LC0]
> >> >> >         ins     v0.b[0], w0
> >> >> >         ins     v0.b[2], w0
> >> >> >         ins     v0.b[4], w0
> >> >> >         ins     v0.b[6], w0
> >> >> >         ins     v0.b[8], w0
> >> >> >         ins     v0.b[10], w0
> >> >> >         ins     v0.b[12], w0
> >> >> >         ins     v0.b[14], w0
> >> >> >         ret
> >> >> >
> >> >> > code-gen with patch:
> >> >> > f:
> >> >> >         dup     v0.16b, w0
> >> >> >         adrp    x0, .LC0
> >> >> >         ldr     q1, [x0, #:lo12:.LC0]
> >> >> >         zip1    v0.16b, v0.16b, v1.16b
> >> >> >         ret
> >> >> >
> >> >> > Bootstrapped+tested on aarch64-linux-gnu.
> >> >> > Does it look OK ?
> >> >>
> >> >> Looks like a nice improvement.  It'll need to wait for GCC 14 now though.
> >> >>
> >> >> However, rather than handle this case specially, I think we should instead
> >> >> take a divide-and-conquer approach: split the initialiser into even and
> >> >> odd elements, find the best way of loading each part, then compare the
> >> >> cost of these sequences + ZIP with the cost of the fallback code (the code
> >> >> later in aarch64_expand_vector_init).
> >> >>
> >> >> For example, doing that would allow:
> >> >>
> >> >>   { x, y, 0, y, 0, y, 0, y, 0, y }
> >> >>
> >> >> to be loaded more easily, even though the even elements aren't wholly
> >> >> constant.
> >> > Hi Richard,
> >> > I have attached a prototype patch based on the above approach.
> >> > It subsumes the special-casing of the above {x, y, x, y, x, y, x, y}
> >> > pattern by generating the same sequence, so I removed that hunk; it
> >> > also improves the following cases:
> >> >
> >> > (a)
> >> > int8x16_t f_s16(int8_t x)
> >> > {
> >> >   return (int8x16_t) { x, 1, x, 2, x, 3, x, 4,
> >> >                        x, 5, x, 6, x, 7, x, 8 };
> >> > }
> >> >
> >> > code-gen trunk:
> >> > f_s16:
> >> >         adrp    x1, .LC0
> >> >         ldr     q0, [x1, #:lo12:.LC0]
> >> >         ins     v0.b[0], w0
> >> >         ins     v0.b[2], w0
> >> >         ins     v0.b[4], w0
> >> >         ins     v0.b[6], w0
> >> >         ins     v0.b[8], w0
> >> >         ins     v0.b[10], w0
> >> >         ins     v0.b[12], w0
> >> >         ins     v0.b[14], w0
> >> >         ret
> >> >
> >> > code-gen with patch:
> >> > f_s16:
> >> >         dup     v0.16b, w0
> >> >         adrp    x0, .LC0
> >> >         ldr     q1, [x0, #:lo12:.LC0]
> >> >         zip1    v0.16b, v0.16b, v1.16b
> >> >         ret
> >> >
> >> > (b)
> >> > int8x16_t f_s16(int8_t x, int8_t y)
> >> > {
> >> >   return (int8x16_t) { x, y, 1, y, 2, y, 3, y,
> >> >                        4, y, 5, y, 6, y, 7, y };
> >> > }
> >> >
> >> > code-gen trunk:
> >> > f_s16:
> >> >         adrp    x2, .LC0
> >> >         ldr     q0, [x2, #:lo12:.LC0]
> >> >         ins     v0.b[0], w0
> >> >         ins     v0.b[1], w1
> >> >         ins     v0.b[3], w1
> >> >         ins     v0.b[5], w1
> >> >         ins     v0.b[7], w1
> >> >         ins     v0.b[9], w1
> >> >         ins     v0.b[11], w1
> >> >         ins     v0.b[13], w1
> >> >         ins     v0.b[15], w1
> >> >         ret
> >> >
> >> > code-gen with patch:
> >> > f_s16:
> >> >         adrp    x2, .LC0
> >> >         dup     v1.16b, w1
> >> >         ldr     q0, [x2, #:lo12:.LC0]
> >> >         ins     v0.b[0], w0
> >> >         zip1    v0.16b, v0.16b, v1.16b
> >> >         ret
> >>
> >> Nice.
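As a source-level sketch of the dup+zip1 idiom above (illustrative only:
vdupq_n_s8 and vzip1q_s8 are the real ACLE intrinsics, but the function
itself is not part of the patch):

#include <arm_neon.h>

/* What the patched expander effectively emits for f above: even lanes
   come from a dup of x, odd lanes from a constant-pool vector, and
   zip1 interleaves the two lower halves.  */
int8x16_t
f_sketch (int8_t x)
{
  int8x16_t evens = vdupq_n_s8 (x);          /* { x, x, ..., x } */
  const int8x16_t odds = { 1, 2, 3, 4, 5, 6, 7, 8,
                           0, 0, 0, 0, 0, 0, 0, 0 };  /* upper half unused */
  return vzip1q_s8 (evens, odds);            /* { x, 1, x, 2, ..., x, 8 } */
}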
> >>
> >> > There are a couple of issues I have come across:
> >> > (1) Choosing an element to pad the vector.
> >> > For eg, if we are initializing a vector, say { x, y, 0, y, 1, y, 2, y }
> >> > with mode V8HI, we split it into { x, 0, 1, 2 } and { y, y, y, y }.
> >> > However, since the mode is V8HI, we need to pad each split vector
> >> > with 4 more elements to match the vector length.
> >> > For { x, 0, 1, 2 } any constant is the obvious choice, while for
> >> > { y, y, y, y } using 'y' is the obvious choice, thus making them:
> >> > { x, 0, 1, 2, 0, 0, 0, 0 } and { y, y, y, y, y, y, y, y }
> >> > These would then be merged using zip1, which discards the upper half
> >> > of both vectors.
> >> > Currently I encoded the above two heuristics in
> >> > aarch64_expand_vector_init_get_padded_elem:
> >> > (a) If the split portion contains a constant, use the constant to pad
> >> >     the vector.
> >> > (b) If the split portion contains only variables, use the most
> >> >     frequently repeating variable to pad the vector.
> >> > I suppose this could be improved, though ?
> >>
> >> I think we should just build two 64-bit vectors (V4HIs) and use a subreg
> >> to fill the upper elements with undefined values.
> >>
> >> I suppose in principle we would have the same problem when splitting
> >> a 64-bit vector into 2 32-bit vectors, but it's probably better to punt
> >> on that for now.  Eventually it would be worth adding full support for
> >> 32-bit Advanced SIMD modes (with necessary restrictions for FP exceptions)
> >> but it's quite a big task.  The 128-bit to 64-bit split is the one that
> >> matters most.
> >>
> >> > (2) Setting the cost for zip1:
> >> > Currently it returns 4 as the cost for the following zip1 insn:
> >> > (set (reg:V8HI 102)
> >> >      (unspec:V8HI [
> >> >              (reg:V8HI 103)
> >> >              (reg:V8HI 108)
> >> >          ] UNSPEC_ZIP1))
> >> > I am not sure if that's correct, or if not, what cost to use in this
> >> > case for zip1 ?
> >>
> >> TBH 4 seems a bit optimistic.  It's COSTS_N_INSNS (1), whereas the
> >> generic advsimd_vec_cost::permute_cost is 2 insns.  But the costs of
> >> inserts are probably underestimated to the same extent, so hopefully
> >> things work out.
> >>
> >> So it's probably best to accept the costs as they're currently given.
> >> Changing them would need extensive testing.
> >>
> >> However, one of the advantages of the split is that it allows the
> >> subvectors to be built in parallel.  When optimising for speed,
> >> it might make sense to take the maximum of the subsequence costs
> >> and add the cost of the zip to that.
> > Hi Richard,
> > Thanks for the suggestions.
> > In the attached patch, it recurses only if nelts == 16 to punt for the
> > 64 -> 32 bit split,
>
> It should be based on the size rather than the number of elements.
> The example we talked about above involved building V8HIs from two
> V4HIs, which is also valid.
Right, sorry, I got mixed up.  The attached patch instead punts if
vector_size == 64 by resorting to the fallback; basing the check on the
size rather than nelts means the V8HI cases are handled too.  For eg:

int16x8_t f(int16_t x)
{
  return (int16x8_t) { x, 1, x, 2, x, 3, x, 4 };
}

code-gen with patch:
f:
        dup     v0.4h, w0
        adrp    x0, .LC0
        ldr     d1, [x0, #:lo12:.LC0]
        zip1    v0.8h, v0.8h, v1.8h
        ret

Just to clarify: we punt on 64-bit vector size because there is no
32-bit vector mode available with which to build two 32-bit vectors for
the even and odd halves and then "extend" them with a subreg ?

It also punts if n_elts < 8, because I am not sure it's profitable to do
the recursion+merging for 4 or fewer elements.
Does it look OK ?
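To illustrate the two 64-bit halves in the V8HI case just shown (a
sketch in intrinsics; vcombine_s16 with a dummy zero upper half stands
in for the undefined-value subreg the patch would use, and the function
name is invented):

#include <arm_neon.h>

/* Each half is built as a 64-bit V4HI; zip1 reads only the low 64 bits
   of each 128-bit operand, so the combined upper halves are irrelevant
   (filled with zeros here purely so the sketch is valid C).  */
int16x8_t
f_v8hi_sketch (int16_t x)
{
  int16x4_t evens = vdup_n_s16 (x);        /* dup  v0.4h, w0 */
  const int16x4_t odds = { 1, 2, 3, 4 };   /* ldr  d1, .LC0  */
  return vzip1q_s16 (vcombine_s16 (evens, vdup_n_s16 (0)),
                     vcombine_s16 (odds, vdup_n_s16 (0)));
  /* zip1 v0.8h, v0.8h, v1.8h  ->  { x, 1, x, 2, x, 3, x, 4 } */
}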
> > and uses std::max (even_init, odd_init) + insn_cost (zip1_insn) for
> > computing the total cost of the sequence.
> >
> > So, for the following case:
> > int8x16_t f_s8(int8_t x)
> > {
> >   return (int8x16_t) { x, 1, x, 2, x, 3, x, 4,
> >                        x, 5, x, 6, x, 7, x, 8 };
> > }
> >
> > it now generates:
> > f_s8:
> >         dup     v0.8b, w0
> >         adrp    x0, .LC0
> >         ldr     d1, [x0, #:lo12:.LC0]
> >         zip1    v0.16b, v0.16b, v1.16b
> >         ret
> >
> > Which I assume is correct, since zip1 will merge the lower halves of
> > the two vectors while leaving the upper halves undefined ?
>
> Yeah, it looks valid, but I would say that zip1 ignores the upper halves
> (rather than leaving them undefined).
Yes, sorry for mis-phrasing.

For the following test:

int16x8_t f_s16 (int16_t x0, int16_t x1, int16_t x2, int16_t x3,
                 int16_t x4, int16_t x5, int16_t x6, int16_t x7)
{
  return (int16x8_t) { x0, x1, x2, x3, x4, x5, x6, x7 };
}

it chose the recursive+zip1 route, since we take
max (cost (odd_init), cost (even_init)) and add the cost of the zip1
insn, which turns out to be less than the cost of the fallback:

f_s16:
        sxth    w0, w0
        sxth    w1, w1
        fmov    d0, x0
        fmov    d1, x1
        ins     v0.h[1], w2
        ins     v1.h[1], w3
        ins     v0.h[2], w4
        ins     v1.h[2], w5
        ins     v0.h[3], w6
        ins     v1.h[3], w7
        zip1    v0.8h, v0.8h, v1.8h
        ret

I assume that's OK, since it has fewer dependencies than the fallback
code-gen even though it's longer ?

With -Os the cost of the sequence is taken as
cost (odd_init) + cost (even_init) + cost (zip1_insn), which turns out
to be the same as the cost of the fallback sequence, so it generates the
fallback code-sequence:

f_s16:
        sxth    w0, w0
        fmov    s0, w0
        ins     v0.h[1], w1
        ins     v0.h[2], w2
        ins     v0.h[3], w3
        ins     v0.h[4], w4
        ins     v0.h[5], w5
        ins     v0.h[6], w6
        ins     v0.h[7], w7
        ret

Thanks,
Prathamesh
>
> Thanks,
> Richard
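For concreteness, the two cost models discussed above can be summarised
as follows (an editorial sketch: the function and its int parameters are
invented, while the real patch compares costs of rtx insn sequences):

/* Speed: the even and odd subsequences have no mutual dependencies and
   can execute in parallel, so only the longer chain matters.
   Size (-Os): every insn must be emitted, so the costs simply add up.  */
static int
split_init_cost (int even_cost, int odd_cost, int zip1_cost, int size_p)
{
  if (size_p)
    return even_cost + odd_cost + zip1_cost;
  return (even_cost > odd_cost ? even_cost : odd_cost) + zip1_cost;
}

Under the speed model the recursive route wins for the f_s16 test above;
under the size model it ties with the fallback, so the fallback is kept.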