On Thu, 18 May 2023 at 13:37, Richard Sandiford wrote:
>
> Prathamesh Kulkarni writes:
> > On Tue, 16 May 2023 at 00:29, Richard Sandiford
> > wrote:
> >>
> >> Prathamesh Kulkarni writes:
> >> > Hi Richard,
> >> > After committing the interleave+zip1 patch for vector initialization,
> >> > it seems to regress the s32 case for this patch:
> >> >
> >> > int32x4_t f_s32(int32_t x)
> >> > {
> >> >   return (int32x4_t) { x, x, x, 1 };
> >> > }
> >> >
> >> > code-gen:
> >> > f_s32:
> >> >         movi    v30.2s, 0x1
> >> >         fmov    s31, w0
> >> >         dup     v0.2s, v31.s[0]
> >> >         ins     v30.s[0], v31.s[0]
> >> >         zip1    v0.4s, v0.4s, v30.4s
> >> >         ret
> >> >
> >> > instead of the expected code-gen:
> >> > f_s32:
> >> >         movi    v31.2s, 0x1
> >> >         dup     v0.4s, w0
> >> >         ins     v0.s[3], v31.s[0]
> >> >         ret
> >> >
> >> > Cost for fallback sequence: 16
> >> > Cost for interleave and zip sequence: 12
> >> >
> >> > For the above case, the cost for the interleave+zip1 sequence is computed as:
> >> > halves[0]:
> >> > (set (reg:V2SI 96)
> >> >      (vec_duplicate:V2SI (reg/v:SI 93 [ x ])))
> >> > cost = 8
> >> >
> >> > halves[1]:
> >> > (set (reg:V2SI 97)
> >> >      (const_vector:V2SI [
> >> >              (const_int 1 [0x1]) repeated x2
> >> >          ]))
> >> > (set (reg:V2SI 97)
> >> >      (vec_merge:V2SI (vec_duplicate:V2SI (reg/v:SI 93 [ x ]))
> >> >          (reg:V2SI 97)
> >> >          (const_int 1 [0x1])))
> >> > cost = 8
> >> >
> >> > followed by:
> >> > (set (reg:V4SI 95)
> >> >      (unspec:V4SI [
> >> >              (subreg:V4SI (reg:V2SI 96) 0)
> >> >              (subreg:V4SI (reg:V2SI 97) 0)
> >> >          ] UNSPEC_ZIP1))
> >> > cost = 4
> >> >
> >> > So the total cost becomes
> >> > max(costs[0], costs[1]) + zip1_insn_cost
> >> > = max(8, 8) + 4
> >> > = 12
> >> >
> >> > While the fallback rtl sequence is:
> >> > (set (reg:V4SI 95)
> >> >      (vec_duplicate:V4SI (reg/v:SI 93 [ x ])))
> >> > cost = 8
> >> > (set (reg:SI 98)
> >> >      (const_int 1 [0x1]))
> >> > cost = 4
> >> > (set (reg:V4SI 95)
> >> >      (vec_merge:V4SI (vec_duplicate:V4SI (reg:SI 98))
> >> >          (reg:V4SI 95)
> >> >          (const_int 8 [0x8])))
> >> > cost = 4
> >> >
> >> > So the total cost = 8 + 4 + 4 = 16, and we choose the interleave+zip1 sequence.
> >> >
> >> > I think the issue is probably that for the interleave+zip1 sequence we take
> >> > max(costs[0], costs[1]) to reflect that both halves are interleaved,
> >> > but for the fallback seq we use seq_cost, which assumes serial execution
> >> > of the insns in the sequence.
> >> > For the above fallback sequence,
> >> > (set (reg:V4SI 95)
> >> >      (vec_duplicate:V4SI (reg/v:SI 93 [ x ])))
> >> > and
> >> > (set (reg:SI 98)
> >> >      (const_int 1 [0x1]))
> >> > could be executed in parallel, which would make its cost max(8, 4) + 4 = 12.
> >>
> >> Agreed.
> >>
> >> A good-enough substitute for this might be to ignore scalar moves
> >> (for both alternatives) when costing for speed.
> > Thanks for the suggestions. Just wondering for aarch64, if there's an easy
> > way we can check if an insn is a scalar move, similar to riscv's
> > scalar_move_insn_p that checks if get_attr_type(insn) is TYPE_VIMOVXV
> > or TYPE_VFMOVFV ?
>
> It should be enough to check that the pattern is a SET:
>
> (a) whose SET_DEST has a scalar mode and
> (b) whose SET_SRC is an aarch64_mov_operand

Hi Richard,
Thanks for the suggestions. The attached patch calls seq_cost to compute
the cost of the sequence and then subtracts the cost of each scalar move
insn from it. Does that look OK ?
The patch is under bootstrap+test on aarch64-linux-gnu.
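In essence, the approach is something like the sketch below. This is
illustrative only, not necessarily the exact attached patch: the helper
names scalar_move_insn_p and seq_cost_ignoring_scalar_moves are
placeholders, and the code is assumed to live in aarch64.cc where the
usual backend headers are available.

/* Sketch: return true if INSN is a single SET whose destination has a
   scalar mode and whose source is an aarch64_mov_operand, i.e. a scalar
   move that should be ignored when costing for speed.  */
static bool
scalar_move_insn_p (rtx_insn *insn)
{
  rtx set = single_set (insn);
  if (!set)
    return false;
  rtx src = SET_SRC (set);
  rtx dest = SET_DEST (set);
  return (is_a<scalar_mode> (GET_MODE (dest))
          && aarch64_mov_operand (src, GET_MODE (dest)));
}

/* Sketch: cost SEQ with seq_cost, then remove the contribution of each
   scalar move insn so that scalar moves do not affect the comparison
   between the fallback and interleave+zip1 sequences.  */
static unsigned
seq_cost_ignoring_scalar_moves (rtx_insn *seq, bool speed)
{
  unsigned cost = seq_cost (seq, speed);
  for (rtx_insn *insn = seq; insn; insn = NEXT_INSN (insn))
    if (NONDEBUG_INSN_P (insn) && scalar_move_insn_p (insn))
      cost -= insn_cost (insn, speed);
  return cost;
}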
After applying the single-constant case patch on top, the cost of the
fallback sequence is now reduced to 12 instead of 16:

Cost before ignoring scalar moves: 16
Ignoring cost = 4 for:
(set (reg:SI 98)
     (const_int 1 [0x1]))
Cost after ignoring scalar moves: 12

fallback_seq_cost = 12, zip1_seq_cost = 12

fallback_seq:
(set (reg:V4SI 95)
     (vec_duplicate:V4SI (reg/v:SI 93 [ x ])))
(set (reg:SI 98)
     (const_int 1 [0x1]))
(set (reg:V4SI 95)
     (vec_merge:V4SI (vec_duplicate:V4SI (reg:SI 98))
         (reg:V4SI 95)
         (const_int 8 [0x8])))

zip1_seq:
(set (reg:V2SI 96)
     (vec_duplicate:V2SI (reg/v:SI 93 [ x ])))
(set (reg:V2SI 97)
     (const_vector:V2SI [
             (const_int 1 [0x1]) repeated x2
         ]))
(set (reg:V2SI 97)
     (vec_merge:V2SI (vec_duplicate:V2SI (reg/v:SI 93 [ x ]))
         (reg:V2SI 97)
         (const_int 1 [0x1])))
(set (reg:V4SI 95)
     (unspec:V4SI [
             (subreg:V4SI (reg:V2SI 96) 0)
             (subreg:V4SI (reg:V2SI 97) 0)
         ] UNSPEC_ZIP1))

So the costs for both sequences are now tied at 12, and we choose the
fallback sequence, which "fixes" this case. More generally, though, if the
costs for both sequences are tied, how do we evaluate which sequence would
be better ? Currently we choose the fallback sequence when the costs are
the same.

>
> >> > I was wondering if we should make the cost for the interleave+zip1
> >> > sequence more conservative
> >> > by not taking max, but summing up costs[0] + costs[1] even for speed ?
> >> > For this case,
> >> > that would be 8 + 8 + 4 = 20.
> >> >
> >> > It generates the fallback sequence for the other cases (s8, s16, s64)
> >> > from the test-case.
> >>
> >> What does it do for the tests in the interleave+zip1 patch?  If it doesn't
> >> make a difference there then it sounds like we don't have enough tests. :)
> > Oh right, the tests in the interleave+zip1 patch only check the s16 case,
> > sorry about that :/
> > Looking briefly at the code generated for the s8, s32 and s64 cases:
> > (a) s8 and s16 seem to use the same sequence for all cases.
> > (b) s64 seems to use the fallback sequence.
> > (c) For vec-init-21.c, the s8 and s16 cases prefer the fallback sequence
> > because the costs are tied,
> > while the s32 case prefers interleave+zip1:
> >
> > int32x4_t f_s32(int32_t x, int32_t y)
> > {
> >   return (int32x4_t) { x, y, 1, 2 };
> > }
> >
> > Code-gen with interleave+zip1 sequence:
> > f_s32:
> >         movi    v31.2s, 0x1
> >         movi    v0.2s, 0x2
> >         ins     v31.s[0], w0
> >         ins     v0.s[0], w1
> >         zip1    v0.4s, v31.4s, v0.4s
> >         ret
> >
> > Code-gen with fallback sequence:
> > f_s32:
> >         adrp    x2, .LC0
> >         ldr     q0, [x2, #:lo12:.LC0]
> >         ins     v0.s[0], w0
> >         ins     v0.s[1], w1
> >         ret
> >
> > Fallback sequence cost = 20
> > interleave+zip1 sequence cost = 12
> > I assume the interleave+zip1 sequence is better in this case (chosen currently) ?
> >
> > I will send a follow-up patch to add cases for s8, s16 and s64 soon.
> >>
> >> Summing is only conservative if the fallback sequence is somehow "safer".
> >> But I don't think it is.  Building an N-element vector from N scalars
> >> can be done using N instructions in the fallback case and N+1 instructions
> >> in the interleave+zip1 case.  But the interleave+zip1 case is still
> >> better (speedwise) for N==16.
> > Ack, thanks.
> > Should we also prefer interleave+zip1 when the costs are tied ?
>
> No, because the ZIP1 approach requires more temporary registers (in
> general).  And we're making an optimistic (but reasonable) assumption
> that enough vector pipes are free to do the work in parallel.
Oh right, thanks, the zip1 sequence would also increase register pressure.
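To make the current tie-breaking rule explicit, the selection is
effectively the sketch below. It is purely illustrative: the function
name and the direct emit_insn call are simplifications, reusing the
fallback_seq/zip1_seq names from the dump above.

/* Illustrative only: pick between the two candidate sequences, keeping
   the fallback sequence on a cost tie since the zip1 sequence generally
   needs more temporaries.  */
static void
emit_cheaper_init_sequence (rtx_insn *fallback_seq, unsigned fallback_seq_cost,
                            rtx_insn *zip1_seq, unsigned zip1_seq_cost)
{
  rtx_insn *seq
    = zip1_seq_cost < fallback_seq_cost ? zip1_seq : fallback_seq;
  emit_insn (seq);
}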
>
> > For eg, for the following case:
> > int32x4_t f_s32(int32_t x)
> > {
> >   return (int32x4_t) { x, 1, x, 1 };
> > }
> >
> > the costs for both the fallback and interleave+zip1 sequences are 12, and
> > we currently choose the fallback sequence.
> > Code-gen:
> > f_s32:
> >         movi    v0.4s, 0x1
> >         fmov    s31, w0
> >         ins     v0.s[0], v31.s[0]
> >         ins     v0.s[2], v31.s[0]
> >         ret
> >
> > while, if we choose interleave+zip1, the code-gen is:
> > f_s32:
> >         dup     v31.2s, w0
> >         movi    v0.2s, 0x1
> >         zip1    v0.4s, v31.4s, v0.4s
> >         ret
> >
> > I suppose the interleave+zip1 sequence is better in this case ?
> > And more generally, if the costs are tied, would it be OK to prefer the
> > interleave+zip1 sequence since it will
> > have parallel execution of the two halves, which may not always be the
> > case with the fallback sequence ?
>
> But when looking at these sequences, it's important to ask not just
> which sequence wins, but why it wins.  If the zip1 version is better
> (I agree it probably is), then the question is why that isn't showing
> up in the costs.
>
> > Also, would it be OK to commit the above patch that addresses the
> > issue with the single-constant case and xfail the s32 case for now ?
>
> I think it'd be better to wait until we can avoid the XFAIL.  That way
> it's easier to see that we're making forward progress.
Sure.

Thanks,
Prathamesh

>
> Thanks,
> Richard