On 27/02/2024 08:47, Richard Biener wrote:
> On Mon, 26 Feb 2024, Andre Vieira (lists) wrote:
>
>> On 05/02/2024 09:56, Richard Biener wrote:
>>> On Thu, 1 Feb 2024, Andre Vieira (lists) wrote:
>>>
>>>> On 01/02/2024 07:19, Richard Biener wrote:
>>>>> On Wed, 31 Jan 2024, Andre Vieira (lists) wrote:
>>>>>
>>>>> The patch didn't come with a testcase so it's really hard to tell
>>>>> what goes wrong now and how it is fixed ...
>>>>
>>>> My bad! I had a testcase locally but never added it...
>>>>
>>>> However... now I look at it and ran it past Richard S, the codegen
>>>> isn't 'wrong', but it does have the potential to lead to some pretty
>>>> slow codegen, especially for inbranch simdclones where it transforms
>>>> the SVE predicate into an Advanced SIMD vector by inserting the
>>>> elements one at a time...
>>>>
>>>> An example of which can be seen if you do:
>>>>
>>>> gcc -O3 -march=armv8-a+sve -msve-vector-bits=128 -fopenmp-simd t.c -S
>>>>
>>>> with the following t.c:
>>>>
>>>> #pragma omp declare simd simdlen(4) inbranch
>>>> int __attribute__ ((const)) fn5(int);
>>>>
>>>> void fn4 (int *a, int *b, int n)
>>>> {
>>>>   for (int i = 0; i < n; ++i)
>>>>     b[i] = fn5(a[i]);
>>>> }
>>>>
>>>> Now I do have to say, for our main usecase of libmvec we won't have
>>>> any 'inbranch' Advanced SIMD clones, so we avoid that issue... But of
>>>> course that doesn't mean user-code will.
>>>
>>> It seems to use SVE masks with vector(4) and the
>>> ABI says the mask is vector(4) int. You say that's because we choose
>>> an Adv SIMD clone for the SVE VLS vector code (it calls _ZGVnM4v_fn5).
>>>
>>> The vectorizer creates
>>>
>>>   _44 = VEC_COND_EXPR <...>;
>>>
>>> and then vector lowering decomposes this. That means the vectorizer
>>> lacks a check that the target handles this VEC_COND_EXPR.
>>>
>>> Of course I would expect that SVE with VLS vectors is able to
>>> code generate this operation, so it's missing patterns in the end.
>>>
>>> Richard.
>>>
>>
>> What should we do for GCC-14? Going forward I think the right thing to
>> do is to add these patterns. But I am not even going to try to do that
>> right now, and even though we can codegen for this, the result doesn't
>> feel like it would ever be profitable, which means I'd rather not
>> vectorize, or, well, pick a different vector mode if possible.
>>
>> This would be achieved with the change to the targethook. If I change
>> the hook to take modes, using STMT_VINFO_VECTYPE (stmt_vinfo), is that
>> OK for now?
>
> Passing in a mode is OK. I'm still not fully understanding why the
> clone isn't fully specifying 'mode' and if it does not why the
> vectorizer itself can not disregard it.

We could check in the vectorizer that the modes of the parameters and
return type are the same as those of the vector operands and result. But
then we'd also want to make sure we don't reject cases where we have
simdclones with compatible modes, i.e. the same element type but a
multiple of the element count. And that's where we'd get into trouble
again, I think, because we'd want to accept V8SI -> 2x V4SI, but not
V8SI -> 2x VNx4SI (with VLS and aarch64_sve_vg = 2), not because it's
invalid, but because right now the codegen is bad.

It's easier to do this in the targethook, which we can technically also
use to 'rank' simdclones by setting a target_badness value, so in the
future we could decide to assign some 'badness' to influence the ranking
of an SVE simdclone for Advanced SIMD loops vs an Advanced SIMD clone
for Advanced SIMD loops.

This does touch another issue, simdclone costing, which is a larger
issue in general and one we (Arm) might want to approach in the future.
It's a complex issue, because the vectorizer doesn't know the
performance impact of a simdclone. We assume (as we should) that it's
faster than the original scalar, though we currently don't record costs
for either; but we don't know by how much, or how much impact it has, so
the vectorizer can't reason about whether it's beneficial to use a
simdclone if it has to do a lot of operand preparation. We can merely
tell it to use one, or not, and all the other operations in the loop
will determine the costing.

> From the past discussion I understood the existing situation isn't
> as bad as initially thought and no bad things happen right now?

Nope, I thought the compiler would fall apart, but it seems to be able
to transform the operands from one mode into the other, so without the
targethook it just generates slower loops in certain cases, which we'd
rather avoid given the usecase for simdclones is to speed things up ;)

Attached reworked patch.

This patch adds a machine_mode argument to TARGET_SIMD_CLONE_USABLE to
make sure the target can reject a simd_clone based on the vector mode it
is using. This is needed because, for VLS SVE vectorization, the
vectorizer accepts Advanced SIMD simd clones when vectorizing using SVE
types since the simdlens might match; this currently leads to suboptimal
codegen. Other targets do not currently need to use this argument.

gcc/ChangeLog:

	* target.def (TARGET_SIMD_CLONE_USABLE): Add argument.
	* tree-vect-stmts.cc (vectorizable_simd_clone_call): Pass
	vector_mode to call TARGET_SIMD_CLONE_USABLE.
	* config/aarch64/aarch64.cc (aarch64_simd_clone_usable): Add
	argument and use it to reject the use of SVE simd clones with
	Advanced SIMD modes.
	* config/gcn/gcn.cc (gcn_simd_clone_usable): Add unused
	argument.
	* config/i386/i386.cc (ix86_simd_clone_usable): Likewise.