From: Richard Biener
Date: Tue, 9 Nov 2021 12:01:27 +0100
Subject: Re: [PATCH] vect: Hookize better_loop_vinfo_p
To: Richard Biener via Gcc-patches, Richard Biener, Richard Sandiford

On Tue, Nov 9, 2021 at 11:47 AM Richard Sandiford wrote:
>
> Richard Biener via Gcc-patches writes:
> > On Mon, Nov 8, 2021 at 3:46 PM Richard Sandiford
> > wrote:
> >>
> >> Richard Biener via Gcc-patches writes:
> >> > On Mon, Nov 8, 2021 at 2:06 PM Richard Sandiford
> >> > wrote:
> >> >>
> >> >> Richard Biener writes:
> >> >> > On Mon, Nov 8, 2021 at 11:45 AM Richard Sandiford via Gcc-patches
> >> >> > wrote:
> >> >> >>
> >> >> >> One of the things we want to do on AArch64 is compare vector loops
> >> >> >> side-by-side and pick the best one.  For some targets, we want this
> >> >> >> to be based on issue rates as well as the usual latency-based costs
> >> >> >> (at least for loops with relatively high iteration counts).
> >> >> >>
> >> >> >> The current approach to doing this is: when costing vectorisation
> >> >> >> candidate A, try to guess what the other main candidate B will look
> >> >> >> like and adjust A's latency-based cost up or down based on the likely
> >> >> >> difference between A and B's issue rates.  This effectively means
> >> >> >> that we try to cost parts of B at the same time as A, without actually
> >> >> >> being able to see B.
> >> >> >
> >> >> > I think the idea of the current costing is that compares are always
> >> >> > to the original scalar loop (with comparing the resulting magic
> >> >> > cost numbers) so that by means of transitivity comparing two
> >> >> > vector variants work as well.  So I'm not sure where 'variant B'
> >> >> > comes into play here?
> >> >>
> >> >> E.g. A could be SVE and B could be Advanced SIMD, or A could be
> >> >> SVE with fully-packed vectors and B could be SVE with partially
> >> >> unpacked vectors.
> >> >>
> >> >> One motivating case is Neoverse V1, which is a 256-bit SVE target.
> >> >> There, scalar code, Advanced SIMD code and SVE code have different issue
> >> >> characteristics.  SVE vector ops generally have the same latencies as
> >> >> the corresponding Advanced SIMD ops.  Advanced SIMD ops generally
> >> >> have double the throughput of SVE ones, so that the overall bit-for-bit
> >> >> throughputs are roughly equal.  However, there are differences due to
> >> >> predication, load/store handling, and so on, and those differences
> >> >> should be decisive in some cases.
> >> >>
> >> >> For SLP, latency-based costs are fine.  But for loop costs, they hide
> >> >> too many details.  What works best in practice, both for vector vs.
> >> >> vector and vector vs. scalar, is:
> >> >>
> >> >> (1) compare issue rates between two loop bodies (vector vs. vector
> >> >>     or vector vs. scalar)
> >> >> (2) if issue rates are equal for a given number of scalar iterations,
> >> >>     compare the latencies of the loop bodies, as we do now
> >> >> (3) if both the above are equal, compare the latencies of the prologue
> >> >>     and epilogue, as we do now
> >> >>
> >> >> It's very difficult to combine latency and issue rate information
> >> >> into a single per-statement integer, in such a way that the integers
> >> >> can just be summed and compared.
> >> >
> >> > Yeah, so the idea I had when proposing the init_cost/add_stmt/finish_cost
> >> > was that finish_cost would determine the issue rate cost part, for
> >> > example if the uarch can issue 2 ops with ISA A and 4 ops with ISA B
> >> > then finish_cost would compute the number of cycles required to
> >> > issue all vector stmts.  That yields sth like IPC the latency cost could
> >> > be divided by.  Doing that for both ISA A and ISA B would produce
> >> > weighted latency values that can be compared?
Alternatively > >> > you can of course simulate issue with the actual instruction latenci= es > >> > in mind and produce an overall iteration latency number. > >> > > >> > Comparing just latency or issue rate in isolation is likely not good > >> > enough? > >> > >> In practice, comparing issue rates in isolation does seem to be what w= e > >> want as the first level of comparison. I don't think total latency / > >> issue rate is really a meaningful combination. It makes sense to the > >> extent that =E2=80=9Clower latency =3D=3D good=E2=80=9D and =E2=80=9Ch= igher issue rate =3D=3D good=E2=80=9D, > >> but I don't think it's the case that halving the total latency is > >> equally valuable as doubling the issue rate. In practice the latter i= s > >> a significant win while the former might or might not be. > >> > >> total latency / issue rate also runs the risk of double-counting: > >> reducing the number of ops decreases the total latency *and* increases > >> the issue rate, which could have a quadratic effect. (This is why we > >> don't decrease SVE costs when we think that the SVE code will issue > >> more quickly than the Advanced SIMD code.) > >> > >> For a long-running loop, the only latencies that really matter are > >> for loop-carried dependencies. The issue rate information takes > >> those into account, in that it tracks reduction and (with later > >> patches) induction limiters. > > > > I see. I'm somewhat worried that we basically move all intelligence > > to the target hooks this way, requiring each target to duplicate code. > > Yeah, I agree that's a concern. > > > In fact one item on my list is to try tackling instruction issue on > > the vectorizer side where we have an idea on the data dependencies > > where the target then would continue to mostly work on the > > instruction level, returning "I can issue at most 4 of these, each > > with a latency of 2", > > plus globally specifying "I can issue 6 ops per clock". 
>
> This would be too simplistic for what we need on AArch64.  The broad
> summary for Neoverse V1 would be “I can issue two SVE ops per cycle”
> and “I can issue four Advanced SIMD ops per cycle”, but that hides
> a lot of the details that matter.
>
> Currently we model 4 issue rates:
>
> (1) number of stores per cycle
> (2) number of loads and stores per cycle
> (3) number of “general” operations per cycle
> (4) number of predicate operations per cycle
>
> For Neoverse V1, some operations count against multiple issue rates.
> E.g. a scalar store counts against (1) and (2).  A vector store counts
> against (1), (2) and (3).
>
> In some cases we also take renaming into account, which could perhaps
> be seen as a (5).  I guess we would need more issue rates to handle
> asymmetrical ALUs properly (a bit like (1) and (2) above).
>
> So I guess the target-independent way would need to maintain a
> target-configurable tuple of issue rates, with each statement
> yielding a tuple of ops to count against those rates.

Yeah, I agree it's complicated.  I do hope the job of tracking
dependent ops will become easier when we can just shove an SLP tree
to the target for this - at the moment for non-SLP the target will
have to re-construct dependencies from the stmt_info SSA ops in some
way, that's clearly sub-optimal (esp. considering patterns).

We need this kind of rough analysis in other loop optimization contexts
as well - for example loop distribution would mainly look at (1) and (2),
and a possible unroll pass would need to do similar analysis.  For
unrolling we'd also need to model instruction fetch somehow since you
cannot issue what you didn't fetch (but fetch is difficult due to the
lack of a schedule on GIMPLE).  Since it's a problem whose solution might
help more generally, I was hoping that some high-level modeling would be
good enough (eventually the high-level model could extract most info from
the DFA even...).
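The tuple idea above can be sketched as follows (a minimal illustration; issue_kind, issue_model and cycles_per_iter are invented names for this sketch, not the GCC interface):

```cpp
#include <algorithm>
#include <array>

/* Hypothetical sketch of a target-configurable tuple of issue rates,
   mirroring rates (1)-(4) described above.  */
enum issue_kind { STORES, LOADS_AND_STORES, GENERAL_OPS, PRED_OPS,
		  NUM_ISSUE_KINDS };

struct issue_model {
  /* Ops the target can issue per cycle, for each rate.  */
  std::array<double, NUM_ISSUE_KINDS> ops_per_cycle;
  /* Ops accumulated per loop iteration, for each rate.  */
  std::array<double, NUM_ISSUE_KINDS> ops {};

  /* A vector store counts against (1), (2) and (3), as in the
     Neoverse V1 example above.  */
  void add_vector_store () {
    ops[STORES] += 1;
    ops[LOADS_AND_STORES] += 1;
    ops[GENERAL_OPS] += 1;
  }

  /* Estimated cycles per iteration: the most heavily loaded
     resource determines the issue rate.  */
  double cycles_per_iter () const {
    double worst = 0;
    for (int i = 0; i < NUM_ISSUE_KINDS; ++i)
      worst = std::max (worst, ops[i] / ops_per_cycle[i]);
    return worst;
  }
};
```

The point of the tuple is that one statement may charge several rates at once, so the binding resource (not a single summed integer) determines the cycle estimate.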
> That's probably worth doing, but even then, I think there would
> still be cases in which the target would need to force the choice
> based on target-specific properties, even if it's just as a tie
> breaker when the target-independent checks come back equal.
>
> > So when we rework the costing side we'll have to deal with the target
> > doing magic that's not per insn and somehow preserve it - that might
> > prove mightily complicated ...
> >
> > That said, if you are prepared to deal with this on the arm side
> > when such reorg ever happens then I'm fine with the two patches.
>
> Thanks.  And yeah, it would be up to us to keep the target code updated
> if the target-independent costing improves.
>
> Although it might not look like it yet, the full patch series simplifies
> the current target code rather than making it more complicated. :-)
> The patches also make things more direct and so hopefully easier
> to maintain.
>
> If target-independent code starts asking for issue rates (in the tuple
> form sketched out above) then I think we're in a good position to
> provide it.
>
> Thanks,
> Richard
>
> >
> > Thanks,
> > Richard.
> >
> >> Thanks,
> >> Richard
> >>
> >> >
> >> >> So returning to the above, when costing an SVE loop A, we currently
> >> >> collect on-the-side issue information about both A and the likely
> >> >> Advanced SIMD implementation B.  If B would have a higher throughput
> >> >> than A then we multiply A's latency-based costs by:
> >> >>
> >> >>   ceil(B throughput/A throughput)
> >> >>
> >> >> The problem is, we don't actually know whether B exists or what it
> >> >> would look like (given that Advanced SIMD and SVE have different
> >> >> features).
> >> >>
> >> >> In principle, we should do the same in reverse, but since SVE needs
> >> >> fewer ops than Advanced SIMD, the latencies are usually smaller already.
> >> >>
> >> >> We also do something similar for the scalar code.  When costing both
> >> >> Advanced SIMD and SVE, we try to estimate the issue rate of the
> >> >> original scalar code.  If the scalar code could issue more quickly
> >> >> than the equivalent vector code, we increase the latency-based cost
> >> >> of the vector code to account for that.
> >> >>
> >> >> The idea of the patch (and others) is that we can do the (1), (2),
> >> >> (3) comparison above directly rather than indirectly.  We can also
> >> >> use the issue information calculated while costing the scalar code,
> >> >> rather than having to estimate the scalar issue information while
> >> >> costing the vector code.
> >> >>
> >> >> >> This is needlessly indirect and complex.  It was a compromise due
> >> >> >> to the code being added (too) late in the GCC 11 cycle, so that
> >> >> >> target-independent changes weren't possible.
> >> >> >>
> >> >> >> The target-independent code already compares two candidate loop_vec_infos
> >> >> >> side-by-side, so that information about A and B above are available
> >> >> >> directly.  This patch creates a way for targets to hook into this
> >> >> >> comparison.
> >> >> >
> >> >> > You mean it has both loop_vec_infos.  But as said I'm not sure it's a good
> >> >> > idea to compare those side-by-side - will that not lead to _more_ special-casing
> >> >> > since you need to have a working compare to the scalar variant as well
> >> >> > to determine the vectorization threshold?
> >> >>
> >> >> A later patch allows the code to do the same comparison for vector
> >> >> vs. scalar.  It makes the target code significantly simpler compared
> >> >> to now, since add_stmt_cost only needs to consider the latency and
> >> >> issue rate of the code it's actually supposed to be costing, rather than
> >> >> trying also to estimate the issue rate of the scalar code and the issue
> >> >> rate of the Advanced SIMD code.
> >> >>
> >> >> Thanks,
> >> >> Richard
> >> >>
> >> >> >
> >> >> >>
> >> >> >> The AArch64 code can therefore hook into better_main_loop_than_p to
> >> >> >> compare issue rates.  If the issue rate comparison isn't decisive,
> >> >> >> the code can fall back to the normal latency-based comparison instead.
> >> >> >>
> >> >> >> Tested on aarch64-linux-gnu and x86_64-linux-gnu.  OK to install?
> >> >> >>
> >> >> >> Richard
> >> >> >>
> >> >> >>
> >> >> >> gcc/
> >> >> >>         * tree-vectorizer.h (vector_costs::better_main_loop_than_p)
> >> >> >>         (vector_costs::better_epilogue_loop_than_p)
> >> >> >>         (vector_costs::compare_inside_loop_cost)
> >> >> >>         (vector_costs::compare_outside_loop_cost): Likewise.
> >> >> >>         * tree-vectorizer.c (vector_costs::better_main_loop_than_p)
> >> >> >>         (vector_costs::better_epilogue_loop_than_p)
> >> >> >>         (vector_costs::compare_inside_loop_cost)
> >> >> >>         (vector_costs::compare_outside_loop_cost): New functions,
> >> >> >>         containing code moved from...
> >> >> >>         * tree-vect-loop.c (vect_better_loop_vinfo_p): ...here.
> >> >> >> ---
> >> >> >>  gcc/tree-vect-loop.c  | 142 ++---------------------------
> >> >> >>  gcc/tree-vectorizer.c | 204 ++++++++++++++++++++++++++++++++++++++++
> >> >> >>  gcc/tree-vectorizer.h |  17 ++++
> >> >> >>  3 files changed, 226 insertions(+), 137 deletions(-)
> >> >> >>
> >> >> >> diff --git a/gcc/tree-vect-loop.c b/gcc/tree-vect-loop.c
> >> >> >> index dd4a363fee5..c9ee2e15e35 100644
> >> >> >> --- a/gcc/tree-vect-loop.c
> >> >> >> +++ b/gcc/tree-vect-loop.c
> >> >> >> @@ -2784,144 +2784,12 @@ vect_better_loop_vinfo_p (loop_vec_info new_loop_vinfo,
> >> >> >>        return new_simdlen_p;
> >> >> >>      }
> >> >> >>
> >> >> >> -  loop_vec_info main_loop = LOOP_VINFO_ORIG_LOOP_INFO (old_loop_vinfo);
> >> >> >> -  if (main_loop)
> >> >> >> -    {
> >> >> >> -      poly_uint64 main_poly_vf = LOOP_VINFO_VECT_FACTOR (main_loop);
> >> >> >> -      unsigned HOST_WIDE_INT main_vf;
> >> >> >> -      unsigned HOST_WIDE_INT old_factor, new_factor, old_cost, new_cost;
> >> >> >> -      /* If we can determine how many iterations are left for the epilogue
> >> >> >> -        loop, that is if both the main loop's vectorization factor and number
> >> >> >> -        of iterations are constant, then we use them to calculate the cost of
> >> >> >> -        the epilogue loop together with a 'likely value' for the epilogues
> >> >> >> -        vectorization factor.  Otherwise we use the main loop's vectorization
> >> >> >> -        factor and the maximum poly value for the epilogue's.  If the target
> >> >> >> -        has not provided with a sensible upper bound poly vectorization
> >> >> >> -        factors are likely to be favored over constant ones.  */
> >> >> >> -      if (main_poly_vf.is_constant (&main_vf)
> >> >> >> -         && LOOP_VINFO_NITERS_KNOWN_P (main_loop))
> >> >> >> -       {
> >> >> >> -         unsigned HOST_WIDE_INT niters
> >> >> >> -           = LOOP_VINFO_INT_NITERS (main_loop) % main_vf;
> >> >> >> -         HOST_WIDE_INT old_likely_vf
> >> >> >> -           = estimated_poly_value (old_vf, POLY_VALUE_LIKELY);
> >> >> >> -         HOST_WIDE_INT new_likely_vf
> >> >> >> -           = estimated_poly_value (new_vf, POLY_VALUE_LIKELY);
> >> >> >> -
> >> >> >> -         /* If the epilogue is using partial vectors we account for the
> >> >> >> -            partial iteration here too.  */
> >> >> >> -         old_factor = niters / old_likely_vf;
> >> >> >> -         if (LOOP_VINFO_USING_PARTIAL_VECTORS_P (old_loop_vinfo)
> >> >> >> -             && niters % old_likely_vf != 0)
> >> >> >> -           old_factor++;
> >> >> >> -
> >> >> >> -         new_factor = niters / new_likely_vf;
> >> >> >> -         if (LOOP_VINFO_USING_PARTIAL_VECTORS_P (new_loop_vinfo)
> >> >> >> -             && niters % new_likely_vf != 0)
> >> >> >> -           new_factor++;
> >> >> >> -       }
> >> >> >> -      else
> >> >> >> -       {
> >> >> >> -         unsigned HOST_WIDE_INT main_vf_max
> >> >> >> -           = estimated_poly_value (main_poly_vf, POLY_VALUE_MAX);
> >> >> >> -
> >> >> >> -         old_factor = main_vf_max / estimated_poly_value (old_vf,
> >> >> >> -                                                          POLY_VALUE_MAX);
> >> >> >> -         new_factor = main_vf_max / estimated_poly_value (new_vf,
> >> >> >> -                                                          POLY_VALUE_MAX);
> >> >> >> -
> >> >> >> -         /* If the loop is not using partial vectors then it will iterate one
> >> >> >> -            time less than one that does.  It is safe to subtract one here,
> >> >> >> -            because the main loop's vf is always at least 2x bigger than that
> >> >> >> -            of an epilogue.  */
> >> >> >> -         if (!LOOP_VINFO_USING_PARTIAL_VECTORS_P (old_loop_vinfo))
> >> >> >> -           old_factor -= 1;
> >> >> >> -         if (!LOOP_VINFO_USING_PARTIAL_VECTORS_P (new_loop_vinfo))
> >> >> >> -           new_factor -= 1;
> >> >> >> -       }
> >> >> >> -
> >> >> >> -      /* Compute the costs by multiplying the inside costs with the factor and
> >> >> >> -        add the outside costs for a more complete picture.  The factor is the
> >> >> >> -        amount of times we are expecting to iterate this epilogue.  */
> >> >> >> -      old_cost = old_loop_vinfo->vector_costs->body_cost () * old_factor;
> >> >> >> -      new_cost = new_loop_vinfo->vector_costs->body_cost () * new_factor;
> >> >> >> -      old_cost += old_loop_vinfo->vector_costs->outside_cost ();
> >> >> >> -      new_cost += new_loop_vinfo->vector_costs->outside_cost ();
> >> >> >> -      return new_cost < old_cost;
> >> >> >> -    }
> >> >> >> -
> >> >> >> -  /* Limit the VFs to what is likely to be the maximum number of iterations,
> >> >> >> -     to handle cases in which at least one loop_vinfo is fully-masked.  */
> >> >> >> -  HOST_WIDE_INT estimated_max_niter = likely_max_stmt_executions_int (loop);
> >> >> >> -  if (estimated_max_niter != -1)
> >> >> >> -    {
> >> >> >> -      if (known_le (estimated_max_niter, new_vf))
> >> >> >> -       new_vf = estimated_max_niter;
> >> >> >> -      if (known_le (estimated_max_niter, old_vf))
> >> >> >> -       old_vf = estimated_max_niter;
> >> >> >> -    }
> >> >> >> -
> >> >> >> -  /* Check whether the (fractional) cost per scalar iteration is lower
> >> >> >> -     or higher: new_inside_cost / new_vf vs. old_inside_cost / old_vf.  */
> >> >> >> -  poly_int64 rel_new = new_loop_vinfo->vector_costs->body_cost () * old_vf;
> >> >> >> -  poly_int64 rel_old = old_loop_vinfo->vector_costs->body_cost () * new_vf;
> >> >> >> -
> >> >> >> -  HOST_WIDE_INT est_rel_new_min
> >> >> >> -    = estimated_poly_value (rel_new, POLY_VALUE_MIN);
> >> >> >> -  HOST_WIDE_INT est_rel_new_max
> >> >> >> -    = estimated_poly_value (rel_new, POLY_VALUE_MAX);
> >> >> >> -
> >> >> >> -  HOST_WIDE_INT est_rel_old_min
> >> >> >> -    = estimated_poly_value (rel_old, POLY_VALUE_MIN);
> >> >> >> -  HOST_WIDE_INT est_rel_old_max
> >> >> >> -    = estimated_poly_value (rel_old, POLY_VALUE_MAX);
> >> >> >> -
> >> >> >> -  /* Check first if we can make out an unambiguous total order from the minimum
> >> >> >> -     and maximum estimates.  */
> >> >> >> -  if (est_rel_new_min < est_rel_old_min
> >> >> >> -      && est_rel_new_max < est_rel_old_max)
> >> >> >> -    return true;
> >> >> >> -  else if (est_rel_old_min < est_rel_new_min
> >> >> >> -          && est_rel_old_max < est_rel_new_max)
> >> >> >> -    return false;
> >> >> >> -  /* When old_loop_vinfo uses a variable vectorization factor,
> >> >> >> -     we know that it has a lower cost for at least one runtime VF.
> >> >> >> -     However, we don't know how likely that VF is.
> >> >> >> -
> >> >> >> -     One option would be to compare the costs for the estimated VFs.
> >> >> >> -     The problem is that that can put too much pressure on the cost
> >> >> >> -     model.  E.g. if the estimated VF is also the lowest possible VF,
> >> >> >> -     and if old_loop_vinfo is 1 unit worse than new_loop_vinfo
> >> >> >> -     for the estimated VF, we'd then choose new_loop_vinfo even
> >> >> >> -     though (a) new_loop_vinfo might not actually be better than
> >> >> >> -     old_loop_vinfo for that VF and (b) it would be significantly
> >> >> >> -     worse at larger VFs.
> >> >> >> -
> >> >> >> -     Here we go for a hacky compromise: pick new_loop_vinfo if it is
> >> >> >> -     no more expensive than old_loop_vinfo even after doubling the
> >> >> >> -     estimated old_loop_vinfo VF.  For all but trivial loops, this
> >> >> >> -     ensures that we only pick new_loop_vinfo if it is significantly
> >> >> >> -     better than old_loop_vinfo at the estimated VF.  */
> >> >> >> -
> >> >> >> -  if (est_rel_old_min != est_rel_new_min
> >> >> >> -      || est_rel_old_max != est_rel_new_max)
> >> >> >> -    {
> >> >> >> -      HOST_WIDE_INT est_rel_new_likely
> >> >> >> -       = estimated_poly_value (rel_new, POLY_VALUE_LIKELY);
> >> >> >> -      HOST_WIDE_INT est_rel_old_likely
> >> >> >> -       = estimated_poly_value (rel_old, POLY_VALUE_LIKELY);
> >> >> >> -
> >> >> >> -      return est_rel_new_likely * 2 <= est_rel_old_likely;
> >> >> >> -    }
> >> >> >> -
> >> >> >> -  /* If there's nothing to choose between the loop bodies, see whether
> >> >> >> -     there's a difference in the prologue and epilogue costs.  */
> >> >> >> -  auto old_outside_cost = old_loop_vinfo->vector_costs->outside_cost ();
> >> >> >> -  auto new_outside_cost = new_loop_vinfo->vector_costs->outside_cost ();
> >> >> >> -  if (new_outside_cost != old_outside_cost)
> >> >> >> -    return new_outside_cost < old_outside_cost;
> >> >> >> +  const auto *old_costs = old_loop_vinfo->vector_costs;
> >> >> >> +  const auto *new_costs = new_loop_vinfo->vector_costs;
> >> >> >> +  if (loop_vec_info main_loop = LOOP_VINFO_ORIG_LOOP_INFO (old_loop_vinfo))
> >> >> >> +    return new_costs->better_epilogue_loop_than_p (old_costs, main_loop);
> >> >> >>
> >> >> >> -  return false;
> >> >> >> +  return new_costs->better_main_loop_than_p (old_costs);
> >> >> >>  }
> >> >> >>
> >> >> >>  /* Decide whether to replace OLD_LOOP_VINFO with NEW_LOOP_VINFO.
> >> >> >>     Return
> >> >> >> diff --git a/gcc/tree-vectorizer.c b/gcc/tree-vectorizer.c
> >> >> >> index 9ef76ce654b..dcbb2a3f13a 100644
> >> >> >> --- a/gcc/tree-vectorizer.c
> >> >> >> +++ b/gcc/tree-vectorizer.c
> >> >> >> @@ -1744,3 +1744,207 @@ vector_costs::adjust_cost_for_freq (stmt_vec_info stmt_info,
> >> >> >>      }
> >> >> >>    return cost;
> >> >> >>  }
> >> >> >> +
> >> >> >> +/* See the comment above the declaration for details.  */
> >> >> >> +
> >> >> >> +bool
> >> >> >> +vector_costs::better_main_loop_than_p (const vector_costs *other) const
> >> >> >> +{
> >> >> >> +  int diff = compare_inside_loop_cost (other);
> >> >> >> +  if (diff != 0)
> >> >> >> +    return diff < 0;
> >> >> >> +
> >> >> >> +  /* If there's nothing to choose between the loop bodies, see whether
> >> >> >> +     there's a difference in the prologue and epilogue costs.  */
> >> >> >> +  diff = compare_outside_loop_cost (other);
> >> >> >> +  if (diff != 0)
> >> >> >> +    return diff < 0;
> >> >> >> +
> >> >> >> +  return false;
> >> >> >> +}
> >> >> >> +
> >> >> >> +
> >> >> >> +/* See the comment above the declaration for details.  */
> >> >> >> +
> >> >> >> +bool
> >> >> >> +vector_costs::better_epilogue_loop_than_p (const vector_costs *other,
> >> >> >> +                                          loop_vec_info main_loop) const
> >> >> >> +{
> >> >> >> +  loop_vec_info this_loop_vinfo = as_a<loop_vec_info> (this->m_vinfo);
> >> >> >> +  loop_vec_info other_loop_vinfo = as_a<loop_vec_info> (other->m_vinfo);
> >> >> >> +
> >> >> >> +  poly_int64 this_vf = LOOP_VINFO_VECT_FACTOR (this_loop_vinfo);
> >> >> >> +  poly_int64 other_vf = LOOP_VINFO_VECT_FACTOR (other_loop_vinfo);
> >> >> >> +
> >> >> >> +  poly_uint64 main_poly_vf = LOOP_VINFO_VECT_FACTOR (main_loop);
> >> >> >> +  unsigned HOST_WIDE_INT main_vf;
> >> >> >> +  unsigned HOST_WIDE_INT other_factor, this_factor, other_cost, this_cost;
> >> >> >> +  /* If we can determine how many iterations are left for the epilogue
> >> >> >> +     loop, that is if both the main loop's vectorization factor and number
> >> >> >> +     of iterations are constant, then we use them to calculate the cost of
> >> >> >> +     the epilogue loop together with a 'likely value' for the epilogues
> >> >> >> +     vectorization factor.  Otherwise we use the main loop's vectorization
> >> >> >> +     factor and the maximum poly value for the epilogue's.  If the target
> >> >> >> +     has not provided with a sensible upper bound poly vectorization
> >> >> >> +     factors are likely to be favored over constant ones.  */
> >> >> >> +  if (main_poly_vf.is_constant (&main_vf)
> >> >> >> +      && LOOP_VINFO_NITERS_KNOWN_P (main_loop))
> >> >> >> +    {
> >> >> >> +      unsigned HOST_WIDE_INT niters
> >> >> >> +       = LOOP_VINFO_INT_NITERS (main_loop) % main_vf;
> >> >> >> +      HOST_WIDE_INT other_likely_vf
> >> >> >> +       = estimated_poly_value (other_vf, POLY_VALUE_LIKELY);
> >> >> >> +      HOST_WIDE_INT this_likely_vf
> >> >> >> +       = estimated_poly_value (this_vf, POLY_VALUE_LIKELY);
> >> >> >> +
> >> >> >> +      /* If the epilogue is using partial vectors we account for the
> >> >> >> +        partial iteration here too.  */
> >> >> >> +      other_factor = niters / other_likely_vf;
> >> >> >> +      if (LOOP_VINFO_USING_PARTIAL_VECTORS_P (other_loop_vinfo)
> >> >> >> +         && niters % other_likely_vf != 0)
> >> >> >> +       other_factor++;
> >> >> >> +
> >> >> >> +      this_factor = niters / this_likely_vf;
> >> >> >> +      if (LOOP_VINFO_USING_PARTIAL_VECTORS_P (this_loop_vinfo)
> >> >> >> +         && niters % this_likely_vf != 0)
> >> >> >> +       this_factor++;
> >> >> >> +    }
> >> >> >> +  else
> >> >> >> +    {
> >> >> >> +      unsigned HOST_WIDE_INT main_vf_max
> >> >> >> +       = estimated_poly_value (main_poly_vf, POLY_VALUE_MAX);
> >> >> >> +
> >> >> >> +      other_factor = main_vf_max / estimated_poly_value (other_vf,
> >> >> >> +                                                        POLY_VALUE_MAX);
> >> >> >> +      this_factor = main_vf_max / estimated_poly_value (this_vf,
> >> >> >> +                                                       POLY_VALUE_MAX);
> >> >> >> +
> >> >> >> +      /* If the loop is not using partial vectors then it will iterate one
> >> >> >> +        time less than one that does.  It is safe to subtract one here,
> >> >> >> +        because the main loop's vf is always at least 2x bigger than that
> >> >> >> +        of an epilogue.  */
> >> >> >> +      if (!LOOP_VINFO_USING_PARTIAL_VECTORS_P (other_loop_vinfo))
> >> >> >> +       other_factor -= 1;
> >> >> >> +      if (!LOOP_VINFO_USING_PARTIAL_VECTORS_P (this_loop_vinfo))
> >> >> >> +       this_factor -= 1;
> >> >> >> +    }
> >> >> >> +
> >> >> >> +  /* Compute the costs by multiplying the inside costs with the factor and
> >> >> >> +     add the outside costs for a more complete picture.  The factor is the
> >> >> >> +     amount of times we are expecting to iterate this epilogue.  */
> >> >> >> +  other_cost = other->body_cost () * other_factor;
> >> >> >> +  this_cost = this->body_cost () * this_factor;
> >> >> >> +  other_cost += other->outside_cost ();
> >> >> >> +  this_cost += this->outside_cost ();
> >> >> >> +  return this_cost < other_cost;
> >> >> >> +}
> >> >> >> +
> >> >> >> +/* A <=>-style subroutine of better_main_loop_than_p.
> >> >> >> +   Check whether we can
> >> >> >> +   determine the return value of better_main_loop_than_p by comparing the
> >> >> >> +   inside (loop body) costs of THIS and OTHER.  Return:
> >> >> >> +
> >> >> >> +   * -1 if better_main_loop_than_p should return true.
> >> >> >> +   * 1 if better_main_loop_than_p should return false.
> >> >> >> +   * 0 if we can't decide.  */
> >> >> >> +
> >> >> >> +int
> >> >> >> +vector_costs::compare_inside_loop_cost (const vector_costs *other) const
> >> >> >> +{
> >> >> >> +  loop_vec_info this_loop_vinfo = as_a<loop_vec_info> (this->m_vinfo);
> >> >> >> +  loop_vec_info other_loop_vinfo = as_a<loop_vec_info> (other->m_vinfo);
> >> >> >> +
> >> >> >> +  struct loop *loop = LOOP_VINFO_LOOP (this_loop_vinfo);
> >> >> >> +  gcc_assert (LOOP_VINFO_LOOP (other_loop_vinfo) == loop);
> >> >> >> +
> >> >> >> +  poly_int64 this_vf = LOOP_VINFO_VECT_FACTOR (this_loop_vinfo);
> >> >> >> +  poly_int64 other_vf = LOOP_VINFO_VECT_FACTOR (other_loop_vinfo);
> >> >> >> +
> >> >> >> +  /* Limit the VFs to what is likely to be the maximum number of iterations,
> >> >> >> +     to handle cases in which at least one loop_vinfo is fully-masked.  */
> >> >> >> +  HOST_WIDE_INT estimated_max_niter = likely_max_stmt_executions_int (loop);
> >> >> >> +  if (estimated_max_niter != -1)
> >> >> >> +    {
> >> >> >> +      if (known_le (estimated_max_niter, this_vf))
> >> >> >> +       this_vf = estimated_max_niter;
> >> >> >> +      if (known_le (estimated_max_niter, other_vf))
> >> >> >> +       other_vf = estimated_max_niter;
> >> >> >> +    }
> >> >> >> +
> >> >> >> +  /* Check whether the (fractional) cost per scalar iteration is lower or
> >> >> >> +     higher: this_inside_cost / this_vf vs. other_inside_cost / other_vf.  */
> >> >> >> +  poly_int64 rel_this = this_loop_vinfo->vector_costs->body_cost () * other_vf;
> >> >> >> +  poly_int64 rel_other
> >> >> >> +    = other_loop_vinfo->vector_costs->body_cost () * this_vf;
> >> >> >> +
> >> >> >> +  HOST_WIDE_INT est_rel_this_min
> >> >> >> +    = estimated_poly_value (rel_this, POLY_VALUE_MIN);
> >> >> >> +  HOST_WIDE_INT est_rel_this_max
> >> >> >> +    = estimated_poly_value (rel_this, POLY_VALUE_MAX);
> >> >> >> +
> >> >> >> +  HOST_WIDE_INT est_rel_other_min
> >> >> >> +    = estimated_poly_value (rel_other, POLY_VALUE_MIN);
> >> >> >> +  HOST_WIDE_INT est_rel_other_max
> >> >> >> +    = estimated_poly_value (rel_other, POLY_VALUE_MAX);
> >> >> >> +
> >> >> >> +  /* Check first if we can make out an unambiguous total order from the minimum
> >> >> >> +     and maximum estimates.  */
> >> >> >> +  if (est_rel_this_min < est_rel_other_min
> >> >> >> +      && est_rel_this_max < est_rel_other_max)
> >> >> >> +    return -1;
> >> >> >> +
> >> >> >> +  if (est_rel_other_min < est_rel_this_min
> >> >> >> +      && est_rel_other_max < est_rel_this_max)
> >> >> >> +    return 1;
> >> >> >> +
> >> >> >> +  /* When other_loop_vinfo uses a variable vectorization factor,
> >> >> >> +     we know that it has a lower cost for at least one runtime VF.
> >> >> >> +     However, we don't know how likely that VF is.
> >> >> >> +
> >> >> >> +     One option would be to compare the costs for the estimated VFs.
> >> >> >> +     The problem is that that can put too much pressure on the cost
> >> >> >> +     model.  E.g. if the estimated VF is also the lowest possible VF,
> >> >> >> +     and if other_loop_vinfo is 1 unit worse than this_loop_vinfo
> >> >> >> +     for the estimated VF, we'd then choose this_loop_vinfo even
> >> >> >> +     though (a) this_loop_vinfo might not actually be better than
> >> >> >> +     other_loop_vinfo for that VF and (b) it would be significantly
> >> >> >> +     worse at larger VFs.
> >> >> >> +
> >> >> >> +     Here we go for a hacky compromise: pick this_loop_vinfo if it is
> >> >> >> +     no more expensive than other_loop_vinfo even after doubling the
> >> >> >> +     estimated other_loop_vinfo VF.  For all but trivial loops, this
> >> >> >> +     ensures that we only pick this_loop_vinfo if it is significantly
> >> >> >> +     better than other_loop_vinfo at the estimated VF.  */
> >> >> >> +  if (est_rel_other_min != est_rel_this_min
> >> >> >> +      || est_rel_other_max != est_rel_this_max)
> >> >> >> +    {
> >> >> >> +      HOST_WIDE_INT est_rel_this_likely
> >> >> >> +       = estimated_poly_value (rel_this, POLY_VALUE_LIKELY);
> >> >> >> +      HOST_WIDE_INT est_rel_other_likely
> >> >> >> +       = estimated_poly_value (rel_other, POLY_VALUE_LIKELY);
> >> >> >> +
> >> >> >> +      return est_rel_this_likely * 2 <= est_rel_other_likely ? -1 : 1;
> >> >> >> +    }
> >> >> >> +
> >> >> >> +  return 0;
> >> >> >> +}
> >> >> >> +
> >> >> >> +/* A <=>-style subroutine of better_main_loop_than_p, used when there is
> >> >> >> +   nothing to choose between the inside (loop body) costs of THIS and OTHER.
> >> >> >> +   Check whether we can determine the return value of better_main_loop_than_p
> >> >> >> +   by comparing the outside (prologue and epilogue) costs of THIS and OTHER.
> >> >> >> +   Return:
> >> >> >> +
> >> >> >> +   * -1 if better_main_loop_than_p should return true.
> >> >> >> +   * 1 if better_main_loop_than_p should return false.
> >> >> >> +   * 0 if we can't decide.  */
> >> >> >> +
> >> >> >> +int
> >> >> >> +vector_costs::compare_outside_loop_cost (const vector_costs *other) const
> >> >> >> +{
> >> >> >> +  auto this_outside_cost = this->outside_cost ();
> >> >> >> +  auto other_outside_cost = other->outside_cost ();
> >> >> >> +  if (this_outside_cost != other_outside_cost)
> >> >> >> +    return this_outside_cost < other_outside_cost ? -1 : 1;
> >> >> >> +
> >> >> >> +  return 0;
> >> >> >> +}
> >> >> >> diff --git a/gcc/tree-vectorizer.h b/gcc/tree-vectorizer.h
> >> >> >> index 87d3f211a2e..0e3aad590e8 100644
> >> >> >> --- a/gcc/tree-vectorizer.h
> >> >> >> +++ b/gcc/tree-vectorizer.h
> >> >> >> @@ -1419,6 +1419,21 @@ public:
> >> >> >>      read back using the functions below.  */
> >> >> >>   virtual void finish_cost ();
> >> >> >>
> >> >> >> +  /* The costs in THIS and OTHER both describe ways of vectorizing
> >> >> >> +     a main loop.  Return true if the costs described by THIS are
> >> >> >> +     cheaper than the costs described by OTHER.  Return false if any
> >> >> >> +     of the following are true:
> >> >> >> +
> >> >> >> +     - THIS and OTHER are of equal cost
> >> >> >> +     - OTHER is better than THIS
> >> >> >> +     - we can't be sure about the relative costs of THIS and OTHER.  */
> >> >> >> +  virtual bool better_main_loop_than_p (const vector_costs *other) const;
> >> >> >> +
> >> >> >> +  /* Likewise, but the costs in THIS and OTHER both describe ways of
> >> >> >> +     vectorizing an epilogue loop of MAIN_LOOP.  */
> >> >> >> +  virtual bool better_epilogue_loop_than_p (const vector_costs *other,
> >> >> >> +                                           loop_vec_info main_loop) const;
> >> >> >> +
> >> >> >>   unsigned int prologue_cost () const;
> >> >> >>   unsigned int body_cost () const;
> >> >> >>   unsigned int epilogue_cost () const;
> >> >> >> @@ -1429,6 +1444,8 @@ protected:
> >> >> >>                                  unsigned int);
> >> >> >>   unsigned int adjust_cost_for_freq (stmt_vec_info, vect_cost_model_location,
> >> >> >>                                     unsigned int);
> >> >> >> +  int compare_inside_loop_cost (const vector_costs *) const;
> >> >> >> +  int compare_outside_loop_cost (const vector_costs *) const;
> >> >> >>
> >> >> >>   /* The region of code that we're considering vectorizing.  */
> >> >> >>   vec_info *m_vinfo;
> >> >> >> --
> >> >> >> 2.25.1
> >> >> >>