From: Richard Biener
To: liuhongt
Cc: gcc-patches@gcc.gnu.org
Date: Thu, 23 May 2024 13:59:47 +0200
In-Reply-To: <20240522050734.1129622-1-hongtao.liu@intel.com>
Subject: Re: [V2 PATCH] Don't reduce estimated unrolled size for innermost loop at cunrolli.

On Wed, May 22, 2024 at 7:07 AM liuhongt wrote:
>
> >> Hard to find a default value satisfying all testcases.
> >> Some require loop unrolling with a 7-insn increment, some don't want loop
> >> unrolling with a 5-insn increment.
> >> The original 2/3 reduction happened to satisfy all those testcases (or the
> >> testcases were constructed based on the old 2/3).
> >> Can we define the parameter as the size of the loop, below which we
> >> still do the reduction, so small loops can be unrolled?
>
> > Yeah, that's also a sensible possibility.  Does it work to have a parameter
> > for the unrolled body size?  Thus, amend the existing
> > --param max-completely-peeled-insns with a --param
> > max-completely-peeled-insns-nogrowth?
>
> Update V2:
> It's still hard to find a default value for the loop body size, so I move the
> 2 / 3 reduction from estimated_unrolled_size to try_unroll_loop_completely.
> For the body-size shrink check, the 2 / 3 reduction is added, so small loops
> can still be unrolled.
> For the comparison of the body size against param_max_completely_peeled_insns,
> the 2 / 3 reduction is applied conditionally, for loop->inner || !cunrolli.
> With that the patch avoids GCC testsuite regressions and also prevents big
> inner loops from being completely unrolled at cunrolli.
>
> ------------------
>
> For the innermost loop, complete loop unrolling will most likely not be
> able to reduce the body size to 2/3.  The current 2/3 reduction will make
> some of the larger loops completely unrolled during cunrolli, which will
> then result in them not being vectorizable.  It also increases register
> pressure.  The patch moves the 2/3 reduction from estimated_unrolled_size
> to try_unroll_loop_completely, so it is no longer applied to the innermost
> loop at cunrolli.
>
> Bootstrapped and regtested on x86_64-pc-linux-gnu{-m32,}.
> Ok for trunk?
>
> gcc/ChangeLog:
>
>         PR tree-optimization/112325
>         * tree-ssa-loop-ivcanon.cc (estimated_unrolled_size): Move the
>         2 / 3 loop body size reduction to ..
>         (try_unroll_loop_completely): .. here, add it for the check of
>         body size shrink, and the check of comparison against
>         param_max_completely_peeled_insns when
>         (!cunrolli || loop->inner).
>         (canonicalize_loop_induction_variables): Add new parameter
>         cunrolli and pass down.
>         (tree_unroll_loops_completely_1): Ditto.
>         (tree_unroll_loops_completely): Ditto.
>         (canonicalize_induction_variables): Handle new parameter.
>         (pass_complete_unrolli::execute): Ditto.
>         (pass_complete_unroll::execute): Ditto.
>
> gcc/testsuite/ChangeLog:
>
>         * gcc.dg/tree-ssa/pr112325.c: New test.
>         * gcc.dg/vect/pr69783.c: Add extra option --param
>         max-completely-peeled-insns=300.
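
To make the two checks concrete, here is a minimal standalone model of the
decision described above.  It is only a sketch: would_unroll, loop_insns and
max_insns are illustrative stand-ins, not the actual GCC interfaces, and the
real logic (including the UL_* unroll levels and the constant-IV adjustment)
lives in try_unroll_loop_completely in the diff below.

#include <stdio.h>
#include <stdbool.h>

/* Hypothetical model of the two size checks.  unr_insns is the estimated
   size of the completely unrolled body, loop_insns stands in for the size
   of the loop if it is left alone, max_insns models
   param_max_completely_peeled_insns.  */
static bool
would_unroll (unsigned unr_insns, unsigned loop_insns, unsigned max_insns,
              bool cunrolli, bool innermost)
{
  /* Shrink check: the 2/3 reduction is applied here, so more small loops
     still count as shrinking and get unrolled.  */
  if (unr_insns * 2 / 3 > loop_insns)
    {
      /* Growth case: at cunrolli an innermost loop is judged by its full
         estimated size; otherwise the 2/3 reduction is applied before
         comparing against the param.  */
      unsigned effective = (cunrolli && innermost
                            ? unr_insns : unr_insns * 2 / 3);
      if (effective > max_insns)
        return false;
    }
  return true;
}

int
main (void)
{
  /* A big innermost loop is rejected at cunrolli ...  */
  printf ("%d\n", would_unroll (400, 60, 300, true, true));   /* 0 */
  /* ... but the same loop still qualifies in the later cunroll pass.  */
  printf ("%d\n", would_unroll (400, 60, 300, false, true));  /* 1 */
  return 0;
}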
> ---
>  gcc/testsuite/gcc.dg/tree-ssa/pr112325.c | 57 ++++++++++++++++++++++++
>  gcc/testsuite/gcc.dg/vect/pr69783.c      |  2 +-
>  gcc/tree-ssa-loop-ivcanon.cc             | 45 ++++++++++---------
>  3 files changed, 83 insertions(+), 21 deletions(-)
>  create mode 100644 gcc/testsuite/gcc.dg/tree-ssa/pr112325.c
>
> diff --git a/gcc/testsuite/gcc.dg/tree-ssa/pr112325.c b/gcc/testsuite/gcc.dg/tree-ssa/pr112325.c
> new file mode 100644
> index 00000000000..14208b3e7f8
> --- /dev/null
> +++ b/gcc/testsuite/gcc.dg/tree-ssa/pr112325.c
> @@ -0,0 +1,57 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O2 -fdump-tree-cunrolli-details" } */
> +
> +typedef unsigned short ggml_fp16_t;
> +static float table_f32_f16[1 << 16];
> +
> +inline static float ggml_lookup_fp16_to_fp32(ggml_fp16_t f) {
> +  unsigned short s;
> +  __builtin_memcpy(&s, &f, sizeof(unsigned short));
> +  return table_f32_f16[s];
> +}
> +
> +typedef struct {
> +  ggml_fp16_t d;
> +  ggml_fp16_t m;
> +  unsigned char qh[4];
> +  unsigned char qs[32 / 2];
> +} block_q5_1;
> +
> +typedef struct {
> +  float d;
> +  float s;
> +  char qs[32];
> +} block_q8_1;
> +
> +void ggml_vec_dot_q5_1_q8_1(const int n, float * restrict s, const void * restrict vx, const void * restrict vy) {
> +  const int qk = 32;
> +  const int nb = n / qk;
> +
> +  const block_q5_1 * restrict x = vx;
> +  const block_q8_1 * restrict y = vy;
> +
> +  float sumf = 0.0;
> +
> +  for (int i = 0; i < nb; i++) {
> +    unsigned qh;
> +    __builtin_memcpy(&qh, x[i].qh, sizeof(qh));
> +
> +    int sumi = 0;
> +
> +    for (int j = 0; j < qk/2; ++j) {
> +      const unsigned char xh_0 = ((qh >> (j + 0)) << 4) & 0x10;
> +      const unsigned char xh_1 = ((qh >> (j + 12)) ) & 0x10;
> +
> +      const int x0 = (x[i].qs[j] & 0xF) | xh_0;
> +      const int x1 = (x[i].qs[j] >> 4) | xh_1;
> +
> +      sumi += (x0 * y[i].qs[j]) + (x1 * y[i].qs[j + qk/2]);
> +    }
> +
> +    sumf += (ggml_lookup_fp16_to_fp32(x[i].d)*y[i].d)*sumi + ggml_lookup_fp16_to_fp32(x[i].m)*y[i].s;
> +  }
> +
> +  *s = sumf;
> +}
> +
> +/* { dg-final { scan-tree-dump {(?n)Not unrolling loop [1-9] \(--param max-completely-peel-times limit reached} "cunrolli"} } */

Since this was about vectorization, can you instead add a testcase to
gcc.dg/vect/ and check that vectorization happens?

> diff --git a/gcc/testsuite/gcc.dg/vect/pr69783.c b/gcc/testsuite/gcc.dg/vect/pr69783.c
> index 5df95d0ce4e..a1f75514d72 100644
> --- a/gcc/testsuite/gcc.dg/vect/pr69783.c
> +++ b/gcc/testsuite/gcc.dg/vect/pr69783.c
> @@ -1,6 +1,6 @@
>  /* { dg-do compile } */
>  /* { dg-require-effective-target vect_float } */
> -/* { dg-additional-options "-Ofast -funroll-loops" } */
> +/* { dg-additional-options "-Ofast -funroll-loops --param max-completely-peeled-insns=300" } */

It _looks_ like this was maybe also vectorizer related?  Can you
double-check the PR?  We don't seem to check whether we vectorize; does
this change with the default --param max-completely-peeled-insns?

I'd rather have a #pragma GCC unroll before the loop we need unrolled
than an adjusted --param max-completely-peeled-insns.

But if we just trade one vectorized loop for another I'm not so sure
about the patch.
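
Something along these lines, for example; this is only a sketch of the
pragma idea with a made-up loop, not the actual pr69783.c source, and
whether it would still exercise the original PR needs checking:

/* { dg-do compile } */
/* { dg-options "-O3" } */

#define N 512

float a[N], b[8], s[N];

void
foo (void)
{
  for (int i = 0; i < N; i++)
    {
      float sum = 0.f;
      /* Known small trip count: ask for this inner loop to be unrolled
         instead of raising --param max-completely-peeled-insns globally.  */
#pragma GCC unroll 8
      for (int j = 0; j < 8; j++)
        sum += a[i] * b[j];
      s[i] = sum;
    }
}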
>  #define NXX 516
>  #define NYY 516
> diff --git a/gcc/tree-ssa-loop-ivcanon.cc b/gcc/tree-ssa-loop-ivcanon.cc
> index bf017137260..cc53eee1301 100644
> --- a/gcc/tree-ssa-loop-ivcanon.cc
> +++ b/gcc/tree-ssa-loop-ivcanon.cc
> @@ -437,11 +437,7 @@ tree_estimate_loop_size (class loop *loop, edge exit, edge edge_to_cancel,
>     It is (NUNROLL + 1) * size of loop body with taking into account
>     the fact that in last copy everything after exit conditional
>     is dead and that some instructions will be eliminated after
> -   peeling.
> -
> -   Loop body is likely going to simplify further, this is difficult
> -   to guess, we just decrease the result by 1/3.  */
> -
> +   peeling.  */
>  static unsigned HOST_WIDE_INT
>  estimated_unrolled_size (struct loop_size *size,
>                           unsigned HOST_WIDE_INT nunroll)
> @@ -453,7 +449,6 @@ estimated_unrolled_size (struct loop_size *size,
>      unr_insns = 0;
>    unr_insns += size->last_iteration - size->last_iteration_eliminated_by_peeling;
>
> -  unr_insns = unr_insns * 2 / 3;
>    if (unr_insns <= 0)
>      unr_insns = 1;

I believe the if (unr_insns <= 0) check can go as well.

> @@ -734,7 +729,8 @@ try_unroll_loop_completely (class loop *loop,
>                              edge exit, tree niter, bool may_be_zero,
>                              enum unroll_level ul,
>                              HOST_WIDE_INT maxiter,
> -                            dump_user_location_t locus, bool allow_peel)
> +                            dump_user_location_t locus, bool allow_peel,
> +                            bool cunrolli)
>  {
>    unsigned HOST_WIDE_INT n_unroll = 0;
>    bool n_unroll_found = false;
> @@ -847,8 +843,9 @@ try_unroll_loop_completely (class loop *loop,
>
>    /* If the code is going to shrink, we don't need to be extra
>       cautious on guessing if the unrolling is going to be
> -     profitable.  */
> -  if (unr_insns
> +     profitable.
> +     Move from estimated_unrolled_size to unroll small loops.  */
> +  if (unr_insns * 2 / 3
>        /* If there is IV variable that will become constant, we
>           save one instruction in the loop prologue we do not
>           account otherwise.  */
> @@ -919,7 +916,13 @@ try_unroll_loop_completely (class loop *loop,
>                   loop->num);
>        return false;
>      }
> -  else if (unr_insns
> +  /* Move 2 / 3 reduction from estimated_unrolled_size, but don't reduce
> +     unrolled size for innermost loop when cunrolli.
> +     1) It could increase register pressure.
> +     2) Big loop after completely unroll may not be vectorized
> +        by BB vectorizer.  */
> +  else if ((cunrolli && !loop->inner
> +            ? unr_insns : unr_insns * 2 / 3)
> +           > (unsigned) param_max_completely_peeled_insns)
>      {
>        if (dump_file && (dump_flags & TDF_DETAILS))
> @@ -1227,7 +1230,7 @@ try_peel_loop (class loop *loop,
>  static bool
>  canonicalize_loop_induction_variables (class loop *loop,
>                                         bool create_iv, enum unroll_level ul,
> -                                       bool try_eval, bool allow_peel)
> +                                       bool try_eval, bool allow_peel, bool cunrolli)
>  {
>    edge exit = NULL;
>    tree niter;
> @@ -1314,7 +1317,7 @@ canonicalize_loop_induction_variables (class loop *loop,
>
>    dump_user_location_t locus = find_loop_location (loop);
>    if (try_unroll_loop_completely (loop, exit, niter, may_be_zero, ul,
> -                                  maxiter, locus, allow_peel))
> +                                  maxiter, locus, allow_peel, cunrolli))
>      return true;
>
>    if (create_iv
> @@ -1358,7 +1361,7 @@ canonicalize_induction_variables (void)
>      {
>        changed |= canonicalize_loop_induction_variables (loop,
>                                                          true, UL_SINGLE_ITER,
> -                                                        true, false);
> +                                                        true, false, false);
>      }
>    gcc_assert (!need_ssa_update_p (cfun));
>
> @@ -1392,7 +1395,7 @@ canonicalize_induction_variables (void)
>
>  static bool
>  tree_unroll_loops_completely_1 (bool may_increase_size, bool unroll_outer,
> -                                bitmap father_bbs, class loop *loop)
> +                                bitmap father_bbs, class loop *loop, bool cunrolli)
>  {
>    class loop *loop_father;
>    bool changed = false;
> @@ -1410,7 +1413,7 @@ tree_unroll_loops_completely_1 (bool may_increase_size, bool unroll_outer,
>            if (!child_father_bbs)
>              child_father_bbs = BITMAP_ALLOC (NULL);
>            if (tree_unroll_loops_completely_1 (may_increase_size, unroll_outer,
> -                                              child_father_bbs, inner))
> +                                              child_father_bbs, inner, cunrolli))
>              {
>                bitmap_ior_into (father_bbs, child_father_bbs);
>                bitmap_clear (child_father_bbs);
> @@ -1456,7 +1459,7 @@ tree_unroll_loops_completely_1 (bool may_increase_size, bool unroll_outer,
>      ul = UL_NO_GROWTH;
>
>    if (canonicalize_loop_induction_variables
> -      (loop, false, ul, !flag_tree_loop_ivcanon, unroll_outer))
> +      (loop, false, ul, !flag_tree_loop_ivcanon, unroll_outer, cunrolli))
>      {
>        /* If we'll continue unrolling, we need to propagate constants
>           within the new basic blocks to fold away induction variable
> @@ -1485,7 +1488,8 @@ tree_unroll_loops_completely_1 (bool may_increase_size, bool unroll_outer,
>     size of the code does not increase.  */
>
>  static unsigned int
> -tree_unroll_loops_completely (bool may_increase_size, bool unroll_outer)
> +tree_unroll_loops_completely (bool may_increase_size, bool unroll_outer,
> +                              bool cunrolli)
>  {
>    bitmap father_bbs = BITMAP_ALLOC (NULL);
>    bool changed;
> @@ -1507,7 +1511,8 @@ tree_unroll_loops_completely (bool may_increase_size, bool unroll_outer)
>
>        changed = tree_unroll_loops_completely_1 (may_increase_size,
>                                                  unroll_outer, father_bbs,
> -                                                current_loops->tree_root);
> +                                                current_loops->tree_root,
> +                                                cunrolli);

As said, you want to do cunrolli = false; after the above since we are
iterating, and for a subsequent unrolling of an outer loop of an unrolled
inner loop we _do_ want to apply the 2/3 reduction, since there are likely
inter-loop redundancies exposed (as happens in SPEC calculix, for example).

Not sure if that changes any of the testsuite outcome - it possibly
avoids the gcc.dg/vect/pr69783.c FAIL?  Not sure about the arm fallout.

Richard.
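
For illustration, a standalone model of where that assignment would sit in
the iteration; the do/while structure and the helper below are assumptions
made for the sketch, not the actual tree_unroll_loops_completely code:

#include <stdio.h>
#include <stdbool.h>

/* Stand-in for tree_unroll_loops_completely_1: pretend only the first
   round makes progress.  */
static bool
unroll_one_round (bool cunrolli, int round)
{
  printf ("round %d: cunrolli=%d\n", round, cunrolli);
  return round < 2;
}

int
main (void)
{
  bool cunrolli = true;   /* as passed in by pass_complete_unrolli */
  bool changed;
  int round = 0;
  do
    {
      changed = unroll_one_round (cunrolli, ++round);
      /* The suggestion above: only the first round is the "cunrolli"
         round; later rounds, which may unroll outer loops around already
         unrolled inner loops, keep the 2/3 reduction.  */
      cunrolli = false;
    }
  while (changed);
  return 0;
}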
>        if (changed)
>          {
>            unsigned i;
> @@ -1671,7 +1676,7 @@ pass_complete_unroll::execute (function *fun)
>    if (flag_peel_loops)
>      peeled_loops = BITMAP_ALLOC (NULL);
>    unsigned int val = tree_unroll_loops_completely (flag_cunroll_grow_size,
> -                                                   true);
> +                                                   true, false);
>    if (peeled_loops)
>      {
>        BITMAP_FREE (peeled_loops);
> @@ -1727,7 +1732,7 @@ pass_complete_unrolli::execute (function *fun)
>    if (number_of_loops (fun) > 1)
>      {
>        scev_initialize ();
> -      ret = tree_unroll_loops_completely (optimize >= 3, false);
> +      ret = tree_unroll_loops_completely (optimize >= 3, false, true);
>        scev_finalize ();
>      }
>    loop_optimizer_finalize ();
> --
> 2.31.1
>