From: Hongtao Liu
Date: Thu, 23 May 2024 09:55:25 +0800
Subject: Re: [V2 PATCH] Don't reduce estimated unrolled size for innermost loop at cunrolli.
In-Reply-To: <20240522050734.1129622-1-hongtao.liu@intel.com>
To: liuhongt
Cc: gcc-patches@gcc.gnu.org, richard.guenther@gmail.com

On Wed, May 22, 2024 at 1:07 PM liuhongt wrote:
>
> >> Hard to find a default value satisfying all testcases.
> >> Some require loop unroll with 7 insns increment, some don't want loop
> >> unroll w/ 5 insn increment.
> >> The original 2/3 reduction happened to meet all those testcases (or the
> >> testcases are constructed based on the old 2/3).
> >> Can we define the parameter as the size of the loop, below which we
> >> still do the reduction, so the small loop can be unrolled?
>
> > Yeah, that's also a sensible possibility.  Does it work to have a parameter
> > for the unrolled body size?  Thus, amend the existing
> > --param max-completely-peeled-insns with a --param
> > max-completely-peeled-insns-nogrowth?
>
> Update V2:
> It's still hard to find a default value for the loop body size, so I moved the
> 2/3 reduction from estimated_unrolled_size to try_unroll_loop_completely.
> For the check of body size shrink, the 2/3 reduction is still applied, so small
> loops can still be unrolled.
> For the comparison of body size against param_max_completely_peeled_insns,
> the 2/3 is applied conditionally, for loop->inner || !cunrolli.
> The patch then avoids GCC testsuite regressions and also prevents a big inner
> loop from being completely unrolled at cunrolli.

The patch regressed arm-*-eabi:

FAIL: 3 regressions

regressions.sum:
=== gcc tests ===

Running gcc:gcc.dg/tree-ssa/tree-ssa.exp ...
FAIL: gcc.dg/tree-ssa/pr83403-1.c scan-tree-dump-times lim2 "Executing store motion of" 10
FAIL: gcc.dg/tree-ssa/pr83403-2.c scan-tree-dump-times lim2 "Executing store motion of" 10

=== gfortran tests ===

Running gfortran:gfortran.dg/dg.exp ...
FAIL: gfortran.dg/reassoc_4.f -O scan-tree-dump-times reassoc1 "[0-9] \\*" 22

For 32-bit arm, estimate_num_insns_seq returns a larger cost for a load/store of double.

The loop in pr83403-1.c:

Estimating sizes for loop 4
 BB: 6, after_exit: 0
  size: 2 if (m_23 != 10)
   Exit condition will be eliminated in peeled copies.
   Exit condition will be eliminated in last copy.
   Constant conditional.
 BB: 5, after_exit: 1
  size: 1 _5 = n_24 * 10;
  size: 1 _6 = _5 + m_23;
  size: 1 _7 = _6 * 8;
  size: 1 _8 = C_35 + _7;
  size: 2 _9 = *_8;
  size: 1 _10 = k_25 * 20;
  size: 1 _11 = _10 + m_23;
  size: 1 _12 = _11 * 8;
  size: 1 _13 = A_31 + _12;
  size: 2 _14 = *_13;
  size: 1 _15 = n_24 * 20;
  size: 1 _16 = _15 + k_25;
  size: 1 _17 = _16 * 8;
  size: 1 _18 = B_33 + _17;
  size: 2 _19 = *_18;
  size: 1 _20 = _14 * _19;
  size: 1 _21 = _9 + _20;
  size: 2 *_8 = _21;
  size: 1 m_40 = m_23 + 1;
   Induction variable computation will be folded away.
size: 25-3, last_iteration: 2-2
  Loop size: 25
  Estimated size after unrolling: 220

For aarch64 and x86 it is OK:

Estimating sizes for loop 4
 BB: 6, after_exit: 0
  size: 2 if (m_27 != 10)
   Exit condition will be eliminated in peeled copies.
   Exit condition will be eliminated in last copy.
   Constant conditional.
 BB: 5, after_exit: 1
  size: 1 _6 = n_28 * 10;
  size: 1 _7 = _6 + m_27;
  size: 0 _8 = (long unsigned int) _7;
  size: 1 _9 = _8 * 8;
  size: 1 _10 = C_39 + _9;
  size: 1 _11 = *_10;
  size: 1 _12 = k_29 * 20;
  size: 1 _13 = _12 + m_27;
  size: 0 _14 = (long unsigned int) _13;
  size: 1 _15 = _14 * 8;
  size: 1 _16 = A_35 + _15;
  size: 1 _17 = *_16;
  size: 1 _18 = n_28 * 20;
  size: 1 _19 = _18 + k_29;
  size: 0 _20 = (long unsigned int) _19;
  size: 1 _21 = _20 * 8;
  size: 1 _22 = B_37 + _21;
  size: 1 _23 = *_22;
  size: 1 _24 = _17 * _23;
  size: 1 _25 = _11 + _24;
  size: 1 *_10 = _25;
  size: 1 m_44 = m_27 + 1;
   Induction variable computation will be folded away.
size: 21-3, last_iteration: 2-2
  Loop size: 21
  Estimated size after unrolling: 180
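
For reference, the two estimates above fall out of the formula in
estimated_unrolled_size before any 2/3 reduction:
nunroll * (size - eliminated_by_peeling) + (last_iteration - last_iteration_eliminated).
Here is a back-of-the-envelope sketch of that arithmetic.  nunroll = 10 is
inferred from the dump and the 200-insn limit is the assumed default of
--param max-completely-peeled-insns, neither is stated in the dump itself:

  /* Sketch only: reproduces the numbers quoted above.  */
  #include <stdio.h>

  static unsigned
  estimate (unsigned overall, unsigned eliminated,
            unsigned last_iter, unsigned last_iter_eliminated,
            unsigned nunroll)
  {
    return nunroll * (overall - eliminated)
           + (last_iter - last_iter_eliminated);
  }

  int
  main (void)
  {
    unsigned arm = estimate (25, 3, 2, 2, 10);   /* 220 */
    unsigned x86 = estimate (21, 3, 2, 2, 10);   /* 180 */
    printf ("arm %u, x86/aarch64 %u\n", arm, x86);
    /* With the old 2/3 reduction both would fit a 200-insn limit
       (220 * 2 / 3 == 146); without it only 180 does, which matches
       the arm-only FAILs above.  */
    return 0;
  }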
>
> ------------------
>
> For the innermost loop, after complete loop unrolling, it will most likely
> not be able to reduce the body size to 2/3.  The current 2/3 reduction
> will make some of the larger loops completely unrolled during
> cunrolli, which will then result in them not being able to be
> vectorized.  It also increases the register pressure.  The patch moves
> the 2/3 reduction from estimated_unrolled_size to try_unroll_loop_completely.
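
To make the intended policy concrete, here is a toy model of the two size
checks as I read the patch.  It is not the GCC code: it ignores the other
conditions try_unroll_loop_completely looks at (ul, the IV-constant savings,
peeling), the parameter names are mine, and 200 is again the assumed default
of --param max-completely-peeled-insns:

  #include <stdbool.h>
  #include <stdio.h>

  /* Toy model of the size gating after the patch.  */
  static bool
  size_checks_allow_unroll (unsigned unr_insns, unsigned ninsns,
                            bool cunrolli, bool innermost, unsigned limit)
  {
    /* The shrink check keeps the 2/3 factor, so small loops still unroll.  */
    if (unr_insns * 2 / 3 <= ninsns)
      return true;
    /* Against the param, an innermost loop at cunrolli gets no 2/3 bonus.  */
    unsigned checked = (cunrolli && innermost) ? unr_insns : unr_insns * 2 / 3;
    return checked <= limit;
  }

  int
  main (void)
  {
    /* pr83403-1.c inner loop on arm: 220 insns estimated, loop size 25.
       Not unrolled at cunrolli (220 > 200), still unrolled later (146 <= 200).  */
    printf ("cunrolli: %d\n", size_checks_allow_unroll (220, 25, true, true, 200));
    printf ("cunroll:  %d\n", size_checks_allow_unroll (220, 25, false, true, 200));
    return 0;
  }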
> Bootstrapped and regtested on x86_64-pc-linux-gnu{-m32,}.
> Ok for trunk?
>
> gcc/ChangeLog:
>
>         PR tree-optimization/112325
>         * tree-ssa-loop-ivcanon.cc (estimated_unrolled_size): Move the
>         2 / 3 loop body size reduction to ..
>         (try_unroll_loop_completely): .. here, add it for the check of
>         body size shrink, and the check of comparison against
>         param_max_completely_peeled_insns when
>         (!cunrolli || loop->inner).
>         (canonicalize_loop_induction_variables): Add new parameter
>         cunrolli and pass down.
>         (tree_unroll_loops_completely_1): Ditto.
>         (tree_unroll_loops_completely): Ditto.
>         (canonicalize_induction_variables): Handle new parameter.
>         (pass_complete_unrolli::execute): Ditto.
>         (pass_complete_unroll::execute): Ditto.
>
> gcc/testsuite/ChangeLog:
>
>         * gcc.dg/tree-ssa/pr112325.c: New test.
>         * gcc.dg/vect/pr69783.c: Add extra option --param
>         max-completely-peeled-insns=300.
> ---
>  gcc/testsuite/gcc.dg/tree-ssa/pr112325.c | 57 ++++++++++++++++++++++++
>  gcc/testsuite/gcc.dg/vect/pr69783.c      |  2 +-
>  gcc/tree-ssa-loop-ivcanon.cc             | 45 ++++++++++---------
>  3 files changed, 83 insertions(+), 21 deletions(-)
>  create mode 100644 gcc/testsuite/gcc.dg/tree-ssa/pr112325.c
>
> diff --git a/gcc/testsuite/gcc.dg/tree-ssa/pr112325.c b/gcc/testsuite/gcc.dg/tree-ssa/pr112325.c
> new file mode 100644
> index 00000000000..14208b3e7f8
> --- /dev/null
> +++ b/gcc/testsuite/gcc.dg/tree-ssa/pr112325.c
> @@ -0,0 +1,57 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O2 -fdump-tree-cunrolli-details" } */
> +
> +typedef unsigned short ggml_fp16_t;
> +static float table_f32_f16[1 << 16];
> +
> +inline static float ggml_lookup_fp16_to_fp32(ggml_fp16_t f) {
> +    unsigned short s;
> +    __builtin_memcpy(&s, &f, sizeof(unsigned short));
> +    return table_f32_f16[s];
> +}
> +
> +typedef struct {
> +    ggml_fp16_t d;
> +    ggml_fp16_t m;
> +    unsigned char qh[4];
> +    unsigned char qs[32 / 2];
> +} block_q5_1;
> +
> +typedef struct {
> +    float d;
> +    float s;
> +    char qs[32];
> +} block_q8_1;
> +
> +void ggml_vec_dot_q5_1_q8_1(const int n, float * restrict s, const void * restrict vx, const void * restrict vy) {
> +    const int qk = 32;
> +    const int nb = n / qk;
> +
> +    const block_q5_1 * restrict x = vx;
> +    const block_q8_1 * restrict y = vy;
> +
> +    float sumf = 0.0;
> +
> +    for (int i = 0; i < nb; i++) {
> +        unsigned qh;
> +        __builtin_memcpy(&qh, x[i].qh, sizeof(qh));
> +
> +        int sumi = 0;
> +
> +        for (int j = 0; j < qk/2; ++j) {
> +            const unsigned char xh_0 = ((qh >> (j +  0)) << 4) & 0x10;
> +            const unsigned char xh_1 = ((qh >> (j + 12))     ) & 0x10;
> +
> +            const int x0 = (x[i].qs[j] & 0xF) | xh_0;
> +            const int x1 = (x[i].qs[j] >>  4) | xh_1;
> +
> +            sumi += (x0 * y[i].qs[j]) + (x1 * y[i].qs[j + qk/2]);
> +        }
> +
> +        sumf += (ggml_lookup_fp16_to_fp32(x[i].d)*y[i].d)*sumi + ggml_lookup_fp16_to_fp32(x[i].m)*y[i].s;
> +    }
> +
> +    *s = sumf;
> +}
> +
> +/* { dg-final { scan-tree-dump {(?n)Not unrolling loop [1-9] \(--param max-completely-peel-times limit reached} "cunrolli"} } */
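
In case it helps when reviewing the testcase, the dump it scans can be
reproduced directly (file names are illustrative and the dump suffix
number varies between GCC versions):

  gcc -O2 -fdump-tree-cunrolli-details -c pr112325.c
  grep "Not unrolling loop" pr112325.c.*t.cunrolli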
> diff --git a/gcc/testsuite/gcc.dg/vect/pr69783.c b/gcc/testsuite/gcc.dg/vect/pr69783.c
> index 5df95d0ce4e..a1f75514d72 100644
> --- a/gcc/testsuite/gcc.dg/vect/pr69783.c
> +++ b/gcc/testsuite/gcc.dg/vect/pr69783.c
> @@ -1,6 +1,6 @@
>  /* { dg-do compile } */
>  /* { dg-require-effective-target vect_float } */
> -/* { dg-additional-options "-Ofast -funroll-loops" } */
> +/* { dg-additional-options "-Ofast -funroll-loops --param max-completely-peeled-insns=300" } */
>
>  #define NXX 516
>  #define NYY 516
> diff --git a/gcc/tree-ssa-loop-ivcanon.cc b/gcc/tree-ssa-loop-ivcanon.cc
> index bf017137260..cc53eee1301 100644
> --- a/gcc/tree-ssa-loop-ivcanon.cc
> +++ b/gcc/tree-ssa-loop-ivcanon.cc
> @@ -437,11 +437,7 @@ tree_estimate_loop_size (class loop *loop, edge exit, edge edge_to_cancel,
>     It is (NUNROLL + 1) * size of loop body with taking into account
>     the fact that in last copy everything after exit conditional
>     is dead and that some instructions will be eliminated after
> -   peeling.
> -
> -   Loop body is likely going to simplify further, this is difficult
> -   to guess, we just decrease the result by 1/3.  */
> -
> +   peeling.  */
>  static unsigned HOST_WIDE_INT
>  estimated_unrolled_size (struct loop_size *size,
>                           unsigned HOST_WIDE_INT nunroll)
> @@ -453,7 +449,6 @@ estimated_unrolled_size (struct loop_size *size,
>      unr_insns = 0;
>    unr_insns += size->last_iteration - size->last_iteration_eliminated_by_peeling;
>
> -  unr_insns = unr_insns * 2 / 3;
>    if (unr_insns <= 0)
>      unr_insns = 1;
>
> @@ -734,7 +729,8 @@ try_unroll_loop_completely (class loop *loop,
>                              edge exit, tree niter, bool may_be_zero,
>                              enum unroll_level ul,
>                              HOST_WIDE_INT maxiter,
> -                            dump_user_location_t locus, bool allow_peel)
> +                            dump_user_location_t locus, bool allow_peel,
> +                            bool cunrolli)
>  {
>    unsigned HOST_WIDE_INT n_unroll = 0;
>    bool n_unroll_found = false;
> @@ -847,8 +843,9 @@ try_unroll_loop_completely (class loop *loop,
>
>        /* If the code is going to shrink, we don't need to be extra
>           cautious on guessing if the unrolling is going to be
> -         profitable.  */
> -      if (unr_insns
> +         profitable.
> +         Move from estimated_unrolled_size to unroll small loops.  */
> +      if (unr_insns * 2 / 3
>            /* If there is IV variable that will become constant, we
>               save one instruction in the loop prologue we do not
>               account otherwise.  */
> @@ -919,7 +916,13 @@ try_unroll_loop_completely (class loop *loop,
>                       loop->num);
>            return false;
>          }
> -      else if (unr_insns
> +      /* Move 2 / 3 reduction from estimated_unrolled_size, but don't reduce
> +         unrolled size for innermost loop when cunrolli.
> +         1) It could increase register pressure.
> +         2) Big loop after completely unroll may not be vectorized
> +            by BB vectorizer.  */
> +      else if ((cunrolli && !loop->inner
> +                ? unr_insns : unr_insns * 2 / 3)
>                 > (unsigned) param_max_completely_peeled_insns)
>          {
>            if (dump_file && (dump_flags & TDF_DETAILS))
> @@ -1227,7 +1230,7 @@ try_peel_loop (class loop *loop,
>  static bool
>  canonicalize_loop_induction_variables (class loop *loop,
>                                         bool create_iv, enum unroll_level ul,
> -                                       bool try_eval, bool allow_peel)
> +                                       bool try_eval, bool allow_peel, bool cunrolli)
>  {
>    edge exit = NULL;
>    tree niter;
> @@ -1314,7 +1317,7 @@ canonicalize_loop_induction_variables (class loop *loop,
>
>    dump_user_location_t locus = find_loop_location (loop);
>    if (try_unroll_loop_completely (loop, exit, niter, may_be_zero, ul,
> -                                  maxiter, locus, allow_peel))
> +                                  maxiter, locus, allow_peel, cunrolli))
>      return true;
>
>    if (create_iv
> @@ -1358,7 +1361,7 @@ canonicalize_induction_variables (void)
>      {
>        changed |= canonicalize_loop_induction_variables (loop,
>                                                          true, UL_SINGLE_ITER,
> -                                                        true, false);
> +                                                        true, false, false);
>      }
>    gcc_assert (!need_ssa_update_p (cfun));
>
> @@ -1392,7 +1395,7 @@ canonicalize_induction_variables (void)
>
>  static bool
>  tree_unroll_loops_completely_1 (bool may_increase_size, bool unroll_outer,
> -                                bitmap father_bbs, class loop *loop)
> +                                bitmap father_bbs, class loop *loop, bool cunrolli)
>  {
>    class loop *loop_father;
>    bool changed = false;
> @@ -1410,7 +1413,7 @@ tree_unroll_loops_completely_1 (bool may_increase_size, bool unroll_outer,
>            if (!child_father_bbs)
>              child_father_bbs = BITMAP_ALLOC (NULL);
>            if (tree_unroll_loops_completely_1 (may_increase_size, unroll_outer,
> -                                              child_father_bbs, inner))
> +                                              child_father_bbs, inner, cunrolli))
>              {
>                bitmap_ior_into (father_bbs, child_father_bbs);
>                bitmap_clear (child_father_bbs);
> @@ -1456,7 +1459,7 @@ tree_unroll_loops_completely_1 (bool may_increase_size, bool unroll_outer,
>      ul = UL_NO_GROWTH;
>
>    if (canonicalize_loop_induction_variables
> -        (loop, false, ul, !flag_tree_loop_ivcanon, unroll_outer))
> +        (loop, false, ul, !flag_tree_loop_ivcanon, unroll_outer, cunrolli))
>      {
>        /* If we'll continue unrolling, we need to propagate constants
>           within the new basic blocks to fold away induction variable
> @@ -1485,7 +1488,8 @@ tree_unroll_loops_completely_1 (bool may_increase_size, bool unroll_outer,
>     size of the code does not increase.  */
>
>  static unsigned int
> -tree_unroll_loops_completely (bool may_increase_size, bool unroll_outer)
> +tree_unroll_loops_completely (bool may_increase_size, bool unroll_outer,
> +                              bool cunrolli)
>  {
>    bitmap father_bbs = BITMAP_ALLOC (NULL);
>    bool changed;
> @@ -1507,7 +1511,8 @@ tree_unroll_loops_completely (bool may_increase_size, bool unroll_outer)
>
>        changed = tree_unroll_loops_completely_1 (may_increase_size,
>                                                  unroll_outer, father_bbs,
> -                                                current_loops->tree_root);
> +                                                current_loops->tree_root,
> +                                                cunrolli);
>        if (changed)
>          {
>            unsigned i;
> @@ -1671,7 +1676,7 @@ pass_complete_unroll::execute (function *fun)
>    if (flag_peel_loops)
>      peeled_loops = BITMAP_ALLOC (NULL);
>    unsigned int val = tree_unroll_loops_completely (flag_cunroll_grow_size,
> -                                                   true);
> +                                                   true, false);
>    if (peeled_loops)
>      {
>        BITMAP_FREE (peeled_loops);
> @@ -1727,7 +1732,7 @@ pass_complete_unrolli::execute (function *fun)
>    if (number_of_loops (fun) > 1)
>      {
>        scev_initialize ();
> -      ret = tree_unroll_loops_completely (optimize >= 3, false);
> +      ret = tree_unroll_loops_completely (optimize >= 3, false, true);
>        scev_finalize ();
>      }
>    loop_optimizer_finalize ();
> --
> 2.31.1
>


--
BR,
Hongtao