From: Richard Sandiford <richard.sandiford@arm.com>
To: Hao Liu OS
Cc: Richard Biener, GCC-patches@gcc.gnu.org
Subject: Re: [PATCH] AArch64: Do not increase the vect reduction latency by multiplying count [PR110625]
Date: Fri, 28 Jul 2023 18:35:03 +0100
In-Reply-To: (Hao Liu's message of "Wed, 26 Jul 2023 12:54:52 +0000")

Sorry for the slow response.

Hao Liu OS writes:
>> Ah, thanks.  In that case, Hao, I think we can avoid the ICE by changing:
>>
>>   if ((kind == scalar_stmt || kind == vector_stmt || kind == vec_to_scalar)
>>       && vect_is_reduction (stmt_info))
>>
>> to:
>>
>>   if ((kind == scalar_stmt || kind == vector_stmt || kind == vec_to_scalar)
>>       && STMT_VINFO_LIVE_P (stmt_info)
>>       && vect_is_reduction (stmt_info))
>
> I tried this and it indeed avoids the ICE.  But it seems the
> reduction_latency calculation is also skipped: after that change, the
> reduction_latency is 0 for this case.  Previously it was 1 and 2 for
> scalar and vector respectively.

Which test case do you see this for?  The two tests in the patch
still seem to report correct latencies for me if I make the change
above.

Thanks,
Richard

> IMHO, to keep it consistent with the previous result, should we move
> the STMT_VINFO_LIVE_P check below and inside the if?  Such as:
>
>   /* Calculate the minimum cycles per iteration imposed by a reduction
>      operation.  */
>   if ((kind == scalar_stmt || kind == vector_stmt || kind == vec_to_scalar)
>       && vect_is_reduction (stmt_info))
>     {
>       unsigned int base
>         = aarch64_in_loop_reduction_latency (m_vinfo, stmt_info, m_vec_flags);
>       if (STMT_VINFO_LIVE_P (stmt_info) && STMT_VINFO_FORCE_SINGLE_CYCLE (
>             info_for_reduction (m_vinfo, stmt_info)))
>         /* ??? Ideally we'd use a tree to reduce the copies down to 1 vector,
>            and then accumulate that, but at the moment the loop-carried
>            dependency includes all copies.  */
>         ops->reduction_latency = MAX (ops->reduction_latency, base * count);
>       else
>         ops->reduction_latency = MAX (ops->reduction_latency, base);
>
> Thanks,
> Hao
>
> ________________________________________
> From: Richard Sandiford
> Sent: Wednesday, July 26, 2023 17:14
> To: Richard Biener
> Cc: Hao Liu OS; GCC-patches@gcc.gnu.org
> Subject: Re: [PATCH] AArch64: Do not increase the vect reduction latency by multiplying count [PR110625]
>
> Richard Biener writes:
>> On Wed, Jul 26, 2023 at 4:02 AM Hao Liu OS via Gcc-patches wrote:
>>>
>>> > When was STMT_VINFO_REDUC_DEF empty?  I just want to make sure that we're not papering over an issue elsewhere.
>>>
>>> Yes, I also wonder if this is an issue in vectorizable_reduction.  Below is the gimple of "gcc.target/aarch64/sve/cost_model_13.c":
>>>
>>>   :
>>>   # res_18 = PHI
>>>   # i_20 = PHI
>>>   _1 = (long unsigned int) i_20;
>>>   _2 = _1 * 2;
>>>   _3 = x_14(D) + _2;
>>>   _4 = *_3;
>>>   _5 = (unsigned short) _4;
>>>   res.0_6 = (unsigned short) res_18;
>>>   _7 = _5 + res.0_6;                  <-- The current stmt_info
>>>   res_15 = (short int) _7;
>>>   i_16 = i_20 + 1;
>>>   if (n_11(D) > i_16)
>>>     goto ;
>>>   else
>>>     goto ;
>>>
>>>   :
>>>   goto ;
>>>
>>> It looks like that STMT_VINFO_REDUC_DEF should be "res_18 = PHI"?
>>> The status here is:
>>>   STMT_VINFO_REDUC_IDX (stmt_info): 1
>>>   STMT_VINFO_REDUC_TYPE (stmt_info): TREE_CODE_REDUCTION
>>>   STMT_VINFO_REDUC_VECTYPE (stmt_info): 0x0
>>
>> Not all stmts in the SSA cycle forming the reduction have
>> STMT_VINFO_REDUC_DEF set, only the last (latch def) and live stmts
>> have it at the moment.
>
> Ah, thanks.  In that case, Hao, I think we can avoid the ICE by changing:
>
>   if ((kind == scalar_stmt || kind == vector_stmt || kind == vec_to_scalar)
>       && vect_is_reduction (stmt_info))
>
> to:
>
>   if ((kind == scalar_stmt || kind == vector_stmt || kind == vec_to_scalar)
>       && STMT_VINFO_LIVE_P (stmt_info)
>       && vect_is_reduction (stmt_info))
>
> instead of using a null check.
>
> I see that vectorizable_reduction calculates a reduc_chain_length.
> Would it be OK to store that in the stmt_vec_info?  I suppose the
> AArch64 code should be multiplying by that as well.  (It would be a
> separate patch from this one though.)
>
> Richard
>
>
>>
>> Richard.
>>
>>> Thanks,
>>> Hao
>>>
>>> ________________________________________
>>> From: Richard Sandiford
>>> Sent: Tuesday, July 25, 2023 17:44
>>> To: Hao Liu OS
>>> Cc: GCC-patches@gcc.gnu.org
>>> Subject: Re: [PATCH] AArch64: Do not increase the vect reduction latency by multiplying count [PR110625]
>>>
>>> Hao Liu OS writes:
>>> > Hi,
>>> >
>>> > Thanks for the suggestion.  I tested it and found a gcc_assert failure:
>>> >     gcc.target/aarch64/sve/cost_model_13.c (internal compiler error: in info_for_reduction, at tree-vect-loop.cc:5473)
>>> >
>>> > It is caused by empty STMT_VINFO_REDUC_DEF.
>>>
>>> When was STMT_VINFO_REDUC_DEF empty?  I just want to make sure that
>>> we're not papering over an issue elsewhere.
>>>
>>> Thanks,
>>> Richard
>>>
>>> So, I added an extra check before checking single_defuse_cycle.  The updated patch is below.  Is it OK for trunk?
>>> >
>>> > ---
>>> >
>>> > The new costs should only count reduction latency by multiplying count for
>>> > single_defuse_cycle.  For other situations, this will increase the reduction
>>> > latency a lot and miss vectorization opportunities.
>>> >
>>> > Tested on aarch64-linux-gnu.
>>> >
>>> > gcc/ChangeLog:
>>> >
>>> >         PR target/110625
>>> >         * config/aarch64/aarch64.cc (count_ops): Only '* count' for
>>> >         single_defuse_cycle while counting reduction_latency.
>>> >
>>> > gcc/testsuite/ChangeLog:
>>> >
>>> >         * gcc.target/aarch64/pr110625_1.c: New testcase.
>>> >         * gcc.target/aarch64/pr110625_2.c: New testcase.
>>> > ---
>>> >  gcc/config/aarch64/aarch64.cc                 | 13 ++++--
>>> >  gcc/testsuite/gcc.target/aarch64/pr110625_1.c | 46 +++++++++++++++++++
>>> >  gcc/testsuite/gcc.target/aarch64/pr110625_2.c | 14 ++++++
>>> >  3 files changed, 69 insertions(+), 4 deletions(-)
>>> >  create mode 100644 gcc/testsuite/gcc.target/aarch64/pr110625_1.c
>>> >  create mode 100644 gcc/testsuite/gcc.target/aarch64/pr110625_2.c
>>> >
>>> > diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc
>>> > index 560e5431636..478a4e00110 100644
>>> > --- a/gcc/config/aarch64/aarch64.cc
>>> > +++ b/gcc/config/aarch64/aarch64.cc
>>> > @@ -16788,10 +16788,15 @@ aarch64_vector_costs::count_ops (unsigned int count, vect_cost_for_stmt kind,
>>> >      {
>>> >        unsigned int base
>>> >          = aarch64_in_loop_reduction_latency (m_vinfo, stmt_info, m_vec_flags);
>>> > -
>>> > -      /* ??? Ideally we'd do COUNT reductions in parallel, but unfortunately
>>> > -         that's not yet the case.  */
>>> > -      ops->reduction_latency = MAX (ops->reduction_latency, base * count);
>>> > +      if (STMT_VINFO_REDUC_DEF (stmt_info)
>>> > +          && STMT_VINFO_FORCE_SINGLE_CYCLE (
>>> > +               info_for_reduction (m_vinfo, stmt_info)))
>>> > +        /* ??? Ideally we'd use a tree to reduce the copies down to 1 vector,
>>> > +           and then accumulate that, but at the moment the loop-carried
>>> > +           dependency includes all copies.  */
>>> > +        ops->reduction_latency = MAX (ops->reduction_latency, base * count);
>>> > +      else
>>> > +        ops->reduction_latency = MAX (ops->reduction_latency, base);
>>> >      }
>>> >
>>> >    /* Assume that multiply-adds will become a single operation.  */
>>> > diff --git a/gcc/testsuite/gcc.target/aarch64/pr110625_1.c b/gcc/testsuite/gcc.target/aarch64/pr110625_1.c
>>> > new file mode 100644
>>> > index 00000000000..0965cac33a0
>>> > --- /dev/null
>>> > +++ b/gcc/testsuite/gcc.target/aarch64/pr110625_1.c
>>> > @@ -0,0 +1,46 @@
>>> > +/* { dg-do compile } */
>>> > +/* { dg-options "-Ofast -mcpu=neoverse-n2 -fdump-tree-vect-details -fno-tree-slp-vectorize" } */
>>> > +/* { dg-final { scan-tree-dump-not "reduction latency = 8" "vect" } } */
>>> > +
>>> > +/* Do not increase the vector body cost due to the incorrect reduction latency
>>> > +    Original vector body cost = 51
>>> > +    Scalar issue estimate:
>>> > +      ...
>>> > +      reduction latency = 2
>>> > +      estimated min cycles per iteration = 2.000000
>>> > +      estimated cycles per vector iteration (for VF 2) = 4.000000
>>> > +    Vector issue estimate:
>>> > +      ...
>>> > +      reduction latency = 8      <-- Too large
>>> > +      estimated min cycles per iteration = 8.000000
>>> > +    Increasing body cost to 102 because scalar code would issue more quickly
>>> > +      ...
>>> > +    missed:  cost model: the vector iteration cost = 102 divided by the scalar iteration cost = 44 is greater or equal to the vectorization factor = 2.
>>> > +    missed:  not vectorized: vectorization not profitable.  */
>>> > +
>>> > +typedef struct
>>> > +{
>>> > +  unsigned short m1, m2, m3, m4;
>>> > +} the_struct_t;
>>> > +typedef struct
>>> > +{
>>> > +  double m1, m2, m3, m4, m5;
>>> > +} the_struct2_t;
>>> > +
>>> > +double
>>> > +bar (the_struct2_t *);
>>> > +
>>> > +double
>>> > +foo (double *k, unsigned int n, the_struct_t *the_struct)
>>> > +{
>>> > +  unsigned int u;
>>> > +  the_struct2_t result;
>>> > +  for (u = 0; u < n; u++, k--)
>>> > +    {
>>> > +      result.m1 += (*k) * the_struct[u].m1;
>>> > +      result.m2 += (*k) * the_struct[u].m2;
>>> > +      result.m3 += (*k) * the_struct[u].m3;
>>> > +      result.m4 += (*k) * the_struct[u].m4;
>>> > +    }
>>> > +  return bar (&result);
>>> > +}
>>> > diff --git a/gcc/testsuite/gcc.target/aarch64/pr110625_2.c b/gcc/testsuite/gcc.target/aarch64/pr110625_2.c
>>> > new file mode 100644
>>> > index 00000000000..7a84aa8355e
>>> > --- /dev/null
>>> > +++ b/gcc/testsuite/gcc.target/aarch64/pr110625_2.c
>>> > @@ -0,0 +1,14 @@
>>> > +/* { dg-do compile } */
>>> > +/* { dg-options "-Ofast -mcpu=neoverse-n2 -fdump-tree-vect-details -fno-tree-slp-vectorize" } */
>>> > +/* { dg-final { scan-tree-dump "reduction latency = 8" "vect" } } */
>>> > +
>>> > +/* The reduction latency should be multiplied by the count for
>>> > +   single_defuse_cycle.  */
>>> > +
>>> > +long
>>> > +f (long res, short *ptr1, short *ptr2, int n)
>>> > +{
>>> > +  for (int i = 0; i < n; ++i)
>>> > +    res += (long) ptr1[i] << ptr2[i];
>>> > +  return res;
>>> > +}
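
A standalone sketch of the costing rule discussed in this thread: multiply the
per-copy reduction latency by the number of vector copies only when the
reduction is forced into a single def-use cycle.  This is illustrative model
code, not GCC internals; the function name model_reduction_latency and the
example numbers (base latency 2, 4 copies) are assumptions loosely based on
the dump excerpts quoted above.

#include <algorithm>
#include <cstdio>

/* Model of the decision in count_ops: for a forced single def-use cycle
   every copy feeds the same accumulator, so the loop-carried dependency
   really is base * count; otherwise the copies accumulate independently
   and only the per-copy latency limits the iteration.  */
static unsigned int
model_reduction_latency (unsigned int current, unsigned int base,
                         unsigned int count, bool single_def_use_cycle)
{
  if (single_def_use_cycle)
    return std::max (current, base * count);
  return std::max (current, base);
}

int
main ()
{
  /* pr110625_1.c-like case: independent partial sums, latency stays 2.  */
  std::printf ("independent copies: %u\n",
               model_reduction_latency (0, 2, 4, false));
  /* pr110625_2.c-like case: forced single cycle, latency becomes 8.  */
  std::printf ("single def-use cycle: %u\n",
               model_reduction_latency (0, 2, 4, true));
  return 0;
}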