From: Hao Liu OS <hliu@os.amperecomputing.com>
To: Richard Sandiford <richard.sandiford@arm.com>
Cc: Richard Biener <richard.guenther@gmail.com>,
"GCC-patches@gcc.gnu.org" <gcc-patches@gcc.gnu.org>
Subject: Re: [PATCH] AArch64: Do not increase the vect reduction latency by multiplying count [PR110625]
Date: Mon, 31 Jul 2023 02:39:16 +0000 [thread overview]
Message-ID: <SJ2PR01MB863577E6A62DB68DC5E73726E105A@SJ2PR01MB8635.prod.exchangelabs.com> (raw)
In-Reply-To: <mptsf98aqig.fsf@arm.com>
> Which test case do you see this for? The two tests in the patch still
> seem to report correct latencies for me if I make the change above.
Not the newly added tests. It is the existing case that caused the previous ICE (i.e. the assertion problem): gcc.target/aarch64/sve/cost_model_13.c.
It's not that the test case itself fails, but the vect dump says the "reduction latency" is 0:
Before the change:
cost_model_13.c:7:21: note: Original vector body cost = 6
cost_model_13.c:7:21: note: Scalar issue estimate:
cost_model_13.c:7:21: note: load operations = 1
cost_model_13.c:7:21: note: store operations = 0
cost_model_13.c:7:21: note: general operations = 1
cost_model_13.c:7:21: note: reduction latency = 1
cost_model_13.c:7:21: note: estimated min cycles per iteration = 1.000000
cost_model_13.c:7:21: note: estimated cycles per vector iteration (for VF 8) = 8.000000
cost_model_13.c:7:21: note: Vector issue estimate:
cost_model_13.c:7:21: note: load operations = 1
cost_model_13.c:7:21: note: store operations = 0
cost_model_13.c:7:21: note: general operations = 1
cost_model_13.c:7:21: note: reduction latency = 2
cost_model_13.c:7:21: note: estimated min cycles per iteration = 2.000000
After the change:
cost_model_13.c:7:21: note: Original vector body cost = 6
cost_model_13.c:7:21: note: Scalar issue estimate:
cost_model_13.c:7:21: note: load operations = 1
cost_model_13.c:7:21: note: store operations = 0
cost_model_13.c:7:21: note: general operations = 1
cost_model_13.c:7:21: note: reduction latency = 0 <--- inconsistent with the result before the change
cost_model_13.c:7:21: note: estimated min cycles per iteration = 1.000000
cost_model_13.c:7:21: note: estimated cycles per vector iteration (for VF 8) = 8.000000
cost_model_13.c:7:21: note: Vector issue estimate:
cost_model_13.c:7:21: note: load operations = 1
cost_model_13.c:7:21: note: store operations = 0
cost_model_13.c:7:21: note: general operations = 1
cost_model_13.c:7:21: note: reduction latency = 0 <--- inconsistent with the result before the change
cost_model_13.c:7:21: note: estimated min cycles per iteration = 1.000000 <--- inconsistent with the result before the change
BTW, this should be caused by the reduction stmt not being live; STMT_VINFO_LIVE_P indicates whether the stmt is part of a computation whose result is used outside the loop (tree-vectorizer.h:1204):
<bb 3>:
# res_18 = PHI <res_15(7), 0(6)>
# i_20 = PHI <i_16(7), 0(6)>
_1 = (long unsigned int) i_20;
_2 = _1 * 2;
_3 = x_14(D) + _2;
_4 = *_3;
_5 = (unsigned short) _4;
res.0_6 = (unsigned short) res_18;
_7 = _5 + res.0_6; <-- This is not live, possibly because of the type cast stmt below.
res_15 = (short int) _7;
i_16 = i_20 + 1;
if (n_11(D) > i_16)
goto <bb 7>;
else
goto <bb 4>;
<bb 7>:
goto <bb 3>;
Thanks,
-Hao
________________________________________
From: Richard Sandiford <richard.sandiford@arm.com>
Sent: Saturday, July 29, 2023 1:35
To: Hao Liu OS
Cc: Richard Biener; GCC-patches@gcc.gnu.org
Subject: Re: [PATCH] AArch64: Do not increase the vect reduction latency by multiplying count [PR110625]
Sorry for the slow response.
Hao Liu OS <hliu@os.amperecomputing.com> writes:
>> Ah, thanks. In that case, Hao, I think we can avoid the ICE by changing:
>>
>> if ((kind == scalar_stmt || kind == vector_stmt || kind == vec_to_scalar)
>> && vect_is_reduction (stmt_info))
>>
>> to:
>>
>> if ((kind == scalar_stmt || kind == vector_stmt || kind == vec_to_scalar)
>> && STMT_VINFO_LIVE_P (stmt_info)
>> && vect_is_reduction (stmt_info))
>
> I tried this and it indeed avoids the ICE. But it seems the reduction_latency calculation is also skipped: after this modification, the reduction_latency is 0 for this case, whereas previously it was 1 for scalar and 2 for vector.
Which test case do you see this for? The two tests in the patch still
seem to report correct latencies for me if I make the change above.
Thanks,
Richard
> IMHO, to keep it consistent with the previous result, should we move the STMT_VINFO_LIVE_P check down inside the if? E.g.:
>
> /* Calculate the minimum cycles per iteration imposed by a reduction
> operation. */
> if ((kind == scalar_stmt || kind == vector_stmt || kind == vec_to_scalar)
> && vect_is_reduction (stmt_info))
> {
> unsigned int base
> = aarch64_in_loop_reduction_latency (m_vinfo, stmt_info, m_vec_flags);
> if (STMT_VINFO_LIVE_P (stmt_info) && STMT_VINFO_FORCE_SINGLE_CYCLE (
> info_for_reduction (m_vinfo, stmt_info)))
> /* ??? Ideally we'd use a tree to reduce the copies down to 1 vector,
> and then accumulate that, but at the moment the loop-carried
> dependency includes all copies. */
> ops->reduction_latency = MAX (ops->reduction_latency, base * count);
> else
> ops->reduction_latency = MAX (ops->reduction_latency, base);
>
> Thanks,
> Hao
>
> ________________________________________
> From: Richard Sandiford <richard.sandiford@arm.com>
> Sent: Wednesday, July 26, 2023 17:14
> To: Richard Biener
> Cc: Hao Liu OS; GCC-patches@gcc.gnu.org
> Subject: Re: [PATCH] AArch64: Do not increase the vect reduction latency by multiplying count [PR110625]
>
> Richard Biener <richard.guenther@gmail.com> writes:
>> On Wed, Jul 26, 2023 at 4:02 AM Hao Liu OS via Gcc-patches
>> <gcc-patches@gcc.gnu.org> wrote:
>>>
>>> > When was STMT_VINFO_REDUC_DEF empty? I just want to make sure that we're not papering over an issue elsewhere.
>>>
>>> Yes, I also wonder if this is an issue in vectorizable_reduction. Below is the gimple of "gcc.target/aarch64/sve/cost_model_13.c":
>>>
>>> <bb 3>:
>>> # res_18 = PHI <res_15(7), 0(6)>
>>> # i_20 = PHI <i_16(7), 0(6)>
>>> _1 = (long unsigned int) i_20;
>>> _2 = _1 * 2;
>>> _3 = x_14(D) + _2;
>>> _4 = *_3;
>>> _5 = (unsigned short) _4;
>>> res.0_6 = (unsigned short) res_18;
>>> _7 = _5 + res.0_6; <-- The current stmt_info
>>> res_15 = (short int) _7;
>>> i_16 = i_20 + 1;
>>> if (n_11(D) > i_16)
>>> goto <bb 7>;
>>> else
>>> goto <bb 4>;
>>>
>>> <bb 7>:
>>> goto <bb 3>;
>>>
>>> It looks like that STMT_VINFO_REDUC_DEF should be "res_18 = PHI <res_15(7), 0(6)>"?
>>> The status here is:
>>> STMT_VINFO_REDUC_IDX (stmt_info): 1
>>> STMT_VINFO_REDUC_TYPE (stmt_info): TREE_CODE_REDUCTION
>>> STMT_VINFO_REDUC_VECTYPE (stmt_info): 0x0
>>
>> Not all stmts in the SSA cycle forming the reduction have
>> STMT_VINFO_REDUC_DEF set; at the moment only the last (latch def) and
>> live stmts have it.
>
> Ah, thanks. In that case, Hao, I think we can avoid the ICE by changing:
>
> if ((kind == scalar_stmt || kind == vector_stmt || kind == vec_to_scalar)
> && vect_is_reduction (stmt_info))
>
> to:
>
> if ((kind == scalar_stmt || kind == vector_stmt || kind == vec_to_scalar)
> && STMT_VINFO_LIVE_P (stmt_info)
> && vect_is_reduction (stmt_info))
>
> instead of using a null check.
>
> I see that vectorizable_reduction calculates a reduc_chain_length.
> Would it be OK to store that in the stmt_vec_info? I suppose the
> AArch64 code should be multiplying by that as well. (It would be a
> separate patch from this one though.)
>
> Richard
>
>
>>
>> Richard.
>>
>>> Thanks,
>>> Hao
>>>
>>> ________________________________________
>>> From: Richard Sandiford <richard.sandiford@arm.com>
>>> Sent: Tuesday, July 25, 2023 17:44
>>> To: Hao Liu OS
>>> Cc: GCC-patches@gcc.gnu.org
>>> Subject: Re: [PATCH] AArch64: Do not increase the vect reduction latency by multiplying count [PR110625]
>>>
>>> Hao Liu OS <hliu@os.amperecomputing.com> writes:
>>> > Hi,
>>> >
>>> > Thanks for the suggestion. I tested it and found a gcc_assert failure:
>>> > gcc.target/aarch64/sve/cost_model_13.c (internal compiler error: in info_for_reduction, at tree-vect-loop.cc:5473)
>>> >
>>> > It is caused by an empty STMT_VINFO_REDUC_DEF.
>>>
>>> When was STMT_VINFO_REDUC_DEF empty? I just want to make sure that
>>> we're not papering over an issue elsewhere.
>>>
>>> Thanks,
>>> Richard
>>>
>>> > So, I added an extra check before checking single_defuse_cycle. The updated patch is below. Is it OK for trunk?
>>> >
>>> > ---
>>> >
>>> > The new costs should only multiply the reduction latency by COUNT for
>>> > single_defuse_cycle. In other situations, doing so increases the reduction
>>> > latency a lot and misses vectorization opportunities.
>>> >
>>> > Tested on aarch64-linux-gnu.
>>> >
>>> > gcc/ChangeLog:
>>> >
>>> > PR target/110625
>>> > * config/aarch64/aarch64.cc (count_ops): Only '* count' for
>>> > single_defuse_cycle while counting reduction_latency.
>>> >
>>> > gcc/testsuite/ChangeLog:
>>> >
>>> > * gcc.target/aarch64/pr110625_1.c: New testcase.
>>> > * gcc.target/aarch64/pr110625_2.c: New testcase.
>>> > ---
>>> > gcc/config/aarch64/aarch64.cc | 13 ++++--
>>> > gcc/testsuite/gcc.target/aarch64/pr110625_1.c | 46 +++++++++++++++++++
>>> > gcc/testsuite/gcc.target/aarch64/pr110625_2.c | 14 ++++++
>>> > 3 files changed, 69 insertions(+), 4 deletions(-)
>>> > create mode 100644 gcc/testsuite/gcc.target/aarch64/pr110625_1.c
>>> > create mode 100644 gcc/testsuite/gcc.target/aarch64/pr110625_2.c
>>> >
>>> > diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc
>>> > index 560e5431636..478a4e00110 100644
>>> > --- a/gcc/config/aarch64/aarch64.cc
>>> > +++ b/gcc/config/aarch64/aarch64.cc
>>> > @@ -16788,10 +16788,15 @@ aarch64_vector_costs::count_ops (unsigned int count, vect_cost_for_stmt kind,
>>> > {
>>> > unsigned int base
>>> > = aarch64_in_loop_reduction_latency (m_vinfo, stmt_info, m_vec_flags);
>>> > -
>>> > - /* ??? Ideally we'd do COUNT reductions in parallel, but unfortunately
>>> > - that's not yet the case. */
>>> > - ops->reduction_latency = MAX (ops->reduction_latency, base * count);
>>> > + if (STMT_VINFO_REDUC_DEF (stmt_info)
>>> > + && STMT_VINFO_FORCE_SINGLE_CYCLE (
>>> > + info_for_reduction (m_vinfo, stmt_info)))
>>> > + /* ??? Ideally we'd use a tree to reduce the copies down to 1 vector,
>>> > + and then accumulate that, but at the moment the loop-carried
>>> > + dependency includes all copies. */
>>> > + ops->reduction_latency = MAX (ops->reduction_latency, base * count);
>>> > + else
>>> > + ops->reduction_latency = MAX (ops->reduction_latency, base);
>>> > }
>>> >
>>> > /* Assume that multiply-adds will become a single operation. */
>>> > diff --git a/gcc/testsuite/gcc.target/aarch64/pr110625_1.c b/gcc/testsuite/gcc.target/aarch64/pr110625_1.c
>>> > new file mode 100644
>>> > index 00000000000..0965cac33a0
>>> > --- /dev/null
>>> > +++ b/gcc/testsuite/gcc.target/aarch64/pr110625_1.c
>>> > @@ -0,0 +1,46 @@
>>> > +/* { dg-do compile } */
>>> > +/* { dg-options "-Ofast -mcpu=neoverse-n2 -fdump-tree-vect-details -fno-tree-slp-vectorize" } */
>>> > +/* { dg-final { scan-tree-dump-not "reduction latency = 8" "vect" } } */
>>> > +
>>> > +/* Do not increase the vector body cost due to the incorrect reduction latency
>>> > + Original vector body cost = 51
>>> > + Scalar issue estimate:
>>> > + ...
>>> > + reduction latency = 2
>>> > + estimated min cycles per iteration = 2.000000
>>> > + estimated cycles per vector iteration (for VF 2) = 4.000000
>>> > + Vector issue estimate:
>>> > + ...
>>> > + reduction latency = 8 <-- Too large
>>> > + estimated min cycles per iteration = 8.000000
>>> > + Increasing body cost to 102 because scalar code would issue more quickly
>>> > + ...
>>> > + missed: cost model: the vector iteration cost = 102 divided by the scalar iteration cost = 44 is greater or equal to the vectorization factor = 2.
>>> > + missed: not vectorized: vectorization not profitable. */
>>> > +
>>> > +typedef struct
>>> > +{
>>> > + unsigned short m1, m2, m3, m4;
>>> > +} the_struct_t;
>>> > +typedef struct
>>> > +{
>>> > + double m1, m2, m3, m4, m5;
>>> > +} the_struct2_t;
>>> > +
>>> > +double
>>> > +bar (the_struct2_t *);
>>> > +
>>> > +double
>>> > +foo (double *k, unsigned int n, the_struct_t *the_struct)
>>> > +{
>>> > + unsigned int u;
>>> > + the_struct2_t result;
>>> > + for (u = 0; u < n; u++, k--)
>>> > + {
>>> > + result.m1 += (*k) * the_struct[u].m1;
>>> > + result.m2 += (*k) * the_struct[u].m2;
>>> > + result.m3 += (*k) * the_struct[u].m3;
>>> > + result.m4 += (*k) * the_struct[u].m4;
>>> > + }
>>> > + return bar (&result);
>>> > +}
>>> > diff --git a/gcc/testsuite/gcc.target/aarch64/pr110625_2.c b/gcc/testsuite/gcc.target/aarch64/pr110625_2.c
>>> > new file mode 100644
>>> > index 00000000000..7a84aa8355e
>>> > --- /dev/null
>>> > +++ b/gcc/testsuite/gcc.target/aarch64/pr110625_2.c
>>> > @@ -0,0 +1,14 @@
>>> > +/* { dg-do compile } */
>>> > +/* { dg-options "-Ofast -mcpu=neoverse-n2 -fdump-tree-vect-details -fno-tree-slp-vectorize" } */
>>> > +/* { dg-final { scan-tree-dump "reduction latency = 8" "vect" } } */
>>> > +
>>> > +/* The reduction latency should be multiplied by the count for
>>> > + single_defuse_cycle. */
>>> > +
>>> > +long
>>> > +f (long res, short *ptr1, short *ptr2, int n)
>>> > +{
>>> > + for (int i = 0; i < n; ++i)
>>> > + res += (long) ptr1[i] << ptr2[i];
>>> > + return res;
>>> > +}
Thread overview: 21+ messages
2023-07-19 4:33 Hao Liu OS
2023-07-24 1:58 ` Hao Liu OS
2023-07-24 11:10 ` Richard Sandiford
2023-07-25 9:10 ` Hao Liu OS
2023-07-25 9:44 ` Richard Sandiford
2023-07-26 2:01 ` Hao Liu OS
2023-07-26 8:47 ` Richard Biener
2023-07-26 9:14 ` Richard Sandiford
2023-07-26 10:02 ` Richard Biener
2023-07-26 10:12 ` Richard Sandiford
2023-07-26 12:00 ` Richard Biener
2023-07-26 12:54 ` Hao Liu OS
2023-07-28 10:06 ` Hao Liu OS
2023-07-28 17:35 ` Richard Sandiford
2023-07-31 2:39 ` Hao Liu OS [this message]
2023-07-31 9:11 ` Richard Sandiford
2023-07-31 9:25 ` Hao Liu OS
2023-08-01 9:43 ` Hao Liu OS
2023-08-02 3:45 ` Hao Liu OS
2023-08-03 9:33 ` Hao Liu OS
2023-08-03 10:10 ` Richard Sandiford