public inbox for gcc-patches@gcc.gnu.org
From: Richard Biener <richard.guenther@gmail.com>
To: Richard Biener <richard.guenther@gmail.com>,
	Hao Liu OS <hliu@os.amperecomputing.com>,
	 "GCC-patches@gcc.gnu.org" <gcc-patches@gcc.gnu.org>,
	richard.sandiford@arm.com
Subject: Re: [PATCH] AArch64: Do not increase the vect reduction latency by multiplying count [PR110625]
Date: Wed, 26 Jul 2023 12:02:01 +0200	[thread overview]
Message-ID: <CAFiYyc0F-vvXpu_UUCs_=poJw=m5Rzdgbnb8NjQVbpDZ6frYYA@mail.gmail.com> (raw)
In-Reply-To: <mptv8e7c9wi.fsf@arm.com>

On Wed, Jul 26, 2023 at 11:14 AM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> Richard Biener <richard.guenther@gmail.com> writes:
> > On Wed, Jul 26, 2023 at 4:02 AM Hao Liu OS via Gcc-patches
> > <gcc-patches@gcc.gnu.org> wrote:
> >>
> >> > When was STMT_VINFO_REDUC_DEF empty?  I just want to make sure that we're not papering over an issue elsewhere.
> >>
> >> Yes, I also wonder if this is an issue in vectorizable_reduction.  Below is the gimple of "gcc.target/aarch64/sve/cost_model_13.c":
> >>
> >>   <bb 3>:
> >>   # res_18 = PHI <res_15(7), 0(6)>
> >>   # i_20 = PHI <i_16(7), 0(6)>
> >>   _1 = (long unsigned int) i_20;
> >>   _2 = _1 * 2;
> >>   _3 = x_14(D) + _2;
> >>   _4 = *_3;
> >>   _5 = (unsigned short) _4;
> >>   res.0_6 = (unsigned short) res_18;
> >>   _7 = _5 + res.0_6;                             <-- The current stmt_info
> >>   res_15 = (short int) _7;
> >>   i_16 = i_20 + 1;
> >>   if (n_11(D) > i_16)
> >>     goto <bb 7>;
> >>   else
> >>     goto <bb 4>;
> >>
> >>   <bb 7>:
> >>   goto <bb 3>;
> >>
> >> It looks like STMT_VINFO_REDUC_DEF should be "res_18 = PHI <res_15(7), 0(6)>"?
> >> The status here is:
> >>   STMT_VINFO_REDUC_IDX (stmt_info): 1
> >>   STMT_VINFO_REDUC_TYPE (stmt_info): TREE_CODE_REDUCTION
> >>   STMT_VINFO_REDUC_VECTYPE (stmt_info): 0x0
> >
> > Not all stmts in the SSA cycle forming the reduction have
> > STMT_VINFO_REDUC_DEF set; only the last (latch def) and live stmts
> > have it at the moment.
>
> Ah, thanks.  In that case, Hao, I think we can avoid the ICE by changing:
>
>   if ((kind == scalar_stmt || kind == vector_stmt || kind == vec_to_scalar)
>       && vect_is_reduction (stmt_info))
>
> to:
>
>   if ((kind == scalar_stmt || kind == vector_stmt || kind == vec_to_scalar)
>       && STMT_VINFO_LIVE_P (stmt_info)
>       && vect_is_reduction (stmt_info))
>
> instead of using a null check.

But as seen, you will then miss stmts that are part of the reduction?
In principle we could set STMT_VINFO_REDUC_DEF on the other stmts
as well.  See the

  while (reduc_def != PHI_RESULT (reduc_def_phi))

loop in vectorizable_reduction.
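
Roughly something like this inside that loop (an untested sketch; the
helper and variable names are recalled from tree-vect-loop.cc and may
need adjusting, and the real loop does more checking than shown):

  while (reduc_def != PHI_RESULT (reduc_def_phi))
    {
      stmt_vec_info def = loop_vinfo->lookup_def (reduc_def);
      stmt_vec_info vdef = vect_stmt_to_vectorize (def);
      /* Record the reduction PHI on every stmt in the SSA cycle,
         not just on the latch def and live stmts.  */
      STMT_VINFO_REDUC_DEF (vdef) = phi_info;
      gimple_match_op op;
      if (!gimple_extract_op (vdef->stmt, &op))
        break;
      reduc_def = op.ops[STMT_VINFO_REDUC_IDX (vdef)];
    }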

> I see that vectorizable_reduction calculates a reduc_chain_length.
> Would it be OK to store that in the stmt_vec_info?  I suppose the
> AArch64 code should be multiplying by that as well.  (It would be a
> separate patch from this one though.)

I don't think that's too relevant here (it also counts noop conversions).

Richard.

>
> Richard
>
>
> >
> > Richard.
> >
> >> Thanks,
> >> Hao
> >>
> >> ________________________________________
> >> From: Richard Sandiford <richard.sandiford@arm.com>
> >> Sent: Tuesday, July 25, 2023 17:44
> >> To: Hao Liu OS
> >> Cc: GCC-patches@gcc.gnu.org
> >> Subject: Re: [PATCH] AArch64: Do not increase the vect reduction latency by multiplying count [PR110625]
> >>
> >> Hao Liu OS <hliu@os.amperecomputing.com> writes:
> >> > Hi,
> >> >
> >> > Thanks for the suggestion.  I tested it and found a gcc_assert failure:
> >> >     gcc.target/aarch64/sve/cost_model_13.c (internal compiler error: in info_for_reduction, at tree-vect-loop.cc:5473)
> >> >
> >> > It is caused by empty STMT_VINFO_REDUC_DEF.
> >>
> >> When was STMT_VINFO_REDUC_DEF empty?  I just want to make sure that
> >> we're not papering over an issue elsewhere.
> >>
> >> Thanks,
> >> Richard
> >>
> >> >   So, I added an extra check before checking single_defuse_cycle. The updated patch is below.  Is it OK for trunk?
> >> >
> >> > ---
> >> >
> >> > The new costs should multiply the reduction latency by count only for
> >> > single_defuse_cycle reductions.  For other situations, doing so increases the
> >> > reduction latency a lot and causes missed vectorization opportunities.
> >> >
> >> > Tested on aarch64-linux-gnu.
> >> >
> >> > gcc/ChangeLog:
> >> >
> >> >       PR target/110625
> >> >       * config/aarch64/aarch64.cc (count_ops): Only '* count' for
> >> >       single_defuse_cycle while counting reduction_latency.
> >> >
> >> > gcc/testsuite/ChangeLog:
> >> >
> >> >       * gcc.target/aarch64/pr110625_1.c: New testcase.
> >> >       * gcc.target/aarch64/pr110625_2.c: New testcase.
> >> > ---
> >> >  gcc/config/aarch64/aarch64.cc                 | 13 ++++--
> >> >  gcc/testsuite/gcc.target/aarch64/pr110625_1.c | 46 +++++++++++++++++++
> >> >  gcc/testsuite/gcc.target/aarch64/pr110625_2.c | 14 ++++++
> >> >  3 files changed, 69 insertions(+), 4 deletions(-)
> >> >  create mode 100644 gcc/testsuite/gcc.target/aarch64/pr110625_1.c
> >> >  create mode 100644 gcc/testsuite/gcc.target/aarch64/pr110625_2.c
> >> >
> >> > diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc
> >> > index 560e5431636..478a4e00110 100644
> >> > --- a/gcc/config/aarch64/aarch64.cc
> >> > +++ b/gcc/config/aarch64/aarch64.cc
> >> > @@ -16788,10 +16788,15 @@ aarch64_vector_costs::count_ops (unsigned int count, vect_cost_for_stmt kind,
> >> >      {
> >> >        unsigned int base
> >> >       = aarch64_in_loop_reduction_latency (m_vinfo, stmt_info, m_vec_flags);
> >> > -
> >> > -      /* ??? Ideally we'd do COUNT reductions in parallel, but unfortunately
> >> > -      that's not yet the case.  */
> >> > -      ops->reduction_latency = MAX (ops->reduction_latency, base * count);
> >> > +      if (STMT_VINFO_REDUC_DEF (stmt_info)
> >> > +       && STMT_VINFO_FORCE_SINGLE_CYCLE (
> >> > +         info_for_reduction (m_vinfo, stmt_info)))
> >> > +     /* ??? Ideally we'd use a tree to reduce the copies down to 1 vector,
> >> > +        and then accumulate that, but at the moment the loop-carried
> >> > +        dependency includes all copies.  */
> >> > +     ops->reduction_latency = MAX (ops->reduction_latency, base * count);
> >> > +      else
> >> > +     ops->reduction_latency = MAX (ops->reduction_latency, base);
> >> >      }
> >> >
> >> >    /* Assume that multiply-adds will become a single operation.  */
> >> > diff --git a/gcc/testsuite/gcc.target/aarch64/pr110625_1.c b/gcc/testsuite/gcc.target/aarch64/pr110625_1.c
> >> > new file mode 100644
> >> > index 00000000000..0965cac33a0
> >> > --- /dev/null
> >> > +++ b/gcc/testsuite/gcc.target/aarch64/pr110625_1.c
> >> > @@ -0,0 +1,46 @@
> >> > +/* { dg-do compile } */
> >> > +/* { dg-options "-Ofast -mcpu=neoverse-n2 -fdump-tree-vect-details -fno-tree-slp-vectorize" } */
> >> > +/* { dg-final { scan-tree-dump-not "reduction latency = 8" "vect" } } */
> >> > +
> >> > +/* Do not increase the vector body cost due to the incorrect reduction latency
> >> > +    Original vector body cost = 51
> >> > +    Scalar issue estimate:
> >> > +      ...
> >> > +      reduction latency = 2
> >> > +      estimated min cycles per iteration = 2.000000
> >> > +      estimated cycles per vector iteration (for VF 2) = 4.000000
> >> > +    Vector issue estimate:
> >> > +      ...
> >> > +      reduction latency = 8      <-- Too large
> >> > +      estimated min cycles per iteration = 8.000000
> >> > +    Increasing body cost to 102 because scalar code would issue more quickly
> >> > +      ...
> >> > +    missed:  cost model: the vector iteration cost = 102 divided by the scalar iteration cost = 44 is greater or equal to the vectorization factor = 2.
> >> > +    missed:  not vectorized: vectorization not profitable.  */
> >> > +
> >> > +typedef struct
> >> > +{
> >> > +  unsigned short m1, m2, m3, m4;
> >> > +} the_struct_t;
> >> > +typedef struct
> >> > +{
> >> > +  double m1, m2, m3, m4, m5;
> >> > +} the_struct2_t;
> >> > +
> >> > +double
> >> > +bar (the_struct2_t *);
> >> > +
> >> > +double
> >> > +foo (double *k, unsigned int n, the_struct_t *the_struct)
> >> > +{
> >> > +  unsigned int u;
> >> > +  the_struct2_t result;
> >> > +  for (u = 0; u < n; u++, k--)
> >> > +    {
> >> > +      result.m1 += (*k) * the_struct[u].m1;
> >> > +      result.m2 += (*k) * the_struct[u].m2;
> >> > +      result.m3 += (*k) * the_struct[u].m3;
> >> > +      result.m4 += (*k) * the_struct[u].m4;
> >> > +    }
> >> > +  return bar (&result);
> >> > +}
> >> > diff --git a/gcc/testsuite/gcc.target/aarch64/pr110625_2.c b/gcc/testsuite/gcc.target/aarch64/pr110625_2.c
> >> > new file mode 100644
> >> > index 00000000000..7a84aa8355e
> >> > --- /dev/null
> >> > +++ b/gcc/testsuite/gcc.target/aarch64/pr110625_2.c
> >> > @@ -0,0 +1,14 @@
> >> > +/* { dg-do compile } */
> >> > +/* { dg-options "-Ofast -mcpu=neoverse-n2 -fdump-tree-vect-details -fno-tree-slp-vectorize" } */
> >> > +/* { dg-final { scan-tree-dump "reduction latency = 8" "vect" } } */
> >> > +
> >> > +/* The reduction latency should be multiplied by the count for
> >> > +   single_defuse_cycle.  */
> >> > +
> >> > +long
> >> > +f (long res, short *ptr1, short *ptr2, int n)
> >> > +{
> >> > +  for (int i = 0; i < n; ++i)
> >> > +    res += (long) ptr1[i] << ptr2[i];
> >> > +  return res;
> >> > +}


Thread overview: 21+ messages
2023-07-19  4:33 Hao Liu OS
2023-07-24  1:58 ` Hao Liu OS
2023-07-24 11:10 ` Richard Sandiford
2023-07-25  9:10   ` Hao Liu OS
2023-07-25  9:44     ` Richard Sandiford
2023-07-26  2:01       ` Hao Liu OS
2023-07-26  8:47         ` Richard Biener
2023-07-26  9:14           ` Richard Sandiford
2023-07-26 10:02             ` Richard Biener [this message]
2023-07-26 10:12               ` Richard Sandiford
2023-07-26 12:00                 ` Richard Biener
2023-07-26 12:54             ` Hao Liu OS
2023-07-28 10:06               ` Hao Liu OS
2023-07-28 17:35               ` Richard Sandiford
2023-07-31  2:39                 ` Hao Liu OS
2023-07-31  9:11                   ` Richard Sandiford
2023-07-31  9:25                     ` Hao Liu OS
2023-08-01  9:43                     ` Hao Liu OS
2023-08-02  3:45                       ` Hao Liu OS
2023-08-03  9:33                         ` Hao Liu OS
2023-08-03 10:10                         ` Richard Sandiford
