public inbox for gcc-bugs@sourceware.org
From: "rguenth at gcc dot gnu.org" <gcc-bugzilla@gcc.gnu.org>
To: gcc-bugs@gcc.gnu.org
Subject: [Bug target/110625] [AArch64] Vect: SLP fails to vectorize a loop as the reduction_latency calculated by new costs is too large
Date: Tue, 11 Jul 2023 10:41:44 +0000	[thread overview]
Message-ID: <bug-110625-4-AhYmgXI4sC@http.gcc.gnu.org/bugzilla/> (raw)
In-Reply-To: <bug-110625-4@http.gcc.gnu.org/bugzilla/>

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=110625

Richard Biener <rguenth at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Target|                            |aarch64
           Keywords|                            |missed-optimization

--- Comment #1 from Richard Biener <rguenth at gcc dot gnu.org> ---
Well, I think 'count' is handled correctly even for SLP.  Given we accumulate
'short' to 'double' we likely perform 'count' adds to the m's here, and those
are chained in a simple way.  We specifically avoid creating more reduction
variables, if possible, because of register pressure issues, both with and
without SLP.  Note that when you have, for example, three scalar reductions
we will up the number of IVs to use with SLP, so using 'count' isn't always
100% accurate, but in the case of the testcase it should be.

But I'm not sure what "reduction-latency" tries to measure.
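
For illustration only (the exact testcase from the bug report is not
reproduced here; the function and variable names below are made up), the
kind of reduction being discussed looks roughly like this GNU C sketch,
where a 'short' input is accumulated into a 'double':

double
sum (const short *a, int count)
{
  double m = 0.0;
  /* Each iteration converts one 'short' element and adds it to the single
     accumulator 'm', so the adds form one dependence chain of length
     'count' rather than several independent reduction variables.  */
  for (int i = 0; i < count; i++)
    m += a[i];
  return m;
}

With several independent scalar reductions in the loop body, SLP may
instead use more accumulator IVs, which is the case where 'count' stops
being exact.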

Thread overview: 27+ messages
2023-07-11  9:15 [Bug target/110625] New: " hliu at amperecomputing dot com
2023-07-11 10:41 ` rguenth at gcc dot gnu.org [this message]
2023-07-12  0:58 ` [Bug target/110625] " hliu at amperecomputing dot com
2023-07-14  8:58 ` hliu at amperecomputing dot com
2023-07-18 10:41 ` rsandifo at gcc dot gnu.org
2023-07-18 12:03 ` rsandifo at gcc dot gnu.org
2023-07-19  2:57 ` hliu at amperecomputing dot com
2023-07-19  8:55 ` rsandifo at gcc dot gnu.org
2023-07-19  9:36 ` hliu at amperecomputing dot com
2023-07-28 16:50 ` rsandifo at gcc dot gnu.org
2023-07-28 16:53 ` rsandifo at gcc dot gnu.org
2023-07-31  2:45 ` hliu at amperecomputing dot com
2023-07-31 12:56 ` cvs-commit at gcc dot gnu.org
2023-08-01  9:09 ` tnfchris at gcc dot gnu.org
2023-08-01  9:19 ` tnfchris at gcc dot gnu.org
2023-08-01  9:45 ` hliu at amperecomputing dot com
2023-08-01  9:49 ` tnfchris at gcc dot gnu.org
2023-08-01  9:50 ` hliu at amperecomputing dot com
2023-08-01 13:49 ` tnfchris at gcc dot gnu.org
2023-08-02  3:49 ` hliu at amperecomputing dot com
2023-08-04  2:34 ` cvs-commit at gcc dot gnu.org
2023-12-08 10:20 ` [Bug target/110625] [14 Regression][AArch64] " tnfchris at gcc dot gnu.org
2023-12-12 16:26 ` tnfchris at gcc dot gnu.org
2023-12-26 14:55 ` tnfchris at gcc dot gnu.org
2023-12-29 15:59 ` cvs-commit at gcc dot gnu.org
2023-12-29 16:16 ` tnfchris at gcc dot gnu.org
2023-12-30  8:34 ` hliu at amperecomputing dot com
