From: Richard Biener <richard.guenther@gmail.com>
To: Di Zhao OS <dizhao@os.amperecomputing.com>
Cc: "gcc-patches@gcc.gnu.org" <gcc-patches@gcc.gnu.org>
Subject: Re: [PATCH v4] [tree-optimization/110279] Consider FMA in get_reassociation_width
Date: Fri, 6 Oct 2023 11:33:06 +0200	[thread overview]
Message-ID: <CAFiYyc02ooZrRicHc1rDtjKuc2pHn_DwfBW441F-yTWE5UG-Ew@mail.gmail.com>
In-Reply-To: <SN6PR01MB4240D8EE68CF46689F9C243AE8F7A@SN6PR01MB4240.prod.exchangelabs.com>

On Thu, Sep 14, 2023 at 2:43 PM Di Zhao OS
<dizhao@os.amperecomputing.com> wrote:
>
> This is a new version of the patch on "nested FMA".
> Sorry for updating this after so long; I've been studying and
> writing micro cases to sort out the cause of the regression.

Sorry for taking so long to reply.

> First, following previous discussion:
> (https://gcc.gnu.org/pipermail/gcc-patches/2023-September/629080.html)
>
> 1. From testing more altered cases, I don't think the
> problem is that reassociation works locally.  Specifically:
>
>   1) On the example with multiplications:
>
>         tmp1 = a + c * c + d * d + x * y;
>         tmp2 = x * tmp1;
>         result += (a + c + d + tmp2);
>
>   Given "result" rewritten with width=2, the performance is
>   worse if we rewrite "tmp1" with width=2. In contrast, if we
>   remove the multiplications from the example (and make "tmp1"
>   not single-use), and still rewrite "result" with width=2, then
>   rewriting "tmp1" with width=2 is better. (This makes sense
>   because the tree's depth at "result" is still smaller if we
>   rewrite "tmp1".)
>
>   2) I tried to modify the assembly code of the example without
>   FMA, so that the width of "result" is 4. On Ampere1 there's no
>   obvious improvement. So although this is an interesting
>   problem, it doesn't seem to be the cause of the regression.

OK, I see.
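
Just to make sure we mean the same thing by "rewriting with width=2": my
reading is that the addition chain for "tmp1" gets split into two partial
sums that can issue in parallel and are combined at the end, roughly like
the sketch below (the temporaries are made up for illustration):

        /* width = 1: one serial chain.  */
        tmp1 = ((a + c * c) + d * d) + x * y;

        /* width = 2: two partial sums, combined by a final add.  */
        _p1  = a + c * c;
        _p2  = d * d + x * y;
        tmp1 = _p1 + _p2;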

> 2. From the assembly code of the case with FMA, one problem is
> that rewriting "tmp1" in parallel didn't decrease the minimum
> CPU cycles (taking MULT_EXPRs into account), but it increased
> code size, so the overhead increased.
>
>    a) When "tmp1" is not rewritten in parallel:
>         fmadd d31, d2, d2, d30
>         fmadd d31, d3, d3, d31
>         fmadd d31, d4, d5, d31  //"tmp1"
>         fmadd d31, d31, d4, d3
>
>    b) When "tmp1" is rewritten in parallel:
>         fmul  d31, d4, d5
>         fmadd d27, d2, d2, d30
>         fmadd d31, d3, d3, d31
>         fadd  d31, d31, d27     //"tmp1"
>         fmadd d31, d31, d4, d3
>
> For version a), there are 3 dependent FMAs to calculate "tmp1".
> For version b), there are also 3 dependent instructions in the
> longer path: the 1st, 3rd and 4th.

Yes, it doesn't really change anything.  The patch has

+  /* If there's code like "acc = a * b + c * d + acc" in a tight loop, some
+     uarchs can execute results like:
+
+       _1 = a * b;
+       _2 = .FMA (c, d, _1);
+       acc_1 = acc_0 + _2;
+
+     in parallel, while turning it into
+
+       _1 = .FMA(a, b, acc_0);
+       acc_1 = .FMA(c, d, _1);
+
+     hinders that, because then the first FMA depends on the result of
+     the preceding iteration.  */

I can't see what can be run in parallel for the first case.  The .FMA
depends on the multiplication a * b.  Iff the uarch somehow decomposes
.FMA into multiply + add then the c * d multiply could run in parallel
with the a * b multiply which _might_ be able to hide some of the
latency of the full .FMA.  Like on x86 Zen FMA has a latency of 4
cycles but a multiply only 3.  But I never got confirmation from any
of the CPU designers that .FMAs are issued when the multiply
operands are ready and the add operand can be forwarded.
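
If that assumption did hold, a rough timeline with made-up latencies
(multiply 3, dependent add 2, fused FMA 4 - numbers purely for
illustration) would look like

        cycle 0:  a * b starts;  c * d (inside the .FMA) starts
        cycle 3:  a * b done;    the .FMA's add can consume it
        cycle 5:  _2 ready       (vs. cycle 7 for 3 + 4 fully serial)

but that is exactly the part I have no confirmation for.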

I also wonder why the multiplications of the two-FMA sequence
then cannot be executed at the same time.  So I have some doubts
about the theory above.

Iff this really is the reason for the sequence to execute with lower
overall latency and we want to attack this on GIMPLE then I think
we need a target hook telling us this fact (I also wonder if such
behavior can be modeled in the scheduler pipeline description at all?)
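
Such a hook could be as simple as a per-mode yes/no query; the following
is only a sketch to make the idea concrete - the name and signature are
made up and nothing like it exists today:

        /* Hypothetical entry in target.def, defaulting to false.  */
        DEFHOOK
        (fma_issues_multiply_early,
         "Return true if the pipeline starts the multiply part of a fused\n\
        multiply-add as soon as the multiply operands are ready, so the\n\
        addend only needs to be forwarded for the final add.",
         bool, (machine_mode mode),
         hook_bool_mode_false)

get_reassociation_width could then query this to decide whether keeping
the multiply out of the FMA is worth anything at all.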

> So it seems to me that the current get_reassociation_width algorithm
> isn't optimal in the presence of FMA, so I modified the patch to
> improve get_reassociation_width rather than to check for code
> patterns. (There could be other complicating factors that make
> the regression more obvious when there's "nested FMA", but with
> this patch those should be avoided or reduced.)
>
> With this patch, the 508.namd_r 1-copy run shows a 7% improvement
> on Ampere1 and about 3% on Intel Xeon. While I'm still
> collecting data on other CPUs, I'd like to know what you
> think of this.
>
> About changes in the patch:
>
> 1. When the op list forms a complete FMA chain, try to search
> for a smaller width, considering the benefit of using FMA. With
> a smaller width, the increase in code size is smaller when
> breaking the chain.

But this is all highly target specific (code size even more so).

As I understand it, your approach to fixing the issue leads me to
suggest prioritizing the parallel rewriting instead, i.e. altering
rank_ops_for_fma to take the reassoc width into account (the computed
width should not be changed by rank_ops_for_fma), rather than "fixing up"
the parallel rewriting of FMAs (well, they are not yet formed at that
point, of course).  get_reassociation_width has 'get_required_cycles';
the above theory could be verified with a very simple toy pipeline model.
We'd have to ask the target for the reassoc width for MULT_EXPRs as well
(or maybe even FMA_EXPRs).
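
Something like the following toy would do - not the real
get_required_cycles, and the latencies are invented, but it shows the
kind of comparison I mean (critical path of n accumulated products,
chained FMAs vs. a width-2 split):

        /* Toy pipeline model, invented latencies.  */
        #include <stdio.h>

        #define FMA_LAT 4   /* assumed fused multiply-add latency */
        #define ADD_LAT 2   /* assumed add latency */

        static int chained_cycles (int n) { return n * FMA_LAT; }

        static int width2_cycles (int n)
        {
          /* Two FMA chains of about n/2 terms run in parallel,
             plus one add combining the partial sums.  */
          return (n + 1) / 2 * FMA_LAT + ADD_LAT;
        }

        int main (void)
        {
          for (int n = 2; n <= 6; n++)
            printf ("n=%d chained=%d width2=%d\n",
                    n, chained_cycles (n), width2_cycles (n));
          return 0;
        }

Plugging in per-target latencies would then tell us whether the split
form is ever a latency win at all.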

Taking the width of FMAs into account when computing the reassoc width
might be another way to attack this.

> 2. To avoid regressions, I included the other patch
> (https://gcc.gnu.org/pipermail/gcc-patches/2023-September/629203.html)
> on this tracker again. This is because more FMAs will be kept
> with 1., so we need to rule out the loop-dependent
> FMA chains when param_avoid_fma_max_bits is set.
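
Just to confirm I read this right: the loop-dependent case here is the
usual reduction shape, e.g.

        for (i = 0; i < n; i++)
          acc = a[i] * b[i] + acc;

where forming the FMA puts its whole latency on the loop-carried
dependence on "acc" - which is what the param_avoid_fma_max_bits logic
is meant to catch, if I understand the other patch correctly.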

Sorry again for taking so long to reply.

I'll note we have an odd case on x86 Zen2(?) as well, which we don't
really understand from a CPU behavior perspective.

Thanks,
Richard.

> Thanks,
> Di Zhao
>
> ----
>
>         PR tree-optimization/110279
>
> gcc/ChangeLog:
>
>         * tree-ssa-reassoc.cc (rank_ops_for_better_parallelism_p):
>         New function to check whether ranking the ops results in
>         better parallelism.
>         (get_reassociation_width): Add new parameters. Search for
>         smaller width considering the benefit of FMA.
>         (rank_ops_for_fma): Change return value to be number of
>         MULT_EXPRs.
>         (reassociate_bb): For 3 ops, refine the condition to call
>         swap_ops_for_binary_stmt.
>
> gcc/testsuite/ChangeLog:
>
>         * gcc.dg/pr110279.c: New test.
