From: Kito Cheng <kito.cheng@sifive.com>
To: Robin Dapp <rdapp.gcc@gmail.com>
Cc: "juzhe.zhong@rivai.ai" <juzhe.zhong@rivai.ai>,
	Richard Biener <richard.guenther@gmail.com>,
	 gcc-patches <gcc-patches@gcc.gnu.org>,
	palmer <palmer@dabbelt.com>,  "kito.cheng" <kito.cheng@gmail.com>,
	jeffreyalaw <jeffreyalaw@gmail.com>,
	 "pan2.li" <pan2.li@intel.com>
Subject: Re: [PATCH] RISC-V: Basic VLS code gen for RISC-V
Date: Tue, 30 May 2023 17:16:59 +0800
Message-ID: <CALLt3TjOPZoEFsZuYajXwzonET06hDFtZ3dYggG9w6OvPSTDvw@mail.gmail.com>
In-Reply-To: <CALLt3Tic_G5uGvSDEntwB5jRKetzDgsn=+brqD28c1nmGb6KNA@mail.gmail.com>

One more note: we found a real case in SPEC 2006 where SLP converts two
8-bit values into an int8x2_t, but the value is live across a function
call. Only 16 bits need to be saved and restored, yet it becomes a
save/restore of VLEN bits because the backend uses a VLA mode. You can
imagine that as VLEN grows, the performance penalty grows with it, which
is the opposite of what we expect: larger VLEN should mean better
performance.
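
For reference, a minimal C sketch of the pattern described above
(hypothetical code, not the actual SPEC 2006 source; the function and
variable names are made up). Two adjacent 8-bit operations form an SLP
candidate that may be packed into an int8x2_t-style vector, and the
packed value is live across a call:

#include <stdint.h>

extern void bar (void);

void
foo (int8_t *restrict dst, const int8_t *restrict src)
{
  int8_t a = src[0] + 1;   /* Two adjacent 8-bit lanes: an SLP      */
  int8_t b = src[1] + 1;   /* candidate for a 2 x 8-bit vector.     */
  bar ();                  /* The packed value is live across the
                              call; only 16 bits really need to be
                              saved/restored, but a VLA mode spills
                              the whole VLEN-bit register.          */
  dst[0] = a;
  dst[1] = b;
}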

On Tue, May 30, 2023 at 5:11 PM Kito Cheng <kito.cheng@sifive.com> wrote:
>
> (I am still stuck in meeting hell and won't be free until much later;
> apologies for the short and incomplete reply, I will send a complete one later.)
>
> One reason for adding VLS mode support is SLP, especially for SLP
> candidates that are not inside a loop; those cases are handled better
> with VLS types.  Of course, using a sufficiently large VLA type can
> work too, but that runs into an issue we found for RISC-V in LLVM -
> it will spill/reload the whole register instead of the exact size.
>
> e.g.
>
> int32x4_t a;
> // def a
> // spill a
> foo ()
> // reload a
> // use a
>
> If we use a VLA mode for a, it will spill and reload the whole
> register in that VLA mode.
> Online demo here: https://godbolt.org/z/Y1fThbxE6
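
A compilable variant of the quoted sketch (hypothetical code, not the
exact godbolt example; the name use_across_call is made up), using a
fixed-length GNU vector type:

typedef int int32x4_t __attribute__ ((vector_size (16)));

extern void foo (void);

int32x4_t
use_across_call (int32x4_t a)
{
  a = a + a;   /* def a */
  foo ();      /* a is spilled before the call and reloaded after it;
                  with a VLA mode the spill/reload covers the whole
                  VLEN-bit register instead of just these 16 bytes.  */
  return a;    /* use a */
}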
>
> On Tue, May 30, 2023 at 5:05 PM Robin Dapp <rdapp.gcc@gmail.com> wrote:
> >
> > >>> but ideally the user would be able to specify -mrvv-size=32 for an
> > >>> implementation with 32 byte vectors and then vector lowering would make use
> > >>> of vectors up to 32 bytes?
> > >
> > > Actually, we don't want to have to specify -mrvv-size=32 just to enable vectorization of GNU vectors.
> > > You can take a look at this example:
> > > https://godbolt.org/z/3jYqoM84h
> > >
> > > GCC needs -mrvv-size to be specified in order to enable GNU vectors, and the resulting codegen can only run on a CPU with vector length = 128 bits.
> > > However, LLVM doesn't need the vector length to be specified, and its codegen can run on any CPU with RVV vector length >= 128 bits.
> > >
> > > This is what this patch wants to do.
> > >
> > > Thanks.
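
For reference, the kind of fixed-length GNU vector code under discussion
(a generic sketch, not the exact godbolt example above). The question is
whether GCC should require an option like -mrvv-size to emit RVV code
for it, or generate VLS code that runs on any implementation with
VLEN >= 128 bits:

typedef int v4si __attribute__ ((vector_size (16)));

void
vadd (v4si *restrict c, const v4si *restrict a, const v4si *restrict b)
{
  *c = *a + *b;
}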
> > I think Richard's question was rather whether it wouldn't be better to do
> > it more generically and lower vectors to what either the current CPU
> > provides or what the user specified, rather than just 16-byte vectors
> > (i.e. indeed a fixed vlmin and not a fixed vlmin == fixed vlmax).
> >
> > This patch assumes everything is fixed for optimization purposes and then
> > switches over to variable-length when nothing can be changed anymore.  That
> > is, we would work on "vlmin"-sized chunks in a VLA fashion at runtime?
> > We would need to make sure that no pass after reload makes use of VLA
> > properties at all.
> >
> > In general I don't have a good overview of which optimizations we gain by
> > such an approach, or rather which ones VLA prevents altogether.
> > What's the idea for the future?  Still use LEN_LOAD et al. (and masking)
> > with "fixed vlmin"?  Wouldn't we select different IVs with this patch than
> > what we would have for pure VLA?
> >
> > Regards
> >  Robin
