From: Richard Biener <richard.guenther@gmail.com>
To: "juzhe.zhong@rivai.ai" <juzhe.zhong@rivai.ai>
Cc: Robin Dapp <rdapp.gcc@gmail.com>,
"Kito.cheng" <kito.cheng@sifive.com>,
gcc-patches <gcc-patches@gcc.gnu.org>,
palmer <palmer@dabbelt.com>, "kito.cheng" <kito.cheng@gmail.com>,
jeffreyalaw <jeffreyalaw@gmail.com>,
"pan2.li" <pan2.li@intel.com>
Subject: Re: Re: [PATCH] RISC-V: Basic VLS code gen for RISC-V
Date: Tue, 30 May 2023 11:29:47 +0200 [thread overview]
Message-ID: <CAFiYyc2gEewszbnrTLjLYVzCVruSZxXZFzWKaiGhLfbYvU+hNw@mail.gmail.com> (raw)
In-Reply-To: <FAC899C5E263E4F9+20230530171648030743348@rivai.ai>
On Tue, May 30, 2023 at 11:17 AM juzhe.zhong@rivai.ai
<juzhe.zhong@rivai.ai> wrote:
>
> In the future, we will definitely mix VLA and VLS-vlmin together in codegen, and it will not cause any issues.
> For VLS-vlmin, I prefer that it be used in length-style auto-vectorization (I am not sure yet, since my SELECT_VL patch is not
> finished; I will check whether this works while I am working on the SELECT_VL patch).
For the future it would then be good to have the vectorizer re-vectorize
loops with VLS vector uses into VLA style? I think there's a PR with a draft
patch (from me) attached somewhere from a few years ago. Currently the
vectorizer gives up when it sees vector operations in a loop, but ideally
those should simply be SLPed.
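To make the situation concrete, here is a minimal sketch (function names hypothetical) of the kind of loop meant above: a scalar loop whose body already operates on fixed-size GNU vectors. Today the vectorizer gives up on such loops; ideally it would treat the vector statements as SLP groups instead.

```c
/* A scalar loop over fixed-size GNU vectors.  The loop vectorizer
   currently bails out on the vector statements in the body; ideally
   they would be handled as SLP groups. */
typedef int v4si __attribute__ ((vector_size (16)));

void
vadd_loop (v4si *restrict dst, const v4si *restrict src, int n)
{
  for (int i = 0; i < n; i++)
    dst[i] += src[i];   /* element-wise vector add inside a scalar loop */
}
```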
> >> In general I don't have a good overview of which optimizations we gain by
> >> such an approach or rather which ones are prevented by VLA altogether?
> The VLS modes from these patches can help with SLP auto-vectorization.
>
> ________________________________
> juzhe.zhong@rivai.ai
>
>
> From: Robin Dapp
> Date: 2023-05-30 17:05
> To: juzhe.zhong@rivai.ai; Richard Biener; Kito.cheng
> CC: rdapp.gcc; gcc-patches; palmer; kito.cheng; jeffreyalaw; pan2.li
> Subject: Re: [PATCH] RISC-V: Basic VLS code gen for RISC-V
> >>> but ideally the user would be able to specify -mrvv-size=32 for an
> >>> implementation with 32 byte vectors and then vector lowering would make use
> >>> of vectors up to 32 bytes?
> >
> > Actually, we don't want to have to specify -mrvv-size=32 to enable vectorization of GNU vectors.
> > You can take a look this example:
> > https://godbolt.org/z/3jYqoM84h
> >
> > GCC needs the mrvv size to be specified to enable GNU vectors, and the generated code can only run on a CPU with vector length = 128 bits.
> > However, LLVM doesn't need the vector length to be specified, and its codegen can run on any CPU with RVV vector length >= 128 bits.
> >
> > This is what this patch wants to address.
> >
> > Thanks.
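For reference, the code in question is GNU-vector-extension code along the lines of the following minimal sketch (the actual godbolt example may differ). The point of comparison is that LLVM can lower this to VL-agnostic RVV code, while GCC currently needs the RVV vector length to be specified:

```c
/* Fixed-size GNU vector code: an element-wise add of two 8 x int32
   vectors.  LLVM emits VL-agnostic RVV code for this; GCC currently
   requires the vector length to be fixed at compile time. */
typedef int v8si __attribute__ ((vector_size (32)));

v8si
vadd (v8si a, v8si b)
{
  return a + b;   /* element-wise addition */
}
```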
> I think Richard's question was rather whether it wouldn't be better to do it
> more generically and lower vectors to what either the current CPU or the
> user specified, rather than just 16-byte vectors (i.e. indeed a fixed
> vlmin and not a fixed vlmin == fixed vlmax).
>
> This patch assumes everything is fixed for optimization purposes and then
> switches over to variable-length when nothing can be changed anymore. That
> is, we would work on "vlmin"-sized chunks in a VLA fashion at runtime?
> We would need to make sure that no pass after reload makes use of VLA
> properties at all.
>
> In general I don't have a good overview of which optimizations we gain by
> such an approach or rather which ones are prevented by VLA altogether?
> What's the idea for the future? Still use LEN_LOAD et al. (and masking)
> with "fixed vlmin"? Wouldn't we select different IVs with this patch than
> what we would have for pure VLA?
>
> Regards
> Robin
>
Thread overview: 19+ messages
2023-05-30 6:06 Kito Cheng
2023-05-30 6:32 ` juzhe.zhong
2023-05-30 6:51 ` Kito Cheng
2023-05-30 6:59 ` juzhe.zhong
2023-05-30 7:13 ` Richard Biener
2023-05-30 7:45 ` juzhe.zhong
2023-05-30 9:05 ` Robin Dapp
2023-05-30 9:11 ` Kito Cheng
2023-05-30 9:16 ` Kito Cheng
2023-05-30 9:16 ` juzhe.zhong
2023-05-30 9:29 ` Richard Biener [this message]
2023-05-30 9:37 ` juzhe.zhong
2023-05-30 9:44 ` juzhe.zhong
2023-05-30 15:45 ` Kito Cheng
2023-05-30 23:19 ` 钟居哲
[not found] ` <DC99791C4B2B4D40+106F137E-2B0D-4732-A7C5-8EE0242F9F5A@rivai.ai>
2023-06-12 23:34 ` Jeff Law
[not found] ` <529320C359BE5467+690CDE73-D54E-48E2-81C4-B742060D7F28@rivai.ai>
2023-06-13 16:10 ` Jeff Law
2023-05-30 7:27 ` Robin Dapp
2023-05-30 7:40 ` juzhe.zhong