From: "Li, Pan2" <pan2.li@intel.com>
To: Kito Cheng <kito.cheng@gmail.com>
Cc: "juzhe.zhong@rivai.ai" <juzhe.zhong@rivai.ai>,
gcc-patches <gcc-patches@gcc.gnu.org>,
Kito.cheng <kito.cheng@sifive.com>,
"Wang, Yanzhang" <yanzhang.wang@intel.com>
Subject: RE: [PATCH v2] RISC-V: Add test cases for the RVV mask insn shortcut.
Date: Fri, 14 Apr 2023 06:47:09 +0000 [thread overview]
Message-ID: <MW5PR11MB5908C29CFBA8B97206AA3E00A9999@MW5PR11MB5908.namprd11.prod.outlook.com> (raw)
In-Reply-To: <CA+yXCZCyC-aSDJN-a7xtFpSikokS3KXfS5fcFcJZ-ixJV3UbBQ@mail.gmail.com>
You're very welcome!
It looks like vmorn(v, v) doesn't get any shortcut, while vmandn(v, v) is converted to vmclr upstream. As I understand it, there should be no difference between vmORn and vmANDn except for the operator itself, so I will take a look at the RTL CSE pass for more details, 😊!
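For reference, the boolean identities behind these shortcuts can be checked with a small sketch. The mapping from each identity to an RVV mask insn in the comments is my own illustration of the expected shortcuts, not GCC's actual RTL transformation:

```python
def mask_identities(v: bool) -> dict:
    """Evaluate each vmXXX.mm(v, v) form on a single mask bit."""
    return {
        "vmand":  v and v,          # == v        -> plain copy
        "vmnand": not (v and v),    # == not v    -> complement
        "vmandn": v and (not v),    # always 0    -> vmclr candidate
        "vmxor":  v != v,           # always 0    -> vmclr candidate
        "vmor":   v or v,           # == v        -> plain copy
        "vmnor":  not (v or v),     # == not v    -> complement
        "vmorn":  v or (not v),     # always 1    -> vmset candidate
        "vmxnor": v == v,           # always 1    -> vmset candidate
    }

# vmorn(v, v) is constant all-ones for either bit value, i.e. the same
# class of shortcut as vmandn(v, v) -> vmclr, just toward vmset instead.
for v in (False, True):
    r = mask_identities(v)
    assert r["vmandn"] is False and r["vmxor"] is False
    assert r["vmorn"] is True and r["vmxnor"] is True
```

This is consistent with the test expectations below: vmxor/vmandn collapse to vmclr, vmxnor collapses to vmset, while vmorn currently survives 7 times.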
Pan
-----Original Message-----
From: Kito Cheng <kito.cheng@gmail.com>
Sent: Friday, April 14, 2023 2:42 PM
To: Li, Pan2 <pan2.li@intel.com>
Cc: juzhe.zhong@rivai.ai; gcc-patches <gcc-patches@gcc.gnu.org>; Kito.cheng <kito.cheng@sifive.com>; Wang, Yanzhang <yanzhang.wang@intel.com>
Subject: Re: [PATCH v2] RISC-V: Add test cases for the RVV mask insn shortcut.
OK, thanks for the patch :)
On Fri, Apr 14, 2023 at 11:27 AM Li, Pan2 via Gcc-patches <gcc-patches@gcc.gnu.org> wrote:
>
> Thanks Juzhe. I have updated to a new version, [PATCH v3], with even more checks.
>
> Pan
>
> From: juzhe.zhong@rivai.ai <juzhe.zhong@rivai.ai>
> Sent: Friday, April 14, 2023 10:46 AM
> To: Li, Pan2 <pan2.li@intel.com>; gcc-patches
> <gcc-patches@gcc.gnu.org>
> Cc: Kito.cheng <kito.cheng@sifive.com>; Wang, Yanzhang
> <yanzhang.wang@intel.com>; Li, Pan2 <pan2.li@intel.com>
> Subject: Re: [PATCH v2] RISC-V: Add test cases for the RVV mask insn shortcut.
>
> LGTM. Let's wait for more comments from Kito.
>
> ________________________________
> juzhe.zhong@rivai.ai<mailto:juzhe.zhong@rivai.ai>
>
> From: pan2.li<mailto:pan2.li@intel.com>
> Date: 2023-04-14 10:45
> To: gcc-patches<mailto:gcc-patches@gcc.gnu.org>
> CC: juzhe.zhong<mailto:juzhe.zhong@rivai.ai>;
> kito.cheng<mailto:kito.cheng@sifive.com>;
> yanzhang.wang<mailto:yanzhang.wang@intel.com>;
> pan2.li<mailto:pan2.li@intel.com>
> Subject: [PATCH v2] RISC-V: Add test cases for the RVV mask insn shortcut.
> From: Pan Li <pan2.li@intel.com<mailto:pan2.li@intel.com>>
>
> There are several kinds of shortcut codegen for the RVV mask insns.
> For example:
>
> vmxor vd, va, va => vmclr vd
>
> We would like to add more optimizations like this, but first of all
> we must add tests for the existing shortcuts, to ensure that further
> work does not break the optimizations already in place.
>
> gcc/testsuite/ChangeLog:
>
> * gcc.target/riscv/rvv/base/mask_insn_shortcut.c: New test.
>
> Signed-off-by: Pan Li <pan2.li@intel.com<mailto:pan2.li@intel.com>>
> ---
> .../riscv/rvv/base/mask_insn_shortcut.c | 239 ++++++++++++++++++
> 1 file changed, 239 insertions(+)
> create mode 100644
> gcc/testsuite/gcc.target/riscv/rvv/base/mask_insn_shortcut.c
>
> diff --git
> a/gcc/testsuite/gcc.target/riscv/rvv/base/mask_insn_shortcut.c
> b/gcc/testsuite/gcc.target/riscv/rvv/base/mask_insn_shortcut.c
> new file mode 100644
> index 00000000000..efc3af39fc3
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/mask_insn_shortcut.c
> @@ -0,0 +1,239 @@
> +/* { dg-do compile } */
> +/* { dg-options "-march=rv64gcv -mabi=lp64 -O3" } */
> +
> +#include "riscv_vector.h"
> +
> +vbool1_t test_shortcut_for_riscv_vmand_case_0(vbool1_t v1, size_t vl) {
> +  return __riscv_vmand_mm_b1(v1, v1, vl);
> +}
> +
> +vbool2_t test_shortcut_for_riscv_vmand_case_1(vbool2_t v1, size_t vl) {
> +  return __riscv_vmand_mm_b2(v1, v1, vl);
> +}
> +
> +vbool4_t test_shortcut_for_riscv_vmand_case_2(vbool4_t v1, size_t vl) {
> +  return __riscv_vmand_mm_b4(v1, v1, vl);
> +}
> +
> +vbool8_t test_shortcut_for_riscv_vmand_case_3(vbool8_t v1, size_t vl) {
> +  return __riscv_vmand_mm_b8(v1, v1, vl);
> +}
> +
> +vbool16_t test_shortcut_for_riscv_vmand_case_4(vbool16_t v1, size_t vl) {
> +  return __riscv_vmand_mm_b16(v1, v1, vl);
> +}
> +
> +vbool32_t test_shortcut_for_riscv_vmand_case_5(vbool32_t v1, size_t vl) {
> +  return __riscv_vmand_mm_b32(v1, v1, vl);
> +}
> +
> +vbool64_t test_shortcut_for_riscv_vmand_case_6(vbool64_t v1, size_t vl) {
> +  return __riscv_vmand_mm_b64(v1, v1, vl);
> +}
> +
> +vbool1_t test_shortcut_for_riscv_vmnand_case_0(vbool1_t v1, size_t vl) {
> +  return __riscv_vmnand_mm_b1(v1, v1, vl);
> +}
> +
> +vbool2_t test_shortcut_for_riscv_vmnand_case_1(vbool2_t v1, size_t vl) {
> +  return __riscv_vmnand_mm_b2(v1, v1, vl);
> +}
> +
> +vbool4_t test_shortcut_for_riscv_vmnand_case_2(vbool4_t v1, size_t vl) {
> +  return __riscv_vmnand_mm_b4(v1, v1, vl);
> +}
> +
> +vbool8_t test_shortcut_for_riscv_vmnand_case_3(vbool8_t v1, size_t vl) {
> +  return __riscv_vmnand_mm_b8(v1, v1, vl);
> +}
> +
> +vbool16_t test_shortcut_for_riscv_vmnand_case_4(vbool16_t v1, size_t vl) {
> +  return __riscv_vmnand_mm_b16(v1, v1, vl);
> +}
> +
> +vbool32_t test_shortcut_for_riscv_vmnand_case_5(vbool32_t v1, size_t vl) {
> +  return __riscv_vmnand_mm_b32(v1, v1, vl);
> +}
> +
> +vbool64_t test_shortcut_for_riscv_vmnand_case_6(vbool64_t v1, size_t vl) {
> +  return __riscv_vmnand_mm_b64(v1, v1, vl);
> +}
> +
> +vbool1_t test_shortcut_for_riscv_vmandn_case_0(vbool1_t v1, size_t vl) {
> +  return __riscv_vmandn_mm_b1(v1, v1, vl);
> +}
> +
> +vbool2_t test_shortcut_for_riscv_vmandn_case_1(vbool2_t v1, size_t vl) {
> +  return __riscv_vmandn_mm_b2(v1, v1, vl);
> +}
> +
> +vbool4_t test_shortcut_for_riscv_vmandn_case_2(vbool4_t v1, size_t vl) {
> +  return __riscv_vmandn_mm_b4(v1, v1, vl);
> +}
> +
> +vbool8_t test_shortcut_for_riscv_vmandn_case_3(vbool8_t v1, size_t vl) {
> +  return __riscv_vmandn_mm_b8(v1, v1, vl);
> +}
> +
> +vbool16_t test_shortcut_for_riscv_vmandn_case_4(vbool16_t v1, size_t vl) {
> +  return __riscv_vmandn_mm_b16(v1, v1, vl);
> +}
> +
> +vbool32_t test_shortcut_for_riscv_vmandn_case_5(vbool32_t v1, size_t vl) {
> +  return __riscv_vmandn_mm_b32(v1, v1, vl);
> +}
> +
> +vbool64_t test_shortcut_for_riscv_vmandn_case_6(vbool64_t v1, size_t vl) {
> +  return __riscv_vmandn_mm_b64(v1, v1, vl);
> +}
> +
> +vbool1_t test_shortcut_for_riscv_vmxor_case_0(vbool1_t v1, size_t vl) {
> +  return __riscv_vmxor_mm_b1(v1, v1, vl);
> +}
> +
> +vbool2_t test_shortcut_for_riscv_vmxor_case_1(vbool2_t v1, size_t vl) {
> +  return __riscv_vmxor_mm_b2(v1, v1, vl);
> +}
> +
> +vbool4_t test_shortcut_for_riscv_vmxor_case_2(vbool4_t v1, size_t vl) {
> +  return __riscv_vmxor_mm_b4(v1, v1, vl);
> +}
> +
> +vbool8_t test_shortcut_for_riscv_vmxor_case_3(vbool8_t v1, size_t vl) {
> +  return __riscv_vmxor_mm_b8(v1, v1, vl);
> +}
> +
> +vbool16_t test_shortcut_for_riscv_vmxor_case_4(vbool16_t v1, size_t vl) {
> +  return __riscv_vmxor_mm_b16(v1, v1, vl);
> +}
> +
> +vbool32_t test_shortcut_for_riscv_vmxor_case_5(vbool32_t v1, size_t vl) {
> +  return __riscv_vmxor_mm_b32(v1, v1, vl);
> +}
> +
> +vbool64_t test_shortcut_for_riscv_vmxor_case_6(vbool64_t v1, size_t vl) {
> +  return __riscv_vmxor_mm_b64(v1, v1, vl);
> +}
> +
> +vbool1_t test_shortcut_for_riscv_vmor_case_0(vbool1_t v1, size_t vl) {
> +  return __riscv_vmor_mm_b1(v1, v1, vl);
> +}
> +
> +vbool2_t test_shortcut_for_riscv_vmor_case_1(vbool2_t v1, size_t vl) {
> +  return __riscv_vmor_mm_b2(v1, v1, vl);
> +}
> +
> +vbool4_t test_shortcut_for_riscv_vmor_case_2(vbool4_t v1, size_t vl) {
> +  return __riscv_vmor_mm_b4(v1, v1, vl);
> +}
> +
> +vbool8_t test_shortcut_for_riscv_vmor_case_3(vbool8_t v1, size_t vl) {
> +  return __riscv_vmor_mm_b8(v1, v1, vl);
> +}
> +
> +vbool16_t test_shortcut_for_riscv_vmor_case_4(vbool16_t v1, size_t vl) {
> +  return __riscv_vmor_mm_b16(v1, v1, vl);
> +}
> +
> +vbool32_t test_shortcut_for_riscv_vmor_case_5(vbool32_t v1, size_t vl) {
> +  return __riscv_vmor_mm_b32(v1, v1, vl);
> +}
> +
> +vbool64_t test_shortcut_for_riscv_vmor_case_6(vbool64_t v1, size_t vl) {
> +  return __riscv_vmor_mm_b64(v1, v1, vl);
> +}
> +
> +vbool1_t test_shortcut_for_riscv_vmnor_case_0(vbool1_t v1, size_t vl) {
> +  return __riscv_vmnor_mm_b1(v1, v1, vl);
> +}
> +
> +vbool2_t test_shortcut_for_riscv_vmnor_case_1(vbool2_t v1, size_t vl) {
> +  return __riscv_vmnor_mm_b2(v1, v1, vl);
> +}
> +
> +vbool4_t test_shortcut_for_riscv_vmnor_case_2(vbool4_t v1, size_t vl) {
> +  return __riscv_vmnor_mm_b4(v1, v1, vl);
> +}
> +
> +vbool8_t test_shortcut_for_riscv_vmnor_case_3(vbool8_t v1, size_t vl) {
> +  return __riscv_vmnor_mm_b8(v1, v1, vl);
> +}
> +
> +vbool16_t test_shortcut_for_riscv_vmnor_case_4(vbool16_t v1, size_t vl) {
> +  return __riscv_vmnor_mm_b16(v1, v1, vl);
> +}
> +
> +vbool32_t test_shortcut_for_riscv_vmnor_case_5(vbool32_t v1, size_t vl) {
> +  return __riscv_vmnor_mm_b32(v1, v1, vl);
> +}
> +
> +vbool64_t test_shortcut_for_riscv_vmnor_case_6(vbool64_t v1, size_t vl) {
> +  return __riscv_vmnor_mm_b64(v1, v1, vl);
> +}
> +
> +vbool1_t test_shortcut_for_riscv_vmorn_case_0(vbool1_t v1, size_t vl) {
> +  return __riscv_vmorn_mm_b1(v1, v1, vl);
> +}
> +
> +vbool2_t test_shortcut_for_riscv_vmorn_case_1(vbool2_t v1, size_t vl) {
> +  return __riscv_vmorn_mm_b2(v1, v1, vl);
> +}
> +
> +vbool4_t test_shortcut_for_riscv_vmorn_case_2(vbool4_t v1, size_t vl) {
> +  return __riscv_vmorn_mm_b4(v1, v1, vl);
> +}
> +
> +vbool8_t test_shortcut_for_riscv_vmorn_case_3(vbool8_t v1, size_t vl) {
> +  return __riscv_vmorn_mm_b8(v1, v1, vl);
> +}
> +
> +vbool16_t test_shortcut_for_riscv_vmorn_case_4(vbool16_t v1, size_t vl) {
> +  return __riscv_vmorn_mm_b16(v1, v1, vl);
> +}
> +
> +vbool32_t test_shortcut_for_riscv_vmorn_case_5(vbool32_t v1, size_t vl) {
> +  return __riscv_vmorn_mm_b32(v1, v1, vl);
> +}
> +
> +vbool64_t test_shortcut_for_riscv_vmorn_case_6(vbool64_t v1, size_t vl) {
> +  return __riscv_vmorn_mm_b64(v1, v1, vl);
> +}
> +
> +vbool1_t test_shortcut_for_riscv_vmxnor_case_0(vbool1_t v1, size_t vl) {
> +  return __riscv_vmxnor_mm_b1(v1, v1, vl);
> +}
> +
> +vbool2_t test_shortcut_for_riscv_vmxnor_case_1(vbool2_t v1, size_t vl) {
> +  return __riscv_vmxnor_mm_b2(v1, v1, vl);
> +}
> +
> +vbool4_t test_shortcut_for_riscv_vmxnor_case_2(vbool4_t v1, size_t vl) {
> +  return __riscv_vmxnor_mm_b4(v1, v1, vl);
> +}
> +
> +vbool8_t test_shortcut_for_riscv_vmxnor_case_3(vbool8_t v1, size_t vl) {
> +  return __riscv_vmxnor_mm_b8(v1, v1, vl);
> +}
> +
> +vbool16_t test_shortcut_for_riscv_vmxnor_case_4(vbool16_t v1, size_t vl) {
> +  return __riscv_vmxnor_mm_b16(v1, v1, vl);
> +}
> +
> +vbool32_t test_shortcut_for_riscv_vmxnor_case_5(vbool32_t v1, size_t vl) {
> +  return __riscv_vmxnor_mm_b32(v1, v1, vl);
> +}
> +
> +vbool64_t test_shortcut_for_riscv_vmxnor_case_6(vbool64_t v1, size_t vl) {
> +  return __riscv_vmxnor_mm_b64(v1, v1, vl);
> +}
> +
> +/* { dg-final { scan-assembler-not {vmand\.mm\s+v[0-9]+,\s*v[0-9]+} } } */
> +/* { dg-final { scan-assembler-not {vmnand\.mm\s+v[0-9]+,\s*v[0-9]+} } } */
> +/* { dg-final { scan-assembler-not {vmandn\.mm\s+v[0-9]+,\s*v[0-9]+} } } */
> +/* { dg-final { scan-assembler-not {vmxor\.mm\s+v[0-9]+,\s*v[0-9]+} } } */
> +/* { dg-final { scan-assembler-not {vmor\.mm\s+v[0-9]+,\s*v[0-9]+} } } */
> +/* { dg-final { scan-assembler-not {vmnor\.mm\s+v[0-9]+,\s*v[0-9]+} } } */
> +/* { dg-final { scan-assembler-times {vmorn\.mm\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 7 } } */
> +/* { dg-final { scan-assembler-not {vmxnor\.mm\s+v[0-9]+,\s*v[0-9]+} } } */
> +/* { dg-final { scan-assembler-times {vmclr\.m\s+v[0-9]+} 14 } } */
> +/* { dg-final { scan-assembler-times {vmset\.m\s+v[0-9]+} 7 } } */
> --
> 2.34.1
>
>
Thread overview: 12+ messages
2023-04-14 2:32 [PATCH] " pan2.li
2023-04-14 2:35 ` juzhe.zhong
2023-04-14 2:37 ` Li, Pan2
2023-04-14 2:45 ` [PATCH v2] " pan2.li
2023-04-14 2:46 ` juzhe.zhong
2023-04-14 3:26 ` Li, Pan2
2023-04-14 6:41 ` Kito Cheng
2023-04-14 6:47 ` Li, Pan2 [this message]
2023-04-17 1:46 ` Li, Pan2
2023-04-17 1:52 ` Kito Cheng
2023-04-17 1:55 ` Li, Pan2
2023-04-14 3:25 ` [PATCH v3] " pan2.li