public inbox for gcc-patches@gcc.gnu.org
From: "Kewen.Lin" <linkw@linux.ibm.com>
To: "juzhe.zhong@rivai.ai" <juzhe.zhong@rivai.ai>
Cc: gcc-patches <gcc-patches@gcc.gnu.org>,
	jeffreyalaw <jeffreyalaw@gmail.com>, rdapp <rdapp@linux.ibm.com>,
	"richard.sandiford" <richard.sandiford@arm.com>,
	rguenther <rguenther@suse.de>
Subject: Re: [PATCH] VECT: Add WHILE_LEN pattern for decrement IV support for auto-vectorization
Date: Fri, 14 Apr 2023 15:41:17 +0800	[thread overview]
Message-ID: <8b00d0d4-1d76-1064-e285-e264911f09fb@linux.ibm.com> (raw)
In-Reply-To: <D0C8B4B7D859DB89+2023041414392177914725@rivai.ai>

on 2023/4/14 14:39, juzhe.zhong@rivai.ai wrote:
>>> Yeah, MIN_EXPR is enough for IBM ports, but given the special semantic of vsetvli,
>>> "vl = ceil(AVL / 2) for VLMAX < AVL < 2*VLMAX", I'm not sure it's a good idea for RV, as it seems
>>> to put the burden on the RV backend.  For a case where the iteration count is known, on an environment
>>> with the above capability, using the vector setting as [1] and assuming the given iteration count is 10,
>>> fully unrolled, the lengths for the two iterations would fold to 8/2 with MIN_EXPR, while
>>> with WHILE_LEN the artificial folding can make the lengths 5/5.  I assume that on an environment
>>> with the above capability 5/5 is better than 8/2?  That would mean that if we use MIN, the RV backend has to
>>> try to turn 8/2 into 5/5.  Or is that trivial, since the RV backend already supports (or plans to support) this
>>> kind of vsetvli load re-balancing?
> This is a trivial power optimization feature of RVV. I don't think handling it in the middle-end is a good idea,
> since the middle-end is supposed to be totally target-independent. And I figured out that re-balancing
> vsetvli is not difficult to do in the RISC-V port.
> 

OK, thanks for clarifying, sounds good.
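
To make the 8/2 vs 5/5 contrast above concrete, here is a minimal standalone
sketch (editorial illustration, not part of the patch), assuming VF = VLMAX = 8
and a trip count of 10; the helper names and constants are made up for the
example only:

/* Compare the per-iteration lengths produced by a plain MIN_EXPR scheme
   with those produced by the vsetvli-style rule
   "vl = ceil(AVL / 2) for VLMAX < AVL < 2*VLMAX".  Illustrative only.  */
#include <stdio.h>

#define VLMAX 8

/* Length when the loop control simply uses MIN_EXPR (remaining, VLMAX).  */
static int
min_expr_len (int remaining)
{
  return remaining < VLMAX ? remaining : VLMAX;
}

/* Length under the vsetvli-style even-distribution rule.  */
static int
while_len (int remaining)
{
  if (remaining <= VLMAX)
    return remaining;
  if (remaining < 2 * VLMAX)
    return (remaining + 1) / 2;	/* ceil (AVL / 2)  */
  return VLMAX;
}

int
main (void)
{
  for (int n = 10, iter = 1; n > 0; iter++)
    {
      int len = min_expr_len (n);
      printf ("MIN_EXPR  iteration %d: length %d\n", iter, len);
      n -= len;
    }
  for (int n = 10, iter = 1; n > 0; iter++)
    {
      int len = while_len (n);
      printf ("WHILE_LEN iteration %d: length %d\n", iter, len);
      n -= len;
    }
  return 0;
}

Run as-is, it prints lengths 8/2 for the MIN_EXPR scheme and 5/5 for the
WHILE_LEN-style scheme, matching the numbers discussed above.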

> 
>>> Yes, if you don't need WHILE_LEN, this proposal is more about enhancing the current partial
>>> vectorization with length (mainly the length preparation and loop control).  But why would we need
>>> a new target hook?  Do you want to keep the existing length handling in vect_set_loop_controls_directly
>>> unchanged?  That seems unnecessary.  IIUC, not requiring WHILE_LEN also means that this patch
>>> doesn't necessarily block the other RV backend patches exploiting vector with length, since
>>> the existing vector-with-length support already works well functionally.
> OK, I get your point. I am going to refine the patch to make it work for both RVV and IBM.

Thanks!

BR,
Kewen
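
For reference, here is a rough, editorial sketch of what the existing
length-based loop control (a decrementing IV combined with MIN_EXPR, roughly
the scheme set up by vect_set_loop_controls_directly) amounts to at the source
level.  It is an approximation for illustration, not the actual generated IR;
the function and variable names are invented, and a vector factor of 8 is
assumed:

/* Illustrative only: scalar rendering of a loop vectorized with
   length-based partial vectors.  The real transformation operates on
   gimple IR, not C source.  */
void
copy (int *restrict dst, int *restrict src, int n)
{
  for (int remaining = n; remaining > 0; )
    {
      /* length = MIN_EXPR (remaining, VF), with VF assumed to be 8.  */
      int len = remaining < 8 ? remaining : 8;

      /* The vector body uses .LEN_LOAD / .LEN_STORE with LEN = len.  */
      for (int i = 0; i < len; i++)
	dst[i] = src[i];

      dst += len;
      src += len;
      remaining -= len;		/* decrementing IV  */
    }
}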


Thread overview: 41+ messages
2023-04-07  1:47 juzhe.zhong
2023-04-07  3:23 ` Li, Pan2
2023-04-11 12:12 ` juzhe.zhong
2023-04-11 12:44   ` Richard Sandiford
2023-04-12  7:00     ` Richard Biener
2023-04-12  8:00       ` juzhe.zhong
2023-04-12  8:42         ` Richard Biener
2023-04-12  9:15           ` juzhe.zhong
2023-04-12  9:29             ` Richard Biener
2023-04-12  9:42               ` Robin Dapp
2023-04-12 11:17               ` Richard Sandiford
2023-04-12 11:37                 ` juzhe.zhong
2023-04-12 12:24                   ` Richard Sandiford
2023-04-12 14:18                     ` 钟居哲
2023-04-13  6:47                       ` Richard Biener
2023-04-13  9:54                         ` juzhe.zhong
2023-04-18  9:32                           ` Richard Sandiford
2023-04-12 12:56                   ` Kewen.Lin
2023-04-12 13:22                     ` 钟居哲
2023-04-13  7:29                       ` Kewen.Lin
2023-04-13 13:44                         ` 钟居哲
2023-04-14  2:54                           ` Kewen.Lin
2023-04-14  3:09                             ` juzhe.zhong
2023-04-14  5:40                               ` Kewen.Lin
2023-04-14  3:39                             ` juzhe.zhong
2023-04-14  6:31                               ` Kewen.Lin
2023-04-14  6:39                                 ` juzhe.zhong
2023-04-14  7:41                                   ` Kewen.Lin [this message]
2023-04-14  6:52                               ` Richard Biener
2023-04-12 11:42                 ` Richard Biener
     [not found]           ` <2023041217154958074655@rivai.ai>
2023-04-12  9:20             ` juzhe.zhong
2023-04-19 21:53 ` 钟居哲
2023-04-20  8:52   ` Richard Sandiford
2023-04-20  8:57     ` juzhe.zhong
2023-04-20  9:11       ` Richard Sandiford
2023-04-20  9:19         ` juzhe.zhong
2023-04-20  9:22           ` Richard Sandiford
2023-04-20  9:50             ` Richard Biener
2023-04-20  9:54               ` Richard Sandiford
2023-04-20 10:38                 ` juzhe.zhong
2023-04-20 12:05                   ` Richard Biener

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the raw message as an mbox file, import it into your mail client,
  and reply-to-all from there.

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=8b00d0d4-1d76-1064-e285-e264911f09fb@linux.ibm.com \
    --to=linkw@linux.ibm.com \
    --cc=gcc-patches@gcc.gnu.org \
    --cc=jeffreyalaw@gmail.com \
    --cc=juzhe.zhong@rivai.ai \
    --cc=rdapp@linux.ibm.com \
    --cc=rguenther@suse.de \
    --cc=richard.sandiford@arm.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

Be sure your reply has a Subject: header at the top and a blank line
before the message body.