public inbox for gcc-patches@gcc.gnu.org
From: "juzhe.zhong@rivai.ai" <juzhe.zhong@rivai.ai>
To: rguenther <rguenther@suse.de>,
	 richard.sandiford <richard.sandiford@arm.com>
Cc: gcc-patches <gcc-patches@gcc.gnu.org>,  linkw <linkw@linux.ibm.com>
Subject: Re: Re: [PATCH] VECT: Change flow of decrement IV
Date: Wed, 31 May 2023 15:49:32 +0800	[thread overview]
Message-ID: <8AFCA325C35C9FB5+2023053115493192772923@rivai.ai> (raw)
In-Reply-To: <nycvar.YFH.7.77.849.2305310734120.4723@jbgna.fhfr.qr>



>> I'm just saying that to go forward the vectorizer change looks
>> more promising (also considering the pace RISC-V people are working at
>> ...)

Yeah, RVV needs a lot of middle-end support:
SELECT_VL, LEN_MASK_LOAD/LEN_MASK_STORE, etc.

LEN_ADD for RVV reduction support, analogous to COND_ADD for ARM SVE, etc.

SELECT_VL is still pending.
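
For anyone unfamiliar with these internal functions, here is a rough scalar
model of the kind of length-controlled loop they are meant to enable.  This is
only a sketch; the VF value, the function and the loop body are made up for
illustration and are not what the vectorizer actually emits:

  #include <stddef.h>

  #define VF 8   /* stand-in vectorization factor, chosen for illustration */

  void
  saxpy (float *restrict dst, const float *restrict src, float a, size_t n)
  {
    size_t remain = n;
    while (remain > 0)
      {
        /* Models SELECT_VL: choose how many elements this iteration handles;
           here simply MIN (remain, VF), though a real target may pick other
           values.  */
        size_t len = remain < VF ? remain : VF;
        size_t base = n - remain;
        /* Models LEN_MASK_LOAD/LEN_MASK_STORE: only the first LEN lanes are
           active; the tail is never touched.  */
        for (size_t i = 0; i < len; i++)
          dst[base + i] += a * src[base + i];
        remain -= len;
      }
  }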

Without that middle-end support, GCC cannot do powerful auto-vectorization for RVV (performance will be much worse than LLVM's RVV support).
And unfortunately, I am the only one working on the middle-end support for RVV auto-vectorization. :)

I think we can get this patch merged and record the SCEV enhancement in bugzilla so that we can improve it in the future.

Thanks.


juzhe.zhong@rivai.ai
 
From: Richard Biener
Date: 2023-05-31 15:38
To: Richard Sandiford
CC: juzhe.zhong@rivai.ai; gcc-patches; linkw
Subject: Re: [PATCH] VECT: Change flow of decrement IV
On Wed, 31 May 2023, Richard Sandiford wrote:
 
> Richard Biener <rguenther@suse.de> writes:
> > On Wed, 31 May 2023, juzhe.zhong@rivai.ai wrote:
> >
> >> Hi, all. I have posted my investigations:
> >> https://gcc.gnu.org/pipermail/gcc-patches/2023-May/620101.html 
> >> https://gcc.gnu.org/pipermail/gcc-patches/2023-May/620105.html 
> >> https://gcc.gnu.org/pipermail/gcc-patches/2023-May/620108.html 
> >> 
> >> It turns out that when niters is a constant value and vf is a constant value,
> >> this patch allows SCEV/IVOPTS to optimize a lot for RVV too (taking a testcase from IBM's testsuite as an example), and I think this patch can fix IBM's cunroll issue.
> >> Even though it will produce a 'mv' instruction in some other cases for RVV, I think the gain outweighs the pain overall.
> >> 
> >> Actually, for current flow:
> >> 
> >> step = MIN ()
> >> ...
> >> remain = remain - step.
> >> 
> >> I don't know how difficult it would be to extend SCEV/IVOPTS to fix this issue.
> >> So, could you make a decision on this patch?
> >> 
> >> I wonder whether we should apply the approach of this patch (the code can be refined once reviewed) or
> >> whether we should extend SCEV/IVOPTS?
> >
> > I don't think we can do anything in SCEV for this which means we'd
> > need to special-case this in niter analysis, in IVOPTs and any other
> > passes that might be affected (and not fixed by handling it in niter
> > analysis).  While improving niter analysis would be good (the user
> > could write this pattern as well) I do not have time to try
> > implementing that (I have no idea how ugly or robust it is going to be).
> >
> > So I think we should patch this up in the vectorizer itself like with
> > your patch.  I'm going to wait for Richards input though since he
> > seems to disagree.
> 
> I think my main disagreement is that the IV phi can be analysed
> as a SCEV with sufficient work (realising that the MIN result is
> always VF when the latch is executed).  That SCEV might be useful
> "as is" for things like IVOPTS, without specific work in those passes.
> (Although perhaps not too useful, since most other IVs will be upcounting.)
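
Spelling that observation out (my own reading, and it assumes the latch is
taken exactly when the decremented remainder is still non-zero):

  step    = MIN (remain, VF)
  remain' = remain - step
  latch taken  =>  remain' > 0  =>  remain > step
               =>  step != remain  =>  step == VF

So, restricted to the latch edge, the phi evolves as remain' = remain - VF,
i.e. the affine chrec {niters, +, -VF}.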
 
I think we'd need another API for SCEV there then,
analyze_scalar_evolution_for_latch (), so we can disregard the
value on the exit edges.  That means we'd still need to touch
all users and decide whether it's safe to use that or not.
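
For concreteness, such an interface might look roughly like this; the function
does not exist today and the exact signature is only a guess, modelled on
analyze_scalar_evolution:

  /* Hypothetical: like analyze_scalar_evolution, but the returned evolution
     only has to describe NAME's value when LOOP's latch edge is taken, so
     callers must not rely on it along the exit edges.  */
  extern tree analyze_scalar_evolution_for_latch (class loop *loop, tree name);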
 
> I don't object though.  It just feels like we're giving up easily.
> And that's a bit frustrating, since this potential problem was flagged
> ahead of time.
 
Well, I expect that massaging SCEV and niter analysis will take
up quite some developer time while avoiding the situation in
the vectorizer is possible (and would fix the observed regressions).
We can always improve here later, and I'd suggest filing an
enhancement bug report with a simple C testcase using this kind of
iteration.
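
As a sketch of what such a testcase might look like (my own illustration, not
taken from any filed report), a hand-written loop that consumes its trip count
in MIN-sized chunks:

  void
  foo (int *a, int n, int chunk)
  {
    int remain = n;
    while (remain > 0)
      {
        int step = remain < chunk ? remain : chunk;  /* step = MIN (remain, chunk) */
        for (int i = 0; i < step; i++)
          a[n - remain + i] += 1;
        remain -= step;
      }
  }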
 
I'm just saying that to go forward the vectorizer change looks
more promising (also considering the pace RISC-V people are working at 
...)
 
Richard.
 
> > Note with SELECT_VL all bets will be off since, as I understand it, the
> > value it gives can vary from iteration to iteration (but we know
> > a lower and maybe an upper bound?)
> 
> Right.  All IVs will have a variable step for SELECT_VL.
> 
> Thanks,
> Richard
> 
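
To make that concrete, here is a scalar model of a SELECT_VL-style loop;
select_vl_model is a placeholder, not a real API, and the inner loop just
stands in for one vector operation.  The point is that every IV derived from
the chosen length, data pointers included, ends up with a non-constant step:

  #include <stddef.h>

  /* Placeholder for whatever length the target picks; only assumed to
     satisfy 0 < result <= MIN (remain, vf).  */
  extern size_t select_vl_model (size_t remain, size_t vf);

  void
  copy (int *restrict dst, const int *restrict src, size_t n, size_t vf)
  {
    size_t remain = n;
    while (remain > 0)
      {
        size_t len = select_vl_model (remain, vf);
        for (size_t i = 0; i < len; i++)   /* stands in for one vector op */
          dst[i] = src[i];
        dst += len;                        /* pointer IVs: variable step */
        src += len;
        remain -= len;                     /* counter IV: variable step */
      }
  }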
 

Thread overview: 26+ messages
2023-05-30 11:28 juzhe.zhong
2023-05-30 11:31 ` Richard Sandiford
2023-05-30 11:36   ` juzhe.zhong
2023-05-30 11:41     ` Richard Sandiford
     [not found]     ` <5C7770CA9FB40F7E+8BE427DD-97DA-4B93-A73A-8CDD1D92089B@rivai.ai>
2023-05-30 12:01       ` Richard Sandiford
     [not found]     ` <685EE879E20B3272+6338EB42-0A9D-4147-993D-99DC8FF7C832@rivai.ai>
2023-05-30 12:33       ` Richard Biener
2023-05-30 12:37         ` 钟居哲
     [not found]           ` <FA43CAC5-BCCE-42AF-8A6B-E69F1A496F5C@suse.de>
2023-05-30 22:51             ` 钟居哲
2023-05-30 14:13         ` 钟居哲
2023-05-30 14:47         ` 钟居哲
2023-05-30 15:05         ` 钟居哲
2023-05-31  1:42           ` juzhe.zhong
2023-05-31  6:41             ` Richard Biener
2023-05-31  6:50               ` juzhe.zhong
2023-05-31  7:38                 ` Kewen.Lin
2023-05-31  7:50                   ` juzhe.zhong
2023-05-31  7:28               ` Richard Sandiford
2023-05-31  7:36                 ` juzhe.zhong
2023-05-31  8:44                   ` Richard Biener
2023-05-31  7:38                 ` Richard Biener
2023-05-31  7:49                   ` juzhe.zhong [this message]
2023-05-31  9:01                   ` Richard Sandiford
2023-05-31  9:30                     ` juzhe.zhong
2023-05-31 10:53                       ` Richard Biener
2023-05-31 12:16                         ` 钟居哲
2023-05-30 11:38   ` juzhe.zhong
