public inbox for gcc-patches@gcc.gnu.org
From: Lehua Ding <lehua.ding@rivai.ai>
To: Robin Dapp <rdapp.gcc@gmail.com>, gcc-patches@gcc.gnu.org
Cc: juzhe.zhong@rivai.ai, kito.cheng@gmail.com, palmer@rivosinc.com,
	jeffreyalaw@gmail.com
Subject: Re: [PATCH] RISC-V: Adjusting the comments of the emit_vlmax_insn/emit_vlmax_insn_lra/emit_nonvlmax_insn functions
Date: Thu, 21 Sep 2023 17:51:31 +0800	[thread overview]
Message-ID: <15CE81335CC42F21+c25f29e5-4570-464a-9efb-fdcc109f525a@rivai.ai> (raw)
In-Reply-To: <346cc997-ca97-5cf9-d20c-e50206f6eefd@gmail.com>

Hi Robin,

> I once had different comments for those but either I never pushed them
> or they got buried in the process of refactoring.  The explanatory
> comment explaining vlmax is also in "nowhere land" below autovec_use_vlmax_p.
> (it says vsetvli instead of vsetvl as well...)  It would be useful
> to move it to above the function comments you touch.

I would like to move this comment into the insn_expander::emit_insn body,
before the AVL is set, in another patch which adds the VLS avl_type.

> 
>> +/* Emit RVV insn which vl is the number of units of the vector mode.
>> +   This function can only be used before LRA pass or for VLS_AVL_IMM modes.  */
> 
> Emit an RVV insn with a vector length that equals the number of units of
> the vector mode.  For VLA modes this corresponds to VLMAX.
> 
> Unless the vector length can be encoded in the vsetivl[i] instruction this
> function must only be used as long as we can create pseudo registers.
> This is because it will set a pseudo register to VLMAX using vsetvl and
> use this as definition for the vector length.
> 
> 
> Besides, we could add a const_vlmax_p () || can_create_pseudo_p assert here?
> 
> 
>> +/* Like emit_vlmax_insn but can be only used after LRA pass that can't create
>> +   pseudo register.  */
> 
> Like emit_vlmax_insn but must only be used when we cannot create pseudo
> registers anymore.  This function, however, takes a predefined vector
> length from the value in VL.
> 
>> +/* Emit RVV insn which vl is the VL argument.  */
>> +emit_nonvlmax_insn (unsigned icode, unsigned insn_flags, rtx *ops, rtx vl)
> 
> I think I renamed this to emit_len_insn or something before but Juzhe didn't
> like it ;)
> 
> How about something like:
> Emit an RVV insn with a predefined vector length.  Contrary to emit_vlmax_insn
> the instruction's vector length is not deduced from its mode but taken from
> the value in VL.

Thank you very much, I used all of them. Here is the V2 patch:
https://gcc.gnu.org/pipermail/gcc-patches/2023-September/631114.html
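For readers following along: the contract the revised comments describe can be
sketched with a toy model. This is NOT the real GCC/RISC-V backend API, just a
self-contained illustration of the distinction the thread settles on: "vlmax"
emitters deduce the vector length from the mode's unit count (materializing
VLMAX into a fresh pseudo via vsetvl when it does not fit a vsetivli
immediate, hence the pre-LRA restriction), while "nonvlmax" emitters take an
explicit VL. The `before_lra` flag stands in for can_create_pseudo_p(), and
the 0..31 immediate range matches the vsetivli encoding.

```cpp
#include <cassert>
#include <string>

// Toy stand-in for a machine mode: only the unit count matters here.
struct ToyMode { int nunits; };

// Hypothetical stand-in for can_create_pseudo_p(): true before RA/LRA.
static bool before_lra = true;

// "vlmax" flavor: vector length is deduced from the mode.  Small counts
// fit the vsetivli 5-bit immediate; otherwise we need a fresh pseudo to
// hold the VLMAX value produced by vsetvl, which is only legal pre-LRA.
std::string emit_vlmax(ToyMode m) {
    if (m.nunits <= 31)
        return "vsetivli zero," + std::to_string(m.nunits);
    assert(before_lra && "needs a pseudo to hold VLMAX after vsetvl");
    return "vsetvli t0,zero";  // t0 stands in for the new pseudo
}

// "nonvlmax" flavor: VL is supplied by the caller, never deduced
// from the mode (here restricted to the immediate-encodable range).
std::string emit_nonvlmax(int vl) {
    assert(vl >= 0 && vl <= 31);
    return "vsetivli zero," + std::to_string(vl);
}
```

Again, the names and string outputs above are illustrative only; the real
emitters build RTL insns, not assembly text.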

-- 
Best,
Lehua (RiVAI)
lehua.ding@rivai.ai


Thread overview: 3+ messages
2023-09-21  7:11 Lehua Ding
2023-09-21  9:07 ` Robin Dapp
2023-09-21  9:51   ` Lehua Ding [this message]
