From: "juzhe.zhong@rivai.ai" <juzhe.zhong@rivai.ai>
To: kito.cheng <kito.cheng@gmail.com>
Cc: gcc-patches <gcc-patches@gcc.gnu.org>,
	 Kito.cheng <kito.cheng@sifive.com>,  palmer <palmer@dabbelt.com>,
	 palmer <palmer@rivosinc.com>,
	 jeffreyalaw <jeffreyalaw@gmail.com>,
	 "Robin Dapp" <rdapp.gcc@gmail.com>,
	 richard.sandiford <richard.sandiford@arm.com>
Subject: Re: Re: [PATCH V2] RISC-V: Add RVV comparison autovectorization
Date: Wed, 24 May 2023 11:30:57 +0800	[thread overview]
Message-ID: <EFCC8CD9BCA8945E+20230524113057358536203@rivai.ai>
In-Reply-To: <CA+yXCZDM_RVSUGyS1LyggBjYyjQKanmerAZ+66WajoZWoXBQfQ@mail.gmail.com>


Thanks a lot. Some of the comments were already addressed in V4, but please disregard the V4 patch.

Could you instead review the V5 patch that I just sent? All of your comments have been addressed there:
https://gcc.gnu.org/pipermail/gcc-patches/2023-May/619366.html
Thanks.



juzhe.zhong@rivai.ai
 
From: Kito Cheng
Date: 2023-05-24 11:20
To: juzhe.zhong
CC: gcc-patches; kito.cheng; palmer; palmer; jeffreyalaw; rdapp.gcc; Richard Sandiford
Subject: Re: [PATCH V2] RISC-V: Add RVV comparison autovectorization
> +void
> +expand_vec_cmp (rtx target, rtx_code code, rtx mask, rtx maskoff, rtx op0,
> +               rtx op1)
> ...
> +  rtx cmp = gen_rtx_fmt_ee (code, mask_mode, op0, op1);
> +  rtx ops[RVV_CMP_OP + 2] = {target, mask, maskoff, cmp, op0, op1};
> +  emit_vlmax_cmp_insn (icode, RVV_CMP_OP + 2, ops);
 
This bare "RVV_CMP_OP + 2" is too magic.
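Something along these lines would avoid the bare "+ 2" (RVV_CMP_MU_OP is just a placeholder name for this sketch, not something taken from the patch):

/* Hypothetical enumerator next to RVV_CMP_OP in the insn_type enum:
   the mask and maskoff operands add two to the plain compare count.  */
RVV_CMP_MU_OP = RVV_CMP_OP + 2,

rtx cmp = gen_rtx_fmt_ee (code, mask_mode, op0, op1);
rtx ops[] = {target, mask, maskoff, cmp, op0, op1};
emit_vlmax_cmp_insn (icode, RVV_CMP_MU_OP, ops);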
 
> +/* This function emits cmp instruction.  */
> +void
> +emit_vlmax_cmp_insn (unsigned icode, int op_num, rtx *ops)
> +{
> +  machine_mode mode = GET_MODE (ops[0]);
> +  bool fully_unmasked_p = op_num == RVV_CMP_OP ? true : false;
> +  bool use_real_merge_p = op_num == RVV_CMP_OP ? false : true;
 
Don't do that; please split this function into two instead (see the sketch after the quoted function below).
 
> +  /* We have a maximum of 11 operands for RVV instruction patterns according to
> +   * vector.md.  */
> +  insn_expander<11> e (/*OP_NUM*/ op_num, /*HAS_DEST_P*/ true,
> +                      /*FULLY_UNMASKED_P*/ fully_unmasked_p,
> +                      /*USE_REAL_MERGE_P*/ use_real_merge_p,
> +                      /*HAS_AVL_P*/ true,
> +                      /*VLMAX_P*/ true,
> +                      /*DEST_MODE*/ mode, /*MASK_MODE*/ mode);
> +  e.set_policy (op_num == RVV_CMP_OP ? MASK_UNDISTURBED : MASK_ANY);
> +  e.emit_insn ((enum insn_code) icode, ops);
> +}
> +
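For example, something along these lines, reusing the RVV_CMP_MU_OP placeholder from the sketch above (only a sketch of the split; the _mu name is a placeholder, and the op_num parameter goes away because each function now knows its own operand count):

/* Emit a fully-unmasked compare: no mask/maskoff operands.
   Flags and policy match the op_num == RVV_CMP_OP branch quoted above.  */
void
emit_vlmax_cmp_insn (unsigned icode, rtx *ops)
{
  machine_mode mode = GET_MODE (ops[0]);
  insn_expander<11> e (/*OP_NUM*/ RVV_CMP_OP, /*HAS_DEST_P*/ true,
                       /*FULLY_UNMASKED_P*/ true,
                       /*USE_REAL_MERGE_P*/ false,
                       /*HAS_AVL_P*/ true, /*VLMAX_P*/ true,
                       /*DEST_MODE*/ mode, /*MASK_MODE*/ mode);
  e.set_policy (MASK_UNDISTURBED);
  e.emit_insn ((enum insn_code) icode, ops);
}

/* Emit a compare with real mask and maskoff (merge) operands.
   Flags and policy match the other branch of the quoted ternaries.  */
void
emit_vlmax_cmp_mu_insn (unsigned icode, rtx *ops)
{
  machine_mode mode = GET_MODE (ops[0]);
  insn_expander<11> e (/*OP_NUM*/ RVV_CMP_MU_OP, /*HAS_DEST_P*/ true,
                       /*FULLY_UNMASKED_P*/ false,
                       /*USE_REAL_MERGE_P*/ true,
                       /*HAS_AVL_P*/ true, /*VLMAX_P*/ true,
                       /*DEST_MODE*/ mode, /*MASK_MODE*/ mode);
  e.set_policy (MASK_ANY);
  e.emit_insn ((enum insn_code) icode, ops);
}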
>  /* Expand series const vector.  */
>
>  void
> +void
> +expand_vec_cmp (rtx target, rtx_code code, rtx op0, rtx op1)
> +{
> +  machine_mode mask_mode = GET_MODE (target);
> +  machine_mode data_mode = GET_MODE (op0);
> +  insn_code icode = get_cmp_insn_code (code, data_mode);
> +
> +  if (code == LTGT)
> +    {
> +      rtx gt = gen_reg_rtx (mask_mode);
> +      rtx lt = gen_reg_rtx (mask_mode);
> +      expand_vec_cmp (gt, GT, op0, op1);
> +      expand_vec_cmp (lt, LT, op0, op1);
> +      icode = code_for_pred (IOR, mask_mode);
> +      rtx ops[3] = {target, gt, lt};
 
rtx ops[] = {target, gt, lt};
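(i.e. let the array size be deduced from the initializer, so it cannot get out of sync with the operand count passed to the emit function; the same applies to the two arrays below.)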
 
> +      emit_vlmax_insn (icode, riscv_vector::RVV_BINOP, ops);
> +      return;
> +    }
> +
> +  rtx cmp = gen_rtx_fmt_ee (code, mask_mode, op0, op1);
> +  rtx ops[RVV_CMP_OP] = {target, cmp, op0, op1};
 
rtx ops[] = {target, cmp, op0, op1};
 
> +  emit_vlmax_cmp_insn (icode, RVV_CMP_OP, ops);
> +}
> +
 
> +  /* There is native support for the inverse comparison.  */
> +  code = reverse_condition_maybe_unordered (code);
> +  if (code == ORDERED)
> +    emit_move_insn (target, eq0);
> +  else
> +    expand_vec_cmp (eq0, code, eq0, eq0, op0, op1);
> +
> +  if (can_invert_p)
> +    {
> +      emit_move_insn (target, eq0);
> +      return true;
> +    }
> +  insn_code icode = code_for_pred_not (mask_mode);
> +  rtx ops[RVV_UNOP] = {target, eq0};
> +  emit_vlmax_insn (icode, RVV_UNOP, ops);
 
rtx ops[] = {target, eq0};
 
