public inbox for gcc-patches@gcc.gnu.org
From: Richard Sandiford <richard.sandiford@arm.com>
To: "juzhe.zhong\@rivai.ai" <juzhe.zhong@rivai.ai>
Cc: rguenther <rguenther@suse.de>,
	 incarnation.p.lee <incarnation.p.lee@outlook.com>,
	 gcc-patches <gcc-patches@gcc.gnu.org>,
	 Kito.cheng <kito.cheng@sifive.com>,  ams <ams@codesourcery.com>
Subject: Re: [PATCH] RISC-V: Bugfix for mode tieable of the rvv bool types
Date: Mon, 13 Feb 2023 10:18:55 +0000	[thread overview]
Message-ID: <mptfsb9zx4g.fsf@arm.com> (raw)
In-Reply-To: <5CDAF1EC0059903D+20230213174828831645116@rivai.ai> (juzhe's message of "Mon, 13 Feb 2023 17:48:29 +0800")

"juzhe.zhong@rivai.ai" <juzhe.zhong@rivai.ai> writes:
>>> What's the byte size of VNx1BI, expressed as a function of N?
>>> If it's CEIL (N, 8) then we don't have a way of representing that yet.
> N is a poly value.
> RVV, like SVE, supports scalable vectors.
> N is poly(1,1).
>
> VNx1BI mode nunits = poly(1,1) units.
> VNx1BI mode bitsize = poly(1,1) bits.
> VNx1BI mode bytesize = poly(1,1) bytes (currently).  Ideally, and more accurately, it would be poly(1/8,1/8) bytes.

But this would be a fractional bytesize, and like Richard says,
the memory subsystem would always access full bytes.  So I think
the bytesize would have to be at least CEIL (N, 8).
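
As a quick sanity check of that arithmetic (plain C, illustration only,
not GCC code):

  /* Minimum number of bytes needed to hold N one-bit mask elements.  */
  #include <assert.h>

  static unsigned
  ceil_div (unsigned n, unsigned d)
  {
    return (n + d - 1) / d;
  }

  int
  main (void)
  {
    assert (ceil_div (1, 8) == 1);  /* One element still needs a full byte.  */
    assert (ceil_div (8, 8) == 1);  /* Eight elements fit exactly in one byte.  */
    assert (ceil_div (9, 8) == 2);  /* Nine elements spill into a second byte.  */
    return 0;
  }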

> However, GCC can't represent it like this; it considers the bytesize to be poly(1,1) bytes.

Ah, OK.  That (making the size N bytes) does seem like a reasonable
workaround, provided that it matches the C types, etc.  So the total
amount of padding is 7N bits (I assume at the msb of the type when
viewed as an integer).
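
Concretely (a throwaway C snippet, not GCC code): with the N-byte
workaround each value occupies 8 * N bits, of which only N carry mask
data, so 7 * N bits are padding:

  #include <stdio.h>

  int
  main (void)
  {
    /* Bits used vs. padding under the N-byte workaround, for a few N.  */
    for (unsigned n = 1; n <= 8; n *= 2)
      printf ("N = %u: %2u bits total, %u used, %2u padding\n",
              n, 8 * n, n, 8 * n - n);
    return 0;
  }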

I agree that what (IIUC) was discussed upthread works, i.e.:

  bytesize = N
  bitsize = N * 8 (fixed function of bytesize)
  precision = N
  nunits = N
  unit_size = 1
  unit_precision = 1

But target-independent code won't expect this layout, so supporting
it will involve more than just adjusting the parameters.
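
For what it's worth, if that layout were adopted, the mode parameters
themselves could be written along these lines in riscv-modes.def (just a
sketch, not part of the patch; the riscv_v_adjust_* helpers are
hypothetical placeholders for whatever would compute the poly_int values):

  ADJUST_NUNITS    (VNx1BI, riscv_v_adjust_nunits (VNx1BImode, 1));
  ADJUST_BYTESIZE  (VNx1BI, riscv_v_adjust_bytesize (VNx1BImode, 1));
  ADJUST_PRECISION (VNx1BI, riscv_v_adjust_precision (VNx1BImode, 1));
  ADJUST_ALIGNMENT (VNx1BI, 1);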

Thanks,
Richard


Thread overview: 18+ messages
2023-02-11  8:46 incarnation.p.lee
2023-02-11 13:00 ` juzhe.zhong
2023-02-11 13:06 ` juzhe.zhong
2023-02-13  8:07   ` Richard Biener
2023-02-13  8:19     ` juzhe.zhong
2023-02-13  8:46       ` Richard Biener
2023-02-13  9:04         ` juzhe.zhong
2023-02-13  9:41         ` Richard Sandiford
2023-02-13  9:48           ` Richard Biener
2023-02-13  9:48           ` juzhe.zhong
2023-02-13 10:18             ` Richard Sandiford [this message]
2023-02-13 10:28               ` juzhe.zhong
     [not found]               ` <20230213182800944794123@rivai.ai>
2023-02-13 10:39                 ` juzhe.zhong
2023-02-13 11:00     ` Andrew Stubbs
2023-02-13 15:34       ` 盼 李
2023-02-13 15:47         ` Richard Biener
2023-02-15 15:57           ` 盼 李
2023-02-16 15:17             ` 盼 李

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=mptfsb9zx4g.fsf@arm.com \
    --to=richard.sandiford@arm.com \
    --cc=ams@codesourcery.com \
    --cc=gcc-patches@gcc.gnu.org \
    --cc=incarnation.p.lee@outlook.com \
    --cc=juzhe.zhong@rivai.ai \
    --cc=kito.cheng@sifive.com \
    --cc=rguenther@suse.de \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link

Be sure your reply has a Subject: header at the top and a blank line
before the message body.

This is a public inbox; see mirroring instructions for how to clone and
mirror all data and code used for this inbox, as well as URLs for
read-only IMAP folder(s) and NNTP newsgroup(s).