public inbox for binutils@sourceware.org
From: Jan Beulich <jbeulich@suse.com>
To: "Jiang, Haochen" <haochen.jiang@intel.com>
Cc: "binutils@sourceware.org" <binutils@sourceware.org>,
	"hjl.tools@gmail.com" <hjl.tools@gmail.com>
Subject: Re: [PATCH v2] Support Intel AVX10.1
Date: Thu, 17 Aug 2023 11:08:18 +0200
Message-ID: <fa7a56de-cca9-5498-b346-25bad795888d@suse.com>
In-Reply-To: <93eedcac-d2b5-578e-0630-2ed71bf5ddae@suse.com>

On 16.08.2023 10:59, Jan Beulich via Binutils wrote:
> On 16.08.2023 10:21, Jiang, Haochen wrote:
>>>> After thinking about it again, I suppose it may not be appropriate
>>>> to put this into i386_opcode_modifier, since in AVX10 the vector
>>>> width depends on the CPU. I take i386_opcode_modifier to describe
>>>> features of instructions, not of the CPU.
>>>
>>> I disagree. See the uses of EVex, for example. As said above, I think
>>> maximum vector width and ISA extensions want dealing with separately,
>>> and only the latter would generally qualify for Cpu* flags. Furthermore
>>> recall that the attribute wants widening sooner or later, and Cpu*
>>> flags are uniformly boolean. Only attributes may have numeric values.
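(As a concrete illustration of that difference -- the shape below is
heavily abridged from opcodes/i386-opc.h, and the exact field widths
are not to be relied upon:)

  /* Cpu* flags: strictly one bit each, so they can only express
     "extension present" vs. "extension absent".  */
  typedef union i386_cpu_flags
  {
    struct
    {
      unsigned int cpuavx512f:1;
      /* ... */
    } bitfield;
    unsigned int array[4];       /* real size differs */
  } i386_cpu_flags;

  /* Opcode modifiers: bitfields of varying width, so an attribute
     here can hold numeric values such as EVEX128..EVEXDYN.  */
  typedef struct i386_opcode_modifier
  {
    /* ... */
    unsigned int evex:3;
    /* ... */
  } i386_opcode_modifier;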
>>
>> After checking the code, I'm still missing the point here.
>>
>> My concern is how to actually disable the zmm registers for AVX10/256,
>> and the ymm registers for a theoretical AVX10/128.
> 
> That's the easy part: that'll want doing in check_register(). The issue
> is with insns which perform a 512-bit operation despite not using zmm
> registers (think of vfpclassp* with a memory operand).
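(A minimal self-contained sketch of the register side; the types are
heavily abridged from tc-i386.c / i386-opc.h, and max_vector_size is a
made-up stand-in for whatever the directive / command line option would
establish:)

  #include <stdbool.h>

  /* Abridged stand-ins for gas's reg_entry / operand types.  */
  typedef struct { unsigned int xmmword:1, ymmword:1, zmmword:1; } vec_type;
  typedef struct { const char *reg_name; vec_type reg_type; } reg_entry;

  static unsigned int max_vector_size = 256;   /* e.g. AVX10/256 */

  static bool
  check_register (const reg_entry *r)
  {
    /* ... existing checks elided ... */
    if (r->reg_type.zmmword && max_vector_size < 512)
      return false;   /* no zmm registers on AVX10/256 */
    if (r->reg_type.ymmword && max_vector_size < 256)
      return false;   /* no ymm on a theoretical AVX10/128 */
    return true;
  }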
> 
>> I suppose i386_opcode_modifier
>> is more related to building up the whole encoding. But each AVX10.X/256 is an
>> actual arch.
> 
> I wouldn't agree with the last sentence, but ...
> 
>> Adding a feature in i386_opcode_modifier can indicate the maximum
>> vector length an instruction is allowed to use on any arch, but it has
>> nothing to do with disabling zmm registers on a 256-bit-only arch.
> 
> ... you still have a point here. Maybe it only wants to be a boolean,
> indicating that an insn is vector-length sensitive. Yet re-using the
> EVex attribute continues to be an option: with the vector length
> constrained to 256 (or 128) bits, EVEX512 (or EVEX256) simply becomes
> unavailable for encoding, and EVEXDYN would be constrained accordingly.
> And if re-using that attribute is an option, adding a new non-boolean
> attribute clearly is possible as well.
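(To make that concrete: at template-match time this could amount to no
more than the fragment below, where t and max_vector_size are assumed
context from match_template() rather than actual current code:)

  /* Skip templates whose EVEX length exceeds what's permitted.  */
  if ((t->opcode_modifier.evex == EVEX512 && max_vector_size < 512)
      || (t->opcode_modifier.evex == EVEX256 && max_vector_size < 256))
    continue;   /* encoding unavailable; try the next template */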
> 
> So I guess there may have been a slight misunderstanding: I was
> suggesting an attribute expressing permissible vector lengths (hence
> the consideration of re-using EVex), which would then be checked
> against the established (through whatever directive / command line
> option) maximum vector length. I did not suggest a new "max vector
> length" attribute.

Just to mention it: I've meanwhile realized that re-using EVex here would
collide with APX introducing EVEX-encoded KMOV*. So it'll need to be a
very similar but distinct attribute. And if the attribute turned out to
be needed only on the mask insns (with EVex used everywhere else), it
could equally well express "permitted vector lengths" or a "maximum
vector length", as the two would then be equivalent. The question is
what AVX10/128 would mean for VEX-encoded insns. It seems likely that
256-bit forms wouldn't be permitted there either, in which case
applicable VEX-encoded insns would need to gain such an attribute as
well. In that case it would of course be more logical to stick with
"permitted vector lengths".
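For concreteness, such a "permitted vector lengths" attribute could take
a bitmask shape along the lines below. All names are made up for
illustration, not a proposal of actual identifiers:

  /* In i386-opc.h, next to the other modifiers (illustrative only):  */
  #define VSZ128 (1u << 0)
  #define VSZ256 (1u << 1)
  #define VSZ512 (1u << 2)
  /* ... and in i386_opcode_modifier:  unsigned int vsz:3;  */

  /* In the assembler, checked against the established maximum:  */
  unsigned int avail = VSZ128 | VSZ256;        /* e.g. AVX10/256 */
  if (!(t->opcode_modifier.vsz & avail))
    continue;   /* insn not encodable at the current vector size */

VEX-encoded insns needing the restriction would then simply gain the
same attribute.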

Jan


Thread overview: 28+ messages
2023-07-27  7:15 [PATCH] " Haochen Jiang
2023-07-27 11:23 ` Jan Beulich
2023-07-28  2:50   ` Jiang, Haochen
2023-07-28  6:53 ` Jan Beulich
2023-08-01  2:18   ` Jiang, Haochen
2023-08-01  6:49     ` Jan Beulich
2023-08-04  7:45       ` Jiang, Haochen
2023-08-04  7:57         ` Jan Beulich
2023-08-14  6:45           ` [PATCH v2] " Haochen Jiang
2023-08-14  8:19             ` Jan Beulich
2023-08-14  8:46               ` Jiang, Haochen
2023-08-14 10:33                 ` Jan Beulich
2023-08-14 10:35                   ` Jan Beulich
2023-08-15  8:32                   ` Jiang, Haochen
2023-08-15 14:10                     ` Jan Beulich
2023-08-16  8:21                       ` Jiang, Haochen
2023-08-16  8:59                         ` Jan Beulich
2023-08-17  9:08                           ` Jan Beulich [this message]
2023-08-18  6:53                             ` Jan Beulich
2023-08-18 13:03             ` Jan Beulich
2023-08-23  2:20               ` Jiang, Haochen
2023-08-23  3:34                 ` [RFC][PATCH v3] " Haochen Jiang
2023-08-23  7:17                   ` Jan Beulich
2023-08-23  5:54                 ` [PATCH v2] " Jan Beulich
2023-08-23  6:21                   ` Jiang, Haochen
2023-08-23  6:24                     ` Jan Beulich
2023-08-23  6:25                       ` Jiang, Haochen
2023-08-23  6:39                         ` Jan Beulich
