public inbox for binutils@sourceware.org
* [PATCH V2] Support {evex} pseudo prefix for decoding evex-promoted insns without egpr32.
@ 2024-03-22  9:49 Cui, Lili
  2024-03-25 12:22 ` Jan Beulich
  2024-03-25 12:31 ` Jan Beulich
  0 siblings, 2 replies; 8+ messages in thread
From: Cui, Lili @ 2024-03-22  9:49 UTC (permalink / raw)
  To: binutils; +Cc: hjl.tools, jbeulich, Hu, Lin1

From: "Hu, Lin1" <lin1.hu@intel.com>

Based on V1, the main changes are:
1. Added more test cases to cover each insn template.
2. Removed the Intel format test from this patch.
3. Dropped the newly added %XE and used evex_from_legacy for unified judgment.
4. Added %ME to movbe to print {evex} correctly.

This patch is based on the APX NF patch and also adds test cases for checking
64-bit insns that are not sizeable through register operands with evex.

gas/ChangeLog:

        * testsuite/gas/i386/x86-64-apx-evex-promoted-bad.d: Adjusted test cases.
        * testsuite/gas/i386/x86-64-apx-evex-promoted-intel.d: Added no-egpr testcases for movbe.
        * testsuite/gas/i386/x86-64-apx-evex-promoted-wig.d: Ditto.
        * testsuite/gas/i386/x86-64-apx-evex-promoted.d: Ditto.
        * testsuite/gas/i386/x86-64-apx-evex-promoted.s: Ditto.
        * testsuite/gas/i386/x86-64.exp: Added tests.
        * testsuite/gas/i386/noreg64-evex.d: New test.
        * testsuite/gas/i386/noreg64-evex.e: Ditto.
        * testsuite/gas/i386/noreg64-evex.s: Ditto.
        * testsuite/gas/i386/x86-64-apx_f-evex.d: Ditto.
        * testsuite/gas/i386/x86-64-apx_f-evex.s: Ditto.

opcodes/ChangeLog:

        * i386-dis-evex.h: Added %ME to movbe.
        * i386-dis.c: Added %XE to evex_from_vex instructions to output {evex}.
        (struct dis386): New %ME.
        (putop): Handle %ME and output {evex} for evex_from_legacy instructions.
---
 gas/testsuite/gas/i386/noreg64-evex.d         |  81 +++
 gas/testsuite/gas/i386/noreg64-evex.e         |  72 ++
 gas/testsuite/gas/i386/noreg64-evex.s         |  74 +++
 .../gas/i386/x86-64-apx-evex-promoted-bad.d   |   6 +-
 .../gas/i386/x86-64-apx-evex-promoted-intel.d |   3 +
 .../gas/i386/x86-64-apx-evex-promoted-wig.d   |   3 +
 .../gas/i386/x86-64-apx-evex-promoted.d       |   3 +
 .../gas/i386/x86-64-apx-evex-promoted.s       |   3 +
 gas/testsuite/gas/i386/x86-64-apx_f-evex.d    | 629 ++++++++++++++++++
 gas/testsuite/gas/i386/x86-64-apx_f-evex.s    | 626 +++++++++++++++++
 gas/testsuite/gas/i386/x86-64.exp             |   2 +
 opcodes/i386-dis-evex.h                       |   4 +-
 opcodes/i386-dis.c                            | 100 +--
 13 files changed, 1561 insertions(+), 45 deletions(-)
 create mode 100644 gas/testsuite/gas/i386/noreg64-evex.d
 create mode 100644 gas/testsuite/gas/i386/noreg64-evex.e
 create mode 100644 gas/testsuite/gas/i386/noreg64-evex.s
 create mode 100644 gas/testsuite/gas/i386/x86-64-apx_f-evex.d
 create mode 100644 gas/testsuite/gas/i386/x86-64-apx_f-evex.s

diff --git a/gas/testsuite/gas/i386/noreg64-evex.d b/gas/testsuite/gas/i386/noreg64-evex.d
new file mode 100644
index 00000000000..71759375166
--- /dev/null
+++ b/gas/testsuite/gas/i386/noreg64-evex.d
@@ -0,0 +1,81 @@
+#objdump: -dw
+#name: 64-bit insns not sizeable through register operands evex
+#source: noreg64-evex.s
+#warning_output: noreg64-evex.e
+
+.*: +file format .*
+
+Disassembly of section \.text:
+
+0+ <_start>:
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 83 10 01[	 ]+\{evex\} adcl \$0x1,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 81 10 89 00 00 00[	 ]+\{evex\} adcl \$0x89,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 81 10 34 12 00 00[	 ]+\{evex\} adcl \$0x1234,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 81 10 78 56 34 12[	 ]+\{evex\} adcl \$0x12345678,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 83 00 01[	 ]+\{evex\} addl \$0x1,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 81 00 89 00 00 00[	 ]+\{evex\} addl \$0x89,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 81 00 34 12 00 00[	 ]+\{evex\} addl \$0x1234,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 81 00 78 56 34 12[	 ]+\{evex\} addl \$0x12345678,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 83 20 01[	 ]+\{evex\} andl \$0x1,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 81 20 89 00 00 00[	 ]+\{evex\} andl \$0x89,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 81 20 34 12 00 00[	 ]+\{evex\} andl \$0x1234,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 81 20 78 56 34 12[	 ]+\{evex\} andl \$0x12345678,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 f1 00[	 ]+\{evex\} crc32l \(%rax\),%eax
+[	 ]*[a-f0-9]+:[	 ]*62 f4 fc 08 f1 00[	 ]+\{evex\} crc32q \(%rax\),%rax
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 ff 08[	 ]+\{evex\} decl \(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 f7 30[	 ]+\{evex\} divl \(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 f7 38[	 ]+\{evex\} idivl \(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 f7 28[	 ]+\{evex\} imull \(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 ff 00[	 ]+\{evex\} incl \(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 f7 20[	 ]+\{evex\} mull \(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 f7 18[	 ]+\{evex\} negl \(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 f7 10[	 ]+\{evex\} notl \(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 83 08 01[	 ]+\{evex\} orl \$0x1,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 81 08 89 00 00 00[	 ]+\{evex\} orl \$0x89,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 81 08 34 12 00 00[	 ]+\{evex\} orl \$0x1234,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 81 08 78 56 34 12[	 ]+\{evex\} orl \$0x12345678,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 d1 10[	 ]+\{evex\} rcll \$1,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 c1 10 02[	 ]+\{evex\} rcll \$0x2,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 d3 10[	 ]+\{evex\} rcll %cl,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 d1 10[	 ]+\{evex\} rcll \$1,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 d1 18[	 ]+\{evex\} rcrl \$1,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 c1 18 02[	 ]+\{evex\} rcrl \$0x2,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 d3 18[	 ]+\{evex\} rcrl %cl,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 d1 18[	 ]+\{evex\} rcrl \$1,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 d1 00[	 ]+\{evex\} roll \$1,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 c1 00 02[	 ]+\{evex\} roll \$0x2,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 d3 00[	 ]+\{evex\} roll %cl,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 d1 00[	 ]+\{evex\} roll \$1,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 d1 08[	 ]+\{evex\} rorl \$1,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 c1 08 02[	 ]+\{evex\} rorl \$0x2,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 d3 08[	 ]+\{evex\} rorl %cl,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 d1 08[	 ]+\{evex\} rorl \$1,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 83 18 01[	 ]+\{evex\} sbbl \$0x1,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 81 18 89 00 00 00[	 ]+\{evex\} sbbl \$0x89,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 81 18 34 12 00 00[	 ]+\{evex\} sbbl \$0x1234,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 81 18 78 56 34 12[	 ]+\{evex\} sbbl \$0x12345678,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 d1 20[	 ]+\{evex\} shll \$1,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 c1 20 02[	 ]+\{evex\} shll \$0x2,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 d3 20[	 ]+\{evex\} shll %cl,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 d1 20[	 ]+\{evex\} shll \$1,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 d1 38[	 ]+\{evex\} sarl \$1,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 c1 38 02[	 ]+\{evex\} sarl \$0x2,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 d3 38[	 ]+\{evex\} sarl %cl,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 d1 38[	 ]+\{evex\} sarl \$1,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 d1 20[	 ]+\{evex\} shll \$1,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 c1 20 02[	 ]+\{evex\} shll \$0x2,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 d3 20[	 ]+\{evex\} shll %cl,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 d1 20[	 ]+\{evex\} shll \$1,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 d1 28[	 ]+\{evex\} shrl \$1,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 c1 28 02[	 ]+\{evex\} shrl \$0x2,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 d3 28[	 ]+\{evex\} shrl %cl,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 d1 28[	 ]+\{evex\} shrl \$1,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 83 28 01[	 ]+\{evex\} subl \$0x1,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 81 28 89 00 00 00[	 ]+\{evex\} subl \$0x89,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 81 28 34 12 00 00[	 ]+\{evex\} subl \$0x1234,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 81 28 78 56 34 12[	 ]+\{evex\} subl \$0x12345678,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 83 30 01[	 ]+\{evex\} xorl \$0x1,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 81 30 89 00 00 00[	 ]+\{evex\} xorl \$0x89,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 81 30 34 12 00 00[	 ]+\{evex\} xorl \$0x1234,\(%rax\)
+[	 ]*[a-f0-9]+:[	 ]*62 f4 7c 08 81 30 78 56 34 12[	 ]+\{evex\} xorl \$0x12345678,\(%rax\)
+#pass
diff --git a/gas/testsuite/gas/i386/noreg64-evex.e b/gas/testsuite/gas/i386/noreg64-evex.e
new file mode 100644
index 00000000000..516eb73ed6d
--- /dev/null
+++ b/gas/testsuite/gas/i386/noreg64-evex.e
@@ -0,0 +1,72 @@
+.*: Assembler messages:
+.*:[0-9]+: Warning:.*`adc'
+.*:[0-9]+: Warning:.*`adc'
+.*:[0-9]+: Warning:.*`adc'
+.*:[0-9]+: Warning:.*`adc'
+.*:[0-9]+: Warning:.*`add'
+.*:[0-9]+: Warning:.*`add'
+.*:[0-9]+: Warning:.*`add'
+.*:[0-9]+: Warning:.*`add'
+.*:[0-9]+: Warning:.*`and'
+.*:[0-9]+: Warning:.*`and'
+.*:[0-9]+: Warning:.*`and'
+.*:[0-9]+: Warning:.*`and'
+.*:[0-9]+: Warning:.*`crc32'
+.*:[0-9]+: Warning:.*`crc32'
+.*:[0-9]+: Warning:.*`dec'
+.*:[0-9]+: Warning:.*`div'
+.*:[0-9]+: Warning:.*`idiv'
+.*:[0-9]+: Warning:.*`imul'
+.*:[0-9]+: Warning:.*`inc'
+.*:[0-9]+: Warning:.*`mul'
+.*:[0-9]+: Warning:.*`neg'
+.*:[0-9]+: Warning:.*`not'
+.*:[0-9]+: Warning:.*`or'
+.*:[0-9]+: Warning:.*`or'
+.*:[0-9]+: Warning:.*`or'
+.*:[0-9]+: Warning:.*`or'
+.*:[0-9]+: Warning:.*`rcl'
+.*:[0-9]+: Warning:.*`rcl'
+.*:[0-9]+: Warning:.*`rcl'
+.*:[0-9]+: Warning:.*`rcl'
+.*:[0-9]+: Warning:.*`rcr'
+.*:[0-9]+: Warning:.*`rcr'
+.*:[0-9]+: Warning:.*`rcr'
+.*:[0-9]+: Warning:.*`rcr'
+.*:[0-9]+: Warning:.*`rol'
+.*:[0-9]+: Warning:.*`rol'
+.*:[0-9]+: Warning:.*`rol'
+.*:[0-9]+: Warning:.*`rol'
+.*:[0-9]+: Warning:.*`ror'
+.*:[0-9]+: Warning:.*`ror'
+.*:[0-9]+: Warning:.*`ror'
+.*:[0-9]+: Warning:.*`ror'
+.*:[0-9]+: Warning:.*`sbb'
+.*:[0-9]+: Warning:.*`sbb'
+.*:[0-9]+: Warning:.*`sbb'
+.*:[0-9]+: Warning:.*`sbb'
+.*:[0-9]+: Warning:.*`sal'
+.*:[0-9]+: Warning:.*`sal'
+.*:[0-9]+: Warning:.*`sal'
+.*:[0-9]+: Warning:.*`sal'
+.*:[0-9]+: Warning:.*`sar'
+.*:[0-9]+: Warning:.*`sar'
+.*:[0-9]+: Warning:.*`sar'
+.*:[0-9]+: Warning:.*`sar'
+.*:[0-9]+: Warning:.*`shl'
+.*:[0-9]+: Warning:.*`shl'
+.*:[0-9]+: Warning:.*`shl'
+.*:[0-9]+: Warning:.*`shl'
+.*:[0-9]+: Warning:.*`shr'
+.*:[0-9]+: Warning:.*`shr'
+.*:[0-9]+: Warning:.*`shr'
+.*:[0-9]+: Warning:.*`shr'
+.*:[0-9]+: Warning:.*`sub'
+.*:[0-9]+: Warning:.*`sub'
+.*:[0-9]+: Warning:.*`sub'
+.*:[0-9]+: Warning:.*`sub'
+.*:[0-9]+: Warning:.*`xor'
+.*:[0-9]+: Warning:.*`xor'
+.*:[0-9]+: Warning:.*`xor'
+.*:[0-9]+: Warning:.*`xor'
+#pass
diff --git a/gas/testsuite/gas/i386/noreg64-evex.s b/gas/testsuite/gas/i386/noreg64-evex.s
new file mode 100644
index 00000000000..76e001f3b8e
--- /dev/null
+++ b/gas/testsuite/gas/i386/noreg64-evex.s
@@ -0,0 +1,74 @@
+# Check 64-bit insns not sizeable through register operands with evex
+
+        .text
+_start:
+	{evex} adc	$1, (%rax)
+	{evex} adc	$0x89, (%rax)
+	{evex} adc	$0x1234, (%rax)
+	{evex} adc	$0x12345678, (%rax)
+	{evex} add	$1, (%rax)
+	{evex} add	$0x89, (%rax)
+	{evex} add	$0x1234, (%rax)
+	{evex} add	$0x12345678, (%rax)
+	{evex} and	$1, (%rax)
+	{evex} and	$0x89, (%rax)
+	{evex} and	$0x1234, (%rax)
+	{evex} and	$0x12345678, (%rax)
+	{evex} crc32	(%rax), %eax
+	{evex} crc32	(%rax), %rax
+	{evex} dec	(%rax)
+	{evex} div	(%rax)
+	{evex} idiv	(%rax)
+	{evex} imul	(%rax)
+	{evex} inc	(%rax)
+	{evex} mul	(%rax)
+	{evex} neg	(%rax)
+	{evex} not	(%rax)
+	{evex} or 	$1, (%rax)
+	{evex} or 	$0x89, (%rax)
+	{evex} or 	$0x1234, (%rax)
+	{evex} or 	$0x12345678, (%rax)
+	{evex} rcl	$1, (%rax)
+	{evex} rcl	$2, (%rax)
+	{evex} rcl	%cl, (%rax)
+	{evex} rcl	(%rax)
+	{evex} rcr	$1, (%rax)
+	{evex} rcr	$2, (%rax)
+	{evex} rcr	%cl, (%rax)
+	{evex} rcr	(%rax)
+	{evex} rol	$1, (%rax)
+	{evex} rol	$2, (%rax)
+	{evex} rol	%cl, (%rax)
+	{evex} rol	(%rax)
+	{evex} ror	$1, (%rax)
+	{evex} ror	$2, (%rax)
+	{evex} ror	%cl, (%rax)
+	{evex} ror	(%rax)
+	{evex} sbb	$1, (%rax)
+	{evex} sbb	$0x89, (%rax)
+	{evex} sbb	$0x1234, (%rax)
+	{evex} sbb	$0x12345678, (%rax)
+	{evex} sal	$1, (%rax)
+	{evex} sal	$2, (%rax)
+	{evex} sal	%cl, (%rax)
+	{evex} sal	(%rax)
+	{evex} sar	$1, (%rax)
+	{evex} sar	$2, (%rax)
+	{evex} sar	%cl, (%rax)
+	{evex} sar	(%rax)
+	{evex} shl	$1, (%rax)
+	{evex} shl	$2, (%rax)
+	{evex} shl	%cl, (%rax)
+	{evex} shl	(%rax)
+	{evex} shr	$1, (%rax)
+	{evex} shr	$2, (%rax)
+	{evex} shr	%cl, (%rax)
+	{evex} shr	(%rax)
+	{evex} sub	$1, (%rax)
+	{evex} sub	$0x89, (%rax)
+	{evex} sub	$0x1234, (%rax)
+	{evex} sub	$0x12345678, (%rax)
+	{evex} xor	$1, (%rax)
+	{evex} xor	$0x89, (%rax)
+	{evex} xor	$0x1234, (%rax)
+	{evex} xor	$0x12345678, (%rax)
diff --git a/gas/testsuite/gas/i386/x86-64-apx-evex-promoted-bad.d b/gas/testsuite/gas/i386/x86-64-apx-evex-promoted-bad.d
index 99002a91ffb..2047609f252 100644
--- a/gas/testsuite/gas/i386/x86-64-apx-evex-promoted-bad.d
+++ b/gas/testsuite/gas/i386/x86-64-apx-evex-promoted-bad.d
@@ -30,16 +30,16 @@ Disassembly of section .text:
 [ 	]*[a-f0-9]+:[ 	]+0c 18[ 	]+or.*
 [ 	]*[a-f0-9]+:[ 	]+62 f2 fc 18 f5[ 	]+\(bad\)
 [ 	]*[a-f0-9]+:[ 	]+0c 18[ 	]+or.*
-[ 	]*[a-f0-9]+:[ 	]+62 f4 e4[ 	]+\(bad\)
+[ 	]*[a-f0-9]+:[ 	]+62 f4 e4[ 	]+\{evex\} \(bad\)
 [ 	]*[a-f0-9]+:[ 	]+08 ff[ 	]+.*
 [ 	]*[a-f0-9]+:[ 	]+04 08[ 	]+.*
-[ 	]*[a-f0-9]+:[ 	]+62 f4 3c[ 	]+\(bad\)
+[ 	]*[a-f0-9]+:[ 	]+62 f4 3c[ 	]+\{evex\} \(bad\)
 [ 	]*[a-f0-9]+:[ 	]+08 8f c0 ff ff ff[ 	]+or.*
 [ 	]*[a-f0-9]+:[ 	]+62 74 7c 18 8f c0[ 	]+pop2   %rax,\(bad\)
 [ 	]*[a-f0-9]+:[ 	]+62 d4 24 18 8f[ 	]+\(bad\)
 [ 	]*[a-f0-9]+:[ 	]+c3[ 	]+.*
 [ 	]*[a-f0-9]+:[ 	]+62 e4 7e 08 dc 20[ 	]+aesenc128kl \(%rax\),%xmm20\(bad\)
-[ 	]*[a-f0-9]+:[ 	]+62 b4 7c 08 d9 c4[ 	]+sha1msg1 %xmm20\(bad\),%xmm0
+[ 	]*[a-f0-9]+:[ 	]+62 b4 7c 08 d9 c4[ 	]+{evex} sha1msg1 %xmm20\(bad\),%xmm0
 [ 	]*[a-f0-9]+:[ 	]+62 e4 7c 08 d9 20[ 	]+sha1msg1 \(%rax\),%xmm20\(bad\)
 [ 	]*[a-f0-9]+:[ 	]+62 fc 7d 0c 60 c7[ 	]+movbe  \{bad\-nf\},%r23w,%ax
 #pass
diff --git a/gas/testsuite/gas/i386/x86-64-apx-evex-promoted-intel.d b/gas/testsuite/gas/i386/x86-64-apx-evex-promoted-intel.d
index 9c88880191b..8c564a5f961 100644
--- a/gas/testsuite/gas/i386/x86-64-apx-evex-promoted-intel.d
+++ b/gas/testsuite/gas/i386/x86-64-apx-evex-promoted-intel.d
@@ -111,11 +111,14 @@ Disassembly of section \.text:
 [	 ]*[a-f0-9]+:[	 ]*62 d9 7c 08 90 ac 87 23 01 00 00[	 ]+kmovw[	 ]+k5,WORD PTR \[r31\+rax\*4\+0x123\]
 [	 ]*[a-f0-9]+:[	 ]*62 da 7c 08 49 84 87 23 01 00 00[	 ]+ldtilecfg[	 ]+\[r31\+rax\*4\+0x123\]
 [	 ]*[a-f0-9]+:[	 ]*62 fc 7d 08 60 c2[	 ]+movbe[	 ]+ax,r18w
+[	 ]*[a-f0-9]+:[	 ]*62 d4 7d 08 60 c7[	 ]+movbe[	 ]+ax,r15w
 [	 ]*[a-f0-9]+:[	 ]*62 ec 7d 08 61 94 80 23 01 00 00[	 ]+movbe[	 ]+WORD PTR \[r16\+rax\*4\+0x123\],r18w
 [	 ]*[a-f0-9]+:[	 ]*62 cc 7d 08 61 94 87 23 01 00 00[	 ]+movbe[	 ]+WORD PTR \[r31\+rax\*4\+0x123\],r18w
 [	 ]*[a-f0-9]+:[	 ]*62 dc 7c 08 60 d1[	 ]+movbe[	 ]+edx,r25d
+[	 ]*[a-f0-9]+:[	 ]*62 d4 7c 08 60 d7[	 ]+movbe[	 ]+edx,r15d
 [	 ]*[a-f0-9]+:[	 ]*62 6c 7c 08 61 8c 80 23 01 00 00[	 ]+movbe[	 ]+DWORD PTR \[r16\+rax\*4\+0x123\],r25d
 [	 ]*[a-f0-9]+:[	 ]*62 5c fc 08 60 ff[	 ]+movbe[	 ]+r15,r31
+[	 ]*[a-f0-9]+:[	 ]*62 54 fc 08 60 f8[	 ]+movbe[	 ]+r15,r8
 [	 ]*[a-f0-9]+:[	 ]*62 6c fc 08 61 bc 80 23 01 00 00[	 ]+movbe[	 ]+QWORD PTR \[r16\+rax\*4\+0x123\],r31
 [	 ]*[a-f0-9]+:[	 ]*62 4c fc 08 61 bc 87 23 01 00 00[	 ]+movbe[	 ]+QWORD PTR \[r31\+rax\*4\+0x123\],r31
 [	 ]*[a-f0-9]+:[	 ]*62 6c fc 08 60 bc 80 23 01 00 00[	 ]+movbe[	 ]+r31,QWORD PTR \[r16\+rax\*4\+0x123\]
diff --git a/gas/testsuite/gas/i386/x86-64-apx-evex-promoted-wig.d b/gas/testsuite/gas/i386/x86-64-apx-evex-promoted-wig.d
index db6860f30a6..845863bf5f9 100644
--- a/gas/testsuite/gas/i386/x86-64-apx-evex-promoted-wig.d
+++ b/gas/testsuite/gas/i386/x86-64-apx-evex-promoted-wig.d
@@ -111,11 +111,14 @@ Disassembly of section \.text:
 [	 ]*[a-f0-9]+:[	 ]*62 d9 7c 08 90 ac 87 23 01 00 00[	 ]+kmovw[	 ]+0x123\(%r31,%rax,4\),%k5
 [	 ]*[a-f0-9]+:[	 ]*62 da 7c 08 49 84 87 23 01 00 00[	 ]+ldtilecfg[	 ]+0x123\(%r31,%rax,4\)
 [	 ]*[a-f0-9]+:[	 ]*62 fc 7d 08 60 c2[	 ]+movbe[	 ]+%r18w,%ax
+[	 ]*[a-f0-9]+:[	 ]*62 d4 7d 08 60 c7[	 ]+movbe[	 ]+%r15w,%ax
 [	 ]*[a-f0-9]+:[	 ]*62 ec 7d 08 61 94 80 23 01 00 00[	 ]+movbe[	 ]+%r18w,0x123\(%r16,%rax,4\)
 [	 ]*[a-f0-9]+:[	 ]*62 cc 7d 08 61 94 87 23 01 00 00[	 ]+movbe[	 ]+%r18w,0x123\(%r31,%rax,4\)
 [	 ]*[a-f0-9]+:[	 ]*62 dc 7c 08 60 d1[	 ]+movbe[	 ]+%r25d,%edx
+[	 ]*[a-f0-9]+:[	 ]*62 d4 7c 08 60 d7[	 ]+movbe[	 ]+%r15d,%edx
 [	 ]*[a-f0-9]+:[	 ]*62 6c 7c 08 61 8c 80 23 01 00 00[	 ]+movbe[	 ]+%r25d,0x123\(%r16,%rax,4\)
 [	 ]*[a-f0-9]+:[	 ]*62 5c fc 08 60 ff[	 ]+movbe[	 ]+%r31,%r15
+[	 ]*[a-f0-9]+:[	 ]*62 54 fc 08 60 f8[	 ]+movbe[	 ]+%r8,%r15
 [	 ]*[a-f0-9]+:[	 ]*62 6c fc 08 61 bc 80 23 01 00 00[	 ]+movbe[	 ]+%r31,0x123\(%r16,%rax,4\)
 [	 ]*[a-f0-9]+:[	 ]*62 4c fc 08 61 bc 87 23 01 00 00[	 ]+movbe[	 ]+%r31,0x123\(%r31,%rax,4\)
 [	 ]*[a-f0-9]+:[	 ]*62 6c fc 08 60 bc 80 23 01 00 00[	 ]+movbe[	 ]+0x123\(%r16,%rax,4\),%r31
diff --git a/gas/testsuite/gas/i386/x86-64-apx-evex-promoted.d b/gas/testsuite/gas/i386/x86-64-apx-evex-promoted.d
index 3a9d712a346..cb821b8e798 100644
--- a/gas/testsuite/gas/i386/x86-64-apx-evex-promoted.d
+++ b/gas/testsuite/gas/i386/x86-64-apx-evex-promoted.d
@@ -111,11 +111,14 @@ Disassembly of section \.text:
 [	 ]*[a-f0-9]+:[	 ]*62 d9 7c 08 90 ac 87 23 01 00 00[	 ]+kmovw[	 ]+0x123\(%r31,%rax,4\),%k5
 [	 ]*[a-f0-9]+:[	 ]*62 da 7c 08 49 84 87 23 01 00 00[	 ]+ldtilecfg[	 ]+0x123\(%r31,%rax,4\)
 [	 ]*[a-f0-9]+:[	 ]*62 fc 7d 08 60 c2[	 ]+movbe[	 ]+%r18w,%ax
+[	 ]*[a-f0-9]+:[	 ]*62 d4 7d 08 60 c7[	 ]+movbe[	 ]+%r15w,%ax
 [	 ]*[a-f0-9]+:[	 ]*62 ec 7d 08 61 94 80 23 01 00 00[	 ]+movbe[	 ]+%r18w,0x123\(%r16,%rax,4\)
 [	 ]*[a-f0-9]+:[	 ]*62 cc 7d 08 61 94 87 23 01 00 00[	 ]+movbe[	 ]+%r18w,0x123\(%r31,%rax,4\)
 [	 ]*[a-f0-9]+:[	 ]*62 dc 7c 08 60 d1[	 ]+movbe[	 ]+%r25d,%edx
+[	 ]*[a-f0-9]+:[	 ]*62 d4 7c 08 60 d7[	 ]+movbe[	 ]+%r15d,%edx
 [	 ]*[a-f0-9]+:[	 ]*62 6c 7c 08 61 8c 80 23 01 00 00[	 ]+movbe[	 ]+%r25d,0x123\(%r16,%rax,4\)
 [	 ]*[a-f0-9]+:[	 ]*62 5c fc 08 60 ff[	 ]+movbe[	 ]+%r31,%r15
+[	 ]*[a-f0-9]+:[	 ]*62 54 fc 08 60 f8[	 ]+movbe[	 ]+%r8,%r15
 [	 ]*[a-f0-9]+:[	 ]*62 6c fc 08 61 bc 80 23 01 00 00[	 ]+movbe[	 ]+%r31,0x123\(%r16,%rax,4\)
 [	 ]*[a-f0-9]+:[	 ]*62 4c fc 08 61 bc 87 23 01 00 00[	 ]+movbe[	 ]+%r31,0x123\(%r31,%rax,4\)
 [	 ]*[a-f0-9]+:[	 ]*62 6c fc 08 60 bc 80 23 01 00 00[	 ]+movbe[	 ]+0x123\(%r16,%rax,4\),%r31
diff --git a/gas/testsuite/gas/i386/x86-64-apx-evex-promoted.s b/gas/testsuite/gas/i386/x86-64-apx-evex-promoted.s
index 0773dc562ab..b7f6cce3edc 100644
--- a/gas/testsuite/gas/i386/x86-64-apx-evex-promoted.s
+++ b/gas/testsuite/gas/i386/x86-64-apx-evex-promoted.s
@@ -105,11 +105,14 @@ _start:
 	kmovw	0x123(%r31,%rax,4),%k5
 	ldtilecfg	0x123(%r31,%rax,4)
 	movbe	%r18w,%ax
+	movbe	%r15w,%ax
 	movbe	%r18w,0x123(%r16,%rax,4)
 	movbe	%r18w,0x123(%r31,%rax,4)
 	movbe	%r25d,%edx
+	movbe	%r15d,%edx
 	movbe	%r25d,0x123(%r16,%rax,4)
 	movbe	%r31,%r15
+	movbe	%r8,%r15
 	movbe	%r31,0x123(%r16,%rax,4)
 	movbe	%r31,0x123(%r31,%rax,4)
 	movbe	0x123(%r16,%rax,4),%r31
diff --git a/gas/testsuite/gas/i386/x86-64-apx_f-evex.d b/gas/testsuite/gas/i386/x86-64-apx_f-evex.d
new file mode 100644
index 00000000000..25c81149c54
--- /dev/null
+++ b/gas/testsuite/gas/i386/x86-64-apx_f-evex.d
@@ -0,0 +1,629 @@
+#as:
+#objdump: -dw
+#name: x86_64 APX_F insns with evex pseudo prefix
+#source: x86-64-apx_f-evex.s
+
+.*: +file format .*
+
+Disassembly of section \.text:
+
+0+ <_start>:
+\s*[a-f0-9]+:\s*62 54 fc 08 fc bc 80 23 01 00 00\s+\{evex\} aadd\s+%r15,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 54 7c 08 fc bc 80 23 01 00 00\s+\{evex\} aadd\s+%r15d,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 54 fd 08 fc bc 80 23 01 00 00\s+\{evex\} aand\s+%r15,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 54 7d 08 fc bc 80 23 01 00 00\s+\{evex\} aand\s+%r15d,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 83 d7 7b\s+\{evex\} adc\s+\$0x7b,%r15
+\s*[a-f0-9]+:\s*62 d4 7c 08 83 d7 7b\s+\{evex\} adc\s+\$0x7b,%r15d
+\s*[a-f0-9]+:\s*62 d4 7d 08 83 d7 7b\s+\{evex\} adc\s+\$0x7b,%r15w
+\s*[a-f0-9]+:\s*62 d4 7c 08 80 d0 7b\s+\{evex\} adc\s+\$0x7b,%r8b
+\s*[a-f0-9]+:\s*62 d4 7c 08 80 94 80 23 01 00 00 7b\s+\{evex\} adcb\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7d 08 83 94 80 23 01 00 00 7b\s+\{evex\} adcw\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7c 08 83 94 80 23 01 00 00 7b\s+\{evex\} adcl\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 83 94 80 23 01 00 00 7b\s+\{evex\} adcq\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 54 fc 08 11 ff\s+\{evex\} adc\s+%r15,%r15
+\s*[a-f0-9]+:\s*62 54 fc 08 11 bc 80 23 01 00 00\s+\{evex\} adc\s+%r15,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 74 7c 08 11 fa\s+\{evex\} adc\s+%r15d,%edx
+\s*[a-f0-9]+:\s*62 54 7c 08 11 bc 80 23 01 00 00\s+\{evex\} adc\s+%r15d,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 74 7d 08 11 f8\s+\{evex\} adc\s+%r15w,%ax
+\s*[a-f0-9]+:\s*62 54 7d 08 11 bc 80 23 01 00 00\s+\{evex\} adc\s+%r15w,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 74 7c 08 10 c2\s+\{evex\} adc\s+%r8b,%dl
+\s*[a-f0-9]+:\s*62 54 7c 08 10 84 80 23 01 00 00\s+\{evex\} adc\s+%r8b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 54 fc 08 13 bc 80 23 01 00 00\s+\{evex\} adc\s+0x123\(%r8,%rax,4\),%r15
+\s*[a-f0-9]+:\s*62 54 7c 08 13 bc 80 23 01 00 00\s+\{evex\} adc\s+0x123\(%r8,%rax,4\),%r15d
+\s*[a-f0-9]+:\s*62 54 7d 08 13 bc 80 23 01 00 00\s+\{evex\} adc\s+0x123\(%r8,%rax,4\),%r15w
+\s*[a-f0-9]+:\s*62 54 7c 08 12 84 80 23 01 00 00\s+\{evex\} adc\s+0x123\(%r8,%rax,4\),%r8b
+\s*[a-f0-9]+:\s*62 54 fd 08 66 ff\s+\{evex\} adcx\s+%r15,%r15
+\s*[a-f0-9]+:\s*62 d4 7d 08 66 d7\s+\{evex\} adcx\s+%r15d,%edx
+\s*[a-f0-9]+:\s*62 54 fd 08 66 bc 80 23 01 00 00\s+\{evex\} adcx\s+0x123\(%r8,%rax,4\),%r15
+\s*[a-f0-9]+:\s*62 54 7d 08 66 bc 80 23 01 00 00\s+\{evex\} adcx\s+0x123\(%r8,%rax,4\),%r15d
+\s*[a-f0-9]+:\s*62 d4 fc 08 83 c7 7b\s+\{evex\} add\s+\$0x7b,%r15
+\s*[a-f0-9]+:\s*62 d4 7c 08 83 c7 7b\s+\{evex\} add\s+\$0x7b,%r15d
+\s*[a-f0-9]+:\s*62 d4 7d 08 83 c7 7b\s+\{evex\} add\s+\$0x7b,%r15w
+\s*[a-f0-9]+:\s*62 d4 7c 08 80 c0 7b\s+\{evex\} add\s+\$0x7b,%r8b
+\s*[a-f0-9]+:\s*62 d4 7c 08 80 84 80 23 01 00 00 7b\s+\{evex\} addb\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7d 08 83 84 80 23 01 00 00 7b\s+\{evex\} addw\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7c 08 83 84 80 23 01 00 00 7b\s+\{evex\} addl\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 83 84 80 23 01 00 00 7b\s+\{evex\} addq\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 54 fc 08 01 ff\s+\{evex\} add\s+%r15,%r15
+\s*[a-f0-9]+:\s*62 54 fc 08 01 bc 80 23 01 00 00\s+\{evex\} add\s+%r15,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 74 7c 08 01 fa\s+\{evex\} add\s+%r15d,%edx
+\s*[a-f0-9]+:\s*62 54 7c 08 01 bc 80 23 01 00 00\s+\{evex\} add\s+%r15d,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 74 7d 08 01 f8\s+\{evex\} add\s+%r15w,%ax
+\s*[a-f0-9]+:\s*62 54 7d 08 01 bc 80 23 01 00 00\s+\{evex\} add\s+%r15w,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 74 7c 08 00 c2\s+\{evex\} add\s+%r8b,%dl
+\s*[a-f0-9]+:\s*62 54 7c 08 00 84 80 23 01 00 00\s+\{evex\} add\s+%r8b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 54 fc 08 03 bc 80 23 01 00 00\s+\{evex\} add\s+0x123\(%r8,%rax,4\),%r15
+\s*[a-f0-9]+:\s*62 54 7c 08 03 bc 80 23 01 00 00\s+\{evex\} add\s+0x123\(%r8,%rax,4\),%r15d
+\s*[a-f0-9]+:\s*62 54 7d 08 03 bc 80 23 01 00 00\s+\{evex\} add\s+0x123\(%r8,%rax,4\),%r15w
+\s*[a-f0-9]+:\s*62 54 7c 08 02 84 80 23 01 00 00\s+\{evex\} add\s+0x123\(%r8,%rax,4\),%r8b
+\s*[a-f0-9]+:\s*62 54 fe 08 66 ff\s+\{evex\} adox\s+%r15,%r15
+\s*[a-f0-9]+:\s*62 d4 7e 08 66 d7\s+\{evex\} adox\s+%r15d,%edx
+\s*[a-f0-9]+:\s*62 54 fe 08 66 bc 80 23 01 00 00\s+\{evex\} adox\s+0x123\(%r8,%rax,4\),%r15
+\s*[a-f0-9]+:\s*62 54 7e 08 66 bc 80 23 01 00 00\s+\{evex\} adox\s+0x123\(%r8,%rax,4\),%r15d
+\s*[a-f0-9]+:\s*62 54 7e 08 dd a4 80 23 01 00 00\s+\{evex\} aesdec128kl\s+0x123\(%r8,%rax,4\),%xmm12
+\s*[a-f0-9]+:\s*62 54 7e 08 df a4 80 23 01 00 00\s+\{evex\} aesdec256kl\s+0x123\(%r8,%rax,4\),%xmm12
+\s*[a-f0-9]+:\s*62 d4 7e 08 d8 8c 80 23 01 00 00\s+\{evex\} aesdecwide128kl\s+0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7e 08 d8 9c 80 23 01 00 00\s+\{evex\} aesdecwide256kl\s+0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 54 7e 08 dc a4 80 23 01 00 00\s+\{evex\} aesenc128kl\s+0x123\(%r8,%rax,4\),%xmm12
+\s*[a-f0-9]+:\s*62 54 7e 08 de a4 80 23 01 00 00\s+\{evex\} aesenc256kl\s+0x123\(%r8,%rax,4\),%xmm12
+\s*[a-f0-9]+:\s*62 d4 7e 08 d8 84 80 23 01 00 00\s+\{evex\} aesencwide128kl\s+0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7e 08 d8 94 80 23 01 00 00\s+\{evex\} aesencwide256kl\s+0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 83 e7 7b\s+\{evex\} and\s+\$0x7b,%r15
+\s*[a-f0-9]+:\s*62 d4 7c 08 83 e7 7b\s+\{evex\} and\s+\$0x7b,%r15d
+\s*[a-f0-9]+:\s*62 d4 7d 08 83 e7 7b\s+\{evex\} and\s+\$0x7b,%r15w
+\s*[a-f0-9]+:\s*62 d4 7c 08 80 e0 7b\s+\{evex\} and\s+\$0x7b,%r8b
+\s*[a-f0-9]+:\s*62 d4 7c 08 80 a4 80 23 01 00 00 7b\s+\{evex\} andb\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7d 08 83 a4 80 23 01 00 00 7b\s+\{evex\} andw\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7c 08 83 a4 80 23 01 00 00 7b\s+\{evex\} andl\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 83 a4 80 23 01 00 00 7b\s+\{evex\} andq\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 54 fc 08 21 ff\s+\{evex\} and\s+%r15,%r15
+\s*[a-f0-9]+:\s*62 54 fc 08 21 bc 80 23 01 00 00\s+\{evex\} and\s+%r15,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 74 7c 08 21 fa\s+\{evex\} and\s+%r15d,%edx
+\s*[a-f0-9]+:\s*62 54 7c 08 21 bc 80 23 01 00 00\s+\{evex\} and\s+%r15d,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 74 7d 08 21 f8\s+\{evex\} and\s+%r15w,%ax
+\s*[a-f0-9]+:\s*62 54 7d 08 21 bc 80 23 01 00 00\s+\{evex\} and\s+%r15w,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 74 7c 08 20 c2\s+\{evex\} and\s+%r8b,%dl
+\s*[a-f0-9]+:\s*62 54 7c 08 20 84 80 23 01 00 00\s+\{evex\} and\s+%r8b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 54 fc 08 23 bc 80 23 01 00 00\s+\{evex\} and\s+0x123\(%r8,%rax,4\),%r15
+\s*[a-f0-9]+:\s*62 54 7c 08 23 bc 80 23 01 00 00\s+\{evex\} and\s+0x123\(%r8,%rax,4\),%r15d
+\s*[a-f0-9]+:\s*62 54 7d 08 23 bc 80 23 01 00 00\s+\{evex\} and\s+0x123\(%r8,%rax,4\),%r15w
+\s*[a-f0-9]+:\s*62 54 7c 08 22 84 80 23 01 00 00\s+\{evex\} and\s+0x123\(%r8,%rax,4\),%r8b
+\s*[a-f0-9]+:\s*62 52 84 08 f2 df\s+\{evex\} andn\s+%r15,%r15,%r11
+\s*[a-f0-9]+:\s*62 52 6c 08 f2 d7\s+\{evex\} andn\s+%r15d,%edx,%r10d
+\s*[a-f0-9]+:\s*62 52 84 08 f2 bc 80 23 01 00 00\s+\{evex\} andn\s+0x123\(%r8,%rax,4\),%r15,%r15
+\s*[a-f0-9]+:\s*62 d2 04 08 f2 94 80 23 01 00 00\s+\{evex\} andn\s+0x123\(%r8,%rax,4\),%r15d,%edx
+\s*[a-f0-9]+:\s*62 54 ff 08 fc bc 80 23 01 00 00\s+\{evex\} aor\s+%r15,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 54 7f 08 fc bc 80 23 01 00 00\s+\{evex\} aor\s+%r15d,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 54 fe 08 fc bc 80 23 01 00 00\s+\{evex\} axor\s+%r15,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 54 7e 08 fc bc 80 23 01 00 00\s+\{evex\} axor\s+%r15d,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 52 84 08 f7 df\s+\{evex\} bextr\s+%r15,%r15,%r11
+\s*[a-f0-9]+:\s*62 52 84 08 f7 bc 80 23 01 00 00\s+\{evex\} bextr\s+%r15,0x123\(%r8,%rax,4\),%r15
+\s*[a-f0-9]+:\s*62 72 04 08 f7 d2\s+\{evex\} bextr\s+%r15d,%edx,%r10d
+\s*[a-f0-9]+:\s*62 d2 04 08 f7 94 80 23 01 00 00\s+\{evex\} bextr\s+%r15d,0x123\(%r8,%rax,4\),%edx
+\s*[a-f0-9]+:\s*62 d2 84 08 f3 df\s+\{evex\} blsi\s+%r15,%r15
+\s*[a-f0-9]+:\s*62 d2 6c 08 f3 df\s+\{evex\} blsi\s+%r15d,%edx
+\s*[a-f0-9]+:\s*62 d2 84 08 f3 9c 80 23 01 00 00\s+\{evex\} blsi\s+0x123\(%r8,%rax,4\),%r15
+\s*[a-f0-9]+:\s*62 d2 04 08 f3 9c 80 23 01 00 00\s+\{evex\} blsi\s+0x123\(%r8,%rax,4\),%r15d
+\s*[a-f0-9]+:\s*62 d2 84 08 f3 d7\s+\{evex\} blsmsk\s+%r15,%r15
+\s*[a-f0-9]+:\s*62 d2 6c 08 f3 d7\s+\{evex\} blsmsk\s+%r15d,%edx
+\s*[a-f0-9]+:\s*62 d2 84 08 f3 94 80 23 01 00 00\s+\{evex\} blsmsk\s+0x123\(%r8,%rax,4\),%r15
+\s*[a-f0-9]+:\s*62 d2 04 08 f3 94 80 23 01 00 00\s+\{evex\} blsmsk\s+0x123\(%r8,%rax,4\),%r15d
+\s*[a-f0-9]+:\s*62 d2 84 08 f3 cf\s+\{evex\} blsr\s+%r15,%r15
+\s*[a-f0-9]+:\s*62 d2 6c 08 f3 cf\s+\{evex\} blsr\s+%r15d,%edx
+\s*[a-f0-9]+:\s*62 d2 84 08 f3 8c 80 23 01 00 00\s+\{evex\} blsr\s+0x123\(%r8,%rax,4\),%r15
+\s*[a-f0-9]+:\s*62 d2 04 08 f3 8c 80 23 01 00 00\s+\{evex\} blsr\s+0x123\(%r8,%rax,4\),%r15d
+\s*[a-f0-9]+:\s*62 52 84 08 f5 df\s+\{evex\} bzhi\s+%r15,%r15,%r11
+\s*[a-f0-9]+:\s*62 52 84 08 f5 bc 80 23 01 00 00\s+\{evex\} bzhi\s+%r15,0x123\(%r8,%rax,4\),%r15
+\s*[a-f0-9]+:\s*62 72 04 08 f5 d2\s+\{evex\} bzhi\s+%r15d,%edx,%r10d
+\s*[a-f0-9]+:\s*62 d2 04 08 f5 94 80 23 01 00 00\s+\{evex\} bzhi\s+%r15d,0x123\(%r8,%rax,4\),%edx
+\s*[a-f0-9]+:\s*62 52 85 08 e6 bc 80 23 01 00 00\s+\{evex\} cmpbexadd\s+%r15,%r15,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d2 05 08 e6 94 80 23 01 00 00\s+\{evex\} cmpbexadd\s+%r15d,%edx,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 52 85 08 e2 bc 80 23 01 00 00\s+\{evex\} cmpbxadd\s+%r15,%r15,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d2 05 08 e2 94 80 23 01 00 00\s+\{evex\} cmpbxadd\s+%r15d,%edx,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 52 85 08 ee bc 80 23 01 00 00\s+\{evex\} cmplexadd\s+%r15,%r15,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d2 05 08 ee 94 80 23 01 00 00\s+\{evex\} cmplexadd\s+%r15d,%edx,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 52 85 08 ec bc 80 23 01 00 00\s+\{evex\} cmplxadd\s+%r15,%r15,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d2 05 08 ec 94 80 23 01 00 00\s+\{evex\} cmplxadd\s+%r15d,%edx,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 52 85 08 e7 bc 80 23 01 00 00\s+\{evex\} cmpnbexadd\s+%r15,%r15,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d2 05 08 e7 94 80 23 01 00 00\s+\{evex\} cmpnbexadd\s+%r15d,%edx,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 52 85 08 e3 bc 80 23 01 00 00\s+\{evex\} cmpnbxadd\s+%r15,%r15,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d2 05 08 e3 94 80 23 01 00 00\s+\{evex\} cmpnbxadd\s+%r15d,%edx,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 52 85 08 ef bc 80 23 01 00 00\s+\{evex\} cmpnlexadd\s+%r15,%r15,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d2 05 08 ef 94 80 23 01 00 00\s+\{evex\} cmpnlexadd\s+%r15d,%edx,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 52 85 08 ed bc 80 23 01 00 00\s+\{evex\} cmpnlxadd\s+%r15,%r15,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d2 05 08 ed 94 80 23 01 00 00\s+\{evex\} cmpnlxadd\s+%r15d,%edx,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 52 85 08 e1 bc 80 23 01 00 00\s+\{evex\} cmpnoxadd\s+%r15,%r15,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d2 05 08 e1 94 80 23 01 00 00\s+\{evex\} cmpnoxadd\s+%r15d,%edx,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 52 85 08 eb bc 80 23 01 00 00\s+\{evex\} cmpnpxadd\s+%r15,%r15,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d2 05 08 eb 94 80 23 01 00 00\s+\{evex\} cmpnpxadd\s+%r15d,%edx,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 52 85 08 e9 bc 80 23 01 00 00\s+\{evex\} cmpnsxadd\s+%r15,%r15,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d2 05 08 e9 94 80 23 01 00 00\s+\{evex\} cmpnsxadd\s+%r15d,%edx,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 52 85 08 e5 bc 80 23 01 00 00\s+\{evex\} cmpnzxadd\s+%r15,%r15,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d2 05 08 e5 94 80 23 01 00 00\s+\{evex\} cmpnzxadd\s+%r15d,%edx,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 52 85 08 e0 bc 80 23 01 00 00\s+\{evex\} cmpoxadd\s+%r15,%r15,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d2 05 08 e0 94 80 23 01 00 00\s+\{evex\} cmpoxadd\s+%r15d,%edx,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 52 85 08 ea bc 80 23 01 00 00\s+\{evex\} cmppxadd\s+%r15,%r15,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d2 05 08 ea 94 80 23 01 00 00\s+\{evex\} cmppxadd\s+%r15d,%edx,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 52 85 08 e8 bc 80 23 01 00 00\s+\{evex\} cmpsxadd\s+%r15,%r15,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d2 05 08 e8 94 80 23 01 00 00\s+\{evex\} cmpsxadd\s+%r15d,%edx,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 52 85 08 e4 bc 80 23 01 00 00\s+\{evex\} cmpzxadd\s+%r15,%r15,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d2 05 08 e4 94 80 23 01 00 00\s+\{evex\} cmpzxadd\s+%r15d,%edx,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 ff cf\s+\{evex\} dec\s+%r15
+\s*[a-f0-9]+:\s*62 d4 7c 08 ff cf\s+\{evex\} dec\s+%r15d
+\s*[a-f0-9]+:\s*62 d4 7d 08 ff cf\s+\{evex\} dec\s+%r15w
+\s*[a-f0-9]+:\s*62 d4 7c 08 fe c8\s+\{evex\} dec\s+%r8b
+\s*[a-f0-9]+:\s*62 d4 7c 08 fe 8c 80 23 01 00 00\s+\{evex\} decb\s+0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7d 08 ff 8c 80 23 01 00 00\s+\{evex\} decw\s+0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7c 08 ff 8c 80 23 01 00 00\s+\{evex\} decl\s+0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 ff 8c 80 23 01 00 00\s+\{evex\} decq\s+0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 f7 f7\s+\{evex\} div\s+%r15
+\s*[a-f0-9]+:\s*62 d4 7c 08 f7 f7\s+\{evex\} div\s+%r15d
+\s*[a-f0-9]+:\s*62 d4 7d 08 f7 f7\s+\{evex\} div\s+%r15w
+\s*[a-f0-9]+:\s*62 d4 7c 08 f6 f0\s+\{evex\} div\s+%r8b
+\s*[a-f0-9]+:\s*62 d4 7c 08 f6 b4 80 23 01 00 00\s+\{evex\} divb\s+0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7d 08 f7 b4 80 23 01 00 00\s+\{evex\} divw\s+0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7c 08 f7 b4 80 23 01 00 00\s+\{evex\} divl\s+0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 f7 b4 80 23 01 00 00\s+\{evex\} divq\s+0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7e 08 da d7\s+\{evex\} encodekey128\s+%r15d,%edx
+\s*[a-f0-9]+:\s*62 d4 7e 08 db d7\s+\{evex\} encodekey256\s+%r15d,%edx
+\s*[a-f0-9]+:\s*62 54 7f 08 f8 bc 80 23 01 00 00\s+\{evex\} enqcmd\s+0x123\(%r8,%rax,4\),%r15
+\s*[a-f0-9]+:\s*67 62 54 7f 08 f8 bc 80 23 01 00 00\s+\{evex\} enqcmd\s+0x123\(%r8d,%eax,4\),%r15d
+\s*[a-f0-9]+:\s*62 54 7e 08 f8 bc 80 23 01 00 00\s+\{evex\} enqcmds\s+0x123\(%r8,%rax,4\),%r15
+\s*[a-f0-9]+:\s*67 62 54 7e 08 f8 bc 80 23 01 00 00\s+\{evex\} enqcmds\s+0x123\(%r8d,%eax,4\),%r15d
+\s*[a-f0-9]+:\s*62 d4 fc 08 f7 ff\s+\{evex\} idiv\s+%r15
+\s*[a-f0-9]+:\s*62 d4 7c 08 f7 ff\s+\{evex\} idiv\s+%r15d
+\s*[a-f0-9]+:\s*62 d4 7d 08 f7 ff\s+\{evex\} idiv\s+%r15w
+\s*[a-f0-9]+:\s*62 d4 7c 08 f6 f8\s+\{evex\} idiv\s+%r8b
+\s*[a-f0-9]+:\s*62 d4 7c 08 f6 bc 80 23 01 00 00\s+\{evex\} idivb\s+0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7d 08 f7 bc 80 23 01 00 00\s+\{evex\} idivw\s+0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7c 08 f7 bc 80 23 01 00 00\s+\{evex\} idivl\s+0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 f7 bc 80 23 01 00 00\s+\{evex\} idivq\s+0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 f7 ef\s+\{evex\} imul\s+%r15
+\s*[a-f0-9]+:\s*62 54 fc 08 af ff\s+\{evex\} imul\s+%r15,%r15
+\s*[a-f0-9]+:\s*62 d4 7c 08 f7 ef\s+\{evex\} imul\s+%r15d
+\s*[a-f0-9]+:\s*62 d4 7c 08 af d7\s+\{evex\} imul\s+%r15d,%edx
+\s*[a-f0-9]+:\s*62 d4 7d 08 f7 ef\s+\{evex\} imul\s+%r15w
+\s*[a-f0-9]+:\s*62 d4 7d 08 af c7\s+\{evex\} imul\s+%r15w,%ax
+\s*[a-f0-9]+:\s*62 d4 7c 08 f6 e8\s+\{evex\} imul\s+%r8b
+\s*[a-f0-9]+:\s*62 d4 7c 08 f6 ac 80 23 01 00 00\s+\{evex\} imulb\s+0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7d 08 f7 ac 80 23 01 00 00\s+\{evex\} imulw\s+0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7c 08 f7 ac 80 23 01 00 00\s+\{evex\} imull\s+0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 54 fc 08 af bc 80 23 01 00 00\s+\{evex\} imul\s+0x123\(%r8,%rax,4\),%r15
+\s*[a-f0-9]+:\s*62 54 7c 08 af bc 80 23 01 00 00\s+\{evex\} imul\s+0x123\(%r8,%rax,4\),%r15d
+\s*[a-f0-9]+:\s*62 54 7d 08 af bc 80 23 01 00 00\s+\{evex\} imul\s+0x123\(%r8,%rax,4\),%r15w
+\s*[a-f0-9]+:\s*62 d4 fc 08 f7 ac 80 23 01 00 00\s+\{evex\} imulq\s+0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 f4 7d 08 6b c2 7b\s+\{evex\} imul \$0x7b,%dx,%ax
+\s*[a-f0-9]+:\s*62 f4 7c 08 6b d1 7b\s+\{evex\} imul \$0x7b,%ecx,%edx
+\s*[a-f0-9]+:\s*62 54 fc 08 6b f9 7b\s+\{evex\} imul \$0x7b,%r9,%r15
+\s*[a-f0-9]+:\s*62 54 fc 08 6b c9 7b\s+\{evex\} imul \$0x7b,%r9,%r9
+\s*[a-f0-9]+:\s*62 d4 7d 08 6b 94 80 23 01 00 00 7b\s+\{evex\} imul \$0x7b,0x123\(%r8,%rax,4\),%dx
+\s*[a-f0-9]+:\s*62 d4 7c 08 6b 8c 80 23 01 00 00 7b\s+\{evex\} imul \$0x7b,0x123\(%r8,%rax,4\),%ecx
+\s*[a-f0-9]+:\s*62 54 fc 08 6b 8c 80 23 01 00 00 7b\s+\{evex\} imul \$0x7b,0x123\(%r8,%rax,4\),%r9
+\s*[a-f0-9]+:\s*62 f4 7d 08 6b c2 90\s+\{evex\} imul \$0xff90,%dx,%ax
+\s*[a-f0-9]+:\s*62 f4 7c 08 69 d1 90 ff 00 00\s+\{evex\} imul \$0xff90,%ecx,%edx
+\s*[a-f0-9]+:\s*62 54 fc 08 69 f9 90 ff 00 00\s+\{evex\} imul \$0xff90,%r9,%r15
+\s*[a-f0-9]+:\s*62 54 fc 08 69 c9 90 ff 00 00\s+\{evex\} imul \$0xff90,%r9,%r9
+\s*[a-f0-9]+:\s*62 d4 7d 08 6b 94 80 23 01 00 00 90\s+\{evex\} imul \$0xff90,0x123\(%r8,%rax,4\),%dx
+\s*[a-f0-9]+:\s*62 d4 7c 08 69 8c 80 23 01 00 00 90 ff 00 00\s+\{evex\} imul \$0xff90,0x123\(%r8,%rax,4\),%ecx
+\s*[a-f0-9]+:\s*62 54 fc 08 69 8c 80 23 01 00 00 90 ff 00 00\s+\{evex\} imul \$0xff90,0x123\(%r8,%rax,4\),%r9
+\s*[a-f0-9]+:\s*62 d4 fc 08 ff c7\s+\{evex\} inc\s+%r15
+\s*[a-f0-9]+:\s*62 d4 7c 08 ff c7\s+\{evex\} inc\s+%r15d
+\s*[a-f0-9]+:\s*62 d4 7d 08 ff c7\s+\{evex\} inc\s+%r15w
+\s*[a-f0-9]+:\s*62 d4 7c 08 fe c0\s+\{evex\} inc\s+%r8b
+\s*[a-f0-9]+:\s*62 d4 7c 08 fe 84 80 23 01 00 00\s+\{evex\} incb\s+0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7d 08 ff 84 80 23 01 00 00\s+\{evex\} incw\s+0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7c 08 ff 84 80 23 01 00 00\s+\{evex\} incl\s+0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 ff 84 80 23 01 00 00\s+\{evex\} incq\s+0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 54 7e 08 f0 bc 80 23 01 00 00\s+\{evex\} invept\s+0x123\(%r8,%rax,4\),%r15
+\s*[a-f0-9]+:\s*62 54 7e 08 f2 bc 80 23 01 00 00\s+\{evex\} invpcid\s+0x123\(%r8,%rax,4\),%r15
+\s*[a-f0-9]+:\s*62 54 7e 08 f1 bc 80 23 01 00 00\s+\{evex\} invvpid\s+0x123\(%r8,%rax,4\),%r15
+\s*[a-f0-9]+:\s*62 f1 7d 08 90 eb\s+\{evex\} kmovb\s+%k3,%k5
+\s*[a-f0-9]+:\s*62 71 7d 08 93 fd\s+\{evex\} kmovb\s+%k5,%r15d
+\s*[a-f0-9]+:\s*62 d1 7d 08 91 ac 80 23 01 00 00\s+\{evex\} kmovb\s+%k5,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d1 7d 08 92 ef\s+\{evex\} kmovb\s+%r15d,%k5
+\s*[a-f0-9]+:\s*62 d1 7d 08 90 ac 80 23 01 00 00\s+\{evex\} kmovb\s+0x123\(%r8,%rax,4\),%k5
+\s*[a-f0-9]+:\s*62 f1 fd 08 90 eb\s+\{evex\} kmovd\s+%k3,%k5
+\s*[a-f0-9]+:\s*62 71 7f 08 93 fd\s+\{evex\} kmovd\s+%k5,%r15d
+\s*[a-f0-9]+:\s*62 d1 fd 08 91 ac 80 23 01 00 00\s+\{evex\} kmovd\s+%k5,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d1 7f 08 92 ef\s+\{evex\} kmovd\s+%r15d,%k5
+\s*[a-f0-9]+:\s*62 d1 fd 08 90 ac 80 23 01 00 00\s+\{evex\} kmovd\s+0x123\(%r8,%rax,4\),%k5
+\s*[a-f0-9]+:\s*62 f1 fc 08 90 eb\s+\{evex\} kmovq\s+%k3,%k5
+\s*[a-f0-9]+:\s*62 71 ff 08 93 fd\s+\{evex\} kmovq\s+%k5,%r15
+\s*[a-f0-9]+:\s*62 d1 fc 08 91 ac 80 23 01 00 00\s+\{evex\} kmovq\s+%k5,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d1 ff 08 92 ef\s+\{evex\} kmovq\s+%r15,%k5
+\s*[a-f0-9]+:\s*62 d1 fc 08 90 ac 80 23 01 00 00\s+\{evex\} kmovq\s+0x123\(%r8,%rax,4\),%k5
+\s*[a-f0-9]+:\s*62 f1 7c 08 90 eb\s+\{evex\} kmovw\s+%k3,%k5
+\s*[a-f0-9]+:\s*62 71 7c 08 93 fd\s+\{evex\} kmovw\s+%k5,%r15d
+\s*[a-f0-9]+:\s*62 d1 7c 08 91 ac 80 23 01 00 00\s+\{evex\} kmovw\s+%k5,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d1 7c 08 92 ef\s+\{evex\} kmovw\s+%r15d,%k5
+\s*[a-f0-9]+:\s*62 d1 7c 08 90 ac 80 23 01 00 00\s+\{evex\} kmovw\s+0x123\(%r8,%rax,4\),%k5
+\s*[a-f0-9]+:\s*62 54 fc 08 f5 ff\s+\{evex\} lzcnt\s+%r15,%r15
+\s*[a-f0-9]+:\s*62 d4 7c 08 f5 d7\s+\{evex\} lzcnt\s+%r15d,%edx
+\s*[a-f0-9]+:\s*62 d4 7d 08 f5 c7\s+\{evex\} lzcnt\s+%r15w,%ax
+\s*[a-f0-9]+:\s*62 54 fc 08 f5 bc 80 23 01 00 00\s+\{evex\} lzcnt\s+0x123\(%r8,%rax,4\),%r15
+\s*[a-f0-9]+:\s*62 54 7c 08 f5 bc 80 23 01 00 00\s+\{evex\} lzcnt\s+0x123\(%r8,%rax,4\),%r15d
+\s*[a-f0-9]+:\s*62 54 7d 08 f5 bc 80 23 01 00 00\s+\{evex\} lzcnt\s+0x123\(%r8,%rax,4\),%r15w
+\s*[a-f0-9]+:\s*62 54 fc 08 61 bc 80 23 01 00 00\s+\{evex\} movbe\s+%r15,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 54 7c 08 61 bc 80 23 01 00 00\s+\{evex\} movbe\s+%r15d,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 54 7d 08 61 bc 80 23 01 00 00\s+\{evex\} movbe\s+%r15w,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 54 fc 08 60 bc 80 23 01 00 00\s+\{evex\} movbe\s+0x123\(%r8,%rax,4\),%r15
+\s*[a-f0-9]+:\s*62 54 7c 08 60 bc 80 23 01 00 00\s+\{evex\} movbe\s+0x123\(%r8,%rax,4\),%r15d
+\s*[a-f0-9]+:\s*62 54 7d 08 60 bc 80 23 01 00 00\s+\{evex\} movbe\s+0x123\(%r8,%rax,4\),%r15w
+\s*[a-f0-9]+:\s*62 54 7d 08 f8 bc 80 23 01 00 00\s+\{evex\} movdir64b\s+0x123\(%r8,%rax,4\),%r15
+\s*[a-f0-9]+:\s*67 62 54 7d 08 f8 bc 80 23 01 00 00\s+\{evex\} movdir64b\s+0x123\(%r8d,%eax,4\),%r15d
+\s*[a-f0-9]+:\s*62 54 fc 08 f9 bc 80 23 01 00 00\s+\{evex\} movdiri\s+%r15,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 54 7c 08 f9 bc 80 23 01 00 00\s+\{evex\} movdiri\s+%r15d,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 f7 e7\s+\{evex\} mul\s+%r15
+\s*[a-f0-9]+:\s*62 d4 7c 08 f7 e7\s+\{evex\} mul\s+%r15d
+\s*[a-f0-9]+:\s*62 d4 7d 08 f7 e7\s+\{evex\} mul\s+%r15w
+\s*[a-f0-9]+:\s*62 d4 7c 08 f6 e0\s+\{evex\} mul\s+%r8b
+\s*[a-f0-9]+:\s*62 d4 7c 08 f6 a4 80 23 01 00 00\s+\{evex\} mulb\s+0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7d 08 f7 a4 80 23 01 00 00\s+\{evex\} mulw\s+0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7c 08 f7 a4 80 23 01 00 00\s+\{evex\} mull\s+0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 f7 a4 80 23 01 00 00\s+\{evex\} mulq\s+0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 52 87 08 f6 df\s+\{evex\} mulx\s+%r15,%r15,%r11
+\s*[a-f0-9]+:\s*62 52 6f 08 f6 d7\s+\{evex\} mulx\s+%r15d,%edx,%r10d
+\s*[a-f0-9]+:\s*62 52 87 08 f6 bc 80 23 01 00 00\s+\{evex\} mulx\s+0x123\(%r8,%rax,4\),%r15,%r15
+\s*[a-f0-9]+:\s*62 d2 07 08 f6 94 80 23 01 00 00\s+\{evex\} mulx\s+0x123\(%r8,%rax,4\),%r15d,%edx
+\s*[a-f0-9]+:\s*62 d4 fc 08 f7 df\s+\{evex\} neg\s+%r15
+\s*[a-f0-9]+:\s*62 d4 7c 08 f7 df\s+\{evex\} neg\s+%r15d
+\s*[a-f0-9]+:\s*62 d4 7d 08 f7 df\s+\{evex\} neg\s+%r15w
+\s*[a-f0-9]+:\s*62 d4 7c 08 f6 d8\s+\{evex\} neg\s+%r8b
+\s*[a-f0-9]+:\s*62 d4 7c 08 f6 9c 80 23 01 00 00\s+\{evex\} negb\s+0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7d 08 f7 9c 80 23 01 00 00\s+\{evex\} negw\s+0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7c 08 f7 9c 80 23 01 00 00\s+\{evex\} negl\s+0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 f7 9c 80 23 01 00 00\s+\{evex\} negq\s+0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 f7 d7\s+\{evex\} not\s+%r15
+\s*[a-f0-9]+:\s*62 d4 7c 08 f7 d7\s+\{evex\} not\s+%r15d
+\s*[a-f0-9]+:\s*62 d4 7d 08 f7 d7\s+\{evex\} not\s+%r15w
+\s*[a-f0-9]+:\s*62 d4 7c 08 f6 d0\s+\{evex\} not\s+%r8b
+\s*[a-f0-9]+:\s*62 d4 7c 08 f6 94 80 23 01 00 00\s+\{evex\} notb\s+0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7d 08 f7 94 80 23 01 00 00\s+\{evex\} notw\s+0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7c 08 f7 94 80 23 01 00 00\s+\{evex\} notl\s+0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 f7 94 80 23 01 00 00\s+\{evex\} notq\s+0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 83 cf 7b\s+\{evex\} or\s+\$0x7b,%r15
+\s*[a-f0-9]+:\s*62 d4 7c 08 83 cf 7b\s+\{evex\} or\s+\$0x7b,%r15d
+\s*[a-f0-9]+:\s*62 d4 7d 08 83 cf 7b\s+\{evex\} or\s+\$0x7b,%r15w
+\s*[a-f0-9]+:\s*62 d4 7c 08 80 c8 7b\s+\{evex\} or\s+\$0x7b,%r8b
+\s*[a-f0-9]+:\s*62 d4 7c 08 80 8c 80 23 01 00 00 7b\s+\{evex\} orb\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7d 08 83 8c 80 23 01 00 00 7b\s+\{evex\} orw\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7c 08 83 8c 80 23 01 00 00 7b\s+\{evex\} orl\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 83 8c 80 23 01 00 00 7b\s+\{evex\} orq\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 54 fc 08 09 ff\s+\{evex\} or\s+%r15,%r15
+\s*[a-f0-9]+:\s*62 54 fc 08 09 bc 80 23 01 00 00\s+\{evex\} or\s+%r15,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 74 7c 08 09 fa\s+\{evex\} or\s+%r15d,%edx
+\s*[a-f0-9]+:\s*62 54 7c 08 09 bc 80 23 01 00 00\s+\{evex\} or\s+%r15d,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 74 7d 08 09 f8\s+\{evex\} or\s+%r15w,%ax
+\s*[a-f0-9]+:\s*62 54 7d 08 09 bc 80 23 01 00 00\s+\{evex\} or\s+%r15w,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 74 7c 08 08 c2\s+\{evex\} or\s+%r8b,%dl
+\s*[a-f0-9]+:\s*62 54 7c 08 08 84 80 23 01 00 00\s+\{evex\} or\s+%r8b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 54 fc 08 0b bc 80 23 01 00 00\s+\{evex\} or\s+0x123\(%r8,%rax,4\),%r15
+\s*[a-f0-9]+:\s*62 54 7c 08 0b bc 80 23 01 00 00\s+\{evex\} or\s+0x123\(%r8,%rax,4\),%r15d
+\s*[a-f0-9]+:\s*62 54 7d 08 0b bc 80 23 01 00 00\s+\{evex\} or\s+0x123\(%r8,%rax,4\),%r15w
+\s*[a-f0-9]+:\s*62 54 7c 08 0a 84 80 23 01 00 00\s+\{evex\} or\s+0x123\(%r8,%rax,4\),%r8b
+\s*[a-f0-9]+:\s*62 52 87 08 f5 df\s+\{evex\} pdep\s+%r15,%r15,%r11
+\s*[a-f0-9]+:\s*62 52 6f 08 f5 d7\s+\{evex\} pdep\s+%r15d,%edx,%r10d
+\s*[a-f0-9]+:\s*62 52 87 08 f5 bc 80 23 01 00 00\s+\{evex\} pdep\s+0x123\(%r8,%rax,4\),%r15,%r15
+\s*[a-f0-9]+:\s*62 d2 07 08 f5 94 80 23 01 00 00\s+\{evex\} pdep\s+0x123\(%r8,%rax,4\),%r15d,%edx
+\s*[a-f0-9]+:\s*62 52 86 08 f5 df\s+\{evex\} pext\s+%r15,%r15,%r11
+\s*[a-f0-9]+:\s*62 52 6e 08 f5 d7\s+\{evex\} pext\s+%r15d,%edx,%r10d
+\s*[a-f0-9]+:\s*62 52 86 08 f5 bc 80 23 01 00 00\s+\{evex\} pext\s+0x123\(%r8,%rax,4\),%r15,%r15
+\s*[a-f0-9]+:\s*62 d2 06 08 f5 94 80 23 01 00 00\s+\{evex\} pext\s+0x123\(%r8,%rax,4\),%r15d,%edx
+\s*[a-f0-9]+:\s*62 54 fc 08 88 ff\s+\{evex\} popcnt\s+%r15,%r15
+\s*[a-f0-9]+:\s*62 d4 7c 08 88 d7\s+\{evex\} popcnt\s+%r15d,%edx
+\s*[a-f0-9]+:\s*62 d4 7d 08 88 c7\s+\{evex\} popcnt\s+%r15w,%ax
+\s*[a-f0-9]+:\s*62 54 fc 08 88 bc 80 23 01 00 00\s+\{evex\} popcnt\s+0x123\(%r8,%rax,4\),%r15
+\s*[a-f0-9]+:\s*62 54 7c 08 88 bc 80 23 01 00 00\s+\{evex\} popcnt\s+0x123\(%r8,%rax,4\),%r15d
+\s*[a-f0-9]+:\s*62 54 7d 08 88 bc 80 23 01 00 00\s+\{evex\} popcnt\s+0x123\(%r8,%rax,4\),%r15w
+\s*[a-f0-9]+:\s*62 d4 fc 08 c1 d7 7b\s+\{evex\} rcl\s+\$0x7b,%r15
+\s*[a-f0-9]+:\s*62 d4 7c 08 c1 d7 7b\s+\{evex\} rcl\s+\$0x7b,%r15d
+\s*[a-f0-9]+:\s*62 d4 7d 08 c1 d7 7b\s+\{evex\} rcl\s+\$0x7b,%r15w
+\s*[a-f0-9]+:\s*62 d4 7c 08 c0 d0 7b\s+\{evex\} rcl\s+\$0x7b,%r8b
+\s*[a-f0-9]+:\s*62 d4 7c 08 c0 94 80 23 01 00 00 7b\s+\{evex\} rclb\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7d 08 c1 94 80 23 01 00 00 7b\s+\{evex\} rclw\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7c 08 c1 94 80 23 01 00 00 7b\s+\{evex\} rcll\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 c1 94 80 23 01 00 00 7b\s+\{evex\} rclq\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 d1 d7\s+\{evex\} rcl\s+\$1,%r15
+\s*[a-f0-9]+:\s*62 d4 7c 08 d1 d7\s+\{evex\} rcl\s+\$1,%r15d
+\s*[a-f0-9]+:\s*62 d4 7d 08 d1 d7\s+\{evex\} rcl\s+\$1,%r15w
+\s*[a-f0-9]+:\s*62 d4 7c 08 d0 d0\s+\{evex\} rcl\s+\$1,%r8b
+\s*[a-f0-9]+:\s*62 d4 7c 08 d0 94 80 23 01 00 00\s+\{evex\} rclb\s+\$1,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7d 08 d1 94 80 23 01 00 00\s+\{evex\} rclw\s+\$1,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7c 08 d1 94 80 23 01 00 00\s+\{evex\} rcll\s+\$1,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 d1 94 80 23 01 00 00\s+\{evex\} rclq\s+\$1,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 d3 d7\s+\{evex\} rcl\s+%cl,%r15
+\s*[a-f0-9]+:\s*62 d4 7c 08 d3 d7\s+\{evex\} rcl\s+%cl,%r15d
+\s*[a-f0-9]+:\s*62 d4 7d 08 d3 d7\s+\{evex\} rcl\s+%cl,%r15w
+\s*[a-f0-9]+:\s*62 d4 7c 08 d2 d0\s+\{evex\} rcl\s+%cl,%r8b
+\s*[a-f0-9]+:\s*62 d4 7c 08 d2 94 80 23 01 00 00\s+\{evex\} rclb\s+%cl,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7d 08 d3 94 80 23 01 00 00\s+\{evex\} rclw\s+%cl,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7c 08 d3 94 80 23 01 00 00\s+\{evex\} rcll\s+%cl,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 d3 94 80 23 01 00 00\s+\{evex\} rclq\s+%cl,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 c1 df 7b\s+\{evex\} rcr\s+\$0x7b,%r15
+\s*[a-f0-9]+:\s*62 d4 7c 08 c1 df 7b\s+\{evex\} rcr\s+\$0x7b,%r15d
+\s*[a-f0-9]+:\s*62 d4 7d 08 c1 df 7b\s+\{evex\} rcr\s+\$0x7b,%r15w
+\s*[a-f0-9]+:\s*62 d4 7c 08 c0 d8 7b\s+\{evex\} rcr\s+\$0x7b,%r8b
+\s*[a-f0-9]+:\s*62 d4 7c 08 c0 9c 80 23 01 00 00 7b\s+\{evex\} rcrb\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7d 08 c1 9c 80 23 01 00 00 7b\s+\{evex\} rcrw\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7c 08 c1 9c 80 23 01 00 00 7b\s+\{evex\} rcrl\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 c1 9c 80 23 01 00 00 7b\s+\{evex\} rcrq\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 d1 df\s+\{evex\} rcr\s+\$1,%r15
+\s*[a-f0-9]+:\s*62 d4 7c 08 d1 df\s+\{evex\} rcr\s+\$1,%r15d
+\s*[a-f0-9]+:\s*62 d4 7d 08 d1 df\s+\{evex\} rcr\s+\$1,%r15w
+\s*[a-f0-9]+:\s*62 d4 7c 08 d0 d8\s+\{evex\} rcr\s+\$1,%r8b
+\s*[a-f0-9]+:\s*62 d4 7c 08 d0 9c 80 23 01 00 00\s+\{evex\} rcrb\s+\$1,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7d 08 d1 9c 80 23 01 00 00\s+\{evex\} rcrw\s+\$1,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7c 08 d1 9c 80 23 01 00 00\s+\{evex\} rcrl\s+\$1,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 d1 9c 80 23 01 00 00\s+\{evex\} rcrq\s+\$1,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 d3 df\s+\{evex\} rcr\s+%cl,%r15
+\s*[a-f0-9]+:\s*62 d4 7c 08 d3 df\s+\{evex\} rcr\s+%cl,%r15d
+\s*[a-f0-9]+:\s*62 d4 7d 08 d3 df\s+\{evex\} rcr\s+%cl,%r15w
+\s*[a-f0-9]+:\s*62 d4 7c 08 d2 d8\s+\{evex\} rcr\s+%cl,%r8b
+\s*[a-f0-9]+:\s*62 d4 7c 08 d2 9c 80 23 01 00 00\s+\{evex\} rcrb\s+%cl,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7d 08 d3 9c 80 23 01 00 00\s+\{evex\} rcrw\s+%cl,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7c 08 d3 9c 80 23 01 00 00\s+\{evex\} rcrl\s+%cl,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 d3 9c 80 23 01 00 00\s+\{evex\} rcrq\s+%cl,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 c1 c7 7b\s+\{evex\} rol\s+\$0x7b,%r15
+\s*[a-f0-9]+:\s*62 d4 7c 08 c1 c7 7b\s+\{evex\} rol\s+\$0x7b,%r15d
+\s*[a-f0-9]+:\s*62 d4 7d 08 c1 c7 7b\s+\{evex\} rol\s+\$0x7b,%r15w
+\s*[a-f0-9]+:\s*62 d4 7c 08 c0 c0 7b\s+\{evex\} rol\s+\$0x7b,%r8b
+\s*[a-f0-9]+:\s*62 d4 7c 08 c0 84 80 23 01 00 00 7b\s+\{evex\} rolb\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7d 08 c1 84 80 23 01 00 00 7b\s+\{evex\} rolw\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7c 08 c1 84 80 23 01 00 00 7b\s+\{evex\} roll\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 c1 84 80 23 01 00 00 7b\s+\{evex\} rolq\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 d1 c7\s+\{evex\} rol\s+\$1,%r15
+\s*[a-f0-9]+:\s*62 d4 7c 08 d1 c7\s+\{evex\} rol\s+\$1,%r15d
+\s*[a-f0-9]+:\s*62 d4 7d 08 d1 c7\s+\{evex\} rol\s+\$1,%r15w
+\s*[a-f0-9]+:\s*62 d4 7c 08 d0 c0\s+\{evex\} rol\s+\$1,%r8b
+\s*[a-f0-9]+:\s*62 d4 7c 08 d0 84 80 23 01 00 00\s+\{evex\} rolb\s+\$1,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7d 08 d1 84 80 23 01 00 00\s+\{evex\} rolw\s+\$1,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7c 08 d1 84 80 23 01 00 00\s+\{evex\} roll\s+\$1,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 d1 84 80 23 01 00 00\s+\{evex\} rolq\s+\$1,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 d3 c7\s+\{evex\} rol\s+%cl,%r15
+\s*[a-f0-9]+:\s*62 d4 7c 08 d3 c7\s+\{evex\} rol\s+%cl,%r15d
+\s*[a-f0-9]+:\s*62 d4 7d 08 d3 c7\s+\{evex\} rol\s+%cl,%r15w
+\s*[a-f0-9]+:\s*62 d4 7c 08 d2 c0\s+\{evex\} rol\s+%cl,%r8b
+\s*[a-f0-9]+:\s*62 d4 7c 08 d2 84 80 23 01 00 00\s+\{evex\} rolb\s+%cl,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7d 08 d3 84 80 23 01 00 00\s+\{evex\} rolw\s+%cl,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7c 08 d3 84 80 23 01 00 00\s+\{evex\} roll\s+%cl,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 d3 84 80 23 01 00 00\s+\{evex\} rolq\s+%cl,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 c1 cf 7b\s+\{evex\} ror\s+\$0x7b,%r15
+\s*[a-f0-9]+:\s*62 d4 7c 08 c1 cf 7b\s+\{evex\} ror\s+\$0x7b,%r15d
+\s*[a-f0-9]+:\s*62 d4 7d 08 c1 cf 7b\s+\{evex\} ror\s+\$0x7b,%r15w
+\s*[a-f0-9]+:\s*62 d4 7c 08 c0 c8 7b\s+\{evex\} ror\s+\$0x7b,%r8b
+\s*[a-f0-9]+:\s*62 d4 7c 08 c0 8c 80 23 01 00 00 7b\s+\{evex\} rorb\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7d 08 c1 8c 80 23 01 00 00 7b\s+\{evex\} rorw\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7c 08 c1 8c 80 23 01 00 00 7b\s+\{evex\} rorl\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 c1 8c 80 23 01 00 00 7b\s+\{evex\} rorq\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 d1 cf\s+\{evex\} ror\s+\$1,%r15
+\s*[a-f0-9]+:\s*62 d4 7c 08 d1 cf\s+\{evex\} ror\s+\$1,%r15d
+\s*[a-f0-9]+:\s*62 d4 7d 08 d1 cf\s+\{evex\} ror\s+\$1,%r15w
+\s*[a-f0-9]+:\s*62 d4 7c 08 d0 c8\s+\{evex\} ror\s+\$1,%r8b
+\s*[a-f0-9]+:\s*62 d4 7c 08 d0 8c 80 23 01 00 00\s+\{evex\} rorb\s+\$1,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7d 08 d1 8c 80 23 01 00 00\s+\{evex\} rorw\s+\$1,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7c 08 d1 8c 80 23 01 00 00\s+\{evex\} rorl\s+\$1,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 d1 8c 80 23 01 00 00\s+\{evex\} rorq\s+\$1,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 d3 cf\s+\{evex\} ror\s+%cl,%r15
+\s*[a-f0-9]+:\s*62 d4 7c 08 d3 cf\s+\{evex\} ror\s+%cl,%r15d
+\s*[a-f0-9]+:\s*62 d4 7d 08 d3 cf\s+\{evex\} ror\s+%cl,%r15w
+\s*[a-f0-9]+:\s*62 d4 7c 08 d2 c8\s+\{evex\} ror\s+%cl,%r8b
+\s*[a-f0-9]+:\s*62 d4 7c 08 d2 8c 80 23 01 00 00\s+\{evex\} rorb\s+%cl,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7d 08 d3 8c 80 23 01 00 00\s+\{evex\} rorw\s+%cl,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7c 08 d3 8c 80 23 01 00 00\s+\{evex\} rorl\s+%cl,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 d3 8c 80 23 01 00 00\s+\{evex\} rorq\s+%cl,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 53 ff 08 f0 ff 7b\s+\{evex\} rorx\s+\$0x7b,%r15,%r15
+\s*[a-f0-9]+:\s*62 d3 7f 08 f0 d7 7b\s+\{evex\} rorx\s+\$0x7b,%r15d,%edx
+\s*[a-f0-9]+:\s*62 53 ff 08 f0 bc 80 23 01 00 00 7b\s+\{evex\} rorx\s+\$0x7b,0x123\(%r8,%rax,4\),%r15
+\s*[a-f0-9]+:\s*62 53 7f 08 f0 bc 80 23 01 00 00 7b\s+\{evex\} rorx\s+\$0x7b,0x123\(%r8,%rax,4\),%r15d
+\s*[a-f0-9]+:\s*62 d4 fc 08 c1 e7 7b\s+\{evex\} shl\s+\$0x7b,%r15
+\s*[a-f0-9]+:\s*62 d4 7c 08 c1 e7 7b\s+\{evex\} shl\s+\$0x7b,%r15d
+\s*[a-f0-9]+:\s*62 d4 7d 08 c1 e7 7b\s+\{evex\} shl\s+\$0x7b,%r15w
+\s*[a-f0-9]+:\s*62 d4 7c 08 c0 e0 7b\s+\{evex\} shl\s+\$0x7b,%r8b
+\s*[a-f0-9]+:\s*62 d4 7c 08 c0 a4 80 23 01 00 00 7b\s+\{evex\} shlb\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7d 08 c1 a4 80 23 01 00 00 7b\s+\{evex\} shlw\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7c 08 c1 a4 80 23 01 00 00 7b\s+\{evex\} shll\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 c1 a4 80 23 01 00 00 7b\s+\{evex\} shlq\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 d1 e7\s+\{evex\} shl\s+\$1,%r15
+\s*[a-f0-9]+:\s*62 d4 7c 08 d1 e7\s+\{evex\} shl\s+\$1,%r15d
+\s*[a-f0-9]+:\s*62 d4 7d 08 d1 e7\s+\{evex\} shl\s+\$1,%r15w
+\s*[a-f0-9]+:\s*62 d4 7c 08 d0 e0\s+\{evex\} shl\s+\$1,%r8b
+\s*[a-f0-9]+:\s*62 d4 7c 08 d0 a4 80 23 01 00 00\s+\{evex\} shlb\s+\$1,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7d 08 d1 a4 80 23 01 00 00\s+\{evex\} shlw\s+\$1,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7c 08 d1 a4 80 23 01 00 00\s+\{evex\} shll\s+\$1,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 d1 a4 80 23 01 00 00\s+\{evex\} shlq\s+\$1,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 d3 e7\s+\{evex\} shl\s+%cl,%r15
+\s*[a-f0-9]+:\s*62 d4 7c 08 d3 e7\s+\{evex\} shl\s+%cl,%r15d
+\s*[a-f0-9]+:\s*62 d4 7d 08 d3 e7\s+\{evex\} shl\s+%cl,%r15w
+\s*[a-f0-9]+:\s*62 d4 7c 08 d2 e0\s+\{evex\} shl\s+%cl,%r8b
+\s*[a-f0-9]+:\s*62 d4 7c 08 d2 a4 80 23 01 00 00\s+\{evex\} shlb\s+%cl,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7d 08 d3 a4 80 23 01 00 00\s+\{evex\} shlw\s+%cl,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7c 08 d3 a4 80 23 01 00 00\s+\{evex\} shll\s+%cl,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 d3 a4 80 23 01 00 00\s+\{evex\} shlq\s+%cl,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 c1 ff 7b\s+\{evex\} sar\s+\$0x7b,%r15
+\s*[a-f0-9]+:\s*62 d4 7c 08 c1 ff 7b\s+\{evex\} sar\s+\$0x7b,%r15d
+\s*[a-f0-9]+:\s*62 d4 7d 08 c1 ff 7b\s+\{evex\} sar\s+\$0x7b,%r15w
+\s*[a-f0-9]+:\s*62 d4 7c 08 c0 f8 7b\s+\{evex\} sar\s+\$0x7b,%r8b
+\s*[a-f0-9]+:\s*62 d4 7c 08 c0 bc 80 23 01 00 00 7b\s+\{evex\} sarb\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7d 08 c1 bc 80 23 01 00 00 7b\s+\{evex\} sarw\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7c 08 c1 bc 80 23 01 00 00 7b\s+\{evex\} sarl\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 c1 bc 80 23 01 00 00 7b\s+\{evex\} sarq\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 d1 ff\s+\{evex\} sar\s+\$1,%r15
+\s*[a-f0-9]+:\s*62 d4 7c 08 d1 ff\s+\{evex\} sar\s+\$1,%r15d
+\s*[a-f0-9]+:\s*62 d4 7d 08 d1 ff\s+\{evex\} sar\s+\$1,%r15w
+\s*[a-f0-9]+:\s*62 d4 7c 08 d0 f8\s+\{evex\} sar\s+\$1,%r8b
+\s*[a-f0-9]+:\s*62 d4 7c 08 d0 bc 80 23 01 00 00\s+\{evex\} sarb\s+\$1,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7d 08 d1 bc 80 23 01 00 00\s+\{evex\} sarw\s+\$1,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7c 08 d1 bc 80 23 01 00 00\s+\{evex\} sarl\s+\$1,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 d1 bc 80 23 01 00 00\s+\{evex\} sarq\s+\$1,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 d3 ff\s+\{evex\} sar\s+%cl,%r15
+\s*[a-f0-9]+:\s*62 d4 7c 08 d3 ff\s+\{evex\} sar\s+%cl,%r15d
+\s*[a-f0-9]+:\s*62 d4 7d 08 d3 ff\s+\{evex\} sar\s+%cl,%r15w
+\s*[a-f0-9]+:\s*62 d4 7c 08 d2 f8\s+\{evex\} sar\s+%cl,%r8b
+\s*[a-f0-9]+:\s*62 d4 7c 08 d2 bc 80 23 01 00 00\s+\{evex\} sarb\s+%cl,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7d 08 d3 bc 80 23 01 00 00\s+\{evex\} sarw\s+%cl,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7c 08 d3 bc 80 23 01 00 00\s+\{evex\} sarl\s+%cl,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 d3 bc 80 23 01 00 00\s+\{evex\} sarq\s+%cl,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 52 86 08 f7 df\s+\{evex\} sarx\s+%r15,%r15,%r11
+\s*[a-f0-9]+:\s*62 52 86 08 f7 bc 80 23 01 00 00\s+\{evex\} sarx\s+%r15,0x123\(%r8,%rax,4\),%r15
+\s*[a-f0-9]+:\s*62 72 06 08 f7 d2\s+\{evex\} sarx\s+%r15d,%edx,%r10d
+\s*[a-f0-9]+:\s*62 d2 06 08 f7 94 80 23 01 00 00\s+\{evex\} sarx\s+%r15d,0x123\(%r8,%rax,4\),%edx
+\s*[a-f0-9]+:\s*62 d4 fc 08 83 df 7b\s+\{evex\} sbb\s+\$0x7b,%r15
+\s*[a-f0-9]+:\s*62 d4 7c 08 83 df 7b\s+\{evex\} sbb\s+\$0x7b,%r15d
+\s*[a-f0-9]+:\s*62 d4 7d 08 83 df 7b\s+\{evex\} sbb\s+\$0x7b,%r15w
+\s*[a-f0-9]+:\s*62 d4 7c 08 80 d8 7b\s+\{evex\} sbb\s+\$0x7b,%r8b
+\s*[a-f0-9]+:\s*62 d4 7c 08 80 9c 80 23 01 00 00 7b\s+\{evex\} sbbb\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7d 08 83 9c 80 23 01 00 00 7b\s+\{evex\} sbbw\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7c 08 83 9c 80 23 01 00 00 7b\s+\{evex\} sbbl\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 83 9c 80 23 01 00 00 7b\s+\{evex\} sbbq\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 54 fc 08 19 ff\s+\{evex\} sbb\s+%r15,%r15
+\s*[a-f0-9]+:\s*62 54 fc 08 19 bc 80 23 01 00 00\s+\{evex\} sbb\s+%r15,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 74 7c 08 19 fa\s+\{evex\} sbb\s+%r15d,%edx
+\s*[a-f0-9]+:\s*62 54 7c 08 19 bc 80 23 01 00 00\s+\{evex\} sbb\s+%r15d,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 74 7d 08 19 f8\s+\{evex\} sbb\s+%r15w,%ax
+\s*[a-f0-9]+:\s*62 54 7d 08 19 bc 80 23 01 00 00\s+\{evex\} sbb\s+%r15w,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 74 7c 08 18 c2\s+\{evex\} sbb\s+%r8b,%dl
+\s*[a-f0-9]+:\s*62 54 7c 08 18 84 80 23 01 00 00\s+\{evex\} sbb\s+%r8b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 54 fc 08 1b bc 80 23 01 00 00\s+\{evex\} sbb\s+0x123\(%r8,%rax,4\),%r15
+\s*[a-f0-9]+:\s*62 54 7c 08 1b bc 80 23 01 00 00\s+\{evex\} sbb\s+0x123\(%r8,%rax,4\),%r15d
+\s*[a-f0-9]+:\s*62 54 7d 08 1b bc 80 23 01 00 00\s+\{evex\} sbb\s+0x123\(%r8,%rax,4\),%r15w
+\s*[a-f0-9]+:\s*62 54 7c 08 1a 84 80 23 01 00 00\s+\{evex\} sbb\s+0x123\(%r8,%rax,4\),%r8b
+\s*[a-f0-9]+:\s*62 54 7c 08 d9 e5\s+\{evex\} sha1msg1\s+%xmm13,%xmm12
+\s*[a-f0-9]+:\s*62 54 7c 08 d9 a4 80 23 01 00 00\s+\{evex\} sha1msg1\s+0x123\(%r8,%rax,4\),%xmm12
+\s*[a-f0-9]+:\s*62 54 7c 08 da e5\s+\{evex\} sha1msg2\s+%xmm13,%xmm12
+\s*[a-f0-9]+:\s*62 54 7c 08 da a4 80 23 01 00 00\s+\{evex\} sha1msg2\s+0x123\(%r8,%rax,4\),%xmm12
+\s*[a-f0-9]+:\s*62 54 7c 08 d8 e5\s+\{evex\} sha1nexte\s+%xmm13,%xmm12
+\s*[a-f0-9]+:\s*62 54 7c 08 d8 a4 80 23 01 00 00\s+\{evex\} sha1nexte\s+0x123\(%r8,%rax,4\),%xmm12
+\s*[a-f0-9]+:\s*62 54 7c 08 d4 e5 7b\s+\{evex\} sha1rnds4\s+\$0x7b,%xmm13,%xmm12
+\s*[a-f0-9]+:\s*62 54 7c 08 d4 a4 80 23 01 00 00 7b\s+\{evex\} sha1rnds4\s+\$0x7b,0x123\(%r8,%rax,4\),%xmm12
+\s*[a-f0-9]+:\s*62 54 7c 08 dc e5\s+\{evex\} sha256msg1\s+%xmm13,%xmm12
+\s*[a-f0-9]+:\s*62 54 7c 08 dc a4 80 23 01 00 00\s+\{evex\} sha256msg1\s+0x123\(%r8,%rax,4\),%xmm12
+\s*[a-f0-9]+:\s*62 54 7c 08 dd e5\s+\{evex\} sha256msg2\s+%xmm13,%xmm12
+\s*[a-f0-9]+:\s*62 54 7c 08 dd a4 80 23 01 00 00\s+\{evex\} sha256msg2\s+0x123\(%r8,%rax,4\),%xmm12
+\s*[a-f0-9]+:\s*62 54 7c 08 db a4 80 23 01 00 00\s+\{evex\} sha256rnds2\s+%xmm0,0x123\(%r8,%rax,4\),%xmm12
+\s*[a-f0-9]+:\s*62 d4 fc 08 c1 e7 7b\s+\{evex\} shl\s+\$0x7b,%r15
+\s*[a-f0-9]+:\s*62 d4 7c 08 c1 e7 7b\s+\{evex\} shl\s+\$0x7b,%r15d
+\s*[a-f0-9]+:\s*62 d4 7d 08 c1 e7 7b\s+\{evex\} shl\s+\$0x7b,%r15w
+\s*[a-f0-9]+:\s*62 d4 7c 08 c0 e0 7b\s+\{evex\} shl\s+\$0x7b,%r8b
+\s*[a-f0-9]+:\s*62 d4 7c 08 c0 a4 80 23 01 00 00 7b\s+\{evex\} shlb\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7d 08 c1 a4 80 23 01 00 00 7b\s+\{evex\} shlw\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7c 08 c1 a4 80 23 01 00 00 7b\s+\{evex\} shll\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 c1 a4 80 23 01 00 00 7b\s+\{evex\} shlq\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 d1 e7\s+\{evex\} shl\s+\$1,%r15
+\s*[a-f0-9]+:\s*62 d4 7c 08 d1 e7\s+\{evex\} shl\s+\$1,%r15d
+\s*[a-f0-9]+:\s*62 d4 7d 08 d1 e7\s+\{evex\} shl\s+\$1,%r15w
+\s*[a-f0-9]+:\s*62 d4 7c 08 d0 e0\s+\{evex\} shl\s+\$1,%r8b
+\s*[a-f0-9]+:\s*62 d4 7c 08 d0 a4 80 23 01 00 00\s+\{evex\} shlb\s+\$1,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7d 08 d1 a4 80 23 01 00 00\s+\{evex\} shlw\s+\$1,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7c 08 d1 a4 80 23 01 00 00\s+\{evex\} shll\s+\$1,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 d1 a4 80 23 01 00 00\s+\{evex\} shlq\s+\$1,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 d3 e7\s+\{evex\} shl\s+%cl,%r15
+\s*[a-f0-9]+:\s*62 d4 7c 08 d3 e7\s+\{evex\} shl\s+%cl,%r15d
+\s*[a-f0-9]+:\s*62 d4 7d 08 d3 e7\s+\{evex\} shl\s+%cl,%r15w
+\s*[a-f0-9]+:\s*62 d4 7c 08 d2 e0\s+\{evex\} shl\s+%cl,%r8b
+\s*[a-f0-9]+:\s*62 d4 7c 08 d2 a4 80 23 01 00 00\s+\{evex\} shlb\s+%cl,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7d 08 d3 a4 80 23 01 00 00\s+\{evex\} shlw\s+%cl,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7c 08 d3 a4 80 23 01 00 00\s+\{evex\} shll\s+%cl,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 d3 a4 80 23 01 00 00\s+\{evex\} shlq\s+%cl,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 54 fc 08 24 ff 7b\s+\{evex\} shld\s+\$0x7b,%r15,%r15
+\s*[a-f0-9]+:\s*62 54 fc 08 24 bc 80 23 01 00 00 7b\s+\{evex\} shld\s+\$0x7b,%r15,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 74 7c 08 24 fa 7b\s+\{evex\} shld\s+\$0x7b,%r15d,%edx
+\s*[a-f0-9]+:\s*62 54 7c 08 24 bc 80 23 01 00 00 7b\s+\{evex\} shld\s+\$0x7b,%r15d,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 74 7d 08 24 f8 7b\s+\{evex\} shld\s+\$0x7b,%r15w,%ax
+\s*[a-f0-9]+:\s*62 54 7d 08 24 bc 80 23 01 00 00 7b\s+\{evex\} shld\s+\$0x7b,%r15w,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 54 fc 08 a5 ff\s+\{evex\} shld\s+%cl,%r15,%r15
+\s*[a-f0-9]+:\s*62 54 fc 08 a5 bc 80 23 01 00 00\s+\{evex\} shld\s+%cl,%r15,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 74 7c 08 a5 fa\s+\{evex\} shld\s+%cl,%r15d,%edx
+\s*[a-f0-9]+:\s*62 54 7c 08 a5 bc 80 23 01 00 00\s+\{evex\} shld\s+%cl,%r15d,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 74 7d 08 a5 f8\s+\{evex\} shld\s+%cl,%r15w,%ax
+\s*[a-f0-9]+:\s*62 54 7d 08 a5 bc 80 23 01 00 00\s+\{evex\} shld\s+%cl,%r15w,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 52 85 08 f7 df\s+\{evex\} shlx\s+%r15,%r15,%r11
+\s*[a-f0-9]+:\s*62 52 85 08 f7 bc 80 23 01 00 00\s+\{evex\} shlx\s+%r15,0x123\(%r8,%rax,4\),%r15
+\s*[a-f0-9]+:\s*62 72 05 08 f7 d2\s+\{evex\} shlx\s+%r15d,%edx,%r10d
+\s*[a-f0-9]+:\s*62 d2 05 08 f7 94 80 23 01 00 00\s+\{evex\} shlx\s+%r15d,0x123\(%r8,%rax,4\),%edx
+\s*[a-f0-9]+:\s*62 d4 fc 08 c1 ef 7b\s+\{evex\} shr\s+\$0x7b,%r15
+\s*[a-f0-9]+:\s*62 d4 7c 08 c1 ef 7b\s+\{evex\} shr\s+\$0x7b,%r15d
+\s*[a-f0-9]+:\s*62 d4 7d 08 c1 ef 7b\s+\{evex\} shr\s+\$0x7b,%r15w
+\s*[a-f0-9]+:\s*62 d4 7c 08 c0 e8 7b\s+\{evex\} shr\s+\$0x7b,%r8b
+\s*[a-f0-9]+:\s*62 d4 7c 08 c0 ac 80 23 01 00 00 7b\s+\{evex\} shrb\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7d 08 c1 ac 80 23 01 00 00 7b\s+\{evex\} shrw\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7c 08 c1 ac 80 23 01 00 00 7b\s+\{evex\} shrl\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 c1 ac 80 23 01 00 00 7b\s+\{evex\} shrq\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 d1 ef\s+\{evex\} shr\s+\$1,%r15
+\s*[a-f0-9]+:\s*62 d4 7c 08 d1 ef\s+\{evex\} shr\s+\$1,%r15d
+\s*[a-f0-9]+:\s*62 d4 7d 08 d1 ef\s+\{evex\} shr\s+\$1,%r15w
+\s*[a-f0-9]+:\s*62 d4 7c 08 d0 e8\s+\{evex\} shr\s+\$1,%r8b
+\s*[a-f0-9]+:\s*62 d4 7c 08 d0 ac 80 23 01 00 00\s+\{evex\} shrb\s+\$1,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7d 08 d1 ac 80 23 01 00 00\s+\{evex\} shrw\s+\$1,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7c 08 d1 ac 80 23 01 00 00\s+\{evex\} shrl\s+\$1,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 d1 ac 80 23 01 00 00\s+\{evex\} shrq\s+\$1,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 d3 ef\s+\{evex\} shr\s+%cl,%r15
+\s*[a-f0-9]+:\s*62 d4 7c 08 d3 ef\s+\{evex\} shr\s+%cl,%r15d
+\s*[a-f0-9]+:\s*62 d4 7d 08 d3 ef\s+\{evex\} shr\s+%cl,%r15w
+\s*[a-f0-9]+:\s*62 d4 7c 08 d2 e8\s+\{evex\} shr\s+%cl,%r8b
+\s*[a-f0-9]+:\s*62 d4 7c 08 d2 ac 80 23 01 00 00\s+\{evex\} shrb\s+%cl,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7d 08 d3 ac 80 23 01 00 00\s+\{evex\} shrw\s+%cl,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7c 08 d3 ac 80 23 01 00 00\s+\{evex\} shrl\s+%cl,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 d3 ac 80 23 01 00 00\s+\{evex\} shrq\s+%cl,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 54 fc 08 2c ff 7b\s+\{evex\} shrd\s+\$0x7b,%r15,%r15
+\s*[a-f0-9]+:\s*62 54 fc 08 2c bc 80 23 01 00 00 7b\s+\{evex\} shrd\s+\$0x7b,%r15,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 74 7c 08 2c fa 7b\s+\{evex\} shrd\s+\$0x7b,%r15d,%edx
+\s*[a-f0-9]+:\s*62 54 7c 08 2c bc 80 23 01 00 00 7b\s+\{evex\} shrd\s+\$0x7b,%r15d,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 74 7d 08 2c f8 7b\s+\{evex\} shrd\s+\$0x7b,%r15w,%ax
+\s*[a-f0-9]+:\s*62 54 7d 08 2c bc 80 23 01 00 00 7b\s+\{evex\} shrd\s+\$0x7b,%r15w,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 54 fc 08 ad ff\s+\{evex\} shrd\s+%cl,%r15,%r15
+\s*[a-f0-9]+:\s*62 54 fc 08 ad bc 80 23 01 00 00\s+\{evex\} shrd\s+%cl,%r15,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 74 7c 08 ad fa\s+\{evex\} shrd\s+%cl,%r15d,%edx
+\s*[a-f0-9]+:\s*62 54 7c 08 ad bc 80 23 01 00 00\s+\{evex\} shrd\s+%cl,%r15d,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 74 7d 08 ad f8\s+\{evex\} shrd\s+%cl,%r15w,%ax
+\s*[a-f0-9]+:\s*62 54 7d 08 ad bc 80 23 01 00 00\s+\{evex\} shrd\s+%cl,%r15w,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 52 87 08 f7 df\s+\{evex\} shrx\s+%r15,%r15,%r11
+\s*[a-f0-9]+:\s*62 52 87 08 f7 bc 80 23 01 00 00\s+\{evex\} shrx\s+%r15,0x123\(%r8,%rax,4\),%r15
+\s*[a-f0-9]+:\s*62 72 07 08 f7 d2\s+\{evex\} shrx\s+%r15d,%edx,%r10d
+\s*[a-f0-9]+:\s*62 d2 07 08 f7 94 80 23 01 00 00\s+\{evex\} shrx\s+%r15d,0x123\(%r8,%rax,4\),%edx
+\s*[a-f0-9]+:\s*62 d4 fc 08 83 ef 7b\s+\{evex\} sub\s+\$0x7b,%r15
+\s*[a-f0-9]+:\s*62 d4 7c 08 83 ef 7b\s+\{evex\} sub\s+\$0x7b,%r15d
+\s*[a-f0-9]+:\s*62 d4 7d 08 83 ef 7b\s+\{evex\} sub\s+\$0x7b,%r15w
+\s*[a-f0-9]+:\s*62 d4 7c 08 80 e8 7b\s+\{evex\} sub\s+\$0x7b,%r8b
+\s*[a-f0-9]+:\s*62 d4 7c 08 80 ac 80 23 01 00 00 7b\s+\{evex\} subb\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7d 08 83 ac 80 23 01 00 00 7b\s+\{evex\} subw\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7c 08 83 ac 80 23 01 00 00 7b\s+\{evex\} subl\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 83 ac 80 23 01 00 00 7b\s+\{evex\} subq\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 54 fc 08 29 ff\s+\{evex\} sub\s+%r15,%r15
+\s*[a-f0-9]+:\s*62 54 fc 08 29 bc 80 23 01 00 00\s+\{evex\} sub\s+%r15,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 74 7c 08 29 fa\s+\{evex\} sub\s+%r15d,%edx
+\s*[a-f0-9]+:\s*62 54 7c 08 29 bc 80 23 01 00 00\s+\{evex\} sub\s+%r15d,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 74 7d 08 29 f8\s+\{evex\} sub\s+%r15w,%ax
+\s*[a-f0-9]+:\s*62 54 7d 08 29 bc 80 23 01 00 00\s+\{evex\} sub\s+%r15w,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 74 7c 08 28 c2\s+\{evex\} sub\s+%r8b,%dl
+\s*[a-f0-9]+:\s*62 54 7c 08 28 84 80 23 01 00 00\s+\{evex\} sub\s+%r8b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 54 fc 08 2b bc 80 23 01 00 00\s+\{evex\} sub\s+0x123\(%r8,%rax,4\),%r15
+\s*[a-f0-9]+:\s*62 54 7c 08 2b bc 80 23 01 00 00\s+\{evex\} sub\s+0x123\(%r8,%rax,4\),%r15d
+\s*[a-f0-9]+:\s*62 54 7d 08 2b bc 80 23 01 00 00\s+\{evex\} sub\s+0x123\(%r8,%rax,4\),%r15w
+\s*[a-f0-9]+:\s*62 54 7c 08 2a 84 80 23 01 00 00\s+\{evex\} sub\s+0x123\(%r8,%rax,4\),%r8b
+\s*[a-f0-9]+:\s*62 54 fc 08 f4 ff\s+\{evex\} tzcnt\s+%r15,%r15
+\s*[a-f0-9]+:\s*62 d4 7c 08 f4 d7\s+\{evex\} tzcnt\s+%r15d,%edx
+\s*[a-f0-9]+:\s*62 d4 7d 08 f4 c7\s+\{evex\} tzcnt\s+%r15w,%ax
+\s*[a-f0-9]+:\s*62 54 fc 08 f4 bc 80 23 01 00 00\s+\{evex\} tzcnt\s+0x123\(%r8,%rax,4\),%r15
+\s*[a-f0-9]+:\s*62 54 7c 08 f4 bc 80 23 01 00 00\s+\{evex\} tzcnt\s+0x123\(%r8,%rax,4\),%r15d
+\s*[a-f0-9]+:\s*62 54 7d 08 f4 bc 80 23 01 00 00\s+\{evex\} tzcnt\s+0x123\(%r8,%rax,4\),%r15w
+\s*[a-f0-9]+:\s*62 54 7c 08 66 bc 80 23 01 00 00\s+\{evex\} wrssd\s+%r15d,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 54 fc 08 66 bc 80 23 01 00 00\s+\{evex\} wrssq\s+%r15,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 54 7d 08 65 bc 80 23 01 00 00\s+\{evex\} wrussd\s+%r15d,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 54 fd 08 65 bc 80 23 01 00 00\s+\{evex\} wrussq\s+%r15,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 83 f7 7b\s+\{evex\} xor\s+\$0x7b,%r15
+\s*[a-f0-9]+:\s*62 d4 7c 08 83 f7 7b\s+\{evex\} xor\s+\$0x7b,%r15d
+\s*[a-f0-9]+:\s*62 d4 7d 08 83 f7 7b\s+\{evex\} xor\s+\$0x7b,%r15w
+\s*[a-f0-9]+:\s*62 d4 7c 08 80 f0 7b\s+\{evex\} xor\s+\$0x7b,%r8b
+\s*[a-f0-9]+:\s*62 d4 7c 08 80 b4 80 23 01 00 00 7b\s+\{evex\} xorb\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7d 08 83 b4 80 23 01 00 00 7b\s+\{evex\} xorw\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 7c 08 83 b4 80 23 01 00 00 7b\s+\{evex\} xorl\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 d4 fc 08 83 b4 80 23 01 00 00 7b\s+\{evex\} xorq\s+\$0x7b,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 54 fc 08 31 ff\s+\{evex\} xor\s+%r15,%r15
+\s*[a-f0-9]+:\s*62 54 fc 08 31 bc 80 23 01 00 00\s+\{evex\} xor\s+%r15,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 74 7c 08 31 fa\s+\{evex\} xor\s+%r15d,%edx
+\s*[a-f0-9]+:\s*62 54 7c 08 31 bc 80 23 01 00 00\s+\{evex\} xor\s+%r15d,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 74 7d 08 31 f8\s+\{evex\} xor\s+%r15w,%ax
+\s*[a-f0-9]+:\s*62 54 7d 08 31 bc 80 23 01 00 00\s+\{evex\} xor\s+%r15w,0x123\(%r8,%rax,4\)
+\s*[a-f0-9]+:\s*62 74 7c 08 30 c2\s+\{evex\} xor\s+%r8b,%dl
+\s*[a-f0-9]+:\s*62 54 7c 08 30 84 80 23 01 00 00\s+\{evex\} xor\s+%r8b,0x123\(%r8,%rax,4\)
+#pass
diff --git a/gas/testsuite/gas/i386/x86-64-apx_f-evex.s b/gas/testsuite/gas/i386/x86-64-apx_f-evex.s
new file mode 100644
index 00000000000..c75de89ffee
--- /dev/null
+++ b/gas/testsuite/gas/i386/x86-64-apx_f-evex.s
@@ -0,0 +1,626 @@
+# Check 64-bit APX_F instructions with the {evex} pseudo prefix
+
+	.text
+_start:
+	{evex} aadd	%r15,0x123(%r8,%rax,4)
+	{evex} aadd	%r15d,0x123(%r8,%rax,4)
+	{evex} aand	%r15,0x123(%r8,%rax,4)
+	{evex} aand	%r15d,0x123(%r8,%rax,4)
+	{evex} adc	$0x7b,%r15
+	{evex} adc	$0x7b,%r15d
+	{evex} adc	$0x7b,%r15w
+	{evex} adc	$0x7b,%r8b
+	{evex} adcb	$0x7b,0x123(%r8,%rax,4)
+	{evex} adcw	$0x7b,0x123(%r8,%rax,4)
+	{evex} adcl	$0x7b,0x123(%r8,%rax,4)
+	{evex} adcq	$0x7b,0x123(%r8,%rax,4)
+	{evex} adc	%r15,%r15
+	{evex} adc	%r15,0x123(%r8,%rax,4)
+	{evex} adc	%r15d,%edx
+	{evex} adc	%r15d,0x123(%r8,%rax,4)
+	{evex} adc	%r15w,%ax
+	{evex} adc	%r15w,0x123(%r8,%rax,4)
+	{evex} adc	%r8b,%dl
+	{evex} adc	%r8b,0x123(%r8,%rax,4)
+	{evex} adc	0x123(%r8,%rax,4),%r15
+	{evex} adc	0x123(%r8,%rax,4),%r15d
+	{evex} adc	0x123(%r8,%rax,4),%r15w
+	{evex} adc	0x123(%r8,%rax,4),%r8b
+	{evex} adcx	%r15,%r15
+	{evex} adcx	%r15d,%edx
+	{evex} adcx	0x123(%r8,%rax,4),%r15
+	{evex} adcx	0x123(%r8,%rax,4),%r15d
+	{evex} add	$0x7b,%r15
+	{evex} add	$0x7b,%r15d
+	{evex} add	$0x7b,%r15w
+	{evex} add	$0x7b,%r8b
+	{evex} addb	$0x7b,0x123(%r8,%rax,4)
+	{evex} addw	$0x7b,0x123(%r8,%rax,4)
+	{evex} addl	$0x7b,0x123(%r8,%rax,4)
+	{evex} addq	$0x7b,0x123(%r8,%rax,4)
+	{evex} add	%r15,%r15
+	{evex} add	%r15,0x123(%r8,%rax,4)
+	{evex} add	%r15d,%edx
+	{evex} add	%r15d,0x123(%r8,%rax,4)
+	{evex} add	%r15w,%ax
+	{evex} add	%r15w,0x123(%r8,%rax,4)
+	{evex} add	%r8b,%dl
+	{evex} add	%r8b,0x123(%r8,%rax,4)
+	{evex} add	0x123(%r8,%rax,4),%r15
+	{evex} add	0x123(%r8,%rax,4),%r15d
+	{evex} add	0x123(%r8,%rax,4),%r15w
+	{evex} add	0x123(%r8,%rax,4),%r8b
+	{evex} adox	%r15,%r15
+	{evex} adox	%r15d,%edx
+	{evex} adox	0x123(%r8,%rax,4),%r15
+	{evex} adox	0x123(%r8,%rax,4),%r15d
+	{evex} aesdec128kl	0x123(%r8,%rax,4),%xmm12
+	{evex} aesdec256kl	0x123(%r8,%rax,4),%xmm12
+	{evex} aesdecwide128kl	0x123(%r8,%rax,4)
+	{evex} aesdecwide256kl	0x123(%r8,%rax,4)
+	{evex} aesenc128kl	0x123(%r8,%rax,4),%xmm12
+	{evex} aesenc256kl	0x123(%r8,%rax,4),%xmm12
+	{evex} aesencwide128kl	0x123(%r8,%rax,4)
+	{evex} aesencwide256kl	0x123(%r8,%rax,4)
+	{evex} and	$0x7b,%r15
+	{evex} and	$0x7b,%r15d
+	{evex} and	$0x7b,%r15w
+	{evex} and	$0x7b,%r8b
+	{evex} andb	$0x7b,0x123(%r8,%rax,4)
+	{evex} andw	$0x7b,0x123(%r8,%rax,4)
+	{evex} andl	$0x7b,0x123(%r8,%rax,4)
+	{evex} andq	$0x7b,0x123(%r8,%rax,4)
+	{evex} and	%r15,%r15
+	{evex} and	%r15,0x123(%r8,%rax,4)
+	{evex} and	%r15d,%edx
+	{evex} and	%r15d,0x123(%r8,%rax,4)
+	{evex} and	%r15w,%ax
+	{evex} and	%r15w,0x123(%r8,%rax,4)
+	{evex} and	%r8b,%dl
+	{evex} and	%r8b,0x123(%r8,%rax,4)
+	{evex} and	0x123(%r8,%rax,4),%r15
+	{evex} and	0x123(%r8,%rax,4),%r15d
+	{evex} and	0x123(%r8,%rax,4),%r15w
+	{evex} and	0x123(%r8,%rax,4),%r8b
+	{evex} andn	%r15,%r15,%r11
+	{evex} andn	%r15d,%edx,%r10d
+	{evex} andn	0x123(%r8,%rax,4),%r15,%r15
+	{evex} andn	0x123(%r8,%rax,4),%r15d,%edx
+	{evex} aor	%r15,0x123(%r8,%rax,4)
+	{evex} aor	%r15d,0x123(%r8,%rax,4)
+	{evex} axor	%r15,0x123(%r8,%rax,4)
+	{evex} axor	%r15d,0x123(%r8,%rax,4)
+	{evex} bextr	%r15,%r15,%r11
+	{evex} bextr	%r15,0x123(%r8,%rax,4),%r15
+	{evex} bextr	%r15d,%edx,%r10d
+	{evex} bextr	%r15d,0x123(%r8,%rax,4),%edx
+	{evex} blsi	%r15,%r15
+	{evex} blsi	%r15d,%edx
+	{evex} blsi	0x123(%r8,%rax,4),%r15
+	{evex} blsi	0x123(%r8,%rax,4),%r15d
+	{evex} blsmsk	%r15,%r15
+	{evex} blsmsk	%r15d,%edx
+	{evex} blsmsk	0x123(%r8,%rax,4),%r15
+	{evex} blsmsk	0x123(%r8,%rax,4),%r15d
+	{evex} blsr	%r15,%r15
+	{evex} blsr	%r15d,%edx
+	{evex} blsr	0x123(%r8,%rax,4),%r15
+	{evex} blsr	0x123(%r8,%rax,4),%r15d
+	{evex} bzhi	%r15,%r15,%r11
+	{evex} bzhi	%r15,0x123(%r8,%rax,4),%r15
+	{evex} bzhi	%r15d,%edx,%r10d
+	{evex} bzhi	%r15d,0x123(%r8,%rax,4),%edx
+	{evex} cmpbexadd	%r15,%r15,0x123(%r8,%rax,4)
+	{evex} cmpbexadd	%r15d,%edx,0x123(%r8,%rax,4)
+	{evex} cmpbxadd	%r15,%r15,0x123(%r8,%rax,4)
+	{evex} cmpbxadd	%r15d,%edx,0x123(%r8,%rax,4)
+	{evex} cmplexadd	%r15,%r15,0x123(%r8,%rax,4)
+	{evex} cmplexadd	%r15d,%edx,0x123(%r8,%rax,4)
+	{evex} cmplxadd	%r15,%r15,0x123(%r8,%rax,4)
+	{evex} cmplxadd	%r15d,%edx,0x123(%r8,%rax,4)
+	{evex} cmpnbexadd	%r15,%r15,0x123(%r8,%rax,4)
+	{evex} cmpnbexadd	%r15d,%edx,0x123(%r8,%rax,4)
+	{evex} cmpnbxadd	%r15,%r15,0x123(%r8,%rax,4)
+	{evex} cmpnbxadd	%r15d,%edx,0x123(%r8,%rax,4)
+	{evex} cmpnlexadd	%r15,%r15,0x123(%r8,%rax,4)
+	{evex} cmpnlexadd	%r15d,%edx,0x123(%r8,%rax,4)
+	{evex} cmpnlxadd	%r15,%r15,0x123(%r8,%rax,4)
+	{evex} cmpnlxadd	%r15d,%edx,0x123(%r8,%rax,4)
+	{evex} cmpnoxadd	%r15,%r15,0x123(%r8,%rax,4)
+	{evex} cmpnoxadd	%r15d,%edx,0x123(%r8,%rax,4)
+	{evex} cmpnpxadd	%r15,%r15,0x123(%r8,%rax,4)
+	{evex} cmpnpxadd	%r15d,%edx,0x123(%r8,%rax,4)
+	{evex} cmpnsxadd	%r15,%r15,0x123(%r8,%rax,4)
+	{evex} cmpnsxadd	%r15d,%edx,0x123(%r8,%rax,4)
+	{evex} cmpnzxadd	%r15,%r15,0x123(%r8,%rax,4)
+	{evex} cmpnzxadd	%r15d,%edx,0x123(%r8,%rax,4)
+	{evex} cmpoxadd	%r15,%r15,0x123(%r8,%rax,4)
+	{evex} cmpoxadd	%r15d,%edx,0x123(%r8,%rax,4)
+	{evex} cmppxadd	%r15,%r15,0x123(%r8,%rax,4)
+	{evex} cmppxadd	%r15d,%edx,0x123(%r8,%rax,4)
+	{evex} cmpsxadd	%r15,%r15,0x123(%r8,%rax,4)
+	{evex} cmpsxadd	%r15d,%edx,0x123(%r8,%rax,4)
+	{evex} cmpzxadd	%r15,%r15,0x123(%r8,%rax,4)
+	{evex} cmpzxadd	%r15d,%edx,0x123(%r8,%rax,4)
+	{evex} dec	%r15
+	{evex} dec	%r15d
+	{evex} dec	%r15w
+	{evex} dec	%r8b
+	{evex} decb	0x123(%r8,%rax,4)
+	{evex} decw	0x123(%r8,%rax,4)
+	{evex} decl	0x123(%r8,%rax,4)
+	{evex} decq	0x123(%r8,%rax,4)
+	{evex} div	%r15
+	{evex} div	%r15d
+	{evex} div	%r15w
+	{evex} div	%r8b
+	{evex} divb	0x123(%r8,%rax,4)
+	{evex} divw	0x123(%r8,%rax,4)
+	{evex} divl	0x123(%r8,%rax,4)
+	{evex} divq	0x123(%r8,%rax,4)
+	{evex} encodekey128	%r15d,%edx
+	{evex} encodekey256	%r15d,%edx
+	{evex} enqcmd	0x123(%r8,%rax,4),%r15
+	{evex} enqcmd	0x123(%r8d,%eax,4),%r15d
+	{evex} enqcmds	0x123(%r8,%rax,4),%r15
+	{evex} enqcmds	0x123(%r8d,%eax,4),%r15d
+	{evex} idiv	%r15
+	{evex} idiv	%r15d
+	{evex} idiv	%r15w
+	{evex} idiv	%r8b
+	{evex} idivb	0x123(%r8,%rax,4)
+	{evex} idivw	0x123(%r8,%rax,4)
+	{evex} idivl	0x123(%r8,%rax,4)
+	{evex} idivq	0x123(%r8,%rax,4)
+	{evex} imul	%r15
+	{evex} imul	%r15,%r15
+	{evex} imul	%r15d
+	{evex} imul	%r15d,%edx
+	{evex} imul	%r15w
+	{evex} imul	%r15w,%ax
+	{evex} imul	%r8b
+	{evex} imulb	0x123(%r8,%rax,4)
+	{evex} imulw	0x123(%r8,%rax,4)
+	{evex} imull	0x123(%r8,%rax,4)
+	{evex} imul	0x123(%r8,%rax,4),%r15
+	{evex} imul	0x123(%r8,%rax,4),%r15d
+	{evex} imul	0x123(%r8,%rax,4),%r15w
+	{evex} imulq	0x123(%r8,%rax,4)
+	{evex} imul	$0x7b, %dx, %ax
+	{evex} imul	$0x7b, %ecx, %edx
+	{evex} imul	$0x7b, %r9, %r15
+	{evex} imul	$0x7b, %r9
+	{evex} imul	$0x7b, 291(%r8, %rax, 4), %dx
+	{evex} imul	$0x7b, 291(%r8, %rax, 4), %ecx
+	{evex} imul	$0x7b, 291(%r8, %rax, 4), %r9
+	{evex} imul	$0xff90, %dx, %ax
+	{evex} imul	$0xff90, %ecx, %edx
+	{evex} imul	$0xff90, %r9, %r15
+	{evex} imul	$0xff90, %r9
+	{evex} imul	$0xff90, 291(%r8, %rax, 4), %dx
+	{evex} imul	$0xff90, 291(%r8, %rax, 4), %ecx
+	{evex} imul	$0xff90, 291(%r8, %rax, 4), %r9
+	{evex} inc	%r15
+	{evex} inc	%r15d
+	{evex} inc	%r15w
+	{evex} inc	%r8b
+	{evex} incb	0x123(%r8,%rax,4)
+	{evex} incw	0x123(%r8,%rax,4)
+	{evex} incl	0x123(%r8,%rax,4)
+	{evex} incq	0x123(%r8,%rax,4)
+	{evex} invept	0x123(%r8,%rax,4),%r15
+	{evex} invpcid	0x123(%r8,%rax,4),%r15
+	{evex} invvpid	0x123(%r8,%rax,4),%r15
+	{evex} kmovb	%k3,%k5
+	{evex} kmovb	%k5,%r15d
+	{evex} kmovb	%k5,0x123(%r8,%rax,4)
+	{evex} kmovb	%r15d,%k5
+	{evex} kmovb	0x123(%r8,%rax,4),%k5
+	{evex} kmovd	%k3,%k5
+	{evex} kmovd	%k5,%r15d
+	{evex} kmovd	%k5,0x123(%r8,%rax,4)
+	{evex} kmovd	%r15d,%k5
+	{evex} kmovd	0x123(%r8,%rax,4),%k5
+	{evex} kmovq	%k3,%k5
+	{evex} kmovq	%k5,%r15
+	{evex} kmovq	%k5,0x123(%r8,%rax,4)
+	{evex} kmovq	%r15,%k5
+	{evex} kmovq	0x123(%r8,%rax,4),%k5
+	{evex} kmovw	%k3,%k5
+	{evex} kmovw	%k5,%r15d
+	{evex} kmovw	%k5,0x123(%r8,%rax,4)
+	{evex} kmovw	%r15d,%k5
+	{evex} kmovw	0x123(%r8,%rax,4),%k5
+	{evex} lzcnt	%r15,%r15
+	{evex} lzcnt	%r15d,%edx
+	{evex} lzcnt	%r15w,%ax
+	{evex} lzcnt	0x123(%r8,%rax,4),%r15
+	{evex} lzcnt	0x123(%r8,%rax,4),%r15d
+	{evex} lzcnt	0x123(%r8,%rax,4),%r15w
+	{evex} movbe	%r15,0x123(%r8,%rax,4)
+	{evex} movbe	%r15d,0x123(%r8,%rax,4)
+	{evex} movbe	%r15w,0x123(%r8,%rax,4)
+	{evex} movbe	0x123(%r8,%rax,4),%r15
+	{evex} movbe	0x123(%r8,%rax,4),%r15d
+	{evex} movbe	0x123(%r8,%rax,4),%r15w
+	{evex} movdir64b	0x123(%r8,%rax,4),%r15
+	{evex} movdir64b	0x123(%r8d,%eax,4),%r15d
+	{evex} movdiri	%r15,0x123(%r8,%rax,4)
+	{evex} movdiri	%r15d,0x123(%r8,%rax,4)
+	{evex} mul	%r15
+	{evex} mul	%r15d
+	{evex} mul	%r15w
+	{evex} mul	%r8b
+	{evex} mulb	0x123(%r8,%rax,4)
+	{evex} mulw	0x123(%r8,%rax,4)
+	{evex} mull	0x123(%r8,%rax,4)
+	{evex} mulq	0x123(%r8,%rax,4)
+	{evex} mulx	%r15,%r15,%r11
+	{evex} mulx	%r15d,%edx,%r10d
+	{evex} mulx	0x123(%r8,%rax,4),%r15,%r15
+	{evex} mulx	0x123(%r8,%rax,4),%r15d,%edx
+	{evex} neg	%r15
+	{evex} neg	%r15d
+	{evex} neg	%r15w
+	{evex} neg	%r8b
+	{evex} negb	0x123(%r8,%rax,4)
+	{evex} negw	0x123(%r8,%rax,4)
+	{evex} negl	0x123(%r8,%rax,4)
+	{evex} negq	0x123(%r8,%rax,4)
+	{evex} not	%r15
+	{evex} not	%r15d
+	{evex} not	%r15w
+	{evex} not	%r8b
+	{evex} notb	0x123(%r8,%rax,4)
+	{evex} notw	0x123(%r8,%rax,4)
+	{evex} notl	0x123(%r8,%rax,4)
+	{evex} notq	0x123(%r8,%rax,4)
+	{evex} or	$0x7b,%r15
+	{evex} or	$0x7b,%r15d
+	{evex} or	$0x7b,%r15w
+	{evex} or	$0x7b,%r8b
+	{evex} orb	$0x7b,0x123(%r8,%rax,4)
+	{evex} orw	$0x7b,0x123(%r8,%rax,4)
+	{evex} orl	$0x7b,0x123(%r8,%rax,4)
+	{evex} orq	$0x7b,0x123(%r8,%rax,4)
+	{evex} or	%r15,%r15
+	{evex} or	%r15,0x123(%r8,%rax,4)
+	{evex} or	%r15d,%edx
+	{evex} or	%r15d,0x123(%r8,%rax,4)
+	{evex} or	%r15w,%ax
+	{evex} or	%r15w,0x123(%r8,%rax,4)
+	{evex} or	%r8b,%dl
+	{evex} or	%r8b,0x123(%r8,%rax,4)
+	{evex} or	0x123(%r8,%rax,4),%r15
+	{evex} or	0x123(%r8,%rax,4),%r15d
+	{evex} or	0x123(%r8,%rax,4),%r15w
+	{evex} or	0x123(%r8,%rax,4),%r8b
+	{evex} pdep	%r15,%r15,%r11
+	{evex} pdep	%r15d,%edx,%r10d
+	{evex} pdep	0x123(%r8,%rax,4),%r15,%r15
+	{evex} pdep	0x123(%r8,%rax,4),%r15d,%edx
+	{evex} pext	%r15,%r15,%r11
+	{evex} pext	%r15d,%edx,%r10d
+	{evex} pext	0x123(%r8,%rax,4),%r15,%r15
+	{evex} pext	0x123(%r8,%rax,4),%r15d,%edx
+	{evex} popcnt	%r15,%r15
+	{evex} popcnt	%r15d,%edx
+	{evex} popcnt	%r15w,%ax
+	{evex} popcnt	0x123(%r8,%rax,4),%r15
+	{evex} popcnt	0x123(%r8,%rax,4),%r15d
+	{evex} popcnt	0x123(%r8,%rax,4),%r15w
+	{evex} rcl	$0x7b,%r15
+	{evex} rcl	$0x7b,%r15d
+	{evex} rcl	$0x7b,%r15w
+	{evex} rcl	$0x7b,%r8b
+	{evex} rclb	$0x7b,0x123(%r8,%rax,4)
+	{evex} rclw	$0x7b,0x123(%r8,%rax,4)
+	{evex} rcll	$0x7b,0x123(%r8,%rax,4)
+	{evex} rclq	$0x7b,0x123(%r8,%rax,4)
+	{evex} rcl	$1,%r15
+	{evex} rcl	$1,%r15d
+	{evex} rcl	$1,%r15w
+	{evex} rcl	$1,%r8b
+	{evex} rclb	$1,0x123(%r8,%rax,4)
+	{evex} rclw	$1,0x123(%r8,%rax,4)
+	{evex} rcll	$1,0x123(%r8,%rax,4)
+	{evex} rclq	$1,0x123(%r8,%rax,4)
+	{evex} rcl	%cl,%r15
+	{evex} rcl	%cl,%r15d
+	{evex} rcl	%cl,%r15w
+	{evex} rcl	%cl,%r8b
+	{evex} rclb	%cl,0x123(%r8,%rax,4)
+	{evex} rclw	%cl,0x123(%r8,%rax,4)
+	{evex} rcll	%cl,0x123(%r8,%rax,4)
+	{evex} rclq	%cl,0x123(%r8,%rax,4)
+	{evex} rcr	$0x7b,%r15
+	{evex} rcr	$0x7b,%r15d
+	{evex} rcr	$0x7b,%r15w
+	{evex} rcr	$0x7b,%r8b
+	{evex} rcrb	$0x7b,0x123(%r8,%rax,4)
+	{evex} rcrw	$0x7b,0x123(%r8,%rax,4)
+	{evex} rcrl	$0x7b,0x123(%r8,%rax,4)
+	{evex} rcrq	$0x7b,0x123(%r8,%rax,4)
+	{evex} rcr	$1,%r15
+	{evex} rcr	$1,%r15d
+	{evex} rcr	$1,%r15w
+	{evex} rcr	$1,%r8b
+	{evex} rcrb	$1,0x123(%r8,%rax,4)
+	{evex} rcrw	$1,0x123(%r8,%rax,4)
+	{evex} rcrl	$1,0x123(%r8,%rax,4)
+	{evex} rcrq	$1,0x123(%r8,%rax,4)
+	{evex} rcr	%cl,%r15
+	{evex} rcr	%cl,%r15d
+	{evex} rcr	%cl,%r15w
+	{evex} rcr	%cl,%r8b
+	{evex} rcrb	%cl,0x123(%r8,%rax,4)
+	{evex} rcrw	%cl,0x123(%r8,%rax,4)
+	{evex} rcrl	%cl,0x123(%r8,%rax,4)
+	{evex} rcrq	%cl,0x123(%r8,%rax,4)
+	{evex} rol	$0x7b,%r15
+	{evex} rol	$0x7b,%r15d
+	{evex} rol	$0x7b,%r15w
+	{evex} rol	$0x7b,%r8b
+	{evex} rolb	$0x7b,0x123(%r8,%rax,4)
+	{evex} rolw	$0x7b,0x123(%r8,%rax,4)
+	{evex} roll	$0x7b,0x123(%r8,%rax,4)
+	{evex} rolq	$0x7b,0x123(%r8,%rax,4)
+	{evex} rol	$1,%r15
+	{evex} rol	$1,%r15d
+	{evex} rol	$1,%r15w
+	{evex} rol	$1,%r8b
+	{evex} rolb	$1,0x123(%r8,%rax,4)
+	{evex} rolw	$1,0x123(%r8,%rax,4)
+	{evex} roll	$1,0x123(%r8,%rax,4)
+	{evex} rolq	$1,0x123(%r8,%rax,4)
+	{evex} rol	%cl,%r15
+	{evex} rol	%cl,%r15d
+	{evex} rol	%cl,%r15w
+	{evex} rol	%cl,%r8b
+	{evex} rolb	%cl,0x123(%r8,%rax,4)
+	{evex} rolw	%cl,0x123(%r8,%rax,4)
+	{evex} roll	%cl,0x123(%r8,%rax,4)
+	{evex} rolq	%cl,0x123(%r8,%rax,4)
+	{evex} ror	$0x7b,%r15
+	{evex} ror	$0x7b,%r15d
+	{evex} ror	$0x7b,%r15w
+	{evex} ror	$0x7b,%r8b
+	{evex} rorb	$0x7b,0x123(%r8,%rax,4)
+	{evex} rorw	$0x7b,0x123(%r8,%rax,4)
+	{evex} rorl	$0x7b,0x123(%r8,%rax,4)
+	{evex} rorq	$0x7b,0x123(%r8,%rax,4)
+	{evex} ror	$1,%r15
+	{evex} ror	$1,%r15d
+	{evex} ror	$1,%r15w
+	{evex} ror	$1,%r8b
+	{evex} rorb	$1,0x123(%r8,%rax,4)
+	{evex} rorw	$1,0x123(%r8,%rax,4)
+	{evex} rorl	$1,0x123(%r8,%rax,4)
+	{evex} rorq	$1,0x123(%r8,%rax,4)
+	{evex} ror	%cl,%r15
+	{evex} ror	%cl,%r15d
+	{evex} ror	%cl,%r15w
+	{evex} ror	%cl,%r8b
+	{evex} rorb	%cl,0x123(%r8,%rax,4)
+	{evex} rorw	%cl,0x123(%r8,%rax,4)
+	{evex} rorl	%cl,0x123(%r8,%rax,4)
+	{evex} rorq	%cl,0x123(%r8,%rax,4)
+	{evex} rorx	$0x7b,%r15,%r15
+	{evex} rorx	$0x7b,%r15d,%edx
+	{evex} rorx	$0x7b,0x123(%r8,%rax,4),%r15
+	{evex} rorx	$0x7b,0x123(%r8,%rax,4),%r15d
+	{evex} sal	$0x7b,%r15
+	{evex} sal	$0x7b,%r15d
+	{evex} sal	$0x7b,%r15w
+	{evex} sal	$0x7b,%r8b
+	{evex} salb	$0x7b,0x123(%r8,%rax,4)
+	{evex} salw	$0x7b,0x123(%r8,%rax,4)
+	{evex} sall	$0x7b,0x123(%r8,%rax,4)
+	{evex} salq	$0x7b,0x123(%r8,%rax,4)
+	{evex} sal	$1,%r15
+	{evex} sal	$1,%r15d
+	{evex} sal	$1,%r15w
+	{evex} sal	$1,%r8b
+	{evex} salb	$1,0x123(%r8,%rax,4)
+	{evex} salw	$1,0x123(%r8,%rax,4)
+	{evex} sall	$1,0x123(%r8,%rax,4)
+	{evex} salq	$1,0x123(%r8,%rax,4)
+	{evex} sal	%cl,%r15
+	{evex} sal	%cl,%r15d
+	{evex} sal	%cl,%r15w
+	{evex} sal	%cl,%r8b
+	{evex} salb	%cl,0x123(%r8,%rax,4)
+	{evex} salw	%cl,0x123(%r8,%rax,4)
+	{evex} sall	%cl,0x123(%r8,%rax,4)
+	{evex} salq	%cl,0x123(%r8,%rax,4)
+	{evex} sar	$0x7b,%r15
+	{evex} sar	$0x7b,%r15d
+	{evex} sar	$0x7b,%r15w
+	{evex} sar	$0x7b,%r8b
+	{evex} sarb	$0x7b,0x123(%r8,%rax,4)
+	{evex} sarw	$0x7b,0x123(%r8,%rax,4)
+	{evex} sarl	$0x7b,0x123(%r8,%rax,4)
+	{evex} sarq	$0x7b,0x123(%r8,%rax,4)
+	{evex} sar	$1,%r15
+	{evex} sar	$1,%r15d
+	{evex} sar	$1,%r15w
+	{evex} sar	$1,%r8b
+	{evex} sarb	$1,0x123(%r8,%rax,4)
+	{evex} sarw	$1,0x123(%r8,%rax,4)
+	{evex} sarl	$1,0x123(%r8,%rax,4)
+	{evex} sarq	$1,0x123(%r8,%rax,4)
+	{evex} sar	%cl,%r15
+	{evex} sar	%cl,%r15d
+	{evex} sar	%cl,%r15w
+	{evex} sar	%cl,%r8b
+	{evex} sarb	%cl,0x123(%r8,%rax,4)
+	{evex} sarw	%cl,0x123(%r8,%rax,4)
+	{evex} sarl	%cl,0x123(%r8,%rax,4)
+	{evex} sarq	%cl,0x123(%r8,%rax,4)
+	{evex} sarx	%r15,%r15,%r11
+	{evex} sarx	%r15,0x123(%r8,%rax,4),%r15
+	{evex} sarx	%r15d,%edx,%r10d
+	{evex} sarx	%r15d,0x123(%r8,%rax,4),%edx
+	{evex} sbb	$0x7b,%r15
+	{evex} sbb	$0x7b,%r15d
+	{evex} sbb	$0x7b,%r15w
+	{evex} sbb	$0x7b,%r8b
+	{evex} sbbb	$0x7b,0x123(%r8,%rax,4)
+	{evex} sbbw	$0x7b,0x123(%r8,%rax,4)
+	{evex} sbbl	$0x7b,0x123(%r8,%rax,4)
+	{evex} sbbq	$0x7b,0x123(%r8,%rax,4)
+	{evex} sbb	%r15,%r15
+	{evex} sbb	%r15,0x123(%r8,%rax,4)
+	{evex} sbb	%r15d,%edx
+	{evex} sbb	%r15d,0x123(%r8,%rax,4)
+	{evex} sbb	%r15w,%ax
+	{evex} sbb	%r15w,0x123(%r8,%rax,4)
+	{evex} sbb	%r8b,%dl
+	{evex} sbb	%r8b,0x123(%r8,%rax,4)
+	{evex} sbb	0x123(%r8,%rax,4),%r15
+	{evex} sbb	0x123(%r8,%rax,4),%r15d
+	{evex} sbb	0x123(%r8,%rax,4),%r15w
+	{evex} sbb	0x123(%r8,%rax,4),%r8b
+	{evex} sha1msg1	%xmm13,%xmm12
+	{evex} sha1msg1	0x123(%r8,%rax,4),%xmm12
+	{evex} sha1msg2	%xmm13,%xmm12
+	{evex} sha1msg2	0x123(%r8,%rax,4),%xmm12
+	{evex} sha1nexte	%xmm13,%xmm12
+	{evex} sha1nexte	0x123(%r8,%rax,4),%xmm12
+	{evex} sha1rnds4	$0x7b,%xmm13,%xmm12
+	{evex} sha1rnds4	$0x7b,0x123(%r8,%rax,4),%xmm12
+	{evex} sha256msg1	%xmm13,%xmm12
+	{evex} sha256msg1	0x123(%r8,%rax,4),%xmm12
+	{evex} sha256msg2	%xmm13,%xmm12
+	{evex} sha256msg2	0x123(%r8,%rax,4),%xmm12
+	{evex} sha256rnds2	0x123(%r8,%rax,4),%xmm12
+	{evex} shl	$0x7b,%r15
+	{evex} shl	$0x7b,%r15d
+	{evex} shl	$0x7b,%r15w
+	{evex} shl	$0x7b,%r8b
+	{evex} shlb	$0x7b,0x123(%r8,%rax,4)
+	{evex} shlw	$0x7b,0x123(%r8,%rax,4)
+	{evex} shll	$0x7b,0x123(%r8,%rax,4)
+	{evex} shlq	$0x7b,0x123(%r8,%rax,4)
+	{evex} shl	$1,%r15
+	{evex} shl	$1,%r15d
+	{evex} shl	$1,%r15w
+	{evex} shl	$1,%r8b
+	{evex} shlb	$1,0x123(%r8,%rax,4)
+	{evex} shlw	$1,0x123(%r8,%rax,4)
+	{evex} shll	$1,0x123(%r8,%rax,4)
+	{evex} shlq	$1,0x123(%r8,%rax,4)
+	{evex} shl	%cl,%r15
+	{evex} shl	%cl,%r15d
+	{evex} shl	%cl,%r15w
+	{evex} shl	%cl,%r8b
+	{evex} shlb	%cl,0x123(%r8,%rax,4)
+	{evex} shlw	%cl,0x123(%r8,%rax,4)
+	{evex} shll	%cl,0x123(%r8,%rax,4)
+	{evex} shlq	%cl,0x123(%r8,%rax,4)
+	{evex} shld	$0x7b,%r15,%r15
+	{evex} shld	$0x7b,%r15,0x123(%r8,%rax,4)
+	{evex} shld	$0x7b,%r15d,%edx
+	{evex} shld	$0x7b,%r15d,0x123(%r8,%rax,4)
+	{evex} shld	$0x7b,%r15w,%ax
+	{evex} shld	$0x7b,%r15w,0x123(%r8,%rax,4)
+	{evex} shld	%cl,%r15,%r15
+	{evex} shld	%cl,%r15,0x123(%r8,%rax,4)
+	{evex} shld	%cl,%r15d,%edx
+	{evex} shld	%cl,%r15d,0x123(%r8,%rax,4)
+	{evex} shld	%cl,%r15w,%ax
+	{evex} shld	%cl,%r15w,0x123(%r8,%rax,4)
+	{evex} shlx	%r15,%r15,%r11
+	{evex} shlx	%r15,0x123(%r8,%rax,4),%r15
+	{evex} shlx	%r15d,%edx,%r10d
+	{evex} shlx	%r15d,0x123(%r8,%rax,4),%edx
+	{evex} shr	$0x7b,%r15
+	{evex} shr	$0x7b,%r15d
+	{evex} shr	$0x7b,%r15w
+	{evex} shr	$0x7b,%r8b
+	{evex} shrb	$0x7b,0x123(%r8,%rax,4)
+	{evex} shrw	$0x7b,0x123(%r8,%rax,4)
+	{evex} shrl	$0x7b,0x123(%r8,%rax,4)
+	{evex} shrq	$0x7b,0x123(%r8,%rax,4)
+	{evex} shr	$1,%r15
+	{evex} shr	$1,%r15d
+	{evex} shr	$1,%r15w
+	{evex} shr	$1,%r8b
+	{evex} shrb	$1,0x123(%r8,%rax,4)
+	{evex} shrw	$1,0x123(%r8,%rax,4)
+	{evex} shrl	$1,0x123(%r8,%rax,4)
+	{evex} shrq	$1,0x123(%r8,%rax,4)
+	{evex} shr	%cl,%r15
+	{evex} shr	%cl,%r15d
+	{evex} shr	%cl,%r15w
+	{evex} shr	%cl,%r8b
+	{evex} shrb	%cl,0x123(%r8,%rax,4)
+	{evex} shrw	%cl,0x123(%r8,%rax,4)
+	{evex} shrl	%cl,0x123(%r8,%rax,4)
+	{evex} shrq	%cl,0x123(%r8,%rax,4)
+	{evex} shrd	$0x7b,%r15,%r15
+	{evex} shrd	$0x7b,%r15,0x123(%r8,%rax,4)
+	{evex} shrd	$0x7b,%r15d,%edx
+	{evex} shrd	$0x7b,%r15d,0x123(%r8,%rax,4)
+	{evex} shrd	$0x7b,%r15w,%ax
+	{evex} shrd	$0x7b,%r15w,0x123(%r8,%rax,4)
+	{evex} shrd	%cl,%r15,%r15
+	{evex} shrd	%cl,%r15,0x123(%r8,%rax,4)
+	{evex} shrd	%cl,%r15d,%edx
+	{evex} shrd	%cl,%r15d,0x123(%r8,%rax,4)
+	{evex} shrd	%cl,%r15w,%ax
+	{evex} shrd	%cl,%r15w,0x123(%r8,%rax,4)
+	{evex} shrx	%r15,%r15,%r11
+	{evex} shrx	%r15,0x123(%r8,%rax,4),%r15
+	{evex} shrx	%r15d,%edx,%r10d
+	{evex} shrx	%r15d,0x123(%r8,%rax,4),%edx
+	{evex} sub	$0x7b,%r15
+	{evex} sub	$0x7b,%r15d
+	{evex} sub	$0x7b,%r15w
+	{evex} sub	$0x7b,%r8b
+	{evex} subb	$0x7b,0x123(%r8,%rax,4)
+	{evex} subw	$0x7b,0x123(%r8,%rax,4)
+	{evex} subl	$0x7b,0x123(%r8,%rax,4)
+	{evex} subq	$0x7b,0x123(%r8,%rax,4)
+	{evex} sub	%r15,%r15
+	{evex} sub	%r15,0x123(%r8,%rax,4)
+	{evex} sub	%r15d,%edx
+	{evex} sub	%r15d,0x123(%r8,%rax,4)
+	{evex} sub	%r15w,%ax
+	{evex} sub	%r15w,0x123(%r8,%rax,4)
+	{evex} sub	%r8b,%dl
+	{evex} sub	%r8b,0x123(%r8,%rax,4)
+	{evex} sub	0x123(%r8,%rax,4),%r15
+	{evex} sub	0x123(%r8,%rax,4),%r15d
+	{evex} sub	0x123(%r8,%rax,4),%r15w
+	{evex} sub	0x123(%r8,%rax,4),%r8b
+	{evex} tzcnt	%r15,%r15
+	{evex} tzcnt	%r15d,%edx
+	{evex} tzcnt	%r15w,%ax
+	{evex} tzcnt	0x123(%r8,%rax,4),%r15
+	{evex} tzcnt	0x123(%r8,%rax,4),%r15d
+	{evex} tzcnt	0x123(%r8,%rax,4),%r15w
+	{evex} wrssd	%r15d,0x123(%r8,%rax,4)
+	{evex} wrssq	%r15,0x123(%r8,%rax,4)
+	{evex} wrussd	%r15d,0x123(%r8,%rax,4)
+	{evex} wrussq	%r15,0x123(%r8,%rax,4)
+	{evex} xor	$0x7b,%r15
+	{evex} xor	$0x7b,%r15d
+	{evex} xor	$0x7b,%r15w
+	{evex} xor	$0x7b,%r8b
+	{evex} xorb	$0x7b,0x123(%r8,%rax,4)
+	{evex} xorw	$0x7b,0x123(%r8,%rax,4)
+	{evex} xorl	$0x7b,0x123(%r8,%rax,4)
+	{evex} xorq	$0x7b,0x123(%r8,%rax,4)
+	{evex} xor	%r15,%r15
+	{evex} xor	%r15,0x123(%r8,%rax,4)
+	{evex} xor	%r15d,%edx
+	{evex} xor	%r15d,0x123(%r8,%rax,4)
+	{evex} xor	%r15w,%ax
+	{evex} xor	%r15w,0x123(%r8,%rax,4)
+	{evex} xor	%r8b,%dl
+	{evex} xor	%r8b,0x123(%r8,%rax,4)
+	{evex} xor	0x123(%r8,%rax,4),%r15
+	{evex} xor	0x123(%r8,%rax,4),%r15d
+	{evex} xor	0x123(%r8,%rax,4),%r15w
+	{evex} xor	0x123(%r8,%rax,4),%r8b
diff --git a/gas/testsuite/gas/i386/x86-64.exp b/gas/testsuite/gas/i386/x86-64.exp
index de38cdfe172..cae9c6f299f 100644
--- a/gas/testsuite/gas/i386/x86-64.exp
+++ b/gas/testsuite/gas/i386/x86-64.exp
@@ -133,6 +133,7 @@ run_dump_test "noreg64-data16"
 run_dump_test "noreg64-rex64"
 run_dump_test "noreg-intel64"
 run_list_test "noreg-intel64" "-I${srcdir}/$subdir -mintel64"
+run_dump_test "noreg64-evex"
 run_list_test "movx64" "-al"
 run_list_test "cvtsi2sX"
 run_list_test "x86-64-nosse2" "-al"
@@ -388,6 +389,7 @@ run_dump_test "x86-64-apx-jmpabs-intel"
 run_dump_test "x86-64-apx-jmpabs-inval"
 run_dump_test "x86-64-apx-nf"
 run_dump_test "x86-64-apx-nf-intel"
+run_dump_test "x86-64-apx_f-evex"
 run_dump_test "x86-64-avx512f-rcigrz-intel"
 run_dump_test "x86-64-avx512f-rcigrz"
 run_dump_test "x86-64-clwb"
diff --git a/opcodes/i386-dis-evex.h b/opcodes/i386-dis-evex.h
index dff5b8b1fa9..34d69925ef3 100644
--- a/opcodes/i386-dis-evex.h
+++ b/opcodes/i386-dis-evex.h
@@ -983,8 +983,8 @@ static const struct dis386 evex_table[][256] = {
     { Bad_Opcode },
     { Bad_Opcode },
     /* 60 */
-    { "movbeS",	{ Gv, Ev }, PREFIX_NP_OR_DATA },
-    { "movbeS",	{ Ev, Gv }, PREFIX_NP_OR_DATA },
+    { "%MEmovbeS",	{ Gv, Ev }, PREFIX_NP_OR_DATA },
+    { "%MEmovbeS",	{ Ev, Gv }, PREFIX_NP_OR_DATA },
     { Bad_Opcode },
     { Bad_Opcode },
     { Bad_Opcode },
diff --git a/opcodes/i386-dis.c b/opcodes/i386-dis.c
index 67d8cdd44c2..5462a30edb1 100644
--- a/opcodes/i386-dis.c
+++ b/opcodes/i386-dis.c
@@ -1816,7 +1816,11 @@ struct dis386 {
    "XV" => print "{vex} " pseudo prefix
   "XE" => print "{evex} " pseudo prefix if no EVEX-specific functionality
 	   is used by an EVEX-encoded (AVX512VL) instruction.
-   "NF" => print "{nf} " pseudo prefix when EVEX.NF = 1.
+   "ME" => print "{evex} " pseudo prefix when ins->modrm.mod != 3, if no
+	   EVEX-specific functionality is used by an EVEX-encoded (AVX512VL)
+	   instruction.
+   "NF" => print "{nf} " pseudo prefix when EVEX.NF = 1, and print "{evex} "
+	   pseudo prefix for instructions without NF, EGPR and VVVV.
    "YK" keep unused, to avoid ambiguity with the combined use of Y and K.
    "YX" keep unused, to avoid ambiguity with the combined use of Y and X.
    "LQ" => print 'l' ('d' in Intel mode) or 'q' for memory operand, cond
@@ -3882,38 +3886,38 @@ static const struct dis386 prefix_table[][4] = {
 
   /* PREFIX_VEX_0F90_L_0_W_0 */
   {
-    { "kmovw",		{ MaskG, MaskE }, 0 },
+    { "%XEkmovw",		{ MaskG, MaskE }, 0 },
     { Bad_Opcode },
-    { "kmovb",		{ MaskG, MaskBDE }, 0 },
+    { "%XEkmovb",		{ MaskG, MaskBDE }, 0 },
   },
 
   /* PREFIX_VEX_0F90_L_0_W_1 */
   {
-    { "kmovq",		{ MaskG, MaskE }, 0 },
+    { "%XEkmovq",		{ MaskG, MaskE }, 0 },
     { Bad_Opcode },
-    { "kmovd",		{ MaskG, MaskBDE }, 0 },
+    { "%XEkmovd",		{ MaskG, MaskBDE }, 0 },
   },
 
   /* PREFIX_VEX_0F91_L_0_W_0 */
   {
-    { "kmovw",		{ Mw, MaskG }, 0 },
+    { "%XEkmovw",		{ Mw, MaskG }, 0 },
     { Bad_Opcode },
-    { "kmovb",		{ Mb, MaskG }, 0 },
+    { "%XEkmovb",		{ Mb, MaskG }, 0 },
   },
 
   /* PREFIX_VEX_0F91_L_0_W_1 */
   {
-    { "kmovq",		{ Mq, MaskG }, 0 },
+    { "%XEkmovq",		{ Mq, MaskG }, 0 },
     { Bad_Opcode },
-    { "kmovd",		{ Md, MaskG }, 0 },
+    { "%XEkmovd",		{ Md, MaskG }, 0 },
   },
 
   /* PREFIX_VEX_0F92_L_0_W_0 */
   {
-    { "kmovw",		{ MaskG, Rdq }, 0 },
+    { "%XEkmovw",		{ MaskG, Rdq }, 0 },
     { Bad_Opcode },
-    { "kmovb",		{ MaskG, Rdq }, 0 },
-    { "kmovd",		{ MaskG, Rdq }, 0 },
+    { "%XEkmovb",		{ MaskG, Rdq }, 0 },
+    { "%XEkmovd",		{ MaskG, Rdq }, 0 },
   },
 
   /* PREFIX_VEX_0F92_L_0_W_1 */
@@ -3921,15 +3925,15 @@ static const struct dis386 prefix_table[][4] = {
     { Bad_Opcode },
     { Bad_Opcode },
     { Bad_Opcode },
-    { "kmovK",		{ MaskG, Rdq }, 0 },
+    { "%XEkmovK",		{ MaskG, Rdq }, 0 },
   },
 
   /* PREFIX_VEX_0F93_L_0_W_0 */
   {
-    { "kmovw",		{ Gdq, MaskR }, 0 },
+    { "%XEkmovw",		{ Gdq, MaskR }, 0 },
     { Bad_Opcode },
-    { "kmovb",		{ Gdq, MaskR }, 0 },
-    { "kmovd",		{ Gdq, MaskR }, 0 },
+    { "%XEkmovb",		{ Gdq, MaskR }, 0 },
+    { "%XEkmovd",		{ Gdq, MaskR }, 0 },
   },
 
   /* PREFIX_VEX_0F93_L_0_W_1 */
@@ -3937,7 +3941,7 @@ static const struct dis386 prefix_table[][4] = {
     { Bad_Opcode },
     { Bad_Opcode },
     { Bad_Opcode },
-    { "kmovK",		{ Gdq, MaskR }, 0 },
+    { "%XEkmovK",		{ Gdq, MaskR }, 0 },
   },
 
   /* PREFIX_VEX_0F98_L_0_W_0 */
@@ -4109,9 +4113,9 @@ static const struct dis386 prefix_table[][4] = {
   /* PREFIX_VEX_0F38F5_L_0 */
   {
     { "%NFbzhiS",	{ Gdq, Edq, VexGdq }, 0 },
-    { "pextS",		{ Gdq, VexGdq, Edq }, 0 },
+    { "%XEpextS",		{ Gdq, VexGdq, Edq }, 0 },
     { Bad_Opcode },
-    { "pdepS",		{ Gdq, VexGdq, Edq }, 0 },
+    { "%XEpdepS",		{ Gdq, VexGdq, Edq }, 0 },
   },
 
   /* PREFIX_VEX_0F38F6_L_0 */
@@ -4119,15 +4123,15 @@ static const struct dis386 prefix_table[][4] = {
     { Bad_Opcode },
     { Bad_Opcode },
     { Bad_Opcode },
-    { "mulxS",		{ Gdq, VexGdq, Edq }, 0 },
+    { "%XEmulxS",		{ Gdq, VexGdq, Edq }, 0 },
   },
 
   /* PREFIX_VEX_0F38F7_L_0 */
   {
     { "%NFbextrS",	{ Gdq, Edq, VexGdq }, 0 },
-    { "sarxS",		{ Gdq, Edq, VexGdq }, 0 },
-    { "shlxS",		{ Gdq, Edq, VexGdq }, 0 },
-    { "shrxS",		{ Gdq, Edq, VexGdq }, 0 },
+    { "%XEsarxS",		{ Gdq, Edq, VexGdq }, 0 },
+    { "%XEshlxS",		{ Gdq, Edq, VexGdq }, 0 },
+    { "%XEshrxS",		{ Gdq, Edq, VexGdq }, 0 },
   },
 
   /* PREFIX_VEX_0F3AF0_L_0 */
@@ -4135,7 +4139,7 @@ static const struct dis386 prefix_table[][4] = {
     { Bad_Opcode },
     { Bad_Opcode },
     { Bad_Opcode },
-    { "rorxS",		{ Gdq, Edq, Ib }, 0 },
+    { "%XErorxS",		{ Gdq, Edq, Ib }, 0 },
   },
 
   /* PREFIX_VEX_MAP7_F8_L_0_W_0_R_0_X86_64 */
@@ -4499,97 +4503,97 @@ static const struct dis386 x86_64_table[][2] = {
   /* X86_64_VEX_0F38E0 */
   {
     { Bad_Opcode },
-    { "cmpoxadd", { Mdq, Gdq, VexGdq }, PREFIX_DATA },
+    { "%XEcmpoxadd", { Mdq, Gdq, VexGdq }, PREFIX_DATA },
   },
 
   /* X86_64_VEX_0F38E1 */
   {
     { Bad_Opcode },
-    { "cmpnoxadd", { Mdq, Gdq, VexGdq }, PREFIX_DATA },
+    { "%XEcmpnoxadd", { Mdq, Gdq, VexGdq }, PREFIX_DATA },
   },
 
   /* X86_64_VEX_0F38E2 */
   {
     { Bad_Opcode },
-    { "cmpbxadd", { Mdq, Gdq, VexGdq }, PREFIX_DATA },
+    { "%XEcmpbxadd", { Mdq, Gdq, VexGdq }, PREFIX_DATA },
   },
 
   /* X86_64_VEX_0F38E3 */
   {
     { Bad_Opcode },
-    { "cmpnbxadd", { Mdq, Gdq, VexGdq }, PREFIX_DATA },
+    { "%XEcmpnbxadd", { Mdq, Gdq, VexGdq }, PREFIX_DATA },
   },
 
   /* X86_64_VEX_0F38E4 */
   {
     { Bad_Opcode },
-    { "cmpzxadd", { Mdq, Gdq, VexGdq }, PREFIX_DATA },
+    { "%XEcmpzxadd", { Mdq, Gdq, VexGdq }, PREFIX_DATA },
   },
 
   /* X86_64_VEX_0F38E5 */
   {
     { Bad_Opcode },
-    { "cmpnzxadd", { Mdq, Gdq, VexGdq }, PREFIX_DATA },
+    { "%XEcmpnzxadd", { Mdq, Gdq, VexGdq }, PREFIX_DATA },
   },
 
   /* X86_64_VEX_0F38E6 */
   {
     { Bad_Opcode },
-    { "cmpbexadd", { Mdq, Gdq, VexGdq }, PREFIX_DATA },
+    { "%XEcmpbexadd", { Mdq, Gdq, VexGdq }, PREFIX_DATA },
   },
 
   /* X86_64_VEX_0F38E7 */
   {
     { Bad_Opcode },
-    { "cmpnbexadd", { Mdq, Gdq, VexGdq }, PREFIX_DATA },
+    { "%XEcmpnbexadd", { Mdq, Gdq, VexGdq }, PREFIX_DATA },
   },
 
   /* X86_64_VEX_0F38E8 */
   {
     { Bad_Opcode },
-    { "cmpsxadd", { Mdq, Gdq, VexGdq }, PREFIX_DATA },
+    { "%XEcmpsxadd", { Mdq, Gdq, VexGdq }, PREFIX_DATA },
   },
 
   /* X86_64_VEX_0F38E9 */
   {
     { Bad_Opcode },
-    { "cmpnsxadd", { Mdq, Gdq, VexGdq }, PREFIX_DATA },
+    { "%XEcmpnsxadd", { Mdq, Gdq, VexGdq }, PREFIX_DATA },
   },
 
   /* X86_64_VEX_0F38EA */
   {
     { Bad_Opcode },
-    { "cmppxadd", { Mdq, Gdq, VexGdq }, PREFIX_DATA },
+    { "%XEcmppxadd", { Mdq, Gdq, VexGdq }, PREFIX_DATA },
   },
 
   /* X86_64_VEX_0F38EB */
   {
     { Bad_Opcode },
-    { "cmpnpxadd", { Mdq, Gdq, VexGdq }, PREFIX_DATA },
+    { "%XEcmpnpxadd", { Mdq, Gdq, VexGdq }, PREFIX_DATA },
   },
 
   /* X86_64_VEX_0F38EC */
   {
     { Bad_Opcode },
-    { "cmplxadd", { Mdq, Gdq, VexGdq }, PREFIX_DATA },
+    { "%XEcmplxadd", { Mdq, Gdq, VexGdq }, PREFIX_DATA },
   },
 
   /* X86_64_VEX_0F38ED */
   {
     { Bad_Opcode },
-    { "cmpnlxadd", { Mdq, Gdq, VexGdq }, PREFIX_DATA },
+    { "%XEcmpnlxadd", { Mdq, Gdq, VexGdq }, PREFIX_DATA },
   },
 
   /* X86_64_VEX_0F38EE */
   {
     { Bad_Opcode },
-    { "cmplexadd", { Mdq, Gdq, VexGdq }, PREFIX_DATA },
+    { "%XEcmplexadd", { Mdq, Gdq, VexGdq }, PREFIX_DATA },
   },
 
   /* X86_64_VEX_0F38EF */
   {
     { Bad_Opcode },
-    { "cmpnlexadd", { Mdq, Gdq, VexGdq }, PREFIX_DATA },
+    { "%XEcmpnlexadd", { Mdq, Gdq, VexGdq }, PREFIX_DATA },
   },
 
   /* X86_64_VEX_MAP7_F8_L_0_W_0_R_0 */
@@ -10398,6 +10402,7 @@ putop (instr_info *ins, const char *in_template, int sizeflag)
   int cond = 1;
   unsigned int l = 0, len = 0;
   char last[4];
+  bool b_done = false;
 
   for (p = in_template; *p; p++)
     {
@@ -10411,6 +10416,12 @@ putop (instr_info *ins, const char *in_template, int sizeflag)
       switch (*p)
 	{
 	default:
+	  if (ins->evex_type == evex_from_legacy && !ins->vex.nd
+	      && !(ins->rex2 & 7) && !b_done)
+	    {
+	      oappend (ins, "{evex} ");
+	      b_done = true;
+	    }
 	  *ins->obufp++ = *p;
 	  break;
 	case '%':
@@ -10547,6 +10558,11 @@ putop (instr_info *ins, const char *in_template, int sizeflag)
 		  *ins->obufp++ = '}';
 		  *ins->obufp++ = ' ';
 		  break;
+		case 'M':
+		  if (ins->modrm.mod != 3 && !(ins->rex2 & 7))
+		    oappend (ins, "{evex} ");
+		  b_done = true;
+		  break;
 		default:
 		  abort ();
 		}
@@ -10588,7 +10604,11 @@ putop (instr_info *ins, const char *in_template, int sizeflag)
 		  oappend (ins, "{nf} ");
 		  /* This bit needs to be cleared after it is consumed.  */
 		  ins->vex.nf = false;
+		  b_done = true;
 		}
+	      else if (ins->evex_type == evex_from_vex && !(ins->rex2 & 7)
+		       && ins->vex.v)
+		oappend (ins, "{evex} ");
 	    }
 	  else
 	    abort ();
-- 
2.34.1


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH V2] Support {evex} pseudo prefix for decode evex promoted insns without egpr32.
  2024-03-22  9:49 [PATCH V2] Support {evex} pseudo prefix for decode evex promoted insns without egpr32 Cui, Lili
@ 2024-03-25 12:22 ` Jan Beulich
  2024-03-25 12:31 ` Jan Beulich
  1 sibling, 0 replies; 8+ messages in thread
From: Jan Beulich @ 2024-03-25 12:22 UTC (permalink / raw)
  To: Cui, Lili; +Cc: hjl.tools, Hu, Lin1, binutils

On 22.03.2024 10:49, Cui, Lili wrote:
> From: "Hu, Lin1" <lin1.hu@intel.com>
> 
> Based on the V1, there are mainly the following changes:
> 1. Added more test cases to cover each ins template.
> 2. The Intel format test has been removed in this patch.
> 3. Droped the newly added %XE and use evex_from_legacy for unified judgment.
> 4. Add %ME to movbe to print {evex} correctly.

Hmm. Iirc I had outlined how to deal with this without introducing a
single-use macro. I'm not outright opposed, but I'd like to understand
why you've chosen not to deal with this by having decode go through
mod_table[].

> --- /dev/null
> +++ b/gas/testsuite/gas/i386/noreg64-evex.s
> @@ -0,0 +1,74 @@
> +# Check 64-bit insns not sizeable through register operands with evex
> +
> +        .text
> +_start:
> +	{evex} adc	$1, (%rax)
> +	{evex} adc	$0x89, (%rax)
> +	{evex} adc	$0x1234, (%rax)
> +	{evex} adc	$0x12345678, (%rax)
> +	{evex} add	$1, (%rax)
> +	{evex} add	$0x89, (%rax)
> +	{evex} add	$0x1234, (%rax)
> +	{evex} add	$0x12345678, (%rax)
> +	{evex} and	$1, (%rax)
> +	{evex} and	$0x89, (%rax)
> +	{evex} and	$0x1234, (%rax)
> +	{evex} and	$0x12345678, (%rax)
> +	{evex} crc32	(%rax), %eax
> +	{evex} crc32	(%rax), %rax
> +	{evex} dec	(%rax)
> +	{evex} div	(%rax)
> +	{evex} idiv	(%rax)
> +	{evex} imul	(%rax)
> +	{evex} inc	(%rax)
> +	{evex} mul	(%rax)
> +	{evex} neg	(%rax)
> +	{evex} not	(%rax)
> +	{evex} or 	$1, (%rax)
> +	{evex} or 	$0x89, (%rax)
> +	{evex} or 	$0x1234, (%rax)
> +	{evex} or 	$0x12345678, (%rax)
> +	{evex} rcl	$1, (%rax)
> +	{evex} rcl	$2, (%rax)
> +	{evex} rcl	%cl, (%rax)
> +	{evex} rcl	(%rax)
> +	{evex} rcr	$1, (%rax)
> +	{evex} rcr	$2, (%rax)
> +	{evex} rcr	%cl, (%rax)
> +	{evex} rcr	(%rax)
> +	{evex} rol	$1, (%rax)
> +	{evex} rol	$2, (%rax)
> +	{evex} rol	%cl, (%rax)
> +	{evex} rol	(%rax)
> +	{evex} ror	$1, (%rax)
> +	{evex} ror	$2, (%rax)
> +	{evex} ror	%cl, (%rax)
> +	{evex} ror	(%rax)
> +	{evex} sbb	$1, (%rax)
> +	{evex} sbb	$0x89, (%rax)
> +	{evex} sbb	$0x1234, (%rax)
> +	{evex} sbb	$0x12345678, (%rax)
> +	{evex} sal	$1, (%rax)
> +	{evex} sal	$2, (%rax)
> +	{evex} sal	%cl, (%rax)
> +	{evex} sal	(%rax)
> +	{evex} sar	$1, (%rax)
> +	{evex} sar	$2, (%rax)
> +	{evex} sar	%cl, (%rax)
> +	{evex} sar	(%rax)

I realize it was my mistake originally, but may I ask that we don't further
spread it: sbb really wants to come after sal and sar.

> --- a/gas/testsuite/gas/i386/x86-64-apx-evex-promoted-bad.d
> +++ b/gas/testsuite/gas/i386/x86-64-apx-evex-promoted-bad.d
> @@ -30,16 +30,16 @@ Disassembly of section .text:
>  [ 	]*[a-f0-9]+:[ 	]+0c 18[ 	]+or.*
>  [ 	]*[a-f0-9]+:[ 	]+62 f2 fc 18 f5[ 	]+\(bad\)
>  [ 	]*[a-f0-9]+:[ 	]+0c 18[ 	]+or.*
> -[ 	]*[a-f0-9]+:[ 	]+62 f4 e4[ 	]+\(bad\)
> +[ 	]*[a-f0-9]+:[ 	]+62 f4 e4[ 	]+\{evex\} \(bad\)
>  [ 	]*[a-f0-9]+:[ 	]+08 ff[ 	]+.*
>  [ 	]*[a-f0-9]+:[ 	]+04 08[ 	]+.*
> -[ 	]*[a-f0-9]+:[ 	]+62 f4 3c[ 	]+\(bad\)
> +[ 	]*[a-f0-9]+:[ 	]+62 f4 3c[ 	]+\{evex\} \(bad\)

Why is this? What's the criteria for {evex} to appear ahead of (bad)? And
if so for EVEX, shouldn't VEX gain {vex} in such cases, too? (Which is
really the opposite I mean to indicate: No such prefixes should ever appear
here. If anything we should present unrecognized VEX/EVEX encodings in a
sufficiently generic way, including all of their - similarly generalized -
operands.)

>  [ 	]*[a-f0-9]+:[ 	]+08 8f c0 ff ff ff[ 	]+or.*
>  [ 	]*[a-f0-9]+:[ 	]+62 74 7c 18 8f c0[ 	]+pop2   %rax,\(bad\)
>  [ 	]*[a-f0-9]+:[ 	]+62 d4 24 18 8f[ 	]+\(bad\)
>  [ 	]*[a-f0-9]+:[ 	]+c3[ 	]+.*
>  [ 	]*[a-f0-9]+:[ 	]+62 e4 7e 08 dc 20[ 	]+aesenc128kl \(%rax\),%xmm20\(bad\)
> -[ 	]*[a-f0-9]+:[ 	]+62 b4 7c 08 d9 c4[ 	]+sha1msg1 %xmm20\(bad\),%xmm0
> +[ 	]*[a-f0-9]+:[ 	]+62 b4 7c 08 d9 c4[ 	]+{evex} sha1msg1 %xmm20\(bad\),%xmm0

Why would {evex} need to appear here? There's no non-EVEX encoding using
%xmm20, is there? It shouldn't matter that %xmm20 really is wrong to use
here in the first place. Its (wrong) use cannot be expressed using REX2.

If having the pseudo-prefix appear here meaningfully simplifies the code,
then at the very least the expectations here should only permit, but not
demand its presence.

That said, I don't see how this test would have succeeded in your
testing: There are backslashes missing to escape the curly braces.

> @@ -10398,6 +10402,7 @@ putop (instr_info *ins, const char *in_template, int sizeflag)
>    int cond = 1;
>    unsigned int l = 0, len = 0;
>    char last[4];
> +  bool b_done = false;

Mind me asking what "b" in this identifier is intended to stand for?

> @@ -10411,6 +10416,12 @@ putop (instr_info *ins, const char *in_template, int sizeflag)
>        switch (*p)
>  	{
>  	default:
> +	  if (ins->evex_type == evex_from_legacy && !ins->vex.nd
> +	      && !(ins->rex2 & 7) && !b_done)
> +	    {
> +	      oappend (ins, "{evex} ");
> +	      b_done = true;
> +	    }
>  	  *ins->obufp++ = *p;
>  	  break;

Right now it looks like it is okay to do this here; let's hope this
isn't going to bite us later.

> @@ -10547,6 +10558,11 @@ putop (instr_info *ins, const char *in_template, int sizeflag)
>  		  *ins->obufp++ = '}';
>  		  *ins->obufp++ = ' ';
>  		  break;
> +		case 'M':
> +		  if (ins->modrm.mod != 3 && !(ins->rex2 & 7))

Hmm, according to the description of %ME you ought to also check for
no NDD, even if right now the only use site (MOVBE) doesn't allow for
that anyway.

> +		    oappend (ins, "{evex} ");
> +		  b_done = true;
> +		  break;
>  		default:
>  		  abort ();
>  		}
> @@ -10588,7 +10604,11 @@ putop (instr_info *ins, const char *in_template, int sizeflag)
>  		  oappend (ins, "{nf} ");
>  		  /* This bit needs to be cleared after it is consumed.  */
>  		  ins->vex.nf = false;
> +		  b_done = true;
>  		}
> +	      else if (ins->evex_type == evex_from_vex && !(ins->rex2 & 7)
> +		       && ins->vex.v)
> +		oappend (ins, "{evex} ");
>  	    }
>  	  else
>  	    abort ();


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH V2] Support {evex} pseudo prefix for decode evex promoted insns without egpr32.
  2024-03-22  9:49 [PATCH V2] Support {evex} pseudo prefix for decode evex promoted insns without egpr32 Cui, Lili
  2024-03-25 12:22 ` Jan Beulich
@ 2024-03-25 12:31 ` Jan Beulich
  2024-04-02  8:59   ` Cui, Lili
  1 sibling, 1 reply; 8+ messages in thread
From: Jan Beulich @ 2024-03-25 12:31 UTC (permalink / raw)
  To: Cui, Lili; +Cc: hjl.tools, Hu, Lin1, binutils

(sorry, previous mail went out unfinished by mistake)

On 22.03.2024 10:49, Cui, Lili wrote:
> From: "Hu, Lin1" <lin1.hu@intel.com>
> 
> Based on the V1, there are mainly the following changes:
> 1. Added more test cases to cover each ins template.
> 2. The Intel format test has been removed in this patch.
> 3. Droped the newly added %XE and use evex_from_legacy for unified judgment.
> 4. Add %ME to movbe to print {evex} correctly.

Hmm. Iirc I had outlined how to deal with this without introducing a
single-use macro. I'm not outright opposed, but I'd like to understand
why you've chosen not to deal with this by having decode go through
mod_table[].

> --- /dev/null
> +++ b/gas/testsuite/gas/i386/noreg64-evex.s
> @@ -0,0 +1,74 @@
> +# Check 64-bit insns not sizeable through register operands with evex
> +
> +        .text
> +_start:
> +	{evex} adc	$1, (%rax)
> +	{evex} adc	$0x89, (%rax)
> +	{evex} adc	$0x1234, (%rax)
> +	{evex} adc	$0x12345678, (%rax)
> +	{evex} add	$1, (%rax)
> +	{evex} add	$0x89, (%rax)
> +	{evex} add	$0x1234, (%rax)
> +	{evex} add	$0x12345678, (%rax)
> +	{evex} and	$1, (%rax)
> +	{evex} and	$0x89, (%rax)
> +	{evex} and	$0x1234, (%rax)
> +	{evex} and	$0x12345678, (%rax)
> +	{evex} crc32	(%rax), %eax
> +	{evex} crc32	(%rax), %rax
> +	{evex} dec	(%rax)
> +	{evex} div	(%rax)
> +	{evex} idiv	(%rax)
> +	{evex} imul	(%rax)
> +	{evex} inc	(%rax)
> +	{evex} mul	(%rax)
> +	{evex} neg	(%rax)
> +	{evex} not	(%rax)
> +	{evex} or 	$1, (%rax)
> +	{evex} or 	$0x89, (%rax)
> +	{evex} or 	$0x1234, (%rax)
> +	{evex} or 	$0x12345678, (%rax)
> +	{evex} rcl	$1, (%rax)
> +	{evex} rcl	$2, (%rax)
> +	{evex} rcl	%cl, (%rax)
> +	{evex} rcl	(%rax)
> +	{evex} rcr	$1, (%rax)
> +	{evex} rcr	$2, (%rax)
> +	{evex} rcr	%cl, (%rax)
> +	{evex} rcr	(%rax)
> +	{evex} rol	$1, (%rax)
> +	{evex} rol	$2, (%rax)
> +	{evex} rol	%cl, (%rax)
> +	{evex} rol	(%rax)
> +	{evex} ror	$1, (%rax)
> +	{evex} ror	$2, (%rax)
> +	{evex} ror	%cl, (%rax)
> +	{evex} ror	(%rax)
> +	{evex} sbb	$1, (%rax)
> +	{evex} sbb	$0x89, (%rax)
> +	{evex} sbb	$0x1234, (%rax)
> +	{evex} sbb	$0x12345678, (%rax)
> +	{evex} sal	$1, (%rax)
> +	{evex} sal	$2, (%rax)
> +	{evex} sal	%cl, (%rax)
> +	{evex} sal	(%rax)
> +	{evex} sar	$1, (%rax)
> +	{evex} sar	$2, (%rax)
> +	{evex} sar	%cl, (%rax)
> +	{evex} sar	(%rax)

I realize it was my mistake originally, but may I ask that we don't further
spread it: sbb really wants to come after sal and sar.

> --- a/gas/testsuite/gas/i386/x86-64-apx-evex-promoted-bad.d
> +++ b/gas/testsuite/gas/i386/x86-64-apx-evex-promoted-bad.d
> @@ -30,16 +30,16 @@ Disassembly of section .text:
>  [ 	]*[a-f0-9]+:[ 	]+0c 18[ 	]+or.*
>  [ 	]*[a-f0-9]+:[ 	]+62 f2 fc 18 f5[ 	]+\(bad\)
>  [ 	]*[a-f0-9]+:[ 	]+0c 18[ 	]+or.*
> -[ 	]*[a-f0-9]+:[ 	]+62 f4 e4[ 	]+\(bad\)
> +[ 	]*[a-f0-9]+:[ 	]+62 f4 e4[ 	]+\{evex\} \(bad\)
>  [ 	]*[a-f0-9]+:[ 	]+08 ff[ 	]+.*
>  [ 	]*[a-f0-9]+:[ 	]+04 08[ 	]+.*
> -[ 	]*[a-f0-9]+:[ 	]+62 f4 3c[ 	]+\(bad\)
> +[ 	]*[a-f0-9]+:[ 	]+62 f4 3c[ 	]+\{evex\} \(bad\)

Why is this? What's the criteria for {evex} to appear ahead of (bad)? And
if so for EVEX, shouldn't VEX gain {vex} in such cases, too? (Which is
really the opposite I mean to indicate: No such prefixes should ever appear
here. If anything we should present unrecognized VEX/EVEX encodings in a
sufficiently generic way, including all of their - similarly generalized -
operands.)

>  [ 	]*[a-f0-9]+:[ 	]+08 8f c0 ff ff ff[ 	]+or.*
>  [ 	]*[a-f0-9]+:[ 	]+62 74 7c 18 8f c0[ 	]+pop2   %rax,\(bad\)
>  [ 	]*[a-f0-9]+:[ 	]+62 d4 24 18 8f[ 	]+\(bad\)
>  [ 	]*[a-f0-9]+:[ 	]+c3[ 	]+.*
>  [ 	]*[a-f0-9]+:[ 	]+62 e4 7e 08 dc 20[ 	]+aesenc128kl \(%rax\),%xmm20\(bad\)
> -[ 	]*[a-f0-9]+:[ 	]+62 b4 7c 08 d9 c4[ 	]+sha1msg1 %xmm20\(bad\),%xmm0
> +[ 	]*[a-f0-9]+:[ 	]+62 b4 7c 08 d9 c4[ 	]+{evex} sha1msg1 %xmm20\(bad\),%xmm0

Why would {evex} need to appear here? There's no non-EVEX encoding using
%xmm20, is there? It shouldn't matter that %xmm20 really is wrong to use
here in the first place. Its (wrong) use cannot be expressed using REX2.

If having the pseudo-prefix appear here meaningfully simplifies the code,
then at the very least the expectations here should only permit, but not
demand its presence.
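
If the prefix is to be permitted but not demanded, the expectation line could be relaxed to an optional match, e.g. (a sketch of the .d regexp only, not part of the patch):

```
[ 	]*[a-f0-9]+:[ 	]+62 b4 7c 08 d9 c4[ 	]+(\{evex\} )?sha1msg1 %xmm20\(bad\),%xmm0
```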

That said, I don't see how this test would have succeeded in your
testing: There are backslashes missing to escape the curly braces.

> @@ -10398,6 +10402,7 @@ putop (instr_info *ins, const char *in_template, int sizeflag)
>    int cond = 1;
>    unsigned int l = 0, len = 0;
>    char last[4];
> +  bool b_done = false;

Mind me asking what "b" in this identifier is intended to stand for?

> @@ -10411,6 +10416,12 @@ putop (instr_info *ins, const char *in_template, int sizeflag)
>        switch (*p)
>  	{
>  	default:
> +	  if (ins->evex_type == evex_from_legacy && !ins->vex.nd
> +	      && !(ins->rex2 & 7) && !b_done)
> +	    {
> +	      oappend (ins, "{evex} ");
> +	      b_done = true;
> +	    }
>  	  *ins->obufp++ = *p;
>  	  break;

Right now it looks like it is okay to do this here; let's hope this
isn't going to bite us later.

> @@ -10547,6 +10558,11 @@ putop (instr_info *ins, const char *in_template, int sizeflag)
>  		  *ins->obufp++ = '}';
>  		  *ins->obufp++ = ' ';
>  		  break;
> +		case 'M':
> +		  if (ins->modrm.mod != 3 && !(ins->rex2 & 7))

Hmm, according to the description of %ME you ought to also check for
no NDD, even if right now the only use site (MOVBE) doesn't allow for
that.

> @@ -10588,7 +10604,11 @@ putop (instr_info *ins, const char *in_template, int sizeflag)
>  		  oappend (ins, "{nf} ");
>  		  /* This bit needs to be cleared after it is consumed.  */
>  		  ins->vex.nf = false;
> +		  b_done = true;
>  		}
> +	      else if (ins->evex_type == evex_from_vex && !(ins->rex2 & 7)
> +		       && ins->vex.v)
> +		oappend (ins, "{evex} ");

Why would b_done not need setting here as well?

Jan

^ permalink raw reply	[flat|nested] 8+ messages in thread

* RE: [PATCH V2] Support {evex} pseudo prefix for decode evex promoted insns without egpr32.
  2024-03-25 12:31 ` Jan Beulich
@ 2024-04-02  8:59   ` Cui, Lili
  2024-04-02  9:30     ` Jan Beulich
  0 siblings, 1 reply; 8+ messages in thread
From: Cui, Lili @ 2024-04-02  8:59 UTC (permalink / raw)
  To: Beulich, Jan; +Cc: hjl.tools, binutils

> > Based on the V1, there are mainly the following changes:
> > 1. Added more test cases to cover each ins template.
> > 2. The Intel format test has been removed in this patch.
> > 3. Droped the newly added %XE and use evex_from_legacy for unified
> judgment.
> > 4. Add %ME to movbe to print {evex} correctly.
> 
> Hmm. Iirc I had outlined how to deal with this without introducing a single-use
> macro. I'm not outright opposed, but I'd like to understand why you've chosen
> not to deal with this by having decode go through mod_table[].
> 

As you predicted, with mod_table[] we would add %XE to the reg form and leave %XE off the memory form, but with the current implementation we want to make the judgment uniform for all map4 instructions. That way we cannot distinguish the reg and memory forms of movbe, because both are in map4, so I want to add a new macro for movbe. We could also use the fixup method, but it feels less smooth than the current one.

> > --- /dev/null
> > +++ b/gas/testsuite/gas/i386/noreg64-evex.s
> > @@ -0,0 +1,74 @@
> > +# Check 64-bit insns not sizeable through register operands with evex
> > +	{evex} sbb	$1, (%rax)
> > +	{evex} sbb	$0x89, (%rax)
> > +	{evex} sbb	$0x1234, (%rax)
> > +	{evex} sbb	$0x12345678, (%rax)
> > +	{evex} sal	$1, (%rax)
> > +	{evex} sal	$2, (%rax)
> > +	{evex} sal	%cl, (%rax)
> > +	{evex} sal	(%rax)
> > +	{evex} sar	$1, (%rax)
> > +	{evex} sar	$2, (%rax)
> > +	{evex} sar	%cl, (%rax)
> > +	{evex} sar	(%rax)
> 
> I realize it was my mistake originally, but may I ask that we don't further spread
> it: sbb really wants to come after sal and sar.
> 

Done.

> > --- a/gas/testsuite/gas/i386/x86-64-apx-evex-promoted-bad.d
> > +++ b/gas/testsuite/gas/i386/x86-64-apx-evex-promoted-bad.d
> > @@ -30,16 +30,16 @@ Disassembly of section .text:
> >  [ 	]*[a-f0-9]+:[ 	]+0c 18[ 	]+or.*
> >  [ 	]*[a-f0-9]+:[ 	]+62 f2 fc 18 f5[ 	]+\(bad\)
> >  [ 	]*[a-f0-9]+:[ 	]+0c 18[ 	]+or.*
> > -[ 	]*[a-f0-9]+:[ 	]+62 f4 e4[ 	]+\(bad\)
> > +[ 	]*[a-f0-9]+:[ 	]+62 f4 e4[ 	]+\{evex\} \(bad\)
> >  [ 	]*[a-f0-9]+:[ 	]+08 ff[ 	]+.*
> >  [ 	]*[a-f0-9]+:[ 	]+04 08[ 	]+.*
> > -[ 	]*[a-f0-9]+:[ 	]+62 f4 3c[ 	]+\(bad\)
> > +[ 	]*[a-f0-9]+:[ 	]+62 f4 3c[ 	]+\{evex\} \(bad\)
> 
> Why is this? What's the criteria for {evex} to appear ahead of (bad)? And if so
> for EVEX, shouldn't VEX gain {vex} in such cases, too? (Which is really the
> opposite I mean to indicate: No such prefixes should ever appear here. If
> anything we should present unrecognized VEX/EVEX encodings in a sufficiently
> generic way, including all of their - similarly generalized -
> operands.)
> 

Our rules for adding {evex} to map4 are:
1. No NDD (i.e. ins->vex.nd is 0).
2. No eGPRs (checked via ins->rex2, excluding X4).
3. No other macro has already added {evex}/{nf}.

For {evex} inc %rax %rbx, we set ins->vex.nd = 0, meaning it only has two operands; I think it is right to add {evex} for it.
-----------------------------------------------------------------------------------
        #{evex} inc %rax %rbx EVEX.vvvv != 1111 && EVEX.ND = 0.
        .byte 0x62, 0xf4, 0xe4, 0x08, 0xff, 0x04, 0x08
-----------------------------------------------------------------------------------

For pop2 %rax,%r8, there is only an EVEX format; it's special because its ins->vex.nd != 0, so the normal process will not add {evex} to it. But here we give it an illegal value, setting ins->vex.nd = 0, so it adds {evex} by mistake. This mistake is caused by the illegal value. I don't have a reasonable fix, so I prefer not to change it.
------------------------------------------------------------------------------------
        # pop2 %rax, %r8 set EVEX.ND=0.
        .byte 0x62, 0xf4, 0x3c, 0x08, 0x8f, 0xc0
        .byte 0xff, 0xff, 0xff
-------------------------------------------------------------------------------------
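
The three rules can be sketched as a predicate (a minimal, self-contained illustration; the types and field names loosely mimic instr_info in opcodes/i386-dis.c but are simplified and hypothetical, not the real interfaces):

```c
#include <assert.h>
#include <stdbool.h>

enum evex_type { evex_default, evex_from_legacy, evex_from_vex };

struct ins_state {
  enum evex_type evex_type;
  bool nd;           /* EVEX.ND: new-data-destination form in use */
  unsigned int rex2; /* REX2-style payload; low 3 bits = eGPR extension bits */
};

/* Print "{evex} " only for a promoted legacy insn using neither NDD nor
   eGPRs.  Rule 3 (another macro already printed {evex}/{nf}) is tracked
   separately in putop() and omitted here.  */
static bool
needs_evex_pseudo_prefix (const struct ins_state *ins)
{
  return ins->evex_type == evex_from_legacy
	 && !ins->nd
	 && !(ins->rex2 & 7);
}
```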

> >  [ 	]*[a-f0-9]+:[ 	]+08 8f c0 ff ff ff[ 	]+or.*
> >  [ 	]*[a-f0-9]+:[ 	]+62 74 7c 18 8f c0[ 	]+pop2   %rax,\(bad\)
> >  [ 	]*[a-f0-9]+:[ 	]+62 d4 24 18 8f[ 	]+\(bad\)
> >  [ 	]*[a-f0-9]+:[ 	]+c3[ 	]+.*
> >  [ 	]*[a-f0-9]+:[ 	]+62 e4 7e 08 dc 20[ 	]+aesenc128kl
> \(%rax\),%xmm20\(bad\)
> > -[ 	]*[a-f0-9]+:[ 	]+62 b4 7c 08 d9 c4[
> 	]+sha1msg1 %xmm20\(bad\),%xmm0
> > +[ 	]*[a-f0-9]+:[ 	]+62 b4 7c 08 d9 c4[ 	]+{evex}
> sha1msg1 %xmm20\(bad\),%xmm0
> 
> Why would {evex} need to appear here? There's no non-EVEX encoding
> using %xmm20, is there? It shouldn't matter that %xmm20 really is wrong to
> use here in the first place. It's (wrong) use cannot be expressed using REX2.
> 
> If having the pseudo-prefix appear here meaningfully simplifies the code, then
> at the very least the expectations here should only permit, but not demand its
> presence.
> 
> That said, I don't see how this test would have succeeded in your
> testing: There are backslashes missing to escape the figure braces.
> 

Oh, in this case %xmm20 is represented by X4, and ins->rex2 doesn't have this bit, so it has to add {evex} (I forgot to add '\' for '{}', but the test case passed, which is a bit strange). Since the APX spec has been updated, KEYLOCKER and SHA have been removed, so we don't need to modify the code to handle X4.

> > @@ -10398,6 +10402,7 @@ putop (instr_info *ins, const char *in_template,
> int sizeflag)
> >    int cond = 1;
> >    unsigned int l = 0, len = 0;
> >    char last[4];
> > +  bool b_done = false;
> 
> Mind me asking what "b" in this identifier is intended to stand for?
> 

I just want to show that it is a bool type. Maybe it would be better to change it to b_added_evex_prefix? Do you have any suggestions?

> > @@ -10547,6 +10558,11 @@ putop (instr_info *ins, const char
> *in_template, int sizeflag)
> >  		  *ins->obufp++ = '}';
> >  		  *ins->obufp++ = ' ';
> >  		  break;
> > +		case 'M':
> > +		  if (ins->modrm.mod != 3 && !(ins->rex2 & 7))
> 
> Hmm, according to the description of %ME you ought to also check for no NDD,
> even if right now the only use site (MOVBE) doesn't allow for that.
> 

Ok.

> > @@ -10588,7 +10604,11 @@ putop (instr_info *ins, const char
> *in_template, int sizeflag)
> >  		  oappend (ins, "{nf} ");
> >  		  /* This bit needs to be cleared after it is consumed.  */
> >  		  ins->vex.nf = false;
> > +		  b_done = true;
> >  		}
> > +	      else if (ins->evex_type == evex_from_vex && !(ins->rex2 & 7)
> > +		       && ins->vex.v)
> > +		oappend (ins, "{evex} ");
> 
> Why would b_done not need setting here as well?
> 

We only use b_done under "ins->evex_type == evex_from_legacy". In the original version we also handled "evex_from_legacy" here; now it is merged directly into "default:".

Thanks,
Lili.

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH V2] Support {evex} pseudo prefix for decode evex promoted insns without egpr32.
  2024-04-02  8:59   ` Cui, Lili
@ 2024-04-02  9:30     ` Jan Beulich
  2024-04-08  6:15       ` Cui, Lili
  0 siblings, 1 reply; 8+ messages in thread
From: Jan Beulich @ 2024-04-02  9:30 UTC (permalink / raw)
  To: Cui, Lili; +Cc: hjl.tools, binutils

On 02.04.2024 10:59, Cui, Lili wrote:
>>> --- a/gas/testsuite/gas/i386/x86-64-apx-evex-promoted-bad.d
>>> +++ b/gas/testsuite/gas/i386/x86-64-apx-evex-promoted-bad.d
>>> @@ -30,16 +30,16 @@ Disassembly of section .text:
>>>  [ 	]*[a-f0-9]+:[ 	]+0c 18[ 	]+or.*
>>>  [ 	]*[a-f0-9]+:[ 	]+62 f2 fc 18 f5[ 	]+\(bad\)
>>>  [ 	]*[a-f0-9]+:[ 	]+0c 18[ 	]+or.*
>>> -[ 	]*[a-f0-9]+:[ 	]+62 f4 e4[ 	]+\(bad\)
>>> +[ 	]*[a-f0-9]+:[ 	]+62 f4 e4[ 	]+\{evex\} \(bad\)
>>>  [ 	]*[a-f0-9]+:[ 	]+08 ff[ 	]+.*
>>>  [ 	]*[a-f0-9]+:[ 	]+04 08[ 	]+.*
>>> -[ 	]*[a-f0-9]+:[ 	]+62 f4 3c[ 	]+\(bad\)
>>> +[ 	]*[a-f0-9]+:[ 	]+62 f4 3c[ 	]+\{evex\} \(bad\)
>>
>> Why is this? What's the criteria for {evex} to appear ahead of (bad)? And if so
>> for EVEX, shouldn't VEX gain {vex} in such cases, too? (Which is really the
>> opposite I mean to indicate: No such prefixes should ever appear here. If
>> anything we should present unrecognized VEX/EVEX encodings in a sufficiently
>> generic way, including all of their - similarly generalized -
>> operands.)
> 
> Our rules for adding {evex} to map4 are:
> 1. No NDD( means ins->vex.nd is 0).
> 2. No Egprs( use ins->rex2, excluding X4).
> 3. Other macros are not added {evex}/{nf}.
> 
> For {evex} inc %rax %rbx, we set ins->vex.nd = 0, meaning it only has two operands, I think it is right to add {evex} for it.
> -----------------------------------------------------------------------------------
>         #{evex} inc %rax %rbx EVEX.vvvv != 1111 && EVEX.ND = 0.
>         .byte 0x62, 0xf4, 0xe4, 0x08, 0xff, 0x04, 0x08
> -----------------------------------------------------------------------------------
> 
> For pop2 %rax,%r8, it only has EVEX format, it's special because its ins->vex.nd != 0,so the normal process will not add {evex} to it, but we give it an illegal value, let ins->vex.nd = 0, so it added {evex} by mistake. This mistake is caused by illegal values. I don’t have a reasonable fix, so I prefer not to change it.
> ------------------------------------------------------------------------------------
>         # pop2 %rax, %r8 set EVEX.ND=0.
>         .byte 0x62, 0xf4, 0x3c, 0x08, 0x8f, 0xc0
>         .byte 0xff, 0xff, 0xff
> -------------------------------------------------------------------------------------

The POP2 aspect isn't really relevant here. Imo (bad) should never be prefixed
by (pseudo) prefixes. If, however, it is to be, then such prefixing needs doing
consistently. Which I'm afraid is going to be quite a bit more work than simply
zapping (or avoiding) {evex} when (bad) is printed.

>>> @@ -10398,6 +10402,7 @@ putop (instr_info *ins, const char *in_template,
>> int sizeflag)
>>>    int cond = 1;
>>>    unsigned int l = 0, len = 0;
>>>    char last[4];
>>> +  bool b_done = false;
>>
>> Mind me asking what "b" in this identifier is intended to stand for?
>>
> 
> I just want to show that it is a bool type. Maybe it would be better to change it to b_added_evex_prefix? Do you have any suggestions?

I don't think we use such b_ prefixes elsewhere. "evex_printed" or some such
would seem sufficient.

>>> @@ -10588,7 +10604,11 @@ putop (instr_info *ins, const char
>> *in_template, int sizeflag)
>>>  		  oappend (ins, "{nf} ");
>>>  		  /* This bit needs to be cleared after it is consumed.  */
>>>  		  ins->vex.nf = false;
>>> +		  b_done = true;
>>>  		}
>>> +	      else if (ins->evex_type == evex_from_vex && !(ins->rex2 & 7)
>>> +		       && ins->vex.v)
>>> +		oappend (ins, "{evex} ");
>>
>> Why would b_done not need setting here as well?
>>
> 
> We only use b_done under "ins->evex_type == evex_from_legacy".  Original version, we also handle "evex_from_legacy" here, now it is merged directly into "default:".

Then leaving a trap for later? Let's keep internal state properly updated:
If {evex} is printed, reflect that in local variable state, too.

Jan

^ permalink raw reply	[flat|nested] 8+ messages in thread

* RE: [PATCH V2] Support {evex} pseudo prefix for decode evex promoted insns without egpr32.
  2024-04-02  9:30     ` Jan Beulich
@ 2024-04-08  6:15       ` Cui, Lili
  2024-04-08  6:26         ` Jan Beulich
  0 siblings, 1 reply; 8+ messages in thread
From: Cui, Lili @ 2024-04-08  6:15 UTC (permalink / raw)
  To: Beulich, Jan; +Cc: hjl.tools, binutils

> > For pop2 %rax,%r8, it only has EVEX format, it's special because its ins-
> >vex.nd != 0,so the normal process will not add {evex} to it, but we give it an
> illegal value, let ins->vex.nd = 0, so it added {evex} by mistake. This mistake is
> caused by illegal values. I don’t have a reasonable fix, so I prefer not to
> change it.
> > ------------------------------------------------------------------------------------
> >         # pop2 %rax, %r8 set EVEX.ND=0.
> >         .byte 0x62, 0xf4, 0x3c, 0x08, 0x8f, 0xc0
> >         .byte 0xff, 0xff, 0xff
> > ----------------------------------------------------------------------
> > ---------------
> 
> The POP2 aspect isn't really relevant here. Imo (bad) should never be
> prefixed by (pseudo) prefixes. If, however, it is to be, then such prefixing
> needs doing consistently. Which I'm afraid is going to be quite a bit more
> work than simply zapping (or avoiding) {evex} when (bad) is printed.
> 

I added an early return in "putop()" when the instruction name is "(bad)", since we don't want to add any prefix or suffix to "(bad)". Do you think that is okay?

@@ -10391,6 +10395,15 @@ putop (instr_info *ins, const char *in_template, int sizeflag)
   int cond = 1;
   unsigned int l = 0, len = 0;
   char last[4];
   bool evex_printed = false;

+  if (!strncmp (in_template, "(bad)", 5))
+    {
+      oappend (ins, "(bad)");
+      *ins->obufp = 0;
+      ins->mnemonicendp = ins->obufp;
+      return 0;
+    }
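
As a self-contained sketch of the same guard (the function name and interface below are hypothetical, not the real putop()): undecodable encodings are emitted verbatim, with no pseudo prefixes.

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

static void
format_insn (const char *in_template, bool evex_promoted,
	     char *out, size_t n)
{
  if (strncmp (in_template, "(bad)", 5) == 0)
    {
      /* Never decorate "(bad)" with "{evex} " or any other prefix.  */
      snprintf (out, n, "(bad)");
      return;
    }
  /* Normal template expansion, possibly prepending the pseudo prefix.  */
  snprintf (out, n, "%s%s", evex_promoted ? "{evex} " : "", in_template);
}
```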

> >>> @@ -10398,6 +10402,7 @@ putop (instr_info *ins, const char
> >>> *in_template,
> >> int sizeflag)
> >>>    int cond = 1;
> >>>    unsigned int l = 0, len = 0;
> >>>    char last[4];
> >>> +  bool b_done = false;
> >>
> >> Mind me asking what "b" in this identifier is intended to stand for?
> >>
> >
> > I just want to show that it is a bool type. Maybe it would be better to
> change it to b_added_evex_prefix? Do you have any suggestions?
> 
> I don't think we use such b_ prefixes elsewhere. "evex_printed" or some such
> would seem sufficient.
> 

Done.

> >>> @@ -10588,7 +10604,11 @@ putop (instr_info *ins, const char
> >> *in_template, int sizeflag)
> >>>  		  oappend (ins, "{nf} ");
> >>>  		  /* This bit needs to be cleared after it is consumed.  */
> >>>  		  ins->vex.nf = false;
> >>> +		  b_done = true;
> >>>  		}
> >>> +	      else if (ins->evex_type == evex_from_vex && !(ins->rex2 & 7)
> >>> +		       && ins->vex.v)
> >>> +		oappend (ins, "{evex} ");
> >>
> >> Why would b_done not need setting here as well?
> >>
> >
> > We only use b_done under "ins->evex_type == evex_from_legacy". In the
> > original version we also handled "evex_from_legacy" here; now it is merged
> > directly into "default:".
> 
> Then leaving a trap for later? Let's keep internal state properly updated:
> If {evex} is printed, reflect that in local variable state, too.
> 
Done.

 Thanks,
Lili.


* Re: [PATCH V2] Support {evex} pseudo prefix for decode evex promoted insns without egpr32.
  2024-04-08  6:15       ` Cui, Lili
@ 2024-04-08  6:26         ` Jan Beulich
  2024-04-08  7:58           ` Cui, Lili
  0 siblings, 1 reply; 8+ messages in thread
From: Jan Beulich @ 2024-04-08  6:26 UTC (permalink / raw)
  To: Cui, Lili; +Cc: hjl.tools, binutils

On 08.04.2024 08:15, Cui, Lili wrote:
>>> For pop2 %rax,%r8, it only has an EVEX format. It's special because its
>>> ins->vex.nd != 0, so the normal process will not add {evex} to it, but we
>>> give it an illegal value (ins->vex.nd = 0), so {evex} is added by mistake.
>>> This mistake is caused by the illegal value. I don't have a reasonable
>>> fix, so I prefer not to change it.
>>> ------------------------------------------------------------------------------------
>>>         # pop2 %rax, %r8 set EVEX.ND=0.
>>>         .byte 0x62, 0xf4, 0x3c, 0x08, 0x8f, 0xc0
>>>         .byte 0xff, 0xff, 0xff
>>> ------------------------------------------------------------------------------------
>>
>> The POP2 aspect isn't really relevant here. Imo (bad) should never be
>> prefixed by (pseudo) prefixes. If, however, it is to be, then such prefixing
>> needs doing consistently. Which I'm afraid is going to be quite a bit more
>> work than simply zapping (or avoiding) {evex} when (bad) is printed.
>>
> 
> I added an early return in "putop()" when the instruction name is "(bad)", since we don't want to add any prefix or suffix to "(bad)". Do you think this is okay?
> 
> @@ -10391,6 +10395,15 @@ putop (instr_info *ins, const char *in_template, int sizeflag)
>    int cond = 1;
>    unsigned int l = 0, len = 0;
>    char last[4];
>    bool evex_printed = false;
> 
> +  if (!strncmp (in_template, "(bad)", 5))
> +    {
> +      oappend (ins, "(bad)");
> +      *ins->obufp = 0;
> +      ins->mnemonicendp = ins->obufp;
> +      return 0;
> +    }

At first glance I'd say that's acceptable, provided a comment gets
added.

Jan


* RE: [PATCH V2] Support {evex} pseudo prefix for decode evex promoted insns without egpr32.
  2024-04-08  6:26         ` Jan Beulich
@ 2024-04-08  7:58           ` Cui, Lili
  0 siblings, 0 replies; 8+ messages in thread
From: Cui, Lili @ 2024-04-08  7:58 UTC (permalink / raw)
  To: Beulich, Jan; +Cc: hjl.tools, binutils

> On 08.04.2024 08:15, Cui, Lili wrote:
> >>> For pop2 %rax,%r8, it only has an EVEX format. It's special because
> >>> its ins->vex.nd != 0, so the normal process will not add {evex} to
> >>> it, but we give it an illegal value (ins->vex.nd = 0), so {evex} is
> >>> added by mistake. This mistake is caused by the illegal value. I
> >>> don't have a reasonable fix, so I prefer not to change it.
> >>> ------------------------------------------------------------------------------------
> >>>         # pop2 %rax, %r8 set EVEX.ND=0.
> >>>         .byte 0x62, 0xf4, 0x3c, 0x08, 0x8f, 0xc0
> >>>         .byte 0xff, 0xff, 0xff
> >>> ------------------------------------------------------------------------------------
> >>
> >> The POP2 aspect isn't really relevant here. Imo (bad) should never be
> >> prefixed by (pseudo) prefixes. If, however, it is to be, then such
> >> prefixing needs doing consistently. Which I'm afraid is going to be
> >> quite a bit more work than simply zapping (or avoiding) {evex} when (bad)
> is printed.
> >>
> >
> > I added an early return in "putop()" when the instruction name is
> > "(bad)", since we don't want to add any prefix or suffix to "(bad)".
> > Do you think this is okay?
> >
> > @@ -10391,6 +10395,15 @@ putop (instr_info *ins, const char *in_template, int sizeflag)
> >    int cond = 1;
> >    unsigned int l = 0, len = 0;
> >    char last[4];
> >    bool evex_printed = false;
> >
> > +  if (!strncmp (in_template, "(bad)", 5))
> > +    {
> > +      oappend (ins, "(bad)");
> > +      *ins->obufp = 0;
> > +      ins->mnemonicendp = ins->obufp;
> > +      return 0;
> > +    }
> 
> At first glance I'd say that's acceptable, provided a comment gets added.
> 

Added.

Lili.


end of thread, other threads:[~2024-04-08  7:58 UTC | newest]

Thread overview: 8+ messages
-- links below jump to the message on this page --
2024-03-22  9:49 [PATCH V2] Support {evex} pseudo prefix for decode evex promoted insns without egpr32 Cui, Lili
2024-03-25 12:22 ` Jan Beulich
2024-03-25 12:31 ` Jan Beulich
2024-04-02  8:59   ` Cui, Lili
2024-04-02  9:30     ` Jan Beulich
2024-04-08  6:15       ` Cui, Lili
2024-04-08  6:26         ` Jan Beulich
2024-04-08  7:58           ` Cui, Lili
