public inbox for gdb-patches@sourceware.org
* [PATCH 0/2] PowerPC: MMA+ Outer-Product Instruction name change
       [not found] <20111d66a467599d893bf85bbdf2e82b76377127.camel@us.ibm.com>
@ 2022-11-03 17:30 ` Carl Love
  2022-11-03 17:30 ` [PATCH 1/2] PowerPC: fix for the gdb.arch/powerpc-power10.exp test Carl Love
  2022-11-03 17:30 ` [PATCH 2/2] PowerPC: update comments for the MMA instruction name changes Carl Love
  2 siblings, 0 replies; 8+ messages in thread
From: Carl Love @ 2022-11-03 17:30 UTC
  To: gdb-patches; +Cc: cel, Will Schmidt, Ulrich Weigand, Peter Bergner

GDB maintainers:

The following patch set updates the mnemonics for the pmxvf16ger*,
pmxvf32ger*, pmxvf64ger*, pmxvi4ger8*, pmxvi8ger4*, and pmxvi16ger2*
instructions.

The names were officially changed to pmdmxvf16ger*, pmdmxvf32ger*,
pmdmxvf64ger*, pmdmxvi4ger8*, pmdmxvi8ger4*, and pmdmxvi16ger2*,
respectively, in commit:

  commit bb98553cad4e017f1851153fa5de91f2cee98fb2
  Author: Peter Bergner <bergner@linux.ibm.com>
  Date:   Sat Oct 8 16:19:51 2022 -0500

    PowerPC: Add support for RFC02658 - MMA+ Outer-Product Instructions

    gas/
            * config/tc-ppc.c (md_assemble): Only check for prefix opcodes.
            * testsuite/gas/ppc/rfc02658.s: New test.
            * testsuite/gas/ppc/rfc02658.d: Likewise.
            * testsuite/gas/ppc/ppc.exp: Run it.

    opcodes/
            * ppc-opc.c (XMSK8, P_GERX4_MASK, P_GERX2_MASK, XX3GERX_MASK): New.
            (powerpc_opcodes): Add dmxvi8gerx4pp, dmxvi8gerx4, dmxvf16gerx2pp,
            dmxvf16gerx2, dmxvbf16gerx2pp, dmxvf16gerx2np, dmxvbf16gerx2,
            dmxvi8gerx4spp, dmxvbf16gerx2np, dmxvf16gerx2pn, dmxvbf16gerx2pn,
            dmxvf16gerx2nn, dmxvbf16gerx2nn, pmdmxvi8gerx4pp, pmdmxvi8gerx4,
            pmdmxvf16gerx2pp, pmdmxvf16gerx2, pmdmxvbf16gerx2pp, pmdmxvf16gerx2np,
            pmdmxvbf16gerx2, pmdmxvi8gerx4spp, pmdmxvbf16gerx2np, pmdmxvf16gerx2pn,
            pmdmxvbf16gerx2pn, pmdmxvf16gerx2nn, pmdmxvbf16gerx2nn.

The old names are still accepted by the assembler as extended
mnemonics, but the disassembler now outputs only the new mnemonics.
The change in the above commit resulted in about 224 failures in the
gdb.arch/powerpc-power10.exp test, as the disassembled instruction
names no longer matched the expected names.
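
As an illustration, here is a hypothetical two-line assembly snippet
(not part of the patch; operands taken from the test): both spellings
assemble to the same instruction word, but the disassembler only ever
prints the new pmdm* name, so an expected-output pattern written
against the old name no longer matches.

	pmxvf32ger   a4,vs0,vs1,11,13	# old name, accepted as an extended mnemonic
	pmdmxvf32ger a4,vs0,vs1,11,13	# new name, what the disassembler prints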

This patch set consists of two patches.  The first patch fixes the
expected instruction names in the gdb.arch/powerpc-power10.exp test as
well as the instruction names in the comments of the source file
gdb.arch/powerpc-power10.s.  The second patch updates the names of the
instructions contained in the comments of several gdb source files. 
There are no functional changes in the second patch.

The patch set has been tested on Power 10 with no regressions.

Please let me know if the various patches are acceptable.  Thanks.

                         Carl Love



* [PATCH 1/2] PowerPC: fix for the gdb.arch/powerpc-power10.exp test.
       [not found] <20111d66a467599d893bf85bbdf2e82b76377127.camel@us.ibm.com>
  2022-11-03 17:30 ` [PATCH 0/2] PowerPC: MMA+ Outer-Product Instruction name change Carl Love
@ 2022-11-03 17:30 ` Carl Love
  2022-11-04 13:02   ` Ulrich Weigand
  2022-11-03 17:30 ` [PATCH 2/2] PowerPC: update comments for the MMA instruction name changes Carl Love
  2 siblings, 1 reply; 8+ messages in thread
From: Carl Love @ 2022-11-03 17:30 UTC
  To: gdb-patches; +Cc: cel, Ulrich Weigand, Will Schmidt, Peter Bergner

GDB maintainers:

This patch updates the PowerPC instruction names in the
gdb.arch/powerpc-power10.exp test per the name change in:

  commit bb98553cad4e017f1851153fa5de91f2cee98fb2  
  Author: Peter Bergner <bergner@linux.ibm.com>
  Date:   Sat Oct 8 16:19:51 2022 -0500

    PowerPC: Add support for RFC02658 - MMA+ Outer-Product Instructions

The patch updates the expected instruction names in the expect file and
the instruction names contained in the source file comments.

The patch has been tested on Power 10 with no regressions.

                      Carl Love

-------------------------------------------------
PowerPC fix for the gdb.arch/powerpc-power10.exp test.

The mnemonics for the pmxvf16ger*, pmxvf32ger*, pmxvf64ger*, pmxvi4ger8*,
pmxvi8ger4*, and pmxvi16ger2* instructions were officially changed to
pmdmxvf16ger*, pmdmxvf32ger*, pmdmxvf64ger*, pmdmxvi4ger8*, pmdmxvi8ger4*,
and pmdmxvi16ger2*, respectively.  The old mnemonics are still supported by the
assembler as extended mnemonics.  The disassembler generates the new
mnemonics.  The name changes occurred in commit:

  commit bb98553cad4e017f1851153fa5de91f2cee98fb2
  Author: Peter Bergner <bergner@linux.ibm.com>
  Date:   Sat Oct 8 16:19:51 2022 -0500

    PowerPC: Add support for RFC02658 - MMA+ Outer-Product Instructions

    gas/
            * config/tc-ppc.c (md_assemble): Only check for prefix opcodes.
            * testsuite/gas/ppc/rfc02658.s: New test.
            * testsuite/gas/ppc/rfc02658.d: Likewise.
            * testsuite/gas/ppc/ppc.exp: Run it.

    opcodes/
            * ppc-opc.c (XMSK8, P_GERX4_MASK, P_GERX2_MASK, XX3GERX_MASK): New.
            (powerpc_opcodes): Add dmxvi8gerx4pp, dmxvi8gerx4, dmxvf16gerx2pp,
            dmxvf16gerx2, dmxvbf16gerx2pp, dmxvf16gerx2np, dmxvbf16gerx2,
            dmxvi8gerx4spp, dmxvbf16gerx2np, dmxvf16gerx2pn, dmxvbf16gerx2pn,
            dmxvf16gerx2nn, dmxvbf16gerx2nn, pmdmxvi8gerx4pp, pmdmxvi8gerx4,
            pmdmxvf16gerx2pp, pmdmxvf16gerx2, pmdmxvbf16gerx2pp, pmdmxvf16gerx2np,
            pmdmxvbf16gerx2, pmdmxvi8gerx4spp, pmdmxvbf16gerx2np, pmdmxvf16gerx2pn,
            pmdmxvbf16gerx2pn, pmdmxvf16gerx2nn, pmdmxvbf16gerx2nn.

The above commit results in about 224 failures on Power 10 since the
disassembled names do not match the expected names in the test.  This
patch updates the expected names in the test to match the values produced
by the disassembler.

This patch updates the file gdb.arch/powerpc-power10.exp with the new
expected values for the instructions.  The comment giving the name of the
instruction for each binary value in the file gdb.arch/powerpc-power10.s is
updated with the new name.  There are no functional changes in file
gdb.arch/powerpc-power10.s.
---
 gdb/testsuite/gdb.arch/powerpc-power10.exp | 448 ++++++++++-----------
 gdb/testsuite/gdb.arch/powerpc-power10.s   | 384 +++++++++---------
 2 files changed, 416 insertions(+), 416 deletions(-)

diff --git a/gdb/testsuite/gdb.arch/powerpc-power10.exp b/gdb/testsuite/gdb.arch/powerpc-power10.exp
index bc52a72d9de..b9383d8bd2a 100644
--- a/gdb/testsuite/gdb.arch/powerpc-power10.exp
+++ b/gdb/testsuite/gdb.arch/powerpc-power10.exp
@@ -186,198 +186,198 @@ func_check "plxvp   vs20,16(0)"
 func_check "plxvp   vs20,24(0)"
 func_check "plxvp   vs20,32(0)"
 func_check "plxvp   vs20,8(0)"
-func_check "pmxvbf16ger2 a4,vs0,vs1,0,0,0"
-func_check "pmxvbf16ger2 a4,vs0,vs1,0,0,1"
-func_check "pmxvbf16ger2 a4,vs0,vs1,0,13,0"
-func_check "pmxvbf16ger2 a4,vs0,vs1,0,13,1"
-func_check "pmxvbf16ger2 a4,vs0,vs1,11,0,0"
-func_check "pmxvbf16ger2 a4,vs0,vs1,11,0,1"
-func_check "pmxvbf16ger2 a4,vs0,vs1,11,13,0"
-func_check "pmxvbf16ger2 a4,vs0,vs1,11,13,1"
-func_check "pmxvbf16ger2nn a4,vs0,vs1,0,0,0"
-func_check "pmxvbf16ger2nn a4,vs0,vs1,0,0,1"
-func_check "pmxvbf16ger2nn a4,vs0,vs1,0,13,0"
-func_check "pmxvbf16ger2nn a4,vs0,vs1,0,13,1"
-func_check "pmxvbf16ger2nn a4,vs0,vs1,11,0,0"
-func_check "pmxvbf16ger2nn a4,vs0,vs1,11,0,1"
-func_check "pmxvbf16ger2nn a4,vs0,vs1,11,13,0"
-func_check "pmxvbf16ger2nn a4,vs0,vs1,11,13,1"
-func_check "pmxvbf16ger2np a4,vs0,vs1,0,0,0"
-func_check "pmxvbf16ger2np a4,vs0,vs1,0,0,1"
-func_check "pmxvbf16ger2np a4,vs0,vs1,0,13,0"
-func_check "pmxvbf16ger2np a4,vs0,vs1,0,13,1"
-func_check "pmxvbf16ger2np a4,vs0,vs1,11,0,0"
-func_check "pmxvbf16ger2np a4,vs0,vs1,11,0,1"
-func_check "pmxvbf16ger2np a4,vs0,vs1,11,13,0"
-func_check "pmxvbf16ger2np a4,vs0,vs1,11,13,1"
-func_check "pmxvbf16ger2pn a4,vs0,vs1,0,0,0"
-func_check "pmxvbf16ger2pn a4,vs0,vs1,0,0,1"
-func_check "pmxvbf16ger2pn a4,vs0,vs1,0,13,0"
-func_check "pmxvbf16ger2pn a4,vs0,vs1,0,13,1"
-func_check "pmxvbf16ger2pn a4,vs0,vs1,11,0,0"
-func_check "pmxvbf16ger2pn a4,vs0,vs1,11,0,1"
-func_check "pmxvbf16ger2pn a4,vs0,vs1,11,13,0"
-func_check "pmxvbf16ger2pn a4,vs0,vs1,11,13,1"
-func_check "pmxvbf16ger2pp a4,vs0,vs1,0,0,0"
-func_check "pmxvbf16ger2pp a4,vs0,vs1,0,0,1"
-func_check "pmxvbf16ger2pp a4,vs0,vs1,0,13,0"
-func_check "pmxvbf16ger2pp a4,vs0,vs1,0,13,1"
-func_check "pmxvbf16ger2pp a4,vs0,vs1,11,0,0"
-func_check "pmxvbf16ger2pp a4,vs0,vs1,11,0,1"
-func_check "pmxvbf16ger2pp a4,vs0,vs1,11,13,0"
-func_check "pmxvbf16ger2pp a4,vs0,vs1,11,13,1"
-func_check "pmxvf16ger2 a4,vs0,vs1,0,0,0"
-func_check "pmxvf16ger2 a4,vs0,vs1,0,0,1"
-func_check "pmxvf16ger2 a4,vs0,vs1,0,13,0"
-func_check "pmxvf16ger2 a4,vs0,vs1,0,13,1"
-func_check "pmxvf16ger2 a4,vs0,vs1,11,0,0"
-func_check "pmxvf16ger2 a4,vs0,vs1,11,0,1"
-func_check "pmxvf16ger2 a4,vs0,vs1,11,13,0"
-func_check "pmxvf16ger2 a4,vs0,vs1,11,13,1"
-func_check "pmxvf16ger2nn a4,vs0,vs1,0,0,0"
-func_check "pmxvf16ger2nn a4,vs0,vs1,0,0,1"
-func_check "pmxvf16ger2nn a4,vs0,vs1,0,13,0"
-func_check "pmxvf16ger2nn a4,vs0,vs1,0,13,1"
-func_check "pmxvf16ger2nn a4,vs0,vs1,11,0,0"
-func_check "pmxvf16ger2nn a4,vs0,vs1,11,0,1"
-func_check "pmxvf16ger2nn a4,vs0,vs1,11,13,0"
-func_check "pmxvf16ger2nn a4,vs0,vs1,11,13,1"
-func_check "pmxvf16ger2np a4,vs0,vs1,0,0,0"
-func_check "pmxvf16ger2np a4,vs0,vs1,0,0,1"
-func_check "pmxvf16ger2np a4,vs0,vs1,0,13,0"
-func_check "pmxvf16ger2np a4,vs0,vs1,0,13,1"
-func_check "pmxvf16ger2np a4,vs0,vs1,11,0,0"
-func_check "pmxvf16ger2np a4,vs0,vs1,11,0,1"
-func_check "pmxvf16ger2np a4,vs0,vs1,11,13,0"
-func_check "pmxvf16ger2np a4,vs0,vs1,11,13,1"
-func_check "pmxvf16ger2pn a4,vs0,vs1,0,0,0"
-func_check "pmxvf16ger2pn a4,vs0,vs1,0,0,1"
-func_check "pmxvf16ger2pn a4,vs0,vs1,0,13,0"
-func_check "pmxvf16ger2pn a4,vs0,vs1,0,13,1"
-func_check "pmxvf16ger2pn a4,vs0,vs1,11,0,0"
-func_check "pmxvf16ger2pn a4,vs0,vs1,11,0,1"
-func_check "pmxvf16ger2pn a4,vs0,vs1,11,13,0"
-func_check "pmxvf16ger2pn a4,vs0,vs1,11,13,1"
-func_check "pmxvf16ger2pp a4,vs0,vs1,0,0,0"
-func_check "pmxvf16ger2pp a4,vs0,vs1,0,0,1"
-func_check "pmxvf16ger2pp a4,vs0,vs1,0,13,0"
-func_check "pmxvf16ger2pp a4,vs0,vs1,0,13,1"
-func_check "pmxvf16ger2pp a4,vs0,vs1,11,0,0"
-func_check "pmxvf16ger2pp a4,vs0,vs1,11,0,1"
-func_check "pmxvf16ger2pp a4,vs0,vs1,11,13,0"
-func_check "pmxvf16ger2pp a4,vs0,vs1,11,13,1"
-func_check "pmxvf32ger a4,vs0,vs1,0,0"
-func_check "pmxvf32ger a4,vs0,vs1,0,13"
-func_check "pmxvf32ger a4,vs0,vs1,11,0"
-func_check "pmxvf32ger a4,vs0,vs1,11,13"
-func_check "pmxvf32gernn a4,vs0,vs1,0,0"
-func_check "pmxvf32gernn a4,vs0,vs1,0,13"
-func_check "pmxvf32gernn a4,vs0,vs1,11,0"
-func_check "pmxvf32gernn a4,vs0,vs1,11,13"
-func_check "pmxvf32gernp a4,vs0,vs1,0,0"
-func_check "pmxvf32gernp a4,vs0,vs1,0,13"
-func_check "pmxvf32gernp a4,vs0,vs1,11,0"
-func_check "pmxvf32gernp a4,vs0,vs1,11,13"
-func_check "pmxvf32gerpn a4,vs0,vs1,0,0"
-func_check "pmxvf32gerpn a4,vs0,vs1,0,13"
-func_check "pmxvf32gerpn a4,vs0,vs1,11,0"
-func_check "pmxvf32gerpn a4,vs0,vs1,11,13"
-func_check "pmxvf32gerpp a4,vs0,vs1,0,0"
-func_check "pmxvf32gerpp a4,vs0,vs1,0,13"
-func_check "pmxvf32gerpp a4,vs0,vs1,11,0"
-func_check "pmxvf32gerpp a4,vs0,vs1,11,13"
-func_check "pmxvf64ger a4,vs22,vs0,0,0"
-func_check "pmxvf64ger a4,vs22,vs0,0,1"
-func_check "pmxvf64ger a4,vs22,vs0,11,0"
-func_check "pmxvf64ger a4,vs22,vs0,11,1"
-func_check "pmxvf64gernn a4,vs22,vs0,0,0"
-func_check "pmxvf64gernn a4,vs22,vs0,0,1"
-func_check "pmxvf64gernn a4,vs22,vs0,11,0"
-func_check "pmxvf64gernn a4,vs22,vs0,11,1"
-func_check "pmxvf64gernp a4,vs22,vs0,0,0"
-func_check "pmxvf64gernp a4,vs22,vs0,0,1"
-func_check "pmxvf64gernp a4,vs22,vs0,11,0"
-func_check "pmxvf64gernp a4,vs22,vs0,11,1"
-func_check "pmxvf64gerpn a4,vs22,vs0,0,0"
-func_check "pmxvf64gerpn a4,vs22,vs0,0,1"
-func_check "pmxvf64gerpn a4,vs22,vs0,11,0"
-func_check "pmxvf64gerpn a4,vs22,vs0,11,1"
-func_check "pmxvf64gerpp a4,vs22,vs0,0,0"
-func_check "pmxvf64gerpp a4,vs22,vs0,0,1"
-func_check "pmxvf64gerpp a4,vs22,vs0,11,0"
-func_check "pmxvf64gerpp a4,vs22,vs0,11,1"
-func_check "pmxvi16ger2 a4,vs0,vs1,0,0,0"
-func_check "pmxvi16ger2 a4,vs0,vs1,0,0,1"
-func_check "pmxvi16ger2 a4,vs0,vs1,0,13,0"
-func_check "pmxvi16ger2 a4,vs0,vs1,0,13,1"
-func_check "pmxvi16ger2 a4,vs0,vs1,11,0,0"
-func_check "pmxvi16ger2 a4,vs0,vs1,11,0,1"
-func_check "pmxvi16ger2 a4,vs0,vs1,11,13,0"
-func_check "pmxvi16ger2 a4,vs0,vs1,11,13,1"
-func_check "pmxvi16ger2pp a4,vs0,vs1,0,0,0"
-func_check "pmxvi16ger2pp a4,vs0,vs1,0,0,1"
-func_check "pmxvi16ger2pp a4,vs0,vs1,0,13,0"
-func_check "pmxvi16ger2pp a4,vs0,vs1,0,13,1"
-func_check "pmxvi16ger2pp a4,vs0,vs1,11,0,0"
-func_check "pmxvi16ger2pp a4,vs0,vs1,11,0,1"
-func_check "pmxvi16ger2pp a4,vs0,vs1,11,13,0"
-func_check "pmxvi16ger2pp a4,vs0,vs1,11,13,1"
-func_check "pmxvi16ger2s a4,vs0,vs1,0,0,0"
-func_check "pmxvi16ger2s a4,vs0,vs1,0,0,1"
-func_check "pmxvi16ger2s a4,vs0,vs1,0,13,0"
-func_check "pmxvi16ger2s a4,vs0,vs1,0,13,1"
-func_check "pmxvi16ger2s a4,vs0,vs1,11,0,0"
-func_check "pmxvi16ger2s a4,vs0,vs1,11,0,1"
-func_check "pmxvi16ger2s a4,vs0,vs1,11,13,0"
-func_check "pmxvi16ger2s a4,vs0,vs1,11,13,1"
-func_check "pmxvi16ger2spp a4,vs0,vs1,0,0,0"
-func_check "pmxvi16ger2spp a4,vs0,vs1,0,0,1"
-func_check "pmxvi16ger2spp a4,vs0,vs1,0,13,0"
-func_check "pmxvi16ger2spp a4,vs0,vs1,0,13,1"
-func_check "pmxvi16ger2spp a4,vs0,vs1,11,0,0"
-func_check "pmxvi16ger2spp a4,vs0,vs1,11,0,1"
-func_check "pmxvi16ger2spp a4,vs0,vs1,11,13,0"
-func_check "pmxvi16ger2spp a4,vs0,vs1,11,13,1"
-func_check "pmxvi4ger8 a4,vs0,vs1,0,0,0"
-func_check "pmxvi4ger8 a4,vs0,vs1,0,0,45"
-func_check "pmxvi4ger8 a4,vs0,vs1,0,1,0"
-func_check "pmxvi4ger8 a4,vs0,vs1,0,1,45"
-func_check "pmxvi4ger8 a4,vs0,vs1,11,0,0"
-func_check "pmxvi4ger8 a4,vs0,vs1,11,0,45"
-func_check "pmxvi4ger8 a4,vs0,vs1,11,1,0"
-func_check "pmxvi4ger8 a4,vs0,vs1,11,1,45"
-func_check "pmxvi4ger8pp a4,vs0,vs1,0,0,0"
-func_check "pmxvi4ger8pp a4,vs0,vs1,0,0,45"
-func_check "pmxvi4ger8pp a4,vs0,vs1,0,1,0"
-func_check "pmxvi4ger8pp a4,vs0,vs1,0,1,45"
-func_check "pmxvi4ger8pp a4,vs0,vs1,11,0,0"
-func_check "pmxvi4ger8pp a4,vs0,vs1,11,0,45"
-func_check "pmxvi4ger8pp a4,vs0,vs1,11,1,0"
-func_check "pmxvi4ger8pp a4,vs0,vs1,11,1,45"
-func_check "pmxvi8ger4 a4,vs0,vs1,0,0,0"
-func_check "pmxvi8ger4 a4,vs0,vs1,0,0,5"
-func_check "pmxvi8ger4 a4,vs0,vs1,0,13,0"
-func_check "pmxvi8ger4 a4,vs0,vs1,0,13,5"
-func_check "pmxvi8ger4 a4,vs0,vs1,11,0,0"
-func_check "pmxvi8ger4 a4,vs0,vs1,11,0,5"
-func_check "pmxvi8ger4 a4,vs0,vs1,11,13,0"
-func_check "pmxvi8ger4 a4,vs0,vs1,11,13,5"
-func_check "pmxvi8ger4pp a4,vs0,vs1,0,0,0"
-func_check "pmxvi8ger4pp a4,vs0,vs1,0,0,5"
-func_check "pmxvi8ger4pp a4,vs0,vs1,0,13,0"
-func_check "pmxvi8ger4pp a4,vs0,vs1,0,13,5"
-func_check "pmxvi8ger4pp a4,vs0,vs1,11,0,0"
-func_check "pmxvi8ger4pp a4,vs0,vs1,11,0,5"
-func_check "pmxvi8ger4pp a4,vs0,vs1,11,13,0"
-func_check "pmxvi8ger4pp a4,vs0,vs1,11,13,5"
-func_check "pmxvi8ger4spp a4,vs0,vs1,0,0,0"
-func_check "pmxvi8ger4spp a4,vs0,vs1,0,0,5"
-func_check "pmxvi8ger4spp a4,vs0,vs1,0,13,0"
-func_check "pmxvi8ger4spp a4,vs0,vs1,0,13,5"
-func_check "pmxvi8ger4spp a4,vs0,vs1,11,0,0"
-func_check "pmxvi8ger4spp a4,vs0,vs1,11,0,5"
-func_check "pmxvi8ger4spp a4,vs0,vs1,11,13,0"
-func_check "pmxvi8ger4spp a4,vs0,vs1,11,13,5"
+func_check "pmdmxvbf16ger2 a4,vs0,vs1,0,0,0"
+func_check "pmdmxvbf16ger2 a4,vs0,vs1,0,0,1"
+func_check "pmdmxvbf16ger2 a4,vs0,vs1,0,13,0"
+func_check "pmdmxvbf16ger2 a4,vs0,vs1,0,13,1"
+func_check "pmdmxvbf16ger2 a4,vs0,vs1,11,0,0"
+func_check "pmdmxvbf16ger2 a4,vs0,vs1,11,0,1"
+func_check "pmdmxvbf16ger2 a4,vs0,vs1,11,13,0"
+func_check "pmdmxvbf16ger2 a4,vs0,vs1,11,13,1"
+func_check "pmdmxvbf16ger2nn a4,vs0,vs1,0,0,0"
+func_check "pmdmxvbf16ger2nn a4,vs0,vs1,0,0,1"
+func_check "pmdmxvbf16ger2nn a4,vs0,vs1,0,13,0"
+func_check "pmdmxvbf16ger2nn a4,vs0,vs1,0,13,1"
+func_check "pmdmxvbf16ger2nn a4,vs0,vs1,11,0,0"
+func_check "pmdmxvbf16ger2nn a4,vs0,vs1,11,0,1"
+func_check "pmdmxvbf16ger2nn a4,vs0,vs1,11,13,0"
+func_check "pmdmxvbf16ger2nn a4,vs0,vs1,11,13,1"
+func_check "pmdmxvbf16ger2np a4,vs0,vs1,0,0,0"
+func_check "pmdmxvbf16ger2np a4,vs0,vs1,0,0,1"
+func_check "pmdmxvbf16ger2np a4,vs0,vs1,0,13,0"
+func_check "pmdmxvbf16ger2np a4,vs0,vs1,0,13,1"
+func_check "pmdmxvbf16ger2np a4,vs0,vs1,11,0,0"
+func_check "pmdmxvbf16ger2np a4,vs0,vs1,11,0,1"
+func_check "pmdmxvbf16ger2np a4,vs0,vs1,11,13,0"
+func_check "pmdmxvbf16ger2np a4,vs0,vs1,11,13,1"
+func_check "pmdmxvbf16ger2pn a4,vs0,vs1,0,0,0"
+func_check "pmdmxvbf16ger2pn a4,vs0,vs1,0,0,1"
+func_check "pmdmxvbf16ger2pn a4,vs0,vs1,0,13,0"
+func_check "pmdmxvbf16ger2pn a4,vs0,vs1,0,13,1"
+func_check "pmdmxvbf16ger2pn a4,vs0,vs1,11,0,0"
+func_check "pmdmxvbf16ger2pn a4,vs0,vs1,11,0,1"
+func_check "pmdmxvbf16ger2pn a4,vs0,vs1,11,13,0"
+func_check "pmdmxvbf16ger2pn a4,vs0,vs1,11,13,1"
+func_check "pmdmxvbf16ger2pp a4,vs0,vs1,0,0,0"
+func_check "pmdmxvbf16ger2pp a4,vs0,vs1,0,0,1"
+func_check "pmdmxvbf16ger2pp a4,vs0,vs1,0,13,0"
+func_check "pmdmxvbf16ger2pp a4,vs0,vs1,0,13,1"
+func_check "pmdmxvbf16ger2pp a4,vs0,vs1,11,0,0"
+func_check "pmdmxvbf16ger2pp a4,vs0,vs1,11,0,1"
+func_check "pmdmxvbf16ger2pp a4,vs0,vs1,11,13,0"
+func_check "pmdmxvbf16ger2pp a4,vs0,vs1,11,13,1"
+func_check "pmdmxvf16ger2 a4,vs0,vs1,0,0,0"
+func_check "pmdmxvf16ger2 a4,vs0,vs1,0,0,1"
+func_check "pmdmxvf16ger2 a4,vs0,vs1,0,13,0"
+func_check "pmdmxvf16ger2 a4,vs0,vs1,0,13,1"
+func_check "pmdmxvf16ger2 a4,vs0,vs1,11,0,0"
+func_check "pmdmxvf16ger2 a4,vs0,vs1,11,0,1"
+func_check "pmdmxvf16ger2 a4,vs0,vs1,11,13,0"
+func_check "pmdmxvf16ger2 a4,vs0,vs1,11,13,1"
+func_check "pmdmxvf16ger2nn a4,vs0,vs1,0,0,0"
+func_check "pmdmxvf16ger2nn a4,vs0,vs1,0,0,1"
+func_check "pmdmxvf16ger2nn a4,vs0,vs1,0,13,0"
+func_check "pmdmxvf16ger2nn a4,vs0,vs1,0,13,1"
+func_check "pmdmxvf16ger2nn a4,vs0,vs1,11,0,0"
+func_check "pmdmxvf16ger2nn a4,vs0,vs1,11,0,1"
+func_check "pmdmxvf16ger2nn a4,vs0,vs1,11,13,0"
+func_check "pmdmxvf16ger2nn a4,vs0,vs1,11,13,1"
+func_check "pmdmxvf16ger2np a4,vs0,vs1,0,0,0"
+func_check "pmdmxvf16ger2np a4,vs0,vs1,0,0,1"
+func_check "pmdmxvf16ger2np a4,vs0,vs1,0,13,0"
+func_check "pmdmxvf16ger2np a4,vs0,vs1,0,13,1"
+func_check "pmdmxvf16ger2np a4,vs0,vs1,11,0,0"
+func_check "pmdmxvf16ger2np a4,vs0,vs1,11,0,1"
+func_check "pmdmxvf16ger2np a4,vs0,vs1,11,13,0"
+func_check "pmdmxvf16ger2np a4,vs0,vs1,11,13,1"
+func_check "pmdmxvf16ger2pn a4,vs0,vs1,0,0,0"
+func_check "pmdmxvf16ger2pn a4,vs0,vs1,0,0,1"
+func_check "pmdmxvf16ger2pn a4,vs0,vs1,0,13,0"
+func_check "pmdmxvf16ger2pn a4,vs0,vs1,0,13,1"
+func_check "pmdmxvf16ger2pn a4,vs0,vs1,11,0,0"
+func_check "pmdmxvf16ger2pn a4,vs0,vs1,11,0,1"
+func_check "pmdmxvf16ger2pn a4,vs0,vs1,11,13,0"
+func_check "pmdmxvf16ger2pn a4,vs0,vs1,11,13,1"
+func_check "pmdmxvf16ger2pp a4,vs0,vs1,0,0,0"
+func_check "pmdmxvf16ger2pp a4,vs0,vs1,0,0,1"
+func_check "pmdmxvf16ger2pp a4,vs0,vs1,0,13,0"
+func_check "pmdmxvf16ger2pp a4,vs0,vs1,0,13,1"
+func_check "pmdmxvf16ger2pp a4,vs0,vs1,11,0,0"
+func_check "pmdmxvf16ger2pp a4,vs0,vs1,11,0,1"
+func_check "pmdmxvf16ger2pp a4,vs0,vs1,11,13,0"
+func_check "pmdmxvf16ger2pp a4,vs0,vs1,11,13,1"
+func_check "pmdmxvf32ger a4,vs0,vs1,0,0"
+func_check "pmdmxvf32ger a4,vs0,vs1,0,13"
+func_check "pmdmxvf32ger a4,vs0,vs1,11,0"
+func_check "pmdmxvf32ger a4,vs0,vs1,11,13"
+func_check "pmdmxvf32gernn a4,vs0,vs1,0,0"
+func_check "pmdmxvf32gernn a4,vs0,vs1,0,13"
+func_check "pmdmxvf32gernn a4,vs0,vs1,11,0"
+func_check "pmdmxvf32gernn a4,vs0,vs1,11,13"
+func_check "pmdmxvf32gernp a4,vs0,vs1,0,0"
+func_check "pmdmxvf32gernp a4,vs0,vs1,0,13"
+func_check "pmdmxvf32gernp a4,vs0,vs1,11,0"
+func_check "pmdmxvf32gernp a4,vs0,vs1,11,13"
+func_check "pmdmxvf32gerpn a4,vs0,vs1,0,0"
+func_check "pmdmxvf32gerpn a4,vs0,vs1,0,13"
+func_check "pmdmxvf32gerpn a4,vs0,vs1,11,0"
+func_check "pmdmxvf32gerpn a4,vs0,vs1,11,13"
+func_check "pmdmxvf32gerpp a4,vs0,vs1,0,0"
+func_check "pmdmxvf32gerpp a4,vs0,vs1,0,13"
+func_check "pmdmxvf32gerpp a4,vs0,vs1,11,0"
+func_check "pmdmxvf32gerpp a4,vs0,vs1,11,13"
+func_check "pmdmxvf64ger a4,vs22,vs0,0,0"
+func_check "pmdmxvf64ger a4,vs22,vs0,0,1"
+func_check "pmdmxvf64ger a4,vs22,vs0,11,0"
+func_check "pmdmxvf64ger a4,vs22,vs0,11,1"
+func_check "pmdmxvf64gernn a4,vs22,vs0,0,0"
+func_check "pmdmxvf64gernn a4,vs22,vs0,0,1"
+func_check "pmdmxvf64gernn a4,vs22,vs0,11,0"
+func_check "pmdmxvf64gernn a4,vs22,vs0,11,1"
+func_check "pmdmxvf64gernp a4,vs22,vs0,0,0"
+func_check "pmdmxvf64gernp a4,vs22,vs0,0,1"
+func_check "pmdmxvf64gernp a4,vs22,vs0,11,0"
+func_check "pmdmxvf64gernp a4,vs22,vs0,11,1"
+func_check "pmdmxvf64gerpn a4,vs22,vs0,0,0"
+func_check "pmdmxvf64gerpn a4,vs22,vs0,0,1"
+func_check "pmdmxvf64gerpn a4,vs22,vs0,11,0"
+func_check "pmdmxvf64gerpn a4,vs22,vs0,11,1"
+func_check "pmdmxvf64gerpp a4,vs22,vs0,0,0"
+func_check "pmdmxvf64gerpp a4,vs22,vs0,0,1"
+func_check "pmdmxvf64gerpp a4,vs22,vs0,11,0"
+func_check "pmdmxvf64gerpp a4,vs22,vs0,11,1"
+func_check "pmdmxvi16ger2 a4,vs0,vs1,0,0,0"
+func_check "pmdmxvi16ger2 a4,vs0,vs1,0,0,1"
+func_check "pmdmxvi16ger2 a4,vs0,vs1,0,13,0"
+func_check "pmdmxvi16ger2 a4,vs0,vs1,0,13,1"
+func_check "pmdmxvi16ger2 a4,vs0,vs1,11,0,0"
+func_check "pmdmxvi16ger2 a4,vs0,vs1,11,0,1"
+func_check "pmdmxvi16ger2 a4,vs0,vs1,11,13,0"
+func_check "pmdmxvi16ger2 a4,vs0,vs1,11,13,1"
+func_check "pmdmxvi16ger2pp a4,vs0,vs1,0,0,0"
+func_check "pmdmxvi16ger2pp a4,vs0,vs1,0,0,1"
+func_check "pmdmxvi16ger2pp a4,vs0,vs1,0,13,0"
+func_check "pmdmxvi16ger2pp a4,vs0,vs1,0,13,1"
+func_check "pmdmxvi16ger2pp a4,vs0,vs1,11,0,0"
+func_check "pmdmxvi16ger2pp a4,vs0,vs1,11,0,1"
+func_check "pmdmxvi16ger2pp a4,vs0,vs1,11,13,0"
+func_check "pmdmxvi16ger2pp a4,vs0,vs1,11,13,1"
+func_check "pmdmxvi16ger2s a4,vs0,vs1,0,0,0"
+func_check "pmdmxvi16ger2s a4,vs0,vs1,0,0,1"
+func_check "pmdmxvi16ger2s a4,vs0,vs1,0,13,0"
+func_check "pmdmxvi16ger2s a4,vs0,vs1,0,13,1"
+func_check "pmdmxvi16ger2s a4,vs0,vs1,11,0,0"
+func_check "pmdmxvi16ger2s a4,vs0,vs1,11,0,1"
+func_check "pmdmxvi16ger2s a4,vs0,vs1,11,13,0"
+func_check "pmdmxvi16ger2s a4,vs0,vs1,11,13,1"
+func_check "pmdmxvi16ger2spp a4,vs0,vs1,0,0,0"
+func_check "pmdmxvi16ger2spp a4,vs0,vs1,0,0,1"
+func_check "pmdmxvi16ger2spp a4,vs0,vs1,0,13,0"
+func_check "pmdmxvi16ger2spp a4,vs0,vs1,0,13,1"
+func_check "pmdmxvi16ger2spp a4,vs0,vs1,11,0,0"
+func_check "pmdmxvi16ger2spp a4,vs0,vs1,11,0,1"
+func_check "pmdmxvi16ger2spp a4,vs0,vs1,11,13,0"
+func_check "pmdmxvi16ger2spp a4,vs0,vs1,11,13,1"
+func_check "pmdmxvi4ger8 a4,vs0,vs1,0,0,0"
+func_check "pmdmxvi4ger8 a4,vs0,vs1,0,0,45"
+func_check "pmdmxvi4ger8 a4,vs0,vs1,0,1,0"
+func_check "pmdmxvi4ger8 a4,vs0,vs1,0,1,45"
+func_check "pmdmxvi4ger8 a4,vs0,vs1,11,0,0"
+func_check "pmdmxvi4ger8 a4,vs0,vs1,11,0,45"
+func_check "pmdmxvi4ger8 a4,vs0,vs1,11,1,0"
+func_check "pmdmxvi4ger8 a4,vs0,vs1,11,1,45"
+func_check "pmdmxvi4ger8pp a4,vs0,vs1,0,0,0"
+func_check "pmdmxvi4ger8pp a4,vs0,vs1,0,0,45"
+func_check "pmdmxvi4ger8pp a4,vs0,vs1,0,1,0"
+func_check "pmdmxvi4ger8pp a4,vs0,vs1,0,1,45"
+func_check "pmdmxvi4ger8pp a4,vs0,vs1,11,0,0"
+func_check "pmdmxvi4ger8pp a4,vs0,vs1,11,0,45"
+func_check "pmdmxvi4ger8pp a4,vs0,vs1,11,1,0"
+func_check "pmdmxvi4ger8pp a4,vs0,vs1,11,1,45"
+func_check "pmdmxvi8ger4 a4,vs0,vs1,0,0,0"
+func_check "pmdmxvi8ger4 a4,vs0,vs1,0,0,5"
+func_check "pmdmxvi8ger4 a4,vs0,vs1,0,13,0"
+func_check "pmdmxvi8ger4 a4,vs0,vs1,0,13,5"
+func_check "pmdmxvi8ger4 a4,vs0,vs1,11,0,0"
+func_check "pmdmxvi8ger4 a4,vs0,vs1,11,0,5"
+func_check "pmdmxvi8ger4 a4,vs0,vs1,11,13,0"
+func_check "pmdmxvi8ger4 a4,vs0,vs1,11,13,5"
+func_check "pmdmxvi8ger4pp a4,vs0,vs1,0,0,0"
+func_check "pmdmxvi8ger4pp a4,vs0,vs1,0,0,5"
+func_check "pmdmxvi8ger4pp a4,vs0,vs1,0,13,0"
+func_check "pmdmxvi8ger4pp a4,vs0,vs1,0,13,5"
+func_check "pmdmxvi8ger4pp a4,vs0,vs1,11,0,0"
+func_check "pmdmxvi8ger4pp a4,vs0,vs1,11,0,5"
+func_check "pmdmxvi8ger4pp a4,vs0,vs1,11,13,0"
+func_check "pmdmxvi8ger4pp a4,vs0,vs1,11,13,5"
+func_check "pmdmxvi8ger4spp a4,vs0,vs1,0,0,0"
+func_check "pmdmxvi8ger4spp a4,vs0,vs1,0,0,5"
+func_check "pmdmxvi8ger4spp a4,vs0,vs1,0,13,0"
+func_check "pmdmxvi8ger4spp a4,vs0,vs1,0,13,5"
+func_check "pmdmxvi8ger4spp a4,vs0,vs1,11,0,0"
+func_check "pmdmxvi8ger4spp a4,vs0,vs1,11,0,5"
+func_check "pmdmxvi8ger4spp a4,vs0,vs1,11,13,0"
+func_check "pmdmxvi8ger4spp a4,vs0,vs1,11,13,5"
 #/* pstb extended mnemonics can suppress (r1) or the trailing ,0 or ,1, see ISA.
 func_check "pstb    r0,0(r1)"
 func_check "pstb    r0,16(r1)"
@@ -582,37 +582,37 @@ func_check "xscvsqqp v0,v1"
 func_check "xscvuqqp v0,v1"
 func_check "xsmaxcqp v0,v1,v2"
 func_check "xsmincqp v0,v1,v2"
-func_check "xvbf16ger2 a4,vs0,vs1"
-func_check "xvbf16ger2nn a4,vs0,vs1"
-func_check "xvbf16ger2np a4,vs0,vs1"
-func_check "xvbf16ger2pn a4,vs0,vs1"
-func_check "xvbf16ger2pp a4,vs0,vs1"
+func_check "dmxvbf16ger2 a4,vs0,vs1"
+func_check "dmxvbf16ger2nn a4,vs0,vs1"
+func_check "dmxvbf16ger2np a4,vs0,vs1"
+func_check "dmxvbf16ger2pn a4,vs0,vs1"
+func_check "dmxvbf16ger2pp a4,vs0,vs1"
 func_check "xvcvbf16spn vs0,vs1"
 func_check "xvcvspbf16 vs0,vs1"
-func_check "xvf16ger2 a4,vs0,vs1"
-func_check "xvf16ger2nn a4,vs0,vs1"
-func_check "xvf16ger2np a4,vs0,vs1"
-func_check "xvf16ger2pn a4,vs0,vs1"
-func_check "xvf16ger2pp a4,vs0,vs1"
-func_check "xvf32ger a4,vs0,vs1"
-func_check "xvf32gernn a4,vs0,vs1"
-func_check "xvf32gernp a4,vs0,vs1"
-func_check "xvf32gerpn a4,vs0,vs1"
-func_check "xvf32gerpp a4,vs0,vs1"
-func_check "xvf64ger a4,vs22,vs0"
-func_check "xvf64gernn a4,vs22,vs0"
-func_check "xvf64gernp a4,vs22,vs0"
-func_check "xvf64gerpn a4,vs22,vs0"
-func_check "xvf64gerpp a4,vs22,vs0"
-func_check "xvi16ger2 a4,vs0,vs1"
-func_check "xvi16ger2pp a4,vs0,vs1"
-func_check "xvi16ger2s a4,vs0,vs1"
-func_check "xvi16ger2spp a4,vs0,vs1"
-func_check "xvi4ger8 a4,vs0,vs1"
-func_check "xvi4ger8pp a4,vs0,vs1"
-func_check "xvi8ger4 a4,vs0,vs1"
-func_check "xvi8ger4pp a4,vs0,vs1"
-func_check "xvi8ger4spp a4,vs0,vs1"
+func_check "dmxvf16ger2 a4,vs0,vs1"
+func_check "dmxvf16ger2nn a4,vs0,vs1"
+func_check "dmxvf16ger2np a4,vs0,vs1"
+func_check "dmxvf16ger2pn a4,vs0,vs1"
+func_check "dmxvf16ger2pp a4,vs0,vs1"
+func_check "dmxvf32ger a4,vs0,vs1"
+func_check "dmxvf32gernn a4,vs0,vs1"
+func_check "dmxvf32gernp a4,vs0,vs1"
+func_check "dmxvf32gerpn a4,vs0,vs1"
+func_check "dmxvf32gerpp a4,vs0,vs1"
+func_check "dmxvf64ger a4,vs22,vs0"
+func_check "dmxvf64gernn a4,vs22,vs0"
+func_check "dmxvf64gernp a4,vs22,vs0"
+func_check "dmxvf64gerpn a4,vs22,vs0"
+func_check "dmxvf64gerpp a4,vs22,vs0"
+func_check "dmxvi16ger2 a4,vs0,vs1"
+func_check "dmxvi16ger2pp a4,vs0,vs1"
+func_check "dmxvi16ger2s a4,vs0,vs1"
+func_check "dmxvi16ger2spp a4,vs0,vs1"
+func_check "dmxvi4ger8 a4,vs0,vs1"
+func_check "dmxvi4ger8pp a4,vs0,vs1"
+func_check "dmxvi8ger4 a4,vs0,vs1"
+func_check "dmxvi8ger4pp a4,vs0,vs1"
+func_check "dmxvi8ger4spp a4,vs0,vs1"
 func_check "xvtlsbb cr3,vs0"
 func_check "xxblendvb vs0,vs1,vs2,vs3"
 func_check "xxblendvd vs0,vs1,vs2,vs3"
@@ -636,11 +636,11 @@ func_check "xxgenpcvwm vs0,v1,0"
 func_check "xxgenpcvwm vs0,v1,1"
 func_check "xxgenpcvwm vs0,v1,2"
 func_check "xxgenpcvwm vs0,v1,3"
-func_check "xxmfacc a4"
-func_check "xxmtacc a4"
+func_check "dmxxmfacc a4"
+func_check "dmxxmtacc a4"
 func_check "xxpermx vs0,vs1,vs2,vs3,0"
 func_check "xxpermx vs0,vs1,vs2,vs3,3"
-func_check "xxsetaccz a4"
+func_check "dmsetaccz a4"
 func_check "xxsplti32dx vs0,0,2779096485"
 func_check "xxsplti32dx vs0,0,4294967295"
 func_check "xxsplti32dx vs0,0,127"
diff --git a/gdb/testsuite/gdb.arch/powerpc-power10.s b/gdb/testsuite/gdb.arch/powerpc-power10.s
index 9ded00a8226..a334633292e 100644
--- a/gdb/testsuite/gdb.arch/powerpc-power10.s
+++ b/gdb/testsuite/gdb.arch/powerpc-power10.s
@@ -427,389 +427,389 @@ func:
 	.long 0xc8010004
 	.long 0x04000000	/* plxv vs0,8(r1) */
 	.long 0xc8010008
-	.long 0x07900000	/* pmxvbf16ger2 a4,vs0,vs1,0,0,0 */
+	.long 0x07900000	/* pmdmxvbf16ger2 a4,vs0,vs1,0,0,0 */
 	.long 0xee000998
-	.long 0x07904000	/* pmxvbf16ger2 a4,vs0,vs1,0,0,1 */
+	.long 0x07904000	/* pmdmxvbf16ger2 a4,vs0,vs1,0,0,1 */
 	.long 0xee000998
-	.long 0x0790000d	/* pmxvbf16ger2 a4,vs0,vs1,0,13,0 */
+	.long 0x0790000d	/* pmdmxvbf16ger2 a4,vs0,vs1,0,13,0 */
 	.long 0xee000998
-	.long 0x0790400d	/* pmxvbf16ger2 a4,vs0,vs1,0,13,1 */
+	.long 0x0790400d	/* pmdmxvbf16ger2 a4,vs0,vs1,0,13,1 */
 	.long 0xee000998
-	.long 0x079000b0	/* pmxvbf16ger2 a4,vs0,vs1,11,0,0 */
+	.long 0x079000b0	/* pmdmxvbf16ger2 a4,vs0,vs1,11,0,0 */
 	.long 0xee000998
-	.long 0x079040b0	/* pmxvbf16ger2 a4,vs0,vs1,11,0,1 */
+	.long 0x079040b0	/* pmdmxvbf16ger2 a4,vs0,vs1,11,0,1 */
 	.long 0xee000998
-	.long 0x079000bd	/* pmxvbf16ger2 a4,vs0,vs1,11,13,0 */
+	.long 0x079000bd	/* pmdmxvbf16ger2 a4,vs0,vs1,11,13,0 */
 	.long 0xee000998
-	.long 0x079040bd	/* pmxvbf16ger2 a4,vs0,vs1,11,13,1 */
+	.long 0x079040bd	/* pmdmxvbf16ger2 a4,vs0,vs1,11,13,1 */
 	.long 0xee000998
-	.long 0x07900000	/* pmxvbf16ger2nn a4,vs0,vs1,0,0,0 */
+	.long 0x07900000	/* pmdmxvbf16ger2nn a4,vs0,vs1,0,0,0 */
 	.long 0xee000f90
-	.long 0x07904000	/* pmxvbf16ger2nn a4,vs0,vs1,0,0,1 */
+	.long 0x07904000	/* pmdmxvbf16ger2nn a4,vs0,vs1,0,0,1 */
 	.long 0xee000f90
-	.long 0x0790000d	/* pmxvbf16ger2nn a4,vs0,vs1,0,13,0 */
+	.long 0x0790000d	/* pmdmxvbf16ger2nn a4,vs0,vs1,0,13,0 */
 	.long 0xee000f90
-	.long 0x0790400d	/* pmxvbf16ger2nn a4,vs0,vs1,0,13,1 */
+	.long 0x0790400d	/* pmdmxvbf16ger2nn a4,vs0,vs1,0,13,1 */
 	.long 0xee000f90
-	.long 0x079000b0	/* pmxvbf16ger2nn a4,vs0,vs1,11,0,0 */
+	.long 0x079000b0	/* pmdmxvbf16ger2nn a4,vs0,vs1,11,0,0 */
 	.long 0xee000f90
-	.long 0x079040b0	/* pmxvbf16ger2nn a4,vs0,vs1,11,0,1 */
+	.long 0x079040b0	/* pmdmxvbf16ger2nn a4,vs0,vs1,11,0,1 */
 	.long 0xee000f90
-	.long 0x079000bd	/* pmxvbf16ger2nn a4,vs0,vs1,11,13,0 */
+	.long 0x079000bd	/* pmdmxvbf16ger2nn a4,vs0,vs1,11,13,0 */
 	.long 0xee000f90
-	.long 0x079040bd	/* pmxvbf16ger2nn a4,vs0,vs1,11,13,1 */
+	.long 0x079040bd	/* pmdmxvbf16ger2nn a4,vs0,vs1,11,13,1 */
 	.long 0xee000f90
-	.long 0x07900000	/* pmxvbf16ger2np a4,vs0,vs1,0,0,0 */
+	.long 0x07900000	/* pmdmxvbf16ger2np a4,vs0,vs1,0,0,0 */
 	.long 0xee000b90
-	.long 0x07904000	/* pmxvbf16ger2np a4,vs0,vs1,0,0,1 */
+	.long 0x07904000	/* pmdmxvbf16ger2np a4,vs0,vs1,0,0,1 */
 	.long 0xee000b90
-	.long 0x0790000d	/* pmxvbf16ger2np a4,vs0,vs1,0,13,0 */
+	.long 0x0790000d	/* pmdmxvbf16ger2np a4,vs0,vs1,0,13,0 */
 	.long 0xee000b90
-	.long 0x0790400d	/* pmxvbf16ger2np a4,vs0,vs1,0,13,1 */
+	.long 0x0790400d	/* pmdmxvbf16ger2np a4,vs0,vs1,0,13,1 */
 	.long 0xee000b90
-	.long 0x079000b0	/* pmxvbf16ger2np a4,vs0,vs1,11,0,0 */
+	.long 0x079000b0	/* pmdmxvbf16ger2np a4,vs0,vs1,11,0,0 */
 	.long 0xee000b90
-	.long 0x079040b0	/* pmxvbf16ger2np a4,vs0,vs1,11,0,1 */
+	.long 0x079040b0	/* pmdmxvbf16ger2np a4,vs0,vs1,11,0,1 */
 	.long 0xee000b90
-	.long 0x079000bd	/* pmxvbf16ger2np a4,vs0,vs1,11,13,0 */
+	.long 0x079000bd	/* pmdmxvbf16ger2np a4,vs0,vs1,11,13,0 */
 	.long 0xee000b90
-	.long 0x079040bd	/* pmxvbf16ger2np a4,vs0,vs1,11,13,1 */
+	.long 0x079040bd	/* pmdmxvbf16ger2np a4,vs0,vs1,11,13,1 */
 	.long 0xee000b90
-	.long 0x07900000	/* pmxvbf16ger2pn a4,vs0,vs1,0,0,0 */
+	.long 0x07900000	/* pmdmxvbf16ger2pn a4,vs0,vs1,0,0,0 */
 	.long 0xee000d90
-	.long 0x07904000	/* pmxvbf16ger2pn a4,vs0,vs1,0,0,1 */
+	.long 0x07904000	/* pmdmxvbf16ger2pn a4,vs0,vs1,0,0,1 */
 	.long 0xee000d90
-	.long 0x0790000d	/* pmxvbf16ger2pn a4,vs0,vs1,0,13,0 */
+	.long 0x0790000d	/* pmdmxvbf16ger2pn a4,vs0,vs1,0,13,0 */
 	.long 0xee000d90
-	.long 0x0790400d	/* pmxvbf16ger2pn a4,vs0,vs1,0,13,1 */
+	.long 0x0790400d	/* pmdmxvbf16ger2pn a4,vs0,vs1,0,13,1 */
 	.long 0xee000d90
-	.long 0x079000b0	/* pmxvbf16ger2pn a4,vs0,vs1,11,0,0 */
+	.long 0x079000b0	/* pmdmxvbf16ger2pn a4,vs0,vs1,11,0,0 */
 	.long 0xee000d90
-	.long 0x079040b0	/* pmxvbf16ger2pn a4,vs0,vs1,11,0,1 */
+	.long 0x079040b0	/* pmdmxvbf16ger2pn a4,vs0,vs1,11,0,1 */
 	.long 0xee000d90
-	.long 0x079000bd	/* pmxvbf16ger2pn a4,vs0,vs1,11,13,0 */
+	.long 0x079000bd	/* pmdmxvbf16ger2pn a4,vs0,vs1,11,13,0 */
 	.long 0xee000d90
-	.long 0x079040bd	/* pmxvbf16ger2pn a4,vs0,vs1,11,13,1 */
+	.long 0x079040bd	/* pmdmxvbf16ger2pn a4,vs0,vs1,11,13,1 */
 	.long 0xee000d90
-	.long 0x07900000	/* pmxvbf16ger2pp a4,vs0,vs1,0,0,0 */
+	.long 0x07900000	/* pmdmxvbf16ger2pp a4,vs0,vs1,0,0,0 */
 	.long 0xee000990
-	.long 0x07904000	/* pmxvbf16ger2pp a4,vs0,vs1,0,0,1 */
+	.long 0x07904000	/* pmdmxvbf16ger2pp a4,vs0,vs1,0,0,1 */
 	.long 0xee000990
-	.long 0x0790000d	/* pmxvbf16ger2pp a4,vs0,vs1,0,13,0 */
+	.long 0x0790000d	/* pmdmxvbf16ger2pp a4,vs0,vs1,0,13,0 */
 	.long 0xee000990
-	.long 0x0790400d	/* pmxvbf16ger2pp a4,vs0,vs1,0,13,1 */
+	.long 0x0790400d	/* pmdmxvbf16ger2pp a4,vs0,vs1,0,13,1 */
 	.long 0xee000990
-	.long 0x079000b0	/* pmxvbf16ger2pp a4,vs0,vs1,11,0,0 */
+	.long 0x079000b0	/* pmdmxvbf16ger2pp a4,vs0,vs1,11,0,0 */
 	.long 0xee000990
-	.long 0x079040b0	/* pmxvbf16ger2pp a4,vs0,vs1,11,0,1 */
+	.long 0x079040b0	/* pmdmxvbf16ger2pp a4,vs0,vs1,11,0,1 */
 	.long 0xee000990
-	.long 0x079000bd	/* pmxvbf16ger2pp a4,vs0,vs1,11,13,0 */
+	.long 0x079000bd	/* pmdmxvbf16ger2pp a4,vs0,vs1,11,13,0 */
 	.long 0xee000990
-	.long 0x079040bd	/* pmxvbf16ger2pp a4,vs0,vs1,11,13,1 */
+	.long 0x079040bd	/* pmdmxvbf16ger2pp a4,vs0,vs1,11,13,1 */
 	.long 0xee000990
-	.long 0x07900000	/* pmxvf16ger2 a4,vs0,vs1,0,0,0 */
+	.long 0x07900000	/* pmdmxvf16ger2 a4,vs0,vs1,0,0,0 */
 	.long 0xee000898
-	.long 0x07904000	/* pmxvf16ger2 a4,vs0,vs1,0,0,1 */
+	.long 0x07904000	/* pmdmxvf16ger2 a4,vs0,vs1,0,0,1 */
 	.long 0xee000898
-	.long 0x0790000d	/* pmxvf16ger2 a4,vs0,vs1,0,13,0 */
+	.long 0x0790000d	/* pmdmxvf16ger2 a4,vs0,vs1,0,13,0 */
 	.long 0xee000898
-	.long 0x0790400d	/* pmxvf16ger2 a4,vs0,vs1,0,13,1 */
+	.long 0x0790400d	/* pmdmxvf16ger2 a4,vs0,vs1,0,13,1 */
 	.long 0xee000898
-	.long 0x079000b0	/* pmxvf16ger2 a4,vs0,vs1,11,0,0 */
+	.long 0x079000b0	/* pmdmxvf16ger2 a4,vs0,vs1,11,0,0 */
 	.long 0xee000898
-	.long 0x079040b0	/* pmxvf16ger2 a4,vs0,vs1,11,0,1 */
+	.long 0x079040b0	/* pmdmxvf16ger2 a4,vs0,vs1,11,0,1 */
 	.long 0xee000898
-	.long 0x079000bd	/* pmxvf16ger2 a4,vs0,vs1,11,13,0 */
+	.long 0x079000bd	/* pmdmxvf16ger2 a4,vs0,vs1,11,13,0 */
 	.long 0xee000898
-	.long 0x079040bd	/* pmxvf16ger2 a4,vs0,vs1,11,13,1 */
+	.long 0x079040bd	/* pmdmxvf16ger2 a4,vs0,vs1,11,13,1 */
 	.long 0xee000898
-	.long 0x07900000	/* pmxvf16ger2nn a4,vs0,vs1,0,0,0 */
+	.long 0x07900000	/* pmdmxvf16ger2nn a4,vs0,vs1,0,0,0 */
 	.long 0xee000e90
-	.long 0x07904000	/* pmxvf16ger2nn a4,vs0,vs1,0,0,1 */
+	.long 0x07904000	/* pmdmxvf16ger2nn a4,vs0,vs1,0,0,1 */
 	.long 0xee000e90
-	.long 0x0790000d	/* pmxvf16ger2nn a4,vs0,vs1,0,13,0 */
+	.long 0x0790000d	/* pmdmxvf16ger2nn a4,vs0,vs1,0,13,0 */
 	.long 0xee000e90
-	.long 0x0790400d	/* pmxvf16ger2nn a4,vs0,vs1,0,13,1 */
+	.long 0x0790400d	/* pmdmxvf16ger2nn a4,vs0,vs1,0,13,1 */
 	.long 0xee000e90
-	.long 0x079000b0	/* pmxvf16ger2nn a4,vs0,vs1,11,0,0 */
+	.long 0x079000b0	/* pmdmxvf16ger2nn a4,vs0,vs1,11,0,0 */
 	.long 0xee000e90
-	.long 0x079040b0	/* pmxvf16ger2nn a4,vs0,vs1,11,0,1 */
+	.long 0x079040b0	/* pmdmxvf16ger2nn a4,vs0,vs1,11,0,1 */
 	.long 0xee000e90
-	.long 0x079000bd	/* pmxvf16ger2nn a4,vs0,vs1,11,13,0 */
+	.long 0x079000bd	/* pmdmxvf16ger2nn a4,vs0,vs1,11,13,0 */
 	.long 0xee000e90
-	.long 0x079040bd	/* pmxvf16ger2nn a4,vs0,vs1,11,13,1 */
+	.long 0x079040bd	/* pmdmxvf16ger2nn a4,vs0,vs1,11,13,1 */
 	.long 0xee000e90
-	.long 0x07900000	/* pmxvf16ger2np a4,vs0,vs1,0,0,0 */
+	.long 0x07900000	/* pmdmxvf16ger2np a4,vs0,vs1,0,0,0 */
 	.long 0xee000a90
-	.long 0x07904000	/* pmxvf16ger2np a4,vs0,vs1,0,0,1 */
+	.long 0x07904000	/* pmdmxvf16ger2np a4,vs0,vs1,0,0,1 */
 	.long 0xee000a90
-	.long 0x0790000d	/* pmxvf16ger2np a4,vs0,vs1,0,13,0 */
+	.long 0x0790000d	/* pmdmxvf16ger2np a4,vs0,vs1,0,13,0 */
 	.long 0xee000a90
-	.long 0x0790400d	/* pmxvf16ger2np a4,vs0,vs1,0,13,1 */
+	.long 0x0790400d	/* pmdmxvf16ger2np a4,vs0,vs1,0,13,1 */
 	.long 0xee000a90
-	.long 0x079000b0	/* pmxvf16ger2np a4,vs0,vs1,11,0,0 */
+	.long 0x079000b0	/* pmdmxvf16ger2np a4,vs0,vs1,11,0,0 */
 	.long 0xee000a90
-	.long 0x079040b0	/* pmxvf16ger2np a4,vs0,vs1,11,0,1 */
+	.long 0x079040b0	/* pmdmxvf16ger2np a4,vs0,vs1,11,0,1 */
 	.long 0xee000a90
-	.long 0x079000bd	/* pmxvf16ger2np a4,vs0,vs1,11,13,0 */
+	.long 0x079000bd	/* pmdmxvf16ger2np a4,vs0,vs1,11,13,0 */
 	.long 0xee000a90
-	.long 0x079040bd	/* pmxvf16ger2np a4,vs0,vs1,11,13,1 */
+	.long 0x079040bd	/* pmdmxvf16ger2np a4,vs0,vs1,11,13,1 */
 	.long 0xee000a90
-	.long 0x07900000	/* pmxvf16ger2pn a4,vs0,vs1,0,0,0 */
+	.long 0x07900000	/* pmdmxvf16ger2pn a4,vs0,vs1,0,0,0 */
 	.long 0xee000c90
-	.long 0x07904000	/* pmxvf16ger2pn a4,vs0,vs1,0,0,1 */
+	.long 0x07904000	/* pmdmxvf16ger2pn a4,vs0,vs1,0,0,1 */
 	.long 0xee000c90
-	.long 0x0790000d	/* pmxvf16ger2pn a4,vs0,vs1,0,13,0 */
+	.long 0x0790000d	/* pmdmxvf16ger2pn a4,vs0,vs1,0,13,0 */
 	.long 0xee000c90
-	.long 0x0790400d	/* pmxvf16ger2pn a4,vs0,vs1,0,13,1 */
+	.long 0x0790400d	/* pmdmxvf16ger2pn a4,vs0,vs1,0,13,1 */
 	.long 0xee000c90
-	.long 0x079000b0	/* pmxvf16ger2pn a4,vs0,vs1,11,0,0 */
+	.long 0x079000b0	/* pmdmxvf16ger2pn a4,vs0,vs1,11,0,0 */
 	.long 0xee000c90
-	.long 0x079040b0	/* pmxvf16ger2pn a4,vs0,vs1,11,0,1 */
+	.long 0x079040b0	/* pmdmxvf16ger2pn a4,vs0,vs1,11,0,1 */
 	.long 0xee000c90
-	.long 0x079000bd	/* pmxvf16ger2pn a4,vs0,vs1,11,13,0 */
+	.long 0x079000bd	/* pmdmxvf16ger2pn a4,vs0,vs1,11,13,0 */
 	.long 0xee000c90
-	.long 0x079040bd	/* pmxvf16ger2pn a4,vs0,vs1,11,13,1 */
+	.long 0x079040bd	/* pmdmxvf16ger2pn a4,vs0,vs1,11,13,1 */
 	.long 0xee000c90
-	.long 0x07900000	/* pmxvf16ger2pp a4,vs0,vs1,0,0,0 */
+	.long 0x07900000	/* pmdmxvf16ger2pp a4,vs0,vs1,0,0,0 */
 	.long 0xee000890
-	.long 0x07904000	/* pmxvf16ger2pp a4,vs0,vs1,0,0,1 */
+	.long 0x07904000	/* pmdmxvf16ger2pp a4,vs0,vs1,0,0,1 */
 	.long 0xee000890
-	.long 0x0790000d	/* pmxvf16ger2pp a4,vs0,vs1,0,13,0 */
+	.long 0x0790000d	/* pmdmxvf16ger2pp a4,vs0,vs1,0,13,0 */
 	.long 0xee000890
-	.long 0x0790400d	/* pmxvf16ger2pp a4,vs0,vs1,0,13,1 */
+	.long 0x0790400d	/* pmdmxvf16ger2pp a4,vs0,vs1,0,13,1 */
 	.long 0xee000890
-	.long 0x079000b0	/* pmxvf16ger2pp a4,vs0,vs1,11,0,0 */
+	.long 0x079000b0	/* pmdmxvf16ger2pp a4,vs0,vs1,11,0,0 */
 	.long 0xee000890
-	.long 0x079040b0	/* pmxvf16ger2pp a4,vs0,vs1,11,0,1 */
+	.long 0x079040b0	/* pmdmxvf16ger2pp a4,vs0,vs1,11,0,1 */
 	.long 0xee000890
-	.long 0x079000bd	/* pmxvf16ger2pp a4,vs0,vs1,11,13,0 */
+	.long 0x079000bd	/* pmdmxvf16ger2pp a4,vs0,vs1,11,13,0 */
 	.long 0xee000890
-	.long 0x079040bd	/* pmxvf16ger2pp a4,vs0,vs1,11,13,1 */
+	.long 0x079040bd	/* pmdmxvf16ger2pp a4,vs0,vs1,11,13,1 */
 	.long 0xee000890
-	.long 0x07900000	/* pmxvf32ger a4,vs0,vs1,0,0 */
+	.long 0x07900000	/* pmdmxvf32ger a4,vs0,vs1,0,0 */
 	.long 0xee0008d8
-	.long 0x0790000d	/* pmxvf32ger a4,vs0,vs1,0,13 */
+	.long 0x0790000d	/* pmdmxvf32ger a4,vs0,vs1,0,13 */
 	.long 0xee0008d8
-	.long 0x079000b0	/* pmxvf32ger a4,vs0,vs1,11,0 */
+	.long 0x079000b0	/* pmdmxvf32ger a4,vs0,vs1,11,0 */
 	.long 0xee0008d8
-	.long 0x079000bd	/* pmxvf32ger a4,vs0,vs1,11,13 */
+	.long 0x079000bd	/* pmdmxvf32ger a4,vs0,vs1,11,13 */
 	.long 0xee0008d8
-	.long 0x07900000	/* pmxvf32gernn a4,vs0,vs1,0,0 */
+	.long 0x07900000	/* pmdmxvf32gernn a4,vs0,vs1,0,0 */
 	.long 0xee000ed0
-	.long 0x0790000d	/* pmxvf32gernn a4,vs0,vs1,0,13 */
+	.long 0x0790000d	/* pmdmxvf32gernn a4,vs0,vs1,0,13 */
 	.long 0xee000ed0
-	.long 0x079000b0	/* pmxvf32gernn a4,vs0,vs1,11,0 */
+	.long 0x079000b0	/* pmdmxvf32gernn a4,vs0,vs1,11,0 */
 	.long 0xee000ed0
-	.long 0x079000bd	/* pmxvf32gernn a4,vs0,vs1,11,13 */
+	.long 0x079000bd	/* pmdmxvf32gernn a4,vs0,vs1,11,13 */
 	.long 0xee000ed0
-	.long 0x07900000	/* pmxvf32gernp a4,vs0,vs1,0,0 */
+	.long 0x07900000	/* pmdmxvf32gernp a4,vs0,vs1,0,0 */
 	.long 0xee000ad0
-	.long 0x0790000d	/* pmxvf32gernp a4,vs0,vs1,0,13 */
+	.long 0x0790000d	/* pmdmxvf32gernp a4,vs0,vs1,0,13 */
 	.long 0xee000ad0
-	.long 0x079000b0	/* pmxvf32gernp a4,vs0,vs1,11,0 */
+	.long 0x079000b0	/* pmdmxvf32gernp a4,vs0,vs1,11,0 */
 	.long 0xee000ad0
-	.long 0x079000bd	/* pmxvf32gernp a4,vs0,vs1,11,13 */
+	.long 0x079000bd	/* pmdmxvf32gernp a4,vs0,vs1,11,13 */
 	.long 0xee000ad0
-	.long 0x07900000	/* pmxvf32gerpn a4,vs0,vs1,0,0 */
+	.long 0x07900000	/* pmdmxvf32gerpn a4,vs0,vs1,0,0 */
 	.long 0xee000cd0
-	.long 0x0790000d	/* pmxvf32gerpn a4,vs0,vs1,0,13 */
+	.long 0x0790000d	/* pmdmxvf32gerpn a4,vs0,vs1,0,13 */
 	.long 0xee000cd0
-	.long 0x079000b0	/* pmxvf32gerpn a4,vs0,vs1,11,0 */
+	.long 0x079000b0	/* pmdmxvf32gerpn a4,vs0,vs1,11,0 */
 	.long 0xee000cd0
-	.long 0x079000bd	/* pmxvf32gerpn a4,vs0,vs1,11,13 */
+	.long 0x079000bd	/* pmdmxvf32gerpn a4,vs0,vs1,11,13 */
 	.long 0xee000cd0
-	.long 0x07900000	/* pmxvf32gerpp a4,vs0,vs1,0,0 */
+	.long 0x07900000	/* pmdmxvf32gerpp a4,vs0,vs1,0,0 */
 	.long 0xee0008d0
-	.long 0x0790000d	/* pmxvf32gerpp a4,vs0,vs1,0,13 */
+	.long 0x0790000d	/* pmdmxvf32gerpp a4,vs0,vs1,0,13 */
 	.long 0xee0008d0
-	.long 0x079000b0	/* pmxvf32gerpp a4,vs0,vs1,11,0 */
+	.long 0x079000b0	/* pmdmxvf32gerpp a4,vs0,vs1,11,0 */
 	.long 0xee0008d0
-	.long 0x079000bd	/* pmxvf32gerpp a4,vs0,vs1,11,13 */
+	.long 0x079000bd	/* pmdmxvf32gerpp a4,vs0,vs1,11,13 */
 	.long 0xee0008d0
-	.long 0x07900000	/* pmxvf64ger a4,vs22,vs0,0,0 */
+	.long 0x07900000	/* pmdmxvf64ger a4,vs22,vs0,0,0 */
 	.long 0xee1601d8
-	.long 0x07900004	/* pmxvf64ger a4,vs22,vs0,0,1 */
+	.long 0x07900004	/* pmdmxvf64ger a4,vs22,vs0,0,1 */
 	.long 0xee1601d8
-	.long 0x079000b0	/* pmxvf64ger a4,vs22,vs0,11,0 */
+	.long 0x079000b0	/* pmdmxvf64ger a4,vs22,vs0,11,0 */
 	.long 0xee1601d8
-	.long 0x079000b4	/* pmxvf64ger a4,vs22,vs0,11,1 */
+	.long 0x079000b4	/* pmdmxvf64ger a4,vs22,vs0,11,1 */
 	.long 0xee1601d8
-	.long 0x07900000	/* pmxvf64gernn a4,vs22,vs0,0,0 */
+	.long 0x07900000	/* pmdmxvf64gernn a4,vs22,vs0,0,0 */
 	.long 0xee1607d0
-	.long 0x07900004	/* pmxvf64gernn a4,vs22,vs0,0,1 */
+	.long 0x07900004	/* pmdmxvf64gernn a4,vs22,vs0,0,1 */
 	.long 0xee1607d0
-	.long 0x079000b0	/* pmxvf64gernn a4,vs22,vs0,11,0 */
+	.long 0x079000b0	/* pmdmxvf64gernn a4,vs22,vs0,11,0 */
 	.long 0xee1607d0
-	.long 0x079000b4	/* pmxvf64gernn a4,vs22,vs0,11,1 */
+	.long 0x079000b4	/* pmdmxvf64gernn a4,vs22,vs0,11,1 */
 	.long 0xee1607d0
-	.long 0x07900000	/* pmxvf64gernp a4,vs22,vs0,0,0 */
+	.long 0x07900000	/* pmdmxvf64gernp a4,vs22,vs0,0,0 */
 	.long 0xee1603d0
-	.long 0x07900004	/* pmxvf64gernp a4,vs22,vs0,0,1 */
+	.long 0x07900004	/* pmdmxvf64gernp a4,vs22,vs0,0,1 */
 	.long 0xee1603d0
-	.long 0x079000b0	/* pmxvf64gernp a4,vs22,vs0,11,0 */
+	.long 0x079000b0	/* pmdmxvf64gernp a4,vs22,vs0,11,0 */
 	.long 0xee1603d0
-	.long 0x079000b4	/* pmxvf64gernp a4,vs22,vs0,11,1 */
+	.long 0x079000b4	/* pmdmxvf64gernp a4,vs22,vs0,11,1 */
 	.long 0xee1603d0
-	.long 0x07900000	/* pmxvf64gerpn a4,vs22,vs0,0,0 */
+	.long 0x07900000	/* pmdmxvf64gerpn a4,vs22,vs0,0,0 */
 	.long 0xee1605d0
-	.long 0x07900004	/* pmxvf64gerpn a4,vs22,vs0,0,1 */
+	.long 0x07900004	/* pmdmxvf64gerpn a4,vs22,vs0,0,1 */
 	.long 0xee1605d0
-	.long 0x079000b0	/* pmxvf64gerpn a4,vs22,vs0,11,0 */
+	.long 0x079000b0	/* pmdmxvf64gerpn a4,vs22,vs0,11,0 */
 	.long 0xee1605d0
-	.long 0x079000b4	/* pmxvf64gerpn a4,vs22,vs0,11,1 */
+	.long 0x079000b4	/* pmdmxvf64gerpn a4,vs22,vs0,11,1 */
 	.long 0xee1605d0
-	.long 0x07900000	/* pmxvf64gerpp a4,vs22,vs0,0,0 */
+	.long 0x07900000	/* pmdmxvf64gerpp a4,vs22,vs0,0,0 */
 	.long 0xee1601d0
-	.long 0x07900004	/* pmxvf64gerpp a4,vs22,vs0,0,1 */
+	.long 0x07900004	/* pmdmxvf64gerpp a4,vs22,vs0,0,1 */
 	.long 0xee1601d0
-	.long 0x079000b0	/* pmxvf64gerpp a4,vs22,vs0,11,0 */
+	.long 0x079000b0	/* pmdmxvf64gerpp a4,vs22,vs0,11,0 */
 	.long 0xee1601d0
-	.long 0x079000b4	/* pmxvf64gerpp a4,vs22,vs0,11,1 */
+	.long 0x079000b4	/* pmdmxvf64gerpp a4,vs22,vs0,11,1 */
 	.long 0xee1601d0
-	.long 0x07900000	/* pmxvi16ger2 a4,vs0,vs1,0,0,0 */
+	.long 0x07900000	/* pmdmxvi16ger2 a4,vs0,vs1,0,0,0 */
 	.long 0xee000a58
-	.long 0x07904000	/* pmxvi16ger2 a4,vs0,vs1,0,0,1 */
+	.long 0x07904000	/* pmdmxvi16ger2 a4,vs0,vs1,0,0,1 */
 	.long 0xee000a58
-	.long 0x0790000d	/* pmxvi16ger2 a4,vs0,vs1,0,13,0 */
+	.long 0x0790000d	/* pmdmxvi16ger2 a4,vs0,vs1,0,13,0 */
 	.long 0xee000a58
-	.long 0x0790400d	/* pmxvi16ger2 a4,vs0,vs1,0,13,1 */
+	.long 0x0790400d	/* pmdmxvi16ger2 a4,vs0,vs1,0,13,1 */
 	.long 0xee000a58
-	.long 0x079000b0	/* pmxvi16ger2 a4,vs0,vs1,11,0,0 */
+	.long 0x079000b0	/* pmdmxvi16ger2 a4,vs0,vs1,11,0,0 */
 	.long 0xee000a58
-	.long 0x079040b0	/* pmxvi16ger2 a4,vs0,vs1,11,0,1 */
+	.long 0x079040b0	/* pmdmxvi16ger2 a4,vs0,vs1,11,0,1 */
 	.long 0xee000a58
-	.long 0x079000bd	/* pmxvi16ger2 a4,vs0,vs1,11,13,0 */
+	.long 0x079000bd	/* pmdmxvi16ger2 a4,vs0,vs1,11,13,0 */
 	.long 0xee000a58
-	.long 0x079040bd	/* pmxvi16ger2 a4,vs0,vs1,11,13,1 */
+	.long 0x079040bd	/* pmdmxvi16ger2 a4,vs0,vs1,11,13,1 */
 	.long 0xee000a58
-	.long 0x07900000	/* pmxvi16ger2pp a4,vs0,vs1,0,0,0 */
+	.long 0x07900000	/* pmdmxvi16ger2pp a4,vs0,vs1,0,0,0 */
 	.long 0xee000b58
-	.long 0x07904000	/* pmxvi16ger2pp a4,vs0,vs1,0,0,1 */
+	.long 0x07904000	/* pmdmxvi16ger2pp a4,vs0,vs1,0,0,1 */
 	.long 0xee000b58
-	.long 0x0790000d	/* pmxvi16ger2pp a4,vs0,vs1,0,13,0 */
+	.long 0x0790000d	/* pmdmxvi16ger2pp a4,vs0,vs1,0,13,0 */
 	.long 0xee000b58
-	.long 0x0790400d	/* pmxvi16ger2pp a4,vs0,vs1,0,13,1 */
+	.long 0x0790400d	/* pmdmxvi16ger2pp a4,vs0,vs1,0,13,1 */
 	.long 0xee000b58
-	.long 0x079000b0	/* pmxvi16ger2pp a4,vs0,vs1,11,0,0 */
+	.long 0x079000b0	/* pmdmxvi16ger2pp a4,vs0,vs1,11,0,0 */
 	.long 0xee000b58
-	.long 0x079040b0	/* pmxvi16ger2pp a4,vs0,vs1,11,0,1 */
+	.long 0x079040b0	/* pmdmxvi16ger2pp a4,vs0,vs1,11,0,1 */
 	.long 0xee000b58
-	.long 0x079000bd	/* pmxvi16ger2pp a4,vs0,vs1,11,13,0 */
+	.long 0x079000bd	/* pmdmxvi16ger2pp a4,vs0,vs1,11,13,0 */
 	.long 0xee000b58
-	.long 0x079040bd	/* pmxvi16ger2pp a4,vs0,vs1,11,13,1 */
+	.long 0x079040bd	/* pmdmxvi16ger2pp a4,vs0,vs1,11,13,1 */
 	.long 0xee000b58
-	.long 0x07900000	/* pmxvi16ger2s a4,vs0,vs1,0,0,0 */
+	.long 0x07900000	/* pmdmxvi16ger2s a4,vs0,vs1,0,0,0 */
 	.long 0xee000958
-	.long 0x07904000	/* pmxvi16ger2s a4,vs0,vs1,0,0,1 */
+	.long 0x07904000	/* pmdmxvi16ger2s a4,vs0,vs1,0,0,1 */
 	.long 0xee000958
-	.long 0x0790000d	/* pmxvi16ger2s a4,vs0,vs1,0,13,0 */
+	.long 0x0790000d	/* pmdmxvi16ger2s a4,vs0,vs1,0,13,0 */
 	.long 0xee000958
-	.long 0x0790400d	/* pmxvi16ger2s a4,vs0,vs1,0,13,1 */
+	.long 0x0790400d	/* pmdmxvi16ger2s a4,vs0,vs1,0,13,1 */
 	.long 0xee000958
-	.long 0x079000b0	/* pmxvi16ger2s a4,vs0,vs1,11,0,0 */
+	.long 0x079000b0	/* pmdmxvi16ger2s a4,vs0,vs1,11,0,0 */
 	.long 0xee000958
-	.long 0x079040b0	/* pmxvi16ger2s a4,vs0,vs1,11,0,1 */
+	.long 0x079040b0	/* pmdmxvi16ger2s a4,vs0,vs1,11,0,1 */
 	.long 0xee000958
-	.long 0x079000bd	/* pmxvi16ger2s a4,vs0,vs1,11,13,0 */
+	.long 0x079000bd	/* pmdmxvi16ger2s a4,vs0,vs1,11,13,0 */
 	.long 0xee000958
-	.long 0x079040bd	/* pmxvi16ger2s a4,vs0,vs1,11,13,1 */
+	.long 0x079040bd	/* pmdmxvi16ger2s a4,vs0,vs1,11,13,1 */
 	.long 0xee000958
-	.long 0x07900000	/* pmxvi16ger2spp a4,vs0,vs1,0,0,0 */
+	.long 0x07900000	/* pmdmxvi16ger2spp a4,vs0,vs1,0,0,0 */
 	.long 0xee000950
-	.long 0x07904000	/* pmxvi16ger2spp a4,vs0,vs1,0,0,1 */
+	.long 0x07904000	/* pmdmxvi16ger2spp a4,vs0,vs1,0,0,1 */
 	.long 0xee000950
-	.long 0x0790000d	/* pmxvi16ger2spp a4,vs0,vs1,0,13,0 */
+	.long 0x0790000d	/* pmdmxvi16ger2spp a4,vs0,vs1,0,13,0 */
 	.long 0xee000950
-	.long 0x0790400d	/* pmxvi16ger2spp a4,vs0,vs1,0,13,1 */
+	.long 0x0790400d	/* pmdmxvi16ger2spp a4,vs0,vs1,0,13,1 */
 	.long 0xee000950
-	.long 0x079000b0	/* pmxvi16ger2spp a4,vs0,vs1,11,0,0 */
+	.long 0x079000b0	/* pmdmxvi16ger2spp a4,vs0,vs1,11,0,0 */
 	.long 0xee000950
-	.long 0x079040b0	/* pmxvi16ger2spp a4,vs0,vs1,11,0,1 */
+	.long 0x079040b0	/* pmdmxvi16ger2spp a4,vs0,vs1,11,0,1 */
 	.long 0xee000950
-	.long 0x079000bd	/* pmxvi16ger2spp a4,vs0,vs1,11,13,0 */
+	.long 0x079000bd	/* pmdmxvi16ger2spp a4,vs0,vs1,11,13,0 */
 	.long 0xee000950
-	.long 0x079040bd	/* pmxvi16ger2spp a4,vs0,vs1,11,13,1 */
+	.long 0x079040bd	/* pmdmxvi16ger2spp a4,vs0,vs1,11,13,1 */
 	.long 0xee000950
-	.long 0x07900000	/* pmxvi4ger8 a4,vs0,vs1,0,0,0 */
+	.long 0x07900000	/* pmdmxvi4ger8 a4,vs0,vs1,0,0,0 */
 	.long 0xee000918
-	.long 0x07902d00	/* pmxvi4ger8 a4,vs0,vs1,0,0,45 */
+	.long 0x07902d00	/* pmdmxvi4ger8 a4,vs0,vs1,0,0,45 */
 	.long 0xee000918
-	.long 0x07900001	/* pmxvi4ger8 a4,vs0,vs1,0,1,0 */
+	.long 0x07900001	/* pmdmxvi4ger8 a4,vs0,vs1,0,1,0 */
 	.long 0xee000918
-	.long 0x07902d01	/* pmxvi4ger8 a4,vs0,vs1,0,1,45 */
+	.long 0x07902d01	/* pmdmxvi4ger8 a4,vs0,vs1,0,1,45 */
 	.long 0xee000918
-	.long 0x079000b0	/* pmxvi4ger8 a4,vs0,vs1,11,0,0 */
+	.long 0x079000b0	/* pmdmxvi4ger8 a4,vs0,vs1,11,0,0 */
 	.long 0xee000918
-	.long 0x07902db0	/* pmxvi4ger8 a4,vs0,vs1,11,0,45 */
+	.long 0x07902db0	/* pmdmxvi4ger8 a4,vs0,vs1,11,0,45 */
 	.long 0xee000918
-	.long 0x079000b1	/* pmxvi4ger8 a4,vs0,vs1,11,1,0 */
+	.long 0x079000b1	/* pmdmxvi4ger8 a4,vs0,vs1,11,1,0 */
 	.long 0xee000918
-	.long 0x07902db1	/* pmxvi4ger8 a4,vs0,vs1,11,1,45 */
+	.long 0x07902db1	/* pmdmxvi4ger8 a4,vs0,vs1,11,1,45 */
 	.long 0xee000918
-	.long 0x07900000	/* pmxvi4ger8pp a4,vs0,vs1,0,0,0 */
+	.long 0x07900000	/* pmdmxvi4ger8pp a4,vs0,vs1,0,0,0 */
 	.long 0xee000910
-	.long 0x07902d00	/* pmxvi4ger8pp a4,vs0,vs1,0,0,45 */
+	.long 0x07902d00	/* pmdmxvi4ger8pp a4,vs0,vs1,0,0,45 */
 	.long 0xee000910
-	.long 0x07900001	/* pmxvi4ger8pp a4,vs0,vs1,0,1,0 */
+	.long 0x07900001	/* pmdmxvi4ger8pp a4,vs0,vs1,0,1,0 */
 	.long 0xee000910
-	.long 0x07902d01	/* pmxvi4ger8pp a4,vs0,vs1,0,1,45 */
+	.long 0x07902d01	/* pmdmxvi4ger8pp a4,vs0,vs1,0,1,45 */
 	.long 0xee000910
-	.long 0x079000b0	/* pmxvi4ger8pp a4,vs0,vs1,11,0,0 */
+	.long 0x079000b0	/* pmdmxvi4ger8pp a4,vs0,vs1,11,0,0 */
 	.long 0xee000910
-	.long 0x07902db0	/* pmxvi4ger8pp a4,vs0,vs1,11,0,45 */
+	.long 0x07902db0	/* pmdmxvi4ger8pp a4,vs0,vs1,11,0,45 */
 	.long 0xee000910
-	.long 0x079000b1	/* pmxvi4ger8pp a4,vs0,vs1,11,1,0 */
+	.long 0x079000b1	/* pmdmxvi4ger8pp a4,vs0,vs1,11,1,0 */
 	.long 0xee000910
-	.long 0x07902db1	/* pmxvi4ger8pp a4,vs0,vs1,11,1,45 */
+	.long 0x07902db1	/* pmdmxvi4ger8pp a4,vs0,vs1,11,1,45 */
 	.long 0xee000910
-	.long 0x07900000	/* pmxvi8ger4 a4,vs0,vs1,0,0,0 */
+	.long 0x07900000	/* pmdmxvi8ger4 a4,vs0,vs1,0,0,0 */
 	.long 0xee000818
-	.long 0x07905000	/* pmxvi8ger4 a4,vs0,vs1,0,0,5 */
+	.long 0x07905000	/* pmdmxvi8ger4 a4,vs0,vs1,0,0,5 */
 	.long 0xee000818
-	.long 0x0790000d	/* pmxvi8ger4 a4,vs0,vs1,0,13,0 */
+	.long 0x0790000d	/* pmdmxvi8ger4 a4,vs0,vs1,0,13,0 */
 	.long 0xee000818
-	.long 0x0790500d	/* pmxvi8ger4 a4,vs0,vs1,0,13,5 */
+	.long 0x0790500d	/* pmdmxvi8ger4 a4,vs0,vs1,0,13,5 */
 	.long 0xee000818
-	.long 0x079000b0	/* pmxvi8ger4 a4,vs0,vs1,11,0,0 */
+	.long 0x079000b0	/* pmdmxvi8ger4 a4,vs0,vs1,11,0,0 */
 	.long 0xee000818
-	.long 0x079050b0	/* pmxvi8ger4 a4,vs0,vs1,11,0,5 */
+	.long 0x079050b0	/* pmdmxvi8ger4 a4,vs0,vs1,11,0,5 */
 	.long 0xee000818
-	.long 0x079000bd	/* pmxvi8ger4 a4,vs0,vs1,11,13,0 */
+	.long 0x079000bd	/* pmdmxvi8ger4 a4,vs0,vs1,11,13,0 */
 	.long 0xee000818
-	.long 0x079050bd	/* pmxvi8ger4 a4,vs0,vs1,11,13,5 */
+	.long 0x079050bd	/* pmdmxvi8ger4 a4,vs0,vs1,11,13,5 */
 	.long 0xee000818
-	.long 0x07900000	/* pmxvi8ger4pp a4,vs0,vs1,0,0,0 */
+	.long 0x07900000	/* pmdmxvi8ger4pp a4,vs0,vs1,0,0,0 */
 	.long 0xee000810
-	.long 0x07905000	/* pmxvi8ger4pp a4,vs0,vs1,0,0,5 */
+	.long 0x07905000	/* pmdmxvi8ger4pp a4,vs0,vs1,0,0,5 */
 	.long 0xee000810
-	.long 0x0790000d	/* pmxvi8ger4pp a4,vs0,vs1,0,13,0 */
+	.long 0x0790000d	/* pmdmxvi8ger4pp a4,vs0,vs1,0,13,0 */
 	.long 0xee000810
-	.long 0x0790500d	/* pmxvi8ger4pp a4,vs0,vs1,0,13,5 */
+	.long 0x0790500d	/* pmdmxvi8ger4pp a4,vs0,vs1,0,13,5 */
 	.long 0xee000810
-	.long 0x079000b0	/* pmxvi8ger4pp a4,vs0,vs1,11,0,0 */
+	.long 0x079000b0	/* pmdmxvi8ger4pp a4,vs0,vs1,11,0,0 */
 	.long 0xee000810
-	.long 0x079050b0	/* pmxvi8ger4pp a4,vs0,vs1,11,0,5 */
+	.long 0x079050b0	/* pmdmxvi8ger4pp a4,vs0,vs1,11,0,5 */
 	.long 0xee000810
-	.long 0x079000bd	/* pmxvi8ger4pp a4,vs0,vs1,11,13,0 */
+	.long 0x079000bd	/* pmdmxvi8ger4pp a4,vs0,vs1,11,13,0 */
 	.long 0xee000810
-	.long 0x079050bd	/* pmxvi8ger4pp a4,vs0,vs1,11,13,5 */
+	.long 0x079050bd	/* pmdmxvi8ger4pp a4,vs0,vs1,11,13,5 */
 	.long 0xee000810
-	.long 0x07900000	/* pmxvi8ger4spp a4,vs0,vs1,0,0,0 */
+	.long 0x07900000	/* pmdmxvi8ger4spp a4,vs0,vs1,0,0,0 */
 	.long 0xee000b18
-	.long 0x07905000	/* pmxvi8ger4spp a4,vs0,vs1,0,0,5 */
+	.long 0x07905000	/* pmdmxvi8ger4spp a4,vs0,vs1,0,0,5 */
 	.long 0xee000b18
-	.long 0x0790000d	/* pmxvi8ger4spp a4,vs0,vs1,0,13,0 */
+	.long 0x0790000d	/* pmdmxvi8ger4spp a4,vs0,vs1,0,13,0 */
 	.long 0xee000b18
-	.long 0x0790500d	/* pmxvi8ger4spp a4,vs0,vs1,0,13,5 */
+	.long 0x0790500d	/* pmdmxvi8ger4spp a4,vs0,vs1,0,13,5 */
 	.long 0xee000b18
-	.long 0x079000b0	/* pmxvi8ger4spp a4,vs0,vs1,11,0,0 */
+	.long 0x079000b0	/* pmdmxvi8ger4spp a4,vs0,vs1,11,0,0 */
 	.long 0xee000b18
-	.long 0x079050b0	/* pmxvi8ger4spp a4,vs0,vs1,11,0,5 */
+	.long 0x079050b0	/* pmdmxvi8ger4spp a4,vs0,vs1,11,0,5 */
 	.long 0xee000b18
-	.long 0x079000bd	/* pmxvi8ger4spp a4,vs0,vs1,11,13,0 */
+	.long 0x079000bd	/* pmdmxvi8ger4spp a4,vs0,vs1,11,13,0 */
 	.long 0xee000b18
-	.long 0x079050bd	/* pmxvi8ger4spp a4,vs0,vs1,11,13,5 */
+	.long 0x079050bd	/* pmdmxvi8ger4spp a4,vs0,vs1,11,13,5 */
 	.long 0xee000b18
 	.long 0x06000000	/* pstb r0,0(r1) */
 	.long 0x98010000
-- 
2.37.2




* [PATCH 2/2] PowerPC: update comments for the MMA instruction name changes.
       [not found] <20111d66a467599d893bf85bbdf2e82b76377127.camel@us.ibm.com>
  2022-11-03 17:30 ` [PATCH 0/2] PowerPC: MMA+ Outer-Product Instruction name change Carl Love
  2022-11-03 17:30 ` [PATCH 1/2] PowerPC: fix for the gdb.arch/powerpc-power10.exp test Carl Love
@ 2022-11-03 17:30 ` Carl Love
  2022-11-04 13:04   ` Ulrich Weigand
  2 siblings, 1 reply; 8+ messages in thread
From: Carl Love @ 2022-11-03 17:30 UTC
  To: gdb-patches; +Cc: cel, Ulrich Weigand, Will Schmidt, Peter Bergner

GDB maintainers:

This patch updates the instruction names in the comments of multiple
files to use the new mnemonics.  The mnemonics for the various MMA
instructions were changed by commit:

  commit bb98553cad4e017f1851153fa5de91f2cee98fb2
  Author: Peter Bergner <bergner@linux.ibm.com>  
  Date:   Sat Oct 8 16:19:51 2022 -0500    

    PowerPC: Add support for RFC02658 - MMA+ Outer-Product Instructions

This patch only changes the comments in the files.  There are no
functional changes. 

The patch has been tested as part of the patch set with no regression
errors.

                  Carl Love
--------------------------
PowerPC update comments for the MMA instruction name changes.

The mnemonics for the pmxvf16ger*, pmxvf32ger*, pmxvf64ger*, pmxvi4ger8*,
pmxvi8ger4*, and pmxvi16ger2* instructions were officially changed to
pmdmxvf16ger*, pmdmxvf32ger*, pmdmxvf64ger*, pmdmxvi4ger8*, pmdmxvi8ger4*,
and pmdmxvi16ger2*, respectively.  The old mnemonics are still supported by the
assembler as extended mnemonics.  The disassembler generates the new
mnemonics.  The name changes occurred in commit:

  commit bb98553cad4e017f1851153fa5de91f2cee98fb2
  Author: Peter Bergner <bergner@linux.ibm.com>
  Date:   Sat Oct 8 16:19:51 2022 -0500

    PowerPC: Add support for RFC02658 - MMA+ Outer-Product Instructions

    gas/
            * config/tc-ppc.c (md_assemble): Only check for prefix opcodes.
            * testsuite/gas/ppc/rfc02658.s: New test.
            * testsuite/gas/ppc/rfc02658.d: Likewise.
            * testsuite/gas/ppc/ppc.exp: Run it.

    opcodes/
            * ppc-opc.c (XMSK8, P_GERX4_MASK, P_GERX2_MASK, XX3GERX_MASK): New.
            (powerpc_opcodes): Add dmxvi8gerx4pp, dmxvi8gerx4, dmxvf16gerx2pp,
            dmxvf16gerx2, dmxvbf16gerx2pp, dmxvf16gerx2np, dmxvbf16gerx2,
            dmxvi8gerx4spp, dmxvbf16gerx2np, dmxvf16gerx2pn, dmxvbf16gerx2pn,
            dmxvf16gerx2nn, dmxvbf16gerx2nn, pmdmxvi8gerx4pp, pmdmxvi8gerx4,
            pmdmxvf16gerx2pp, pmdmxvf16gerx2, pmdmxvbf16gerx2pp, pmdmxvf16gerx2np,
            pmdmxvbf16gerx2, pmdmxvi8gerx4spp, pmdmxvbf16gerx2np, pmdmxvf16gerx2pn,
            pmdmxvbf16gerx2pn, pmdmxvf16gerx2nn, pmdmxvbf16gerx2nn.

This patch updates the comments in the various gdb files to reflect the
name changes.  There are no functional changes made by this patch.
---
 gdb/rs6000-tdep.c                             | 73 +++++++++++--------
 .../gdb.reverse/ppc_record_test_isa_3_1.c     | 15 +++-
 .../gdb.reverse/ppc_record_test_isa_3_1.exp   |  4 +-
 3 files changed, 56 insertions(+), 36 deletions(-)

diff --git a/gdb/rs6000-tdep.c b/gdb/rs6000-tdep.c
index 51b41967b41..cbd84514795 100644
--- a/gdb/rs6000-tdep.c
+++ b/gdb/rs6000-tdep.c
@@ -5535,6 +5535,10 @@ ppc_process_record_op59 (struct gdbarch *gdbarch, struct regcache *regcache,
   int ext = PPC_EXTOP (insn);
   int at = PPC_FIELD (insn, 6, 3);
 
+  /* Note the mnemonics for the pmxvf64ger* instructions were officially
+     changed to pmdmxvf64ger*.  The old mnemonics are still supported as
+     extended mnemonics.  */
+
   switch (ext & 0x1f)
     {
     case 18:		/* Floating Divide */
@@ -5603,7 +5607,8 @@ ppc_process_record_op59 (struct gdbarch *gdbarch, struct regcache *regcache,
     case 218:	/* VSX Vector 32-bit Floating-Point GER Negative multiply,
 		   Negative accumulate, xvf32gernn */
 
-    case 59:	/* VSX Vector 64-bit Floating-Point GER, pmxvf64ger */
+    case 59:	/* VSX Vector 64-bit Floating-Point GER, pmdmxvf64ger
+		   (pmxvf64ger)  */
     case 58:	/* VSX Vector 64-bit Floating-Point GER Positive multiply,
 		   Positive accumulate, xvf64gerpp */
     case 186:	/* VSX Vector 64-bit Floating-Point GER Positive multiply,
@@ -5611,7 +5616,7 @@ ppc_process_record_op59 (struct gdbarch *gdbarch, struct regcache *regcache,
     case 122:	/* VSX Vector 64-bit Floating-Point GER Negative multiply,
 		   Positive accumulate, xvf64gernp */
     case 250:	/* VSX Vector 64-bit Floating-Point GER Negative multiply,
-		   Negative accumulate, pmxvf64gernn */
+		   Negative accumulate, pmdmxvf64gernn (pmxvf64gernn)  */
 
     case 51:	/* VSX Vector bfloat16 GER, xvbf16ger2 */
     case 50:	/* VSX Vector bfloat16 GER Positive multiply,
@@ -6486,98 +6491,106 @@ ppc_process_record_prefix_op59_XX3 (struct gdbarch *gdbarch,
   int at = PPC_FIELD (insn_suffix, 6, 3);
   ppc_gdbarch_tdep *tdep = gdbarch_tdep<ppc_gdbarch_tdep> (gdbarch);
 
+  /* Note, the mnemonics for the pmxvf16ger*, pmxvf32ger*, pmxvf64ger*,
+     pmxvi4ger8*, pmxvi8ger4*, and pmxvi16ger2* instructions were officially
+     changed to pmdmxvf16ger*, pmdmxvf32ger*, pmdmxvf64ger*, pmdmxvi4ger8*,
+     pmdmxvi8ger4*, and pmdmxvi16ger2* respectively.  The old mnemonics are
+     still supported by the assembler as extended mnemonics.  The
+     disassembler generates the new mnemonics.  */
   if (type == 3)
     {
       if (ST4 == 9)
 	switch (opcode)
 	  {
 	  case 35:	/* Prefixed Masked VSX Vector 4-bit Signed Integer GER
-			   MMIRR, pmxvi4ger8 */
+			   MMIRR, pmdmxvi4ger8 (pmxvi4ger8) */
 	  case 34:	/* Prefixed Masked VSX Vector 4-bit Signed Integer GER
-			   MMIRR, pmxvi4ger8pp */
+			   MMIRR, pmdmxvi4ger8pp (pmxvi4ger8pp) */
 
 	  case 99:	/* Prefixed Masked VSX Vector 8-bit Signed/Unsigned
 			   Integer GER with Saturate Positive multiply,
 			   Positive accumulate, xvi8ger4spp */
 
 	  case 3:	/* Prefixed Masked VSX Vector 8-bit Signed/Unsigned
-			   Integer GER MMIRR, pmxvi8ger4 */
+			   Integer GER MMIRR, pmdmxvi8ger4 (pmxvi8ger4)  */
 	  case 2:	/* Prefixed Masked VSX Vector 8-bit Signed/Unsigned
 			   Integer GER Positive multiply, Positive accumulate
-			   MMIRR, pmxvi8ger4pp */
+			   MMIRR, pmdmxvi8ger4pp (pmxvi8ger4pp)  */
 
 	  case 75:	/* Prefixed Masked VSX Vector 16-bit Signed Integer
-			   GER MMIRR, pmxvi16ger2 */
+			   GER MMIRR, pmdmxvi16ger2 (pmxvi16ger2)  */
 	  case 107:	/* Prefixed Masked VSX Vector 16-bit Signed Integer
 			   GER  Positive multiply, Positive accumulate,
-			   pmxvi16ger2pp */
+			   pmdmxvi16ger2pp (pmxvi16ger2pp)  */
 
 	  case 43:	/* Prefixed Masked VSX Vector 16-bit Signed Integer
-			   GER with Saturation MMIRR, pmxvi16ger2s */
+			   GER with Saturation MMIRR, pmdmxvi16ger2s
+			   (pmxvi16ger2s)  */
 	  case 42:	/* Prefixed Masked VSX Vector 16-bit Signed Integer
 			   GER with Saturation Positive multiply, Positive
-			   accumulate MMIRR, pmxvi16ger2spp */
+			   accumulate MMIRR, pmdmxvi16ger2spp (pmxvi16ger2spp)
+			*/
 	    ppc_record_ACC_fpscr (regcache, tdep, at, false);
 	    return 0;
 
 	  case 19:	/* Prefixed Masked VSX Vector 16-bit Floating-Point
-			   GER MMIRR, pmxvf16ger2 */
+			   GER MMIRR, pmdmxvf16ger2 (pmxvf16ger2)  */
 	  case 18:	/* Prefixed Masked VSX Vector 16-bit Floating-Point
 			   GER Positive multiply, Positive accumulate MMIRR,
-			   pmxvf16ger2pp */
+			   pmdmxvf16ger2pp (pmxvf16ger2pp)  */
 	  case 146:	/* Prefixed Masked VSX Vector 16-bit Floating-Point
 			   GER Positive multiply, Negative accumulate MMIRR,
-			   pmxvf16ger2pn */
+			   pmdmxvf16ger2pn (pmxvf16ger2pn)  */
 	  case 82:	/* Prefixed Masked VSX Vector 16-bit Floating-Point
 			   GER Negative multiply, Positive accumulate MMIRR,
-			   pmxvf16ger2np */
+			   pmdmxvf16ger2np (pmxvf16ger2np)  */
 	  case 210:	/* Prefixed Masked VSX Vector 16-bit Floating-Point
 			   GER Negative multiply, Negative accumulate MMIRR,
-			   pmxvf16ger2nn */
+			   pmdmxvf16ger2nn (pmxvf16ger2nn)  */
 
 	  case 27:	/* Prefixed Masked VSX Vector 32-bit Floating-Point
-			   GER MMIRR, pmxvf32ger */
+			   GER MMIRR, pmdmxvf32ger (pmxvf32ger)  */
 	  case 26:	/* Prefixed Masked VSX Vector 32-bit Floating-Point
 			   GER Positive multiply, Positive accumulate MMIRR,
-			   pmxvf32gerpp */
+			   pmdmxvf32gerpp (pmxvf32gerpp)  */
 	  case 154:	/* Prefixed Masked VSX Vector 32-bit Floating-Point
 			   GER Positive multiply, Negative accumulate MMIRR,
-			   pmxvf32gerpn */
+			   pmdmxvf32gerpn (pmxvf32gerpn)  */
 	  case 90:	/* Prefixed Masked VSX Vector 32-bit Floating-Point
 			   GER Negative multiply, Positive accumulate MMIRR,
-			   pmxvf32gernp */
+			   pmdmxvf32gernp (pmxvf32gernp)  */
 	  case 218:	/* Prefixed Masked VSX Vector 32-bit Floating-Point
 			   GER Negative multiply, Negative accumulate MMIRR,
-			   pmxvf32gernn */
+			   pmdmxvf32gernn (pmxvf32gernn)  */
 
 	  case 59:	/* Prefixed Masked VSX Vector 64-bit Floating-Point
-			   GER MMIRR, pmxvf64ger */
+			   GER MMIRR, pmdmxvf64ger (pmxvf64ger)  */
 	  case 58:	/* Floating-Point GER Positive multiply, Positive
-			   accumulate MMIRR, pmxvf64gerpp */
+			   accumulate MMIRR, pmdmxvf64gerpp (pmxvf64gerpp)  */
 	  case 186:	/* Prefixed Masked VSX Vector 64-bit Floating-Point
 			   GER Positive multiply, Negative accumulate MMIRR,
-			   pmxvf64gerpn */
+			   pmdmxvf64gerpn (pmxvf64gerpn)  */
 	  case 122:	/* Prefixed Masked VSX Vector 64-bit Floating-Point
 			   GER Negative multiply, Positive accumulate MMIRR,
-			   pmxvf64gernp */
+			   pmdmxvf64gernp (pmxvf64gernp)  */
 	  case 250:	/* Prefixed Masked VSX Vector 64-bit Floating-Point
 			   GER Negative multiply, Negative accumulate MMIRR,
-			   pmxvf64gernn */
+			   pmdmxvf64gernn (pmxvf64gernn)  */
 
 	  case 51:	/* Prefixed Masked VSX Vector bfloat16 GER MMIRR,
-			   pmxvbf16ger2 */
+			   pmdmxvbf16ger2 (pmxvbf16ger2)  */
 	  case 50:	/* Prefixed Masked VSX Vector bfloat16 GER Positive
 			   multiply, Positive accumulate MMIRR,
-			   pmxvbf16ger2pp */
+			   pmdmxvbf16ger2pp (pmxvbf16ger2pp)  */
 	  case 178:	/* Prefixed Masked VSX Vector bfloat16 GER Positive
 			   multiply, Negative accumulate MMIRR,
-			   pmxvbf16ger2pn */
+			   pmdmxvbf16ger2pn (pmxvbf16ger2pn)  */
 	  case 114:	/* Prefixed Masked VSX Vector bfloat16 GER Negative
 			   multiply, Positive accumulate MMIRR,
-			   pmxvbf16ger2np */
+			   pmdmxvbf16ger2np (pmxvbf16ger2np)  */
 	  case 242:	/* Prefixed Masked VSX Vector bfloat16 GER Negative
 			   multiply, Negative accumulate MMIRR,
-			   pmxvbf16ger2nn */
+			   pmdmxvbf16ger2nn (pmxvbf16ger2nn)  */
 	    ppc_record_ACC_fpscr (regcache, tdep, at, true);
 	    return 0;
 	  }
diff --git a/gdb/testsuite/gdb.reverse/ppc_record_test_isa_3_1.c b/gdb/testsuite/gdb.reverse/ppc_record_test_isa_3_1.c
index c0d65d944af..6513b61d40a 100644
--- a/gdb/testsuite/gdb.reverse/ppc_record_test_isa_3_1.c
+++ b/gdb/testsuite/gdb.reverse/ppc_record_test_isa_3_1.c
@@ -22,6 +22,13 @@ static unsigned long ra, rb, rs;
 int
 main ()
 {
+
+  /* This test is used to verify the recording of the MMA instructions.  The
+     names of the MMA instructions pmxvf16ger*, pmxvf32ger*, pmxvf64ger*,
+     pmxvi4ger8*, pmxvi8ger4*, and pmxvi16ger2* were officially changed
+     to pmdmxvf16ger*, pmdmxvf32ger*, pmdmxvf64ger*, pmdmxvi4ger8*,
+     pmdmxvi8ger4*, and pmdmxvi16ger2* respectively.  The new mnemonics are
+     used in this test.  */
   ra = 0xABCDEF012;
   rb = 0;
   rs = 0x012345678;
@@ -42,8 +49,8 @@ main ()
      xxsetaccz    - ACC[3]
      xvi4ger8     - ACC[4]
      xvf16ger2pn  - ACC[5]
-     pmxvi8ger4   - ACC[6]
-     pmxvf32gerpp - ACC[7] and fpscr */
+     pmdmxvi8ger4   - ACC[6]
+     pmdmxvf32gerpp - ACC[7] and fpscr */
   /* Need to initialize the vs registers to a non zero value.  */
   ra = (unsigned long) & vec_xb;
   __asm__ __volatile__ ("lxvd2x 12, %0, %1" :: "r" (ra ), "r" (rb));
@@ -87,9 +94,9 @@ main ()
 			"wa" (vec_xb) );
   __asm__ __volatile__ ("xvf16ger2pn 5, %x0, %x1" :: "wa" (vec_xa),\
 			"wa" (vec_xb) );
-  __asm__ __volatile__ ("pmxvi8ger4spp  6, %x0, %x1, 11, 13, 5"
+  __asm__ __volatile__ ("pmdmxvi8ger4spp  6, %x0, %x1, 11, 13, 5"
                                 :: "wa" (vec_xa), "wa" (vec_xb) );
-  __asm__ __volatile__ ("pmxvf32gerpp  7, %x0, %x1, 11, 13"
+  __asm__ __volatile__ ("pmdmxvf32gerpp  7, %x0, %x1, 11, 13"
                                 :: "wa" (vec_xa), "wa" (vec_xb) );
   ra = 0;                               /* stop 4 */
 }
diff --git a/gdb/testsuite/gdb.reverse/ppc_record_test_isa_3_1.exp b/gdb/testsuite/gdb.reverse/ppc_record_test_isa_3_1.exp
index 8cecb067667..d5a1279374d 100644
--- a/gdb/testsuite/gdb.reverse/ppc_record_test_isa_3_1.exp
+++ b/gdb/testsuite/gdb.reverse/ppc_record_test_isa_3_1.exp
@@ -121,8 +121,8 @@ gdb_test_no_output "record" "start recording test2"
 ##       xxsetaccz    - ACC[3], vs[12] to vs[15]
 ##       xvi4ger8     - ACC[4], vs[16] to vs[19]
 ##       xvf16ger2pn  - ACC[5], vs[20] to vs[23]
-##       pmxvi8ger4   - ACC[6], vs[21] to vs[27]
-##       pmxvf32gerpp - ACC[7], vs[28] to vs[31] and fpscr
+##       pmdmxvi8ger4   - ACC[6], vs[21] to vs[27]
+##       pmdmxvf32gerpp - ACC[7], vs[28] to vs[31] and fpscr
 
 set stop3 [gdb_get_line_number "stop 3"]
 set stop4 [gdb_get_line_number "stop 4"]
-- 
2.37.2



^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH 1/2] PowerPC: fix for the gdb.arch/powerpc-power10.exp test.
  2022-11-03 17:30 ` [PATCH 1/2] PowerPC: fix for the gdb.arch/powerpc-power10.exp test Carl Love
@ 2022-11-04 13:02   ` Ulrich Weigand
  0 siblings, 0 replies; 8+ messages in thread
From: Ulrich Weigand @ 2022-11-04 13:02 UTC (permalink / raw)
  To: gdb-patches, cel; +Cc: will_schmidt, bergner

Carl Love <cel@us.ibm.com> wrote:

>PowerPC fix for the gdb.arch/powerpc-power10.exp test.

This is OK.

Thanks,
Ulrich


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH 2/2] PowerPC: update comments for the MMA instruction name changes.
  2022-11-03 17:30 ` [PATCH 2/2] PowerPC: update comments for the MMA instruction name changes Carl Love
@ 2022-11-04 13:04   ` Ulrich Weigand
  2022-11-04 15:25     ` Carl Love
  2022-11-04 15:49     ` [PATCH 2/2 version 2] " Carl Love
  0 siblings, 2 replies; 8+ messages in thread
From: Ulrich Weigand @ 2022-11-04 13:04 UTC (permalink / raw)
  To: gdb-patches, cel; +Cc: will_schmidt, bergner

Carl Love <cel@us.ibm.com> wrote:

>PowerPC update comments for the MMA instruction name changes.

The *comment* changes are OK.  However, this:

-  __asm__ __volatile__ ("pmxvi8ger4spp  6, %x0, %x1, 11, 13, 5"
+  __asm__ __volatile__ ("pmdmxvi8ger4spp  6, %x0, %x1, 11, 13, 5"
                                 :: "wa" (vec_xa), "wa" (vec_xb) );
-  __asm__ __volatile__ ("pmxvf32gerpp  7, %x0, %x1, 11, 13"
+  __asm__ __volatile__ ("pmdmxvf32gerpp  7, %x0, %x1, 11, 13"
                                 :: "wa" (vec_xa), "wa" (vec_xb) );

actually changes code, and in particular it requires that the
compiler and/or assembler used to build the test case understands
the new name.  This would cause the test to fail when run on a
system with an older toolchain.
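
For illustration, a minimal stand-alone assembler probe (a sketch, not
part of the patch; the file name is hypothetical, and it assumes the
compiler driver passes -mcpu=power10 down to the assembler):

  /* mma-mnemonic-probe.c
     Build with: gcc -mcpu=power10 -c mma-mnemonic-probe.c
     Only meant to be assembled, never executed.  */
  void
  probe (void)
  {
    /* Old spelling: accepted by old and new binutils alike, since new
       binutils keeps it as an extended mnemonic.  */
    __asm__ __volatile__ ("pmxvf32gerpp 7, 34, 35, 11, 13");
  #ifdef TRY_NEW_MNEMONIC	/* define only with a newer binutils */
    /* New spelling: assembles only once binutils contains commit
       bb98553cad4e; an older assembler rejects this line.  */
    __asm__ __volatile__ ("pmdmxvf32gerpp 7, 34, 35, 11, 13");
  #endif
  }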

I think this part of the patch should be removed.

Thanks,
Ulrich


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH 2/2] PowerPC: update comments for the MMA instruction name changes.
  2022-11-04 13:04   ` Ulrich Weigand
@ 2022-11-04 15:25     ` Carl Love
  2022-11-04 15:49     ` [PATCH 2/2 version 2] " Carl Love
  1 sibling, 0 replies; 8+ messages in thread
From: Carl Love @ 2022-11-04 15:25 UTC (permalink / raw)
  To: Ulrich Weigand, gdb-patches; +Cc: will_schmidt, bergner

Ulrich:

On Fri, 2022-11-04 at 13:04 +0000, Ulrich Weigand wrote:
> Carl Love <cel@us.ibm.com> wrote:
> 
> > PowerPC update comments for the MMA instruction name changes.
> 
> The *comment* changes are OK.  However, this:
> 
> -  __asm__ __volatile__ ("pmxvi8ger4spp  6, %x0, %x1, 11, 13, 5"
> +  __asm__ __volatile__ ("pmdmxvi8ger4spp  6, %x0, %x1, 11, 13, 5"
>                                  :: "wa" (vec_xa), "wa" (vec_xb) );
> -  __asm__ __volatile__ ("pmxvf32gerpp  7, %x0, %x1, 11, 13"
> +  __asm__ __volatile__ ("pmdmxvf32gerpp  7, %x0, %x1, 11, 13"
>                                  :: "wa" (vec_xa), "wa" (vec_xb) );
> 
> actually changes code, and in particular it requires that the
> compiler and/or assembler used to build the test case understands
> the new name.  This would cause the test to fail when run on a
> system with an older toolchain.
> 
> I think this part of the patch should be removed.

Yup, I forgot about that change.  I have reverted it to the older
names and added a comment stating that the test uses the older names
for backward compatibility.  Thanks.

I will post version 2 of the patch.

                     Carl 


^ permalink raw reply	[flat|nested] 8+ messages in thread

* RE:  [PATCH 2/2 version 2] PowerPC: update comments for the MMA instruction name changes.
  2022-11-04 13:04   ` Ulrich Weigand
  2022-11-04 15:25     ` Carl Love
@ 2022-11-04 15:49     ` Carl Love
  2022-11-04 16:02       ` Ulrich Weigand
  1 sibling, 1 reply; 8+ messages in thread
From: Carl Love @ 2022-11-04 15:49 UTC (permalink / raw)
  To: Ulrich Weigand, gdb-patches; +Cc: will_schmidt, bergner, cel


GDB maintainers:

Version 2 fixes the file gdb.reverse/ppc_record_test_isa_3_1.c to
revert the instruction names to the original names per Ulrich's
comments.  The instruction names in the source and expect files for
the test were also reverted to the older names, and comments were
added to the source and expect files to say that the test uses the old
names for backward compatibility.  I felt it was best to be consistent
in the use of the instruction names in both the comments and the code
for clarity, but also to document that the older names are being used
in the test.
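
If the test ever needs to exercise the new spellings as well, one
option would be to gate them at compile time.  A sketch (the
HAVE_PMDM_MNEMONICS macro is hypothetical; a configure- or
Makefile-time assembler probe would have to define it):

  /* Sketch only -- mirrors the operands used in the test.  */
  typedef double v2df __attribute__ ((vector_size (16)));
  static v2df vec_xa, vec_xb;

  static void
  do_ger (void)
  {
  #ifdef HAVE_PMDM_MNEMONICS
    /* Newer binutils: use the official mnemonic.  */
    __asm__ __volatile__ ("pmdmxvf32gerpp 7, %x0, %x1, 11, 13"
			  :: "wa" (vec_xa), "wa" (vec_xb));
  #else
    /* Older binutils: the extended (old) mnemonic still assembles.  */
    __asm__ __volatile__ ("pmxvf32gerpp 7, %x0, %x1, 11, 13"
			  :: "wa" (vec_xa), "wa" (vec_xb));
  #endif
  }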

This patch updates the instruction names in the comments of multiple
files to use the new mnemonics.  The mnemonics for the various MMA
instructions were changed by commit:

  commit bb98553cad4e017f1851153fa5de91f2cee98fb2
  Author: Peter Bergner <bergner@linux.ibm.com>  
  Date:   Sat Oct 8 16:19:51 2022 -0500    

    PowerPC: Add support for RFC02658 - MMA+ Outer-Product Instructions

The mnemonics in the gdb.reverse/ppc_record_test_isa_3_1.c test use
the old names for backward compatibility.

                  Carl Love


---------------------------------------------------
PowerPC update comments for the MMA instruction name changes.

The mnemonics for the pmxvf16ger*, pmxvf32ger*, pmxvf64ger*, pmxvi4ger8*,
pmxvi8ger4*, and pmxvi16ger2* instructions were officially changed to
pmdmxvf16ger*, pmdmxvf32ger*, pmdmxvf64ger*, pmdmxvi4ger8*, pmdmxvi8ger4*,
and pmdmxvi16ger2* respectively.  The old mnemonics are still supported by the
assembler as extended mnemonics.  The disassembler generates the new
mnemonics.  The name changes occurred in commit:

  commit bb98553cad4e017f1851153fa5de91f2cee98fb2
  Author: Peter Bergner <bergner@linux.ibm.com>
  Date:   Sat Oct 8 16:19:51 2022 -0500

    PowerPC: Add support for RFC02658 - MMA+ Outer-Product Instructions

    gas/
            * config/tc-ppc.c (md_assemble): Only check for prefix opcodes.
            * testsuite/gas/ppc/rfc02658.s: New test.
            * testsuite/gas/ppc/rfc02658.d: Likewise.
            * testsuite/gas/ppc/ppc.exp: Run it.

    opcodes/
            * ppc-opc.c (XMSK8, P_GERX4_MASK, P_GERX2_MASK, XX3GERX_MASK): New.
            (powerpc_opcodes): Add dmxvi8gerx4pp, dmxvi8gerx4, dmxvf16gerx2pp,
            dmxvf16gerx2, dmxvbf16gerx2pp, dmxvf16gerx2np, dmxvbf16gerx2,
            dmxvi8gerx4spp, dmxvbf16gerx2np, dmxvf16gerx2pn, dmxvbf16gerx2pn,
            dmxvf16gerx2nn, dmxvbf16gerx2nn, pmdmxvi8gerx4pp, pmdmxvi8gerx4,
            pmdmxvf16gerx2pp, pmdmxvf16gerx2, pmdmxvbf16gerx2pp, pmdmxvf16gerx2np,
            pmdmxvbf16gerx2, pmdmxvi8gerx4spp, pmdmxvbf16gerx2np, pmdmxvf16gerx2pn,
            pmdmxvbf16gerx2pn, pmdmxvf16gerx2nn, pmdmxvbf16gerx2nn.

This patch updates the comments in the various gdb files to reflect the
name changes.  There are no functional changes made by this patch.

The older instruction names are still used in the test
gdb.reverse/ppc_record_test_isa_3_1.exp for backwards compatibility.

Patch has been tested on Power 10 with no regressions.
---
 gdb/rs6000-tdep.c                             | 73 +++++++++++--------
 .../gdb.reverse/ppc_record_test_isa_3_1.c     |  8 ++
 .../gdb.reverse/ppc_record_test_isa_3_1.exp   |  5 ++
 3 files changed, 56 insertions(+), 30 deletions(-)

diff --git a/gdb/rs6000-tdep.c b/gdb/rs6000-tdep.c
index 51b41967b41..cbd84514795 100644
--- a/gdb/rs6000-tdep.c
+++ b/gdb/rs6000-tdep.c
@@ -5535,6 +5535,10 @@ ppc_process_record_op59 (struct gdbarch *gdbarch, struct regcache *regcache,
   int ext = PPC_EXTOP (insn);
   int at = PPC_FIELD (insn, 6, 3);
 
+  /* Note the mnemonics for the pmxvf64ger* instructions were officially
+     changed to pmdmxvf64ger*.  The old mnemonics are still supported as
+     extended mnemonics.  */
+
   switch (ext & 0x1f)
     {
     case 18:		/* Floating Divide */
@@ -5603,7 +5607,8 @@ ppc_process_record_op59 (struct gdbarch *gdbarch, struct regcache *regcache,
     case 218:	/* VSX Vector 32-bit Floating-Point GER Negative multiply,
 		   Negative accumulate, xvf32gernn */
 
-    case 59:	/* VSX Vector 64-bit Floating-Point GER, pmxvf64ger */
+    case 59:	/* VSX Vector 64-bit Floating-Point GER, pmdmxvf64ger
+		   (pmxvf64ger)  */
     case 58:	/* VSX Vector 64-bit Floating-Point GER Positive multiply,
 		   Positive accumulate, xvf64gerpp */
     case 186:	/* VSX Vector 64-bit Floating-Point GER Positive multiply,
@@ -5611,7 +5616,7 @@ ppc_process_record_op59 (struct gdbarch *gdbarch, struct regcache *regcache,
     case 122:	/* VSX Vector 64-bit Floating-Point GER Negative multiply,
 		   Positive accumulate, xvf64gernp */
     case 250:	/* VSX Vector 64-bit Floating-Point GER Negative multiply,
-		   Negative accumulate, pmxvf64gernn */
+		   Negative accumulate, pmdmxvf64gernn (pmxvf64gernn)  */
 
     case 51:	/* VSX Vector bfloat16 GER, xvbf16ger2 */
     case 50:	/* VSX Vector bfloat16 GER Positive multiply,
@@ -6486,98 +6491,106 @@ ppc_process_record_prefix_op59_XX3 (struct gdbarch *gdbarch,
   int at = PPC_FIELD (insn_suffix, 6, 3);
   ppc_gdbarch_tdep *tdep = gdbarch_tdep<ppc_gdbarch_tdep> (gdbarch);
 
+  /* Note, the mnemonics for the pmxvf16ger*, pmxvf32ger*, pmxvf64ger*,
+     pmxvi4ger8*, pmxvi8ger4*, and pmxvi16ger2* instructions were officially
+     changed to pmdmxvf16ger*, pmdmxvf32ger*, pmdmxvf64ger*, pmdmxvi4ger8*,
+     pmdmxvi8ger4*, and pmdmxvi16ger2* respectively.  The old mnemonics are
+     still supported by the assembler as extended mnemonics.  The
+     disassembler generates the new mnemonics.  */
   if (type == 3)
     {
       if (ST4 == 9)
 	switch (opcode)
 	  {
 	  case 35:	/* Prefixed Masked VSX Vector 4-bit Signed Integer GER
-			   MMIRR, pmxvi4ger8 */
+			   MMIRR, pmdmxvi4ger8 (pmxvi4ger8) */
 	  case 34:	/* Prefixed Masked VSX Vector 4-bit Signed Integer GER
-			   MMIRR, pmxvi4ger8pp */
+			   MMIRR, pmdmxvi4ger8pp (pmxvi4ger8pp) */
 
 	  case 99:	/* Prefixed Masked VSX Vector 8-bit Signed/Unsigned
 			   Integer GER with Saturate Positive multiply,
 			   Positive accumulate, xvi8ger4spp */
 
 	  case 3:	/* Prefixed Masked VSX Vector 8-bit Signed/Unsigned
-			   Integer GER MMIRR, pmxvi8ger4 */
+			   Integer GER MMIRR, pmdmxvi8ger4 (pmxvi8ger4)  */
 	  case 2:	/* Prefixed Masked VSX Vector 8-bit Signed/Unsigned
 			   Integer GER Positive multiply, Positive accumulate
-			   MMIRR, pmxvi8ger4pp */
+			   MMIRR, pmdmxvi8ger4pp (pmxvi8ger4pp)  */
 
 	  case 75:	/* Prefixed Masked VSX Vector 16-bit Signed Integer
-			   GER MMIRR, pmxvi16ger2 */
+			   GER MMIRR, pmdmxvi16ger2 (pmxvi16ger2)  */
 	  case 107:	/* Prefixed Masked VSX Vector 16-bit Signed Integer
 			   GER  Positive multiply, Positive accumulate,
-			   pmxvi16ger2pp */
+			   pmdmxvi16ger2pp (pmxvi16ger2pp)  */
 
 	  case 43:	/* Prefixed Masked VSX Vector 16-bit Signed Integer
-			   GER with Saturation MMIRR, pmxvi16ger2s */
+			   GER with Saturation MMIRR, pmdmxvi16ger2s
+			   (pmxvi16ger2s)  */
 	  case 42:	/* Prefixed Masked VSX Vector 16-bit Signed Integer
 			   GER with Saturation Positive multiply, Positive
-			   accumulate MMIRR, pmxvi16ger2spp */
+			   accumulate MMIRR, pmdmxvi16ger2spp (pmxvi16ger2spp)
+			*/
 	    ppc_record_ACC_fpscr (regcache, tdep, at, false);
 	    return 0;
 
 	  case 19:	/* Prefixed Masked VSX Vector 16-bit Floating-Point
-			   GER MMIRR, pmxvf16ger2 */
+			   GER MMIRR, pmdmxvf16ger2 (pmxvf16ger2)  */
 	  case 18:	/* Prefixed Masked VSX Vector 16-bit Floating-Point
 			   GER Positive multiply, Positive accumulate MMIRR,
-			   pmxvf16ger2pp */
+			   pmdmxvf16ger2pp (pmxvf16ger2pp)  */
 	  case 146:	/* Prefixed Masked VSX Vector 16-bit Floating-Point
 			   GER Positive multiply, Negative accumulate MMIRR,
-			   pmxvf16ger2pn */
+			   pmdmxvf16ger2pn (pmxvf16ger2pn)  */
 	  case 82:	/* Prefixed Masked VSX Vector 16-bit Floating-Point
 			   GER Negative multiply, Positive accumulate MMIRR,
-			   pmxvf16ger2np */
+			   pmdmxvf16ger2np (pmxvf16ger2np)  */
 	  case 210:	/* Prefixed Masked VSX Vector 16-bit Floating-Point
 			   GER Negative multiply, Negative accumulate MMIRR,
-			   pmxvf16ger2nn */
+			   pmdmxvf16ger2nn (pmxvf16ger2nn)  */
 
 	  case 27:	/* Prefixed Masked VSX Vector 32-bit Floating-Point
-			   GER MMIRR, pmxvf32ger */
+			   GER MMIRR, pmdmxvf32ger (pmxvf32ger)  */
 	  case 26:	/* Prefixed Masked VSX Vector 32-bit Floating-Point
 			   GER Positive multiply, Positive accumulate MMIRR,
-			   pmxvf32gerpp */
+			   pmdmxvf32gerpp (pmxvf32gerpp)  */
 	  case 154:	/* Prefixed Masked VSX Vector 32-bit Floating-Point
 			   GER Positive multiply, Negative accumulate MMIRR,
-			   pmxvf32gerpn */
+			   pmdmxvf32gerpn (pmxvf32gerpn)  */
 	  case 90:	/* Prefixed Masked VSX Vector 32-bit Floating-Point
 			   GER Negative multiply, Positive accumulate MMIRR,
-			   pmxvf32gernp */
+			   pmdmxvf32gernp (pmxvf32gernp)  */
 	  case 218:	/* Prefixed Masked VSX Vector 32-bit Floating-Point
 			   GER Negative multiply, Negative accumulate MMIRR,
-			   pmxvf32gernn */
+			   pmdmxvf32gernn (pmxvf32gernn)  */
 
 	  case 59:	/* Prefixed Masked VSX Vector 64-bit Floating-Point
-			   GER MMIRR, pmxvf64ger */
+			   GER MMIRR, pmdmxvf64ger (pmxvf64ger)  */
 	  case 58:	/* Floating-Point GER Positive multiply, Positive
-			   accumulate MMIRR, pmxvf64gerpp */
+			   accumulate MMIRR, pmdmxvf64gerpp (pmxvf64gerpp)  */
 	  case 186:	/* Prefixed Masked VSX Vector 64-bit Floating-Point
 			   GER Positive multiply, Negative accumulate MMIRR,
-			   pmxvf64gerpn */
+			   pmdmxvf64gerpn (pmxvf64gerpn)  */
 	  case 122:	/* Prefixed Masked VSX Vector 64-bit Floating-Point
 			   GER Negative multiply, Positive accumulate MMIRR,
-			   pmxvf64gernp */
+			   pmdmxvf64gernp (pmxvf64gernp)  */
 	  case 250:	/* Prefixed Masked VSX Vector 64-bit Floating-Point
 			   GER Negative multiply, Negative accumulate MMIRR,
-			   pmxvf64gernn */
+			   pmdmxvf64gernn (pmxvf64gernn)  */
 
 	  case 51:	/* Prefixed Masked VSX Vector bfloat16 GER MMIRR,
-			   pmxvbf16ger2 */
+			   pmdmxvbf16ger2 (pmxvbf16ger2)  */
 	  case 50:	/* Prefixed Masked VSX Vector bfloat16 GER Positive
 			   multiply, Positive accumulate MMIRR,
-			   pmxvbf16ger2pp */
+			   pmdmxvbf16ger2pp (pmxvbf16ger2pp)  */
 	  case 178:	/* Prefixed Masked VSX Vector bfloat16 GER Positive
 			   multiply, Negative accumulate MMIRR,
-			   pmxvbf16ger2pn */
+			   pmdmxvbf16ger2pn (pmxvbf16ger2pn)  */
 	  case 114:	/* Prefixed Masked VSX Vector bfloat16 GER Negative
 			   multiply, Positive accumulate MMIRR,
-			   pmxvbf16ger2np */
+			   pmdmxvbf16ger2np (pmxvbf16ger2np)  */
 	  case 242:	/* Prefixed Masked VSX Vector bfloat16 GER Negative
 			   multiply, Negative accumulate MMIRR,
-			   pmxvbf16ger2nn */
+			   pmdmxvbf16ger2nn (pmxvbf16ger2nn)  */
 	    ppc_record_ACC_fpscr (regcache, tdep, at, true);
 	    return 0;
 	  }
diff --git a/gdb/testsuite/gdb.reverse/ppc_record_test_isa_3_1.c b/gdb/testsuite/gdb.reverse/ppc_record_test_isa_3_1.c
index c0d65d944af..e44645e0f58 100644
--- a/gdb/testsuite/gdb.reverse/ppc_record_test_isa_3_1.c
+++ b/gdb/testsuite/gdb.reverse/ppc_record_test_isa_3_1.c
@@ -22,6 +22,13 @@ static unsigned long ra, rb, rs;
 int
 main ()
 {
+
+  /* This test is used to verify the recording of the MMA instructions.  The
+     names of the MMA instructions pmxvf16ger*, pmxvf32ger*, pmxvf64ger*,
+     pmxvi4ger8*, pmxvi8ger4*, and pmxvi16ger2* were officially changed
+     to pmdmxvf16ger*, pmdmxvf32ger*, pmdmxvf64ger*, pmdmxvi4ger8*,
+     pmdmxvi8ger4*, and pmdmxvi16ger2* respectively.  The old mnemonics are
+     used in this test for backward compatibility.  */
   ra = 0xABCDEF012;
   rb = 0;
   rs = 0x012345678;
@@ -87,6 +94,7 @@ main ()
 			"wa" (vec_xb) );
   __asm__ __volatile__ ("xvf16ger2pn 5, %x0, %x1" :: "wa" (vec_xa),\
 			"wa" (vec_xb) );
+  /* Use the older instruction names for backward compatibility.  */
   __asm__ __volatile__ ("pmxvi8ger4spp  6, %x0, %x1, 11, 13, 5"
                                 :: "wa" (vec_xa), "wa" (vec_xb) );
   __asm__ __volatile__ ("pmxvf32gerpp  7, %x0, %x1, 11, 13"
diff --git a/gdb/testsuite/gdb.reverse/ppc_record_test_isa_3_1.exp b/gdb/testsuite/gdb.reverse/ppc_record_test_isa_3_1.exp
index 8cecb067667..79f04f65b64 100644
--- a/gdb/testsuite/gdb.reverse/ppc_record_test_isa_3_1.exp
+++ b/gdb/testsuite/gdb.reverse/ppc_record_test_isa_3_1.exp
@@ -124,6 +124,11 @@ gdb_test_no_output "record" "start recording test2"
 ##       pmxvi8ger4   - ACC[6], vs[21] to vs[27]
 ##       pmxvf32gerpp - ACC[7], vs[28] to vs[31] and fpscr
 
+## Note the names for pmxvi8ger4 and pmxvf32gerpp have been officially
+## changed to pmdmxvi8ger4 and pmdmxvf32gerpp respectively.  The older
+## names are still supported by the assembler as extended mnemonics.  The
+## older names are used in this test for backward compatibility.
+
 set stop3 [gdb_get_line_number "stop 3"]
 set stop4 [gdb_get_line_number "stop 4"]
 
-- 
2.37.2




^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re:  [PATCH 2/2 version 2] PowerPC: update comments for the MMA instruction name changes.
  2022-11-04 15:49     ` [PATCH 2/2 version 2] " Carl Love
@ 2022-11-04 16:02       ` Ulrich Weigand
  0 siblings, 0 replies; 8+ messages in thread
From: Ulrich Weigand @ 2022-11-04 16:02 UTC (permalink / raw)
  To: gdb-patches, cel; +Cc: will_schmidt, bergner

Carl Love <cel@us.ibm.com> wrote:

>PowerPC update comments for the MMA instruction name changes.

This version is OK.

Thanks,
Ulrich


^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2022-11-04 16:02 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <20111d66a467599d893bf85bbdf2e82b76377127.camel@us.ibm.com>
2022-11-03 17:30 ` [PATCH 0//2] PowerPC: MMA+ Outer-Product Instruction name change Carl Love
2022-11-03 17:30 ` [PATCH 1/2] PowerPC: fix for the gdb.arch/powerpc-power10.exp test Carl Love
2022-11-04 13:02   ` Ulrich Weigand
2022-11-03 17:30 ` [PATCH 2/2] PowerPC: update comments for the MMA instruction name changes Carl Love
2022-11-04 13:04   ` Ulrich Weigand
2022-11-04 15:25     ` Carl Love
2022-11-04 15:49     ` [PATCH 2/2 version 2] " Carl Love
2022-11-04 16:02       ` Ulrich Weigand

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for read-only IMAP folder(s) and NNTP newsgroup(s).