public inbox for binutils@sourceware.org
* [AArch64][SVE 00/32] Add support for the ARMv8-A Scalable Vector Extension
@ 2016-08-23  9:05 Richard Sandiford
  2016-08-23  9:06 ` [AArch64][SVE 02/32] Avoid hard-coded limit in indented_print Richard Sandiford
                   ` (32 more replies)
  0 siblings, 33 replies; 76+ messages in thread
From: Richard Sandiford @ 2016-08-23  9:05 UTC (permalink / raw)
  To: binutils

This series of patches adds support for the ARMv8-A Scalable Vector
Extension (SVE), which was announced at Hot Chips yesterday.  Please
see Nigel Stephens' blog at:

    https://community.arm.com/groups/processors/blog/2016/08/22/technology-update-the-scalable-vector-extension-sve-for-the-armv8-a-architecture

for more details.
 
I've committed the patches to the users/ARM/sve branch, in case anyone
wants to see the end result without having to apply the patches locally.
 
The series was tested on aarch64-linux-gnu.

Thanks,
Richard

^ permalink raw reply	[flat|nested] 76+ messages in thread

* [AArch64][SVE 02/32] Avoid hard-coded limit in indented_print
  2016-08-23  9:05 [AArch64][SVE 00/32] Add support for the ARMv8-A Scalable Vector Extension Richard Sandiford
@ 2016-08-23  9:06 ` Richard Sandiford
  2016-08-23 14:35   ` Richard Earnshaw (lists)
  2016-08-23  9:06 ` [AArch64][SVE 01/32] Remove parse_neon_operand_type Richard Sandiford
                   ` (31 subsequent siblings)
  32 siblings, 1 reply; 76+ messages in thread
From: Richard Sandiford @ 2016-08-23  9:06 UTC (permalink / raw)
  To: binutils

The maximum indentation needed by aarch64-gen.c grows as more
instructions are added to aarch64-tbl.h.  Rather than having to
keep increasing the hard-coded limit, it seemed better to replace
the fixed-size spaces buffer with "%*s".

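For reference, a minimal stand-alone sketch (not part of the patch) of
the printf behaviour the new code relies on: "%*s" right-justifies the
empty string in a field of the given width, producing exactly that many
spaces without any fixed-size buffer or upper limit:

    #include <stdio.h>

    int
    main (void)
    {
      /* Print 0, 2, 4, 6 and 8 leading spaces using only "%*s".  */
      for (int indent = 0; indent <= 8; indent += 2)
        printf ("%*s<- %d spaces\n", indent, "", indent);
      return 0;
    }
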
OK to install?

Thanks,
Richard


opcodes/
	* aarch64-gen.c (indented_print): Avoid hard-coded indentation limit.

diff --git a/opcodes/aarch64-gen.c b/opcodes/aarch64-gen.c
index ed0834a..b87dea4 100644
--- a/opcodes/aarch64-gen.c
+++ b/opcodes/aarch64-gen.c
@@ -378,13 +378,9 @@ initialize_decoder_tree (void)
 static void __attribute__ ((format (printf, 2, 3)))
 indented_print (unsigned int indent, const char *format, ...)
 {
-  /* 80 number of spaces pluc a NULL terminator.  */
-  static const char spaces[81] =
-    "                                                                                ";
   va_list ap;
   va_start (ap, format);
-  assert (indent <= 80);
-  printf ("%s", &spaces[80 - indent]);
+  printf ("%*s", indent, "");
   vprintf (format, ap);
   va_end (ap);
 }

^ permalink raw reply	[flat|nested] 76+ messages in thread

* [AArch64][SVE 01/32] Remove parse_neon_operand_type
  2016-08-23  9:05 [AArch64][SVE 00/32] Add support for the ARMv8-A Scalable Vector Extension Richard Sandiford
  2016-08-23  9:06 ` [AArch64][SVE 02/32] Avoid hard-coded limit in indented_print Richard Sandiford
@ 2016-08-23  9:06 ` Richard Sandiford
  2016-08-23 14:28   ` Richard Earnshaw (lists)
  2016-08-23  9:07 ` [AArch64][SVE 04/32] Rename neon_type_el to vector_type_el Richard Sandiford
                   ` (30 subsequent siblings)
  32 siblings, 1 reply; 76+ messages in thread
From: Richard Sandiford @ 2016-08-23  9:06 UTC (permalink / raw)
  To: binutils

A false return from parse_neon_operand_type had an overloaded
meaning: either the parsing failed, or there was nothing to parse
(which isn't necessarily an error).  The only caller, parse_typed_reg,
would therefore not consume the suffix if it was invalid but instead
(successfully) parse the register without a suffix.  It would still
leave inst.parsing_error with an error about the invalid suffix.

It seems wrong for a successful parse to leave an error message,
so this patch makes parse_typed_reg return PARSE_FAIL instead.

The patch doesn't seem to make much difference in practice.
Most possible follow-on errors use set_first_error and so the
error about the suffix tended to win despite the successful parse.

OK to install?

Thanks,
Richard


gas/
	* config/tc-aarch64.c (parse_neon_operand_type): Delete.
	(parse_typed_reg): Call parse_neon_type_for_operand directly.

diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
index 34fdc53..ce8e713 100644
--- a/gas/config/tc-aarch64.c
+++ b/gas/config/tc-aarch64.c
@@ -821,31 +821,6 @@ elt_size:
   return TRUE;
 }
 
-/* Parse a single type, e.g. ".8b", leading period included.
-   Only applicable to Vn registers.
-
-   Return TRUE on success; otherwise return FALSE.  */
-static bfd_boolean
-parse_neon_operand_type (struct neon_type_el *vectype, char **ccp)
-{
-  char *str = *ccp;
-
-  if (*str == '.')
-    {
-      if (! parse_neon_type_for_operand (vectype, &str))
-	{
-	  first_error (_("vector type expected"));
-	  return FALSE;
-	}
-    }
-  else
-    return FALSE;
-
-  *ccp = str;
-
-  return TRUE;
-}
-
 /* Parse a register of the type TYPE.
 
    Return PARSE_FAIL if the string pointed by *CCP is not a valid register
@@ -889,9 +864,11 @@ parse_typed_reg (char **ccp, aarch64_reg_type type, aarch64_reg_type *rtype,
     }
   type = reg->type;
 
-  if (type == REG_TYPE_VN
-      && parse_neon_operand_type (&parsetype, &str))
+  if (type == REG_TYPE_VN && *str == '.')
     {
+      if (!parse_neon_type_for_operand (&parsetype, &str))
+	return PARSE_FAIL;
+
       /* Register if of the form Vn.[bhsdq].  */
       is_typed_vecreg = TRUE;
 

^ permalink raw reply	[flat|nested] 76+ messages in thread

* [AArch64][SVE 04/32] Rename neon_type_el to vector_type_el
  2016-08-23  9:05 [AArch64][SVE 00/32] Add support for the ARMv8-A Scalable Vector Extension Richard Sandiford
  2016-08-23  9:06 ` [AArch64][SVE 02/32] Avoid hard-coded limit in indented_print Richard Sandiford
  2016-08-23  9:06 ` [AArch64][SVE 01/32] Remove parse_neon_operand_type Richard Sandiford
@ 2016-08-23  9:07 ` Richard Sandiford
  2016-08-23 14:37   ` Richard Earnshaw (lists)
  2016-08-23  9:07 ` [AArch64][SVE 03/32] Rename neon_el_type to vector_el_type Richard Sandiford
                   ` (29 subsequent siblings)
  32 siblings, 1 reply; 76+ messages in thread
From: Richard Sandiford @ 2016-08-23  9:07 UTC (permalink / raw)
  To: binutils

Similar to the previous patch, but this time for the neon_type_el
structure.

OK to install?

Thanks,
Richard


gas/
	* config/tc-aarch64.c (neon_type_el): Rename to...
	(vector_type_el): ...this.
	(parse_neon_type_for_operand): Update accordingly.
	(parse_typed_reg): Likewise.
	(aarch64_reg_parse): Likewise.
	(vectype_to_qualifier): Likewise.
	(parse_operands): Likewise.
	(eq_neon_type_el): Likewise.  Rename to...
	(eq_vector_type_el): ...this.
	(parse_neon_reg_list): Update accordingly.

diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
index de1a74d..db30ab4 100644
--- a/gas/config/tc-aarch64.c
+++ b/gas/config/tc-aarch64.c
@@ -86,11 +86,11 @@ enum vector_el_type
   NT_q
 };
 
-/* Bits for DEFINED field in neon_type_el.  */
+/* Bits for DEFINED field in vector_type_el.  */
 #define NTA_HASTYPE  1
 #define NTA_HASINDEX 2
 
-struct neon_type_el
+struct vector_type_el
 {
   enum vector_el_type type;
   unsigned char defined;
@@ -747,7 +747,7 @@ aarch64_reg_parse_32_64 (char **ccp, int reject_sp, int reject_rz,
    8b 16b 2h 4h 8h 2s 4s 1d 2d
    b h s d q  */
 static bfd_boolean
-parse_neon_type_for_operand (struct neon_type_el *parsed_type, char **str)
+parse_neon_type_for_operand (struct vector_type_el *parsed_type, char **str)
 {
   char *ptr = *str;
   unsigned width;
@@ -835,12 +835,12 @@ elt_size:
 
 static int
 parse_typed_reg (char **ccp, aarch64_reg_type type, aarch64_reg_type *rtype,
-		 struct neon_type_el *typeinfo, bfd_boolean in_reg_list)
+		 struct vector_type_el *typeinfo, bfd_boolean in_reg_list)
 {
   char *str = *ccp;
   const reg_entry *reg = parse_reg (&str);
-  struct neon_type_el atype;
-  struct neon_type_el parsetype;
+  struct vector_type_el atype;
+  struct vector_type_el parsetype;
   bfd_boolean is_typed_vecreg = FALSE;
 
   atype.defined = 0;
@@ -955,9 +955,9 @@ parse_typed_reg (char **ccp, aarch64_reg_type type, aarch64_reg_type *rtype,
 
 static int
 aarch64_reg_parse (char **ccp, aarch64_reg_type type,
-		   aarch64_reg_type *rtype, struct neon_type_el *vectype)
+		   aarch64_reg_type *rtype, struct vector_type_el *vectype)
 {
-  struct neon_type_el atype;
+  struct vector_type_el atype;
   char *str = *ccp;
   int reg = parse_typed_reg (&str, type, rtype, &atype,
 			     /*in_reg_list= */ FALSE);
@@ -974,7 +974,7 @@ aarch64_reg_parse (char **ccp, aarch64_reg_type type,
 }
 
 static inline bfd_boolean
-eq_neon_type_el (struct neon_type_el e1, struct neon_type_el e2)
+eq_vector_type_el (struct vector_type_el e1, struct vector_type_el e2)
 {
   return
     e1.type == e2.type
@@ -1003,11 +1003,11 @@ eq_neon_type_el (struct neon_type_el e1, struct neon_type_el e2)
    (by reg_list_valid_p).  */
 
 static int
-parse_neon_reg_list (char **ccp, struct neon_type_el *vectype)
+parse_neon_reg_list (char **ccp, struct vector_type_el *vectype)
 {
   char *str = *ccp;
   int nb_regs;
-  struct neon_type_el typeinfo, typeinfo_first;
+  struct vector_type_el typeinfo, typeinfo_first;
   int val, val_range;
   int in_range;
   int ret_val;
@@ -1072,7 +1072,7 @@ parse_neon_reg_list (char **ccp, struct neon_type_el *vectype)
 	  val_range = val;
 	  if (nb_regs == 0)
 	    typeinfo_first = typeinfo;
-	  else if (! eq_neon_type_el (typeinfo_first, typeinfo))
+	  else if (! eq_vector_type_el (typeinfo_first, typeinfo))
 	    {
 	      set_first_syntax_error
 		(_("type mismatch in vector register list"));
@@ -4631,11 +4631,11 @@ opcode_lookup (char **str)
   return NULL;
 }
 
-/* Internal helper routine converting a vector neon_type_el structure
-   *VECTYPE to a corresponding operand qualifier.  */
+/* Internal helper routine converting a vector_type_el structure *VECTYPE
+   to a corresponding operand qualifier.  */
 
 static inline aarch64_opnd_qualifier_t
-vectype_to_qualifier (const struct neon_type_el *vectype)
+vectype_to_qualifier (const struct vector_type_el *vectype)
 {
   /* Element size in bytes indexed by vector_el_type.  */
   const unsigned char ele_size[5]
@@ -4988,7 +4988,7 @@ parse_operands (char *str, const aarch64_opcode *opcode)
       int isreg32, isregzero;
       int comma_skipped_p = 0;
       aarch64_reg_type rtype;
-      struct neon_type_el vectype;
+      struct vector_type_el vectype;
       aarch64_opnd_info *info = &inst.base.operands[i];
 
       DEBUG_TRACE ("parse operand %d", i);

^ permalink raw reply	[flat|nested] 76+ messages in thread

* [AArch64][SVE 03/32] Rename neon_el_type to vector_el_type
  2016-08-23  9:05 [AArch64][SVE 00/32] Add support for the ARMv8-A Scalable Vector Extension Richard Sandiford
                   ` (2 preceding siblings ...)
  2016-08-23  9:07 ` [AArch64][SVE 04/32] Rename neon_type_el to vector_type_el Richard Sandiford
@ 2016-08-23  9:07 ` Richard Sandiford
  2016-08-23 14:36   ` Richard Earnshaw (lists)
  2016-08-23  9:08 ` [AArch64][SVE 06/32] Generalise parse_neon_reg_list Richard Sandiford
                   ` (28 subsequent siblings)
  32 siblings, 1 reply; 76+ messages in thread
From: Richard Sandiford @ 2016-08-23  9:07 UTC (permalink / raw)
  To: binutils

Later patches will add SVEisms to neon_el_type, so this patch renames
it to something more generic.

OK to install?

Thanks,
Richard


gas/
	* config/tc-aarch64.c (neon_el_type): Rename to...
	(vector_el_type): ...this.
	(neon_type_el): Update accordingly.
	(parse_neon_type_for_operand): Likewise.
	(vectype_to_qualifier): Likewise.

diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
index ce8e713..de1a74d 100644
--- a/gas/config/tc-aarch64.c
+++ b/gas/config/tc-aarch64.c
@@ -76,7 +76,7 @@ static enum aarch64_abi_type aarch64_abi = AARCH64_ABI_LP64;
 #define ilp32_p		(aarch64_abi == AARCH64_ABI_ILP32)
 #endif
 
-enum neon_el_type
+enum vector_el_type
 {
   NT_invtype = -1,
   NT_b,
@@ -92,7 +92,7 @@ enum neon_el_type
 
 struct neon_type_el
 {
-  enum neon_el_type type;
+  enum vector_el_type type;
   unsigned char defined;
   unsigned width;
   int64_t index;
@@ -752,7 +752,7 @@ parse_neon_type_for_operand (struct neon_type_el *parsed_type, char **str)
   char *ptr = *str;
   unsigned width;
   unsigned element_size;
-  enum neon_el_type type;
+  enum vector_el_type type;
 
   /* skip '.' */
   ptr++;
@@ -4637,7 +4637,7 @@ opcode_lookup (char **str)
 static inline aarch64_opnd_qualifier_t
 vectype_to_qualifier (const struct neon_type_el *vectype)
 {
-  /* Element size in bytes indexed by neon_el_type.  */
+  /* Element size in bytes indexed by vector_el_type.  */
   const unsigned char ele_size[5]
     = {1, 2, 4, 8, 16};
   const unsigned int ele_base [5] =

^ permalink raw reply	[flat|nested] 76+ messages in thread

* [AArch64][SVE 05/32] Rename parse_neon_type_for_operand
  2016-08-23  9:05 [AArch64][SVE 00/32] Add support for the ARMv8-A Scalable Vector Extension Richard Sandiford
                   ` (4 preceding siblings ...)
  2016-08-23  9:08 ` [AArch64][SVE 06/32] Generalise parse_neon_reg_list Richard Sandiford
@ 2016-08-23  9:08 ` Richard Sandiford
  2016-08-23 14:37   ` Richard Earnshaw (lists)
  2016-08-23  9:09 ` [AArch64][SVE 07/32] Replace hard-coded uses of REG_TYPE_R_Z_BHSDQ_V Richard Sandiford
                   ` (26 subsequent siblings)
  32 siblings, 1 reply; 76+ messages in thread
From: Richard Sandiford @ 2016-08-23  9:08 UTC (permalink / raw)
  To: binutils

Generalise the name of parse_neon_type_for_operand to
parse_vector_type_for_operand.  Later patches will add SVEisms to it.

OK to install?

Thanks,
Richard


gas/
	* config/tc-aarch64.c (parse_neon_type_for_operand): Rename to...
	(parse_vector_type_for_operand): ...this.
	(parse_typed_reg): Update accordingly.

diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
index db30ab4..c425418 100644
--- a/gas/config/tc-aarch64.c
+++ b/gas/config/tc-aarch64.c
@@ -747,7 +747,7 @@ aarch64_reg_parse_32_64 (char **ccp, int reject_sp, int reject_rz,
    8b 16b 2h 4h 8h 2s 4s 1d 2d
    b h s d q  */
 static bfd_boolean
-parse_neon_type_for_operand (struct vector_type_el *parsed_type, char **str)
+parse_vector_type_for_operand (struct vector_type_el *parsed_type, char **str)
 {
   char *ptr = *str;
   unsigned width;
@@ -866,7 +866,7 @@ parse_typed_reg (char **ccp, aarch64_reg_type type, aarch64_reg_type *rtype,
 
   if (type == REG_TYPE_VN && *str == '.')
     {
-      if (!parse_neon_type_for_operand (&parsetype, &str))
+      if (!parse_vector_type_for_operand (&parsetype, &str))
 	return PARSE_FAIL;
 
       /* Register if of the form Vn.[bhsdq].  */

^ permalink raw reply	[flat|nested] 76+ messages in thread

* [AArch64][SVE 06/32] Generalise parse_neon_reg_list
  2016-08-23  9:05 [AArch64][SVE 00/32] Add support for the ARMv8-A Scalable Vector Extension Richard Sandiford
                   ` (3 preceding siblings ...)
  2016-08-23  9:07 ` [AArch64][SVE 03/32] Rename neon_el_type to vector_el_type Richard Sandiford
@ 2016-08-23  9:08 ` Richard Sandiford
  2016-08-23 14:39   ` Richard Earnshaw (lists)
  2016-08-23  9:08 ` [AArch64][SVE 05/32] Rename parse_neon_type_for_operand Richard Sandiford
                   ` (27 subsequent siblings)
  32 siblings, 1 reply; 76+ messages in thread
From: Richard Sandiford @ 2016-08-23  9:08 UTC (permalink / raw)
  To: binutils

Rename parse_neon_reg_list to parse_vector_reg_list and take
in the required register type as an argument.  Later patches
will reuse the function for SVE registers.
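
The function continues to return the register list in the packed format
described by the updated comment in the diff below.  As a purely
illustrative sketch (not part of the patch, and assuming the low two
bits hold the register count minus one, since two bits cannot represent
a count of 1-4 directly), a caller could unpack that value like this:

    #include <stdint.h>

    /* Hypothetical decoder for the packed register-list value, using
       the bit layout documented in parse_vector_reg_list.  */
    static void
    decode_reg_list (uint32_t packed, unsigned int regnos[4],
                     unsigned int *nb_regs)
    {
      *nb_regs = (packed & 0x3) + 1;      /* assumed: count minus one */
      regnos[0] = (packed >> 2) & 0x1f;   /* 1st regno, bits 2-6 */
      regnos[1] = (packed >> 7) & 0x1f;   /* 2nd regno, bits 7-11 */
      regnos[2] = (packed >> 13) & 0x1f;  /* 3rd regno, bits 13-17 */
      regnos[3] = (packed >> 18) & 0x1f;  /* 4th regno, bits 18-22 */
    }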

OK to install?

Thanks,
Richard


gas/
	* config/tc-aarch64.c (parse_neon_reg_list): Rename to...
	(parse_vector_reg_list): ...this and take a register type
	as input.
	(parse_operands): Update accordingly.

diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
index c425418..e65cc7a 100644
--- a/gas/config/tc-aarch64.c
+++ b/gas/config/tc-aarch64.c
@@ -982,8 +982,9 @@ eq_vector_type_el (struct vector_type_el e1, struct vector_type_el e2)
     && e1.width == e2.width && e1.index == e2.index;
 }
 
-/* This function parses the NEON register list.  On success, it returns
-   the parsed register list information in the following encoded format:
+/* This function parses a list of vector registers of type TYPE.
+   On success, it returns the parsed register list information in the
+   following encoded format:
 
    bit   18-22   |   13-17   |   7-11    |    2-6    |   0-1
        4th regno | 3rd regno | 2nd regno | 1st regno | num_of_reg
@@ -1003,7 +1004,8 @@ eq_vector_type_el (struct vector_type_el e1, struct vector_type_el e2)
    (by reg_list_valid_p).  */
 
 static int
-parse_neon_reg_list (char **ccp, struct vector_type_el *vectype)
+parse_vector_reg_list (char **ccp, aarch64_reg_type type,
+		       struct vector_type_el *vectype)
 {
   char *str = *ccp;
   int nb_regs;
@@ -1038,7 +1040,7 @@ parse_neon_reg_list (char **ccp, struct vector_type_el *vectype)
 	  str++;		/* skip over '-' */
 	  val_range = val;
 	}
-      val = parse_typed_reg (&str, REG_TYPE_VN, NULL, &typeinfo,
+      val = parse_typed_reg (&str, type, NULL, &typeinfo,
 			     /*in_reg_list= */ TRUE);
       if (val == PARSE_FAIL)
 	{
@@ -5135,7 +5137,8 @@ parse_operands (char *str, const aarch64_opcode *opcode)
 	case AARCH64_OPND_LVt:
 	case AARCH64_OPND_LVt_AL:
 	case AARCH64_OPND_LEt:
-	  if ((val = parse_neon_reg_list (&str, &vectype)) == PARSE_FAIL)
+	  if ((val = parse_vector_reg_list (&str, REG_TYPE_VN,
+					    &vectype)) == PARSE_FAIL)
 	    goto failure;
 	  if (! reg_list_valid_p (val, /* accept_alternate */ 0))
 	    {

^ permalink raw reply	[flat|nested] 76+ messages in thread

* [AArch64][SVE 07/32] Replace hard-coded uses of REG_TYPE_R_Z_BHSDQ_V
  2016-08-23  9:05 [AArch64][SVE 00/32] Add support for the ARMv8-A Scalable Vector Extension Richard Sandiford
                   ` (5 preceding siblings ...)
  2016-08-23  9:08 ` [AArch64][SVE 05/32] Rename parse_neon_type_for_operand Richard Sandiford
@ 2016-08-23  9:09 ` Richard Sandiford
  2016-08-25 10:36   ` Richard Earnshaw (lists)
  2016-08-23  9:10 ` [AArch64][SVE 08/32] Generalise aarch64_double_precision_fmovable Richard Sandiford
                   ` (25 subsequent siblings)
  32 siblings, 1 reply; 76+ messages in thread
From: Richard Sandiford @ 2016-08-23  9:09 UTC (permalink / raw)
  To: binutils

To remove parsing ambiguities and to avoid register names being
accidentally added to the symbol table, the immediate parsing
routines reject things like:

	.equ	x0, 0
	add	v0.4s, v0.4s, x0

An explicit '#' must be used instead:

	.equ	x0, 0
	add	v0.4s, v0.4s, #x0

Of course, it wasn't possible to predict what other register
names might be added in future, so this behaviour was restricted
to the register names that were defined at the time.  For backwards
compatibility, we should continue to allow things like:

	.equ	p0, 0
	add	v0.4s, v0.4s, p0

even though p0 is now an SVE register.

However, it seems reasonable to extend the x0 behaviour above to
SVE registers when parsing SVE instructions, especially since none
of the SVE immediate formats are relocatable.  Doing so removes the
same parsing ambiguity for SVE instructions as the x0 behaviour removes
for base AArch64 instructions.

To do that, we first need to be able to tell the parsing routines
which registers to reject.  This patch changes the interface to make
that possible, although the set of rejected registers doesn't change
at this stage.

OK to install?

Thanks,
Richard


gas/
	* config/tc-aarch64.c (parse_immediate_expression): Add a
	reg_type parameter.
	(parse_constant_immediate): Likewise, and update calls.
	(parse_aarch64_imm_float): Likewise.
	(parse_big_immediate): Likewise.
	(po_imm_nc_or_fail): Update accordingly, passing down a new
	imm_reg_type variable.
	(po_imm_or_fail): Likewise.
	(parse_operands): Likewise.

diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
index e65cc7a..eec08c7 100644
--- a/gas/config/tc-aarch64.c
+++ b/gas/config/tc-aarch64.c
@@ -2004,14 +2004,14 @@ reg_name_p (char *str, aarch64_reg_type reg_type)
 
    To prevent the expression parser from pushing a register name
    into the symbol table as an undefined symbol, firstly a check is
-   done to find out whether STR is a valid register name followed
-   by a comma or the end of line.  Return FALSE if STR is such a
-   string.  */
+   done to find out whether STR is a register of type REG_TYPE followed
+   by a comma or the end of line.  Return FALSE if STR is such a string.  */
 
 static bfd_boolean
-parse_immediate_expression (char **str, expressionS *exp)
+parse_immediate_expression (char **str, expressionS *exp,
+			    aarch64_reg_type reg_type)
 {
-  if (reg_name_p (*str, REG_TYPE_R_Z_BHSDQ_V))
+  if (reg_name_p (*str, reg_type))
     {
       set_recoverable_error (_("immediate operand required"));
       return FALSE;
@@ -2030,16 +2030,17 @@ parse_immediate_expression (char **str, expressionS *exp)
 
 /* Constant immediate-value read function for use in insn parsing.
    STR points to the beginning of the immediate (with the optional
-   leading #); *VAL receives the value.
+   leading #); *VAL receives the value.  REG_TYPE says which register
+   names should be treated as registers rather than as symbolic immediates.
 
    Return TRUE on success; otherwise return FALSE.  */
 
 static bfd_boolean
-parse_constant_immediate (char **str, int64_t * val)
+parse_constant_immediate (char **str, int64_t *val, aarch64_reg_type reg_type)
 {
   expressionS exp;
 
-  if (! parse_immediate_expression (str, &exp))
+  if (! parse_immediate_expression (str, &exp, reg_type))
     return FALSE;
 
   if (exp.X_op != O_constant)
@@ -2148,12 +2149,14 @@ aarch64_double_precision_fmovable (uint64_t imm, uint32_t *fpword)
    value in *IMMED in the format of IEEE754 single-precision encoding.
    *CCP points to the start of the string; DP_P is TRUE when the immediate
    is expected to be in double-precision (N.B. this only matters when
-   hexadecimal representation is involved).
+   hexadecimal representation is involved).  REG_TYPE says which register
+   names should be treated as registers rather than as symbolic immediates.
 
    N.B. 0.0 is accepted by this function.  */
 
 static bfd_boolean
-parse_aarch64_imm_float (char **ccp, int *immed, bfd_boolean dp_p)
+parse_aarch64_imm_float (char **ccp, int *immed, bfd_boolean dp_p,
+			 aarch64_reg_type reg_type)
 {
   char *str = *ccp;
   char *fpnum;
@@ -2173,7 +2176,7 @@ parse_aarch64_imm_float (char **ccp, int *immed, bfd_boolean dp_p)
       /* Support the hexadecimal representation of the IEEE754 encoding.
 	 Double-precision is expected when DP_P is TRUE, otherwise the
 	 representation should be in single-precision.  */
-      if (! parse_constant_immediate (&str, &val))
+      if (! parse_constant_immediate (&str, &val, reg_type))
 	goto invalid_fp;
 
       if (dp_p)
@@ -2237,15 +2240,15 @@ invalid_fp:
 
    To prevent the expression parser from pushing a register name into the
    symbol table as an undefined symbol, a check is firstly done to find
-   out whether STR is a valid register name followed by a comma or the end
-   of line.  Return FALSE if STR is such a register.  */
+   out whether STR is a register of type REG_TYPE followed by a comma or
+   the end of line.  Return FALSE if STR is such a register.  */
 
 static bfd_boolean
-parse_big_immediate (char **str, int64_t *imm)
+parse_big_immediate (char **str, int64_t *imm, aarch64_reg_type reg_type)
 {
   char *ptr = *str;
 
-  if (reg_name_p (ptr, REG_TYPE_R_Z_BHSDQ_V))
+  if (reg_name_p (ptr, reg_type))
     {
       set_syntax_error (_("immediate operand required"));
       return FALSE;
@@ -3736,12 +3739,12 @@ parse_sys_ins_reg (char **str, struct hash_control *sys_ins_regs)
   } while (0)
 
 #define po_imm_nc_or_fail() do {				\
-    if (! parse_constant_immediate (&str, &val))		\
+    if (! parse_constant_immediate (&str, &val, imm_reg_type))	\
       goto failure;						\
   } while (0)
 
 #define po_imm_or_fail(min, max) do {				\
-    if (! parse_constant_immediate (&str, &val))		\
+    if (! parse_constant_immediate (&str, &val, imm_reg_type))	\
       goto failure;						\
     if (val < min || val > max)					\
       {								\
@@ -4980,10 +4983,13 @@ parse_operands (char *str, const aarch64_opcode *opcode)
   int i;
   char *backtrack_pos = 0;
   const enum aarch64_opnd *operands = opcode->operands;
+  aarch64_reg_type imm_reg_type;
 
   clear_error ();
   skip_whitespace (str);
 
+  imm_reg_type = REG_TYPE_R_Z_BHSDQ_V;
+
   for (i = 0; operands[i] != AARCH64_OPND_NIL; i++)
     {
       int64_t val;
@@ -5219,8 +5225,10 @@ parse_operands (char *str, const aarch64_opcode *opcode)
 	    bfd_boolean res1 = FALSE, res2 = FALSE;
 	    /* N.B. -0.0 will be rejected; although -0.0 shouldn't be rejected,
 	       it is probably not worth the effort to support it.  */
-	    if (!(res1 = parse_aarch64_imm_float (&str, &qfloat, FALSE))
-		&& !(res2 = parse_constant_immediate (&str, &val)))
+	    if (!(res1 = parse_aarch64_imm_float (&str, &qfloat, FALSE,
+						  imm_reg_type))
+		&& !(res2 = parse_constant_immediate (&str, &val,
+						      imm_reg_type)))
 	      goto failure;
 	    if ((res1 && qfloat == 0) || (res2 && val == 0))
 	      {
@@ -5253,7 +5261,7 @@ parse_operands (char *str, const aarch64_opcode *opcode)
 
 	case AARCH64_OPND_SIMD_IMM:
 	case AARCH64_OPND_SIMD_IMM_SFT:
-	  if (! parse_big_immediate (&str, &val))
+	  if (! parse_big_immediate (&str, &val, imm_reg_type))
 	    goto failure;
 	  assign_imm_if_const_or_fixup_later (&inst.reloc, info,
 					      /* addr_off_p */ 0,
@@ -5284,7 +5292,7 @@ parse_operands (char *str, const aarch64_opcode *opcode)
 	    bfd_boolean dp_p
 	      = (aarch64_get_qualifier_esize (inst.base.operands[0].qualifier)
 		 == 8);
-	    if (! parse_aarch64_imm_float (&str, &qfloat, dp_p))
+	    if (! parse_aarch64_imm_float (&str, &qfloat, dp_p, imm_reg_type))
 	      goto failure;
 	    if (qfloat == 0)
 	      {
@@ -5372,7 +5380,8 @@ parse_operands (char *str, const aarch64_opcode *opcode)
 	  break;
 
 	case AARCH64_OPND_EXCEPTION:
-	  po_misc_or_fail (parse_immediate_expression (&str, &inst.reloc.exp));
+	  po_misc_or_fail (parse_immediate_expression (&str, &inst.reloc.exp,
+						       imm_reg_type));
 	  assign_imm_if_const_or_fixup_later (&inst.reloc, info,
 					      /* addr_off_p */ 0,
 					      /* need_libopcodes_p */ 0,

^ permalink raw reply	[flat|nested] 76+ messages in thread

* [AArch64][SVE 08/32] Generalise aarch64_double_precision_fmovable
  2016-08-23  9:05 [AArch64][SVE 00/32] Add support for the ARMv8-A Scalable Vector Extension Richard Sandiford
                   ` (6 preceding siblings ...)
  2016-08-23  9:09 ` [AArch64][SVE 07/32] Replace hard-coded uses of REG_TYPE_R_Z_BHSDQ_V Richard Sandiford
@ 2016-08-23  9:10 ` Richard Sandiford
  2016-08-25 13:17   ` Richard Earnshaw (lists)
  2016-08-23  9:11 ` [AArch64][SVE 09/32] Improve error messages for invalid floats Richard Sandiford
                   ` (24 subsequent siblings)
  32 siblings, 1 reply; 76+ messages in thread
From: Richard Sandiford @ 2016-08-23  9:10 UTC (permalink / raw)
  To: binutils

SVE has single-bit floating-point constants that don't really
have any relation to the AArch64 8-bit floating-point encoding.
(E.g. one of the constants selects between 0 and 1.)  The easiest
way of representing them in the aarch64_opnd_info seemed to be
to use the IEEE float representation directly, rather than invent
some new scheme.

This patch paves the way for that by making the code that converts IEEE
doubles to IEEE floats accept any value in the range of an IEEE float,
not just zero and 8-bit floats.  It leaves the range checking to the
caller (which already handles it).
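
As a rough illustration of what "converts to single precision without
loss of precision" means (this round-trip test is not the patch's
bit-pattern check, it works on a double value rather than on the raw
IEEE encoding, and it ignores the infinity/NaN exponents that the patch
rejects explicitly):

    #include <stdbool.h>

    /* A double is exactly representable as a float iff narrowing it
       and widening it back reproduces the original value.  */
    static bool
    fits_in_float (double d, float *out)
    {
      float f = (float) d;

      if ((double) f != d)
        return false;
      *out = f;
      return true;
    }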

OK to install?

Thanks,
Richard


gas/
	* config/tc-aarch64.c (aarch64_double_precision_fmovable): Rename
	to...
	(can_convert_double_to_float): ...this.  Accept any double-precision
	value that converts to single precision without loss of precision.
	(parse_aarch64_imm_float): Update accordingly.

diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
index eec08c7..40f6253 100644
--- a/gas/config/tc-aarch64.c
+++ b/gas/config/tc-aarch64.c
@@ -2093,56 +2093,52 @@ aarch64_imm_float_p (uint32_t imm)
     && ((imm & 0x7e000000) == pattern);	/* bits 25 - 29 == ~ bit 30.  */
 }
 
-/* Like aarch64_imm_float_p but for a double-precision floating-point value.
-
-   Return TRUE if the value encoded in IMM can be expressed in the AArch64
-   8-bit signed floating-point format with 3-bit exponent and normalized 4
-   bits of precision (i.e. can be used in an FMOV instruction); return the
-   equivalent single-precision encoding in *FPWORD.
-
-   Otherwise return FALSE.  */
+/* Return TRUE if the IEEE double value encoded in IMM can be expressed
+   as an IEEE float without any loss of precision.  Store the value in
+   *FPWORD if so.  */
 
 static bfd_boolean
-aarch64_double_precision_fmovable (uint64_t imm, uint32_t *fpword)
+can_convert_double_to_float (uint64_t imm, uint32_t *fpword)
 {
   /* If a double-precision floating-point value has the following bit
-     pattern, it can be expressed in the AArch64 8-bit floating-point
-     format:
+     pattern, it can be expressed in a float:
 
-     6 66655555555 554444444...21111111111
-     3 21098765432 109876543...098765432109876543210
-     n Eeeeeeeeexx xxxx00000...000000000000000000000
+     6 66655555555 5544 44444444 33333333 33222222 22221111 111111
+     3 21098765432 1098 76543210 98765432 10987654 32109876 54321098 76543210
+     n E~~~eeeeeee ssss ssssssss ssssssss SSS00000 00000000 00000000 00000000
 
-     where n, e and each x are either 0 or 1 independently, with
-     E == ~ e.  */
+       ----------------------------->     nEeeeeee esssssss ssssssss sssssSSS
+	 if Eeee_eeee != 1111_1111
+
+     where n, e, s and S are either 0 or 1 independently and where ~ is the
+     inverse of E.  */
 
   uint32_t pattern;
   uint32_t high32 = imm >> 32;
+  uint32_t low32 = imm;
 
-  /* Lower 32 bits need to be 0s.  */
-  if ((imm & 0xffffffff) != 0)
+  /* Lower 29 bits need to be 0s.  */
+  if ((imm & 0x1fffffff) != 0)
     return FALSE;
 
   /* Prepare the pattern for 'Eeeeeeeee'.  */
   if (((high32 >> 30) & 0x1) == 0)
-    pattern = 0x3fc00000;
+    pattern = 0x38000000;
   else
     pattern = 0x40000000;
 
-  if ((high32 & 0xffff) == 0			/* bits 32 - 47 are 0.  */
-      && (high32 & 0x7fc00000) == pattern)	/* bits 54 - 61 == ~ bit 62.  */
-    {
-      /* Convert to the single-precision encoding.
-         i.e. convert
-	   n Eeeeeeeeexx xxxx00000...000000000000000000000
-	 to
-	   n Eeeeeexx xxxx0000000000000000000.  */
-      *fpword = ((high32 & 0xfe000000)			/* nEeeeee.  */
-		 | (((high32 >> 16) & 0x3f) << 19));	/* xxxxxx.  */
-      return TRUE;
-    }
-  else
+  /* Check E~~~.  */
+  if ((high32 & 0x78000000) != pattern)
     return FALSE;
+
+  /* Check Eeee_eeee != 1111_1111.  */
+  if ((high32 & 0x7ff00000) == 0x47f00000)
+    return FALSE;
+
+  *fpword = ((high32 & 0xc0000000)		/* 1 n bit and 1 E bit.  */
+	     | ((high32 << 3) & 0x3ffffff8)	/* 7 e and 20 s bits.  */
+	     | (low32 >> 29));			/* 3 S bits.  */
+  return TRUE;
 }
 
 /* Parse a floating-point immediate.  Return TRUE on success and return the
@@ -2181,7 +2177,7 @@ parse_aarch64_imm_float (char **ccp, int *immed, bfd_boolean dp_p,
 
       if (dp_p)
 	{
-	  if (! aarch64_double_precision_fmovable (val, &fpword))
+	  if (!can_convert_double_to_float (val, &fpword))
 	    goto invalid_fp;
 	}
       else if ((uint64_t) val > 0xffffffff)

^ permalink raw reply	[flat|nested] 76+ messages in thread

* [AArch64][SVE 10/32] Move range check out of parse_aarch64_imm_float
  2016-08-23  9:05 [AArch64][SVE 00/32] Add support for the ARMv8-A Scalable Vector Extension Richard Sandiford
                   ` (8 preceding siblings ...)
  2016-08-23  9:11 ` [AArch64][SVE 09/32] Improve error messages for invalid floats Richard Sandiford
@ 2016-08-23  9:11 ` Richard Sandiford
  2016-08-25 13:20   ` Richard Earnshaw (lists)
  2016-08-23  9:12 ` [AArch64][SVE 11/32] Tweak aarch64_reg_parse_32_64 interface Richard Sandiford
                   ` (22 subsequent siblings)
  32 siblings, 1 reply; 76+ messages in thread
From: Richard Sandiford @ 2016-08-23  9:11 UTC (permalink / raw)
  To: binutils

Since some SVE constants are no longer explicitly tied to the 8-bit
FP immediate format, it seems better to move the range checks out of
parse_aarch64_imm_float and into the callers.

OK to install?

Thanks,
Richard


gas/
	* config/tc-aarch64.c (parse_aarch64_imm_float): Remove range check.
	(parse_operands): Check the range of 8-bit FP immediates here instead.

diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
index 388c4bf..2489d5b 100644
--- a/gas/config/tc-aarch64.c
+++ b/gas/config/tc-aarch64.c
@@ -2148,7 +2148,8 @@ can_convert_double_to_float (uint64_t imm, uint32_t *fpword)
    hexadecimal representation is involved).  REG_TYPE says which register
    names should be treated as registers rather than as symbolic immediates.
 
-   N.B. 0.0 is accepted by this function.  */
+   This routine accepts any IEEE float; it is up to the callers to reject
+   invalid ones.  */
 
 static bfd_boolean
 parse_aarch64_imm_float (char **ccp, int *immed, bfd_boolean dp_p,
@@ -2224,12 +2225,9 @@ parse_aarch64_imm_float (char **ccp, int *immed, bfd_boolean dp_p,
 	}
     }
 
-  if (aarch64_imm_float_p (fpword) || fpword == 0)
-    {
-      *immed = fpword;
-      *ccp = str;
-      return TRUE;
-    }
+  *immed = fpword;
+  *ccp = str;
+  return TRUE;
 
 invalid_fp:
   set_fatal_syntax_error (_("invalid floating-point constant"));
@@ -5296,7 +5294,7 @@ parse_operands (char *str, const aarch64_opcode *opcode)
 	      = (aarch64_get_qualifier_esize (inst.base.operands[0].qualifier)
 		 == 8);
 	    if (!parse_aarch64_imm_float (&str, &qfloat, dp_p, imm_reg_type)
-		|| qfloat == 0)
+		|| !aarch64_imm_float_p (qfloat))
 	      {
 		if (!error_p ())
 		  set_fatal_syntax_error (_("invalid floating-point"

^ permalink raw reply	[flat|nested] 76+ messages in thread

* [AArch64][SVE 09/32] Improve error messages for invalid floats
  2016-08-23  9:05 [AArch64][SVE 00/32] Add support for the ARMv8-A Scalable Vector Extension Richard Sandiford
                   ` (7 preceding siblings ...)
  2016-08-23  9:10 ` [AArch64][SVE 08/32] Generalise aarch64_double_precision_fmovable Richard Sandiford
@ 2016-08-23  9:11 ` Richard Sandiford
  2016-08-25 13:19   ` Richard Earnshaw (lists)
  2016-08-23  9:11 ` [AArch64][SVE 10/32] Move range check out of parse_aarch64_imm_float Richard Sandiford
                   ` (23 subsequent siblings)
  32 siblings, 1 reply; 76+ messages in thread
From: Richard Sandiford @ 2016-08-23  9:11 UTC (permalink / raw)
  To: binutils

Previously:

    fmov d0, #2

would give an error:

    Operand 2 should be an integer register

whereas the user probably just forgot to add the ".0" to make:

    fmov d0, #2.0

This patch reports an invalid floating point constant unless the
operand is obviously a register.

The FPIMM8 handling is only relevant for SVE.  Without it:

    fmov z0, z1

would try to parse z1 as an integer immediate zero (the res2 path),
whereas it's more likely that the user forgot the predicate.  This is
tested by the final patch.

OK to install?

Thanks,
Richard


gas/
	* config/tc-aarch64.c (parse_aarch64_imm_float): Report a specific
	low-severity error for registers.
	(parse_operands): Report an invalid floating point constant
	if parsing an FPIMM8 fails, and if no better error has been
	recorded.
	* testsuite/gas/aarch64/diagnostic.s,
	testsuite/gas/aarch64/diagnostic.l: Add tests for integer operands
	to FMOV.

diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
index 40f6253..388c4bf 100644
--- a/gas/config/tc-aarch64.c
+++ b/gas/config/tc-aarch64.c
@@ -2189,6 +2189,12 @@ parse_aarch64_imm_float (char **ccp, int *immed, bfd_boolean dp_p,
     }
   else
     {
+      if (reg_name_p (str, reg_type))
+	{
+	  set_recoverable_error (_("immediate operand required"));
+	  return FALSE;
+	}
+
       /* We must not accidentally parse an integer as a floating-point number.
 	 Make sure that the value we parse is not an integer by checking for
 	 special characters '.' or 'e'.  */
@@ -5223,8 +5229,9 @@ parse_operands (char *str, const aarch64_opcode *opcode)
 	       it is probably not worth the effort to support it.  */
 	    if (!(res1 = parse_aarch64_imm_float (&str, &qfloat, FALSE,
 						  imm_reg_type))
-		&& !(res2 = parse_constant_immediate (&str, &val,
-						      imm_reg_type)))
+		&& (error_p ()
+		    || !(res2 = parse_constant_immediate (&str, &val,
+							  imm_reg_type))))
 	      goto failure;
 	    if ((res1 && qfloat == 0) || (res2 && val == 0))
 	      {
@@ -5288,11 +5295,12 @@ parse_operands (char *str, const aarch64_opcode *opcode)
 	    bfd_boolean dp_p
 	      = (aarch64_get_qualifier_esize (inst.base.operands[0].qualifier)
 		 == 8);
-	    if (! parse_aarch64_imm_float (&str, &qfloat, dp_p, imm_reg_type))
-	      goto failure;
-	    if (qfloat == 0)
+	    if (!parse_aarch64_imm_float (&str, &qfloat, dp_p, imm_reg_type)
+		|| qfloat == 0)
 	      {
-		set_fatal_syntax_error (_("invalid floating-point constant"));
+		if (!error_p ())
+		  set_fatal_syntax_error (_("invalid floating-point"
+					    " constant"));
 		goto failure;
 	      }
 	    inst.base.operands[i].imm.value = encode_imm_float_bits (qfloat);
diff --git a/gas/testsuite/gas/aarch64/diagnostic.l b/gas/testsuite/gas/aarch64/diagnostic.l
index c278887..67ef484 100644
--- a/gas/testsuite/gas/aarch64/diagnostic.l
+++ b/gas/testsuite/gas/aarch64/diagnostic.l
@@ -144,3 +144,7 @@
 [^:]*:255: Error: register element index out of range 0 to 15 at operand 1 -- `ld2 {v0\.b,v1\.b}\[-1\],\[x0\]'
 [^:]*:258: Error: register element index out of range 0 to 15 at operand 1 -- `ld2 {v0\.b,v1\.b}\[16\],\[x0\]'
 [^:]*:259: Error: register element index out of range 0 to 15 at operand 1 -- `ld2 {v0\.b,v1\.b}\[67\],\[x0\]'
+[^:]*:261: Error: invalid floating-point constant at operand 2 -- `fmov d0,#2'
+[^:]*:262: Error: invalid floating-point constant at operand 2 -- `fmov d0,#-2'
+[^:]*:263: Error: invalid floating-point constant at operand 2 -- `fmov s0,2'
+[^:]*:264: Error: invalid floating-point constant at operand 2 -- `fmov s0,-2'
diff --git a/gas/testsuite/gas/aarch64/diagnostic.s b/gas/testsuite/gas/aarch64/diagnostic.s
index ac2eb5c..3092b9b 100644
--- a/gas/testsuite/gas/aarch64/diagnostic.s
+++ b/gas/testsuite/gas/aarch64/diagnostic.s
@@ -257,3 +257,8 @@
 	ld2	{v0.b, v1.b}[15], [x0]
 	ld2	{v0.b, v1.b}[16], [x0]
 	ld2	{v0.b, v1.b}[67], [x0]
+
+	fmov	d0, #2
+	fmov	d0, #-2
+	fmov	s0, 2
+	fmov	s0, -2

^ permalink raw reply	[flat|nested] 76+ messages in thread

* [AArch64][SVE 11/32] Tweak aarch64_reg_parse_32_64 interface
  2016-08-23  9:05 [AArch64][SVE 00/32] Add support for the ARMv8-A Scalable Vector Extension Richard Sandiford
                   ` (9 preceding siblings ...)
  2016-08-23  9:11 ` [AArch64][SVE 10/32] Move range check out of parse_aarch64_imm_float Richard Sandiford
@ 2016-08-23  9:12 ` Richard Sandiford
  2016-08-25 13:27   ` Richard Earnshaw (lists)
  2016-08-23  9:13 ` [AArch64][SVE 12/32] Make more use of bfd_boolean Richard Sandiford
                   ` (21 subsequent siblings)
  32 siblings, 1 reply; 76+ messages in thread
From: Richard Sandiford @ 2016-08-23  9:12 UTC (permalink / raw)
  To: binutils

aarch64_reg_parse_32_64 is currently used to parse address registers,
among other things.  It returns two bits of information about the
register: whether it's W rather than X, and whether it's a zero register.

SVE adds addressing modes in which the base or offset can be a vector
register instead of a scalar, so a choice between W and X is no longer
enough.  It's more convenient to pass the type of register around as
a qualifier instead.

As it happens, two callers of aarch64_reg_parse_32_64 already wanted
the information in the form of a qualifier, so the change feels pretty
natural even without SVE.

Also, the function took two parameters to control whether {W}SP
and (W|X)ZR should be accepted.  These parameters were negative
"reject" parameters, but the closely-related parse_address_main
had a positive "accept" parameter (for post-indexed addressing).
One of the SVE patches adds a parameter to parse_address_main
that needs to be passed down alongside the aarch64_reg_parse_32_64
parameters, which as things stood led to an awkward mix of positive
and negative bools.  The patch therefore changes the
aarch64_reg_parse_32_64 parameters to "accept_sp" and "accept_rz"
instead.

Finally, the two input parameters and the isregzero output were
all ints but logically bools.  The patch changes the types to
bfd_boolean.

OK to install?

Thanks,
Richard


gas/
	* config/tc-aarch64.c (aarch64_reg_parse_32_64): Return the register
	type as a qualifier rather than an "isreg32" boolean.  Turn the
	SP/ZR control parameters from negative "reject" to positive
	"accept".  Make them and *ISREGZERO bfd_booleans rather than ints.
	(parse_shifter_operand): Update accordingly.
	(parse_address_main): Likewise.
	(po_int_reg_or_fail): Likewise.  Make the same reject->accept
	change to the macro parameters.
	(parse_operands): Update after the above changes, replacing
	the "isreg32" local variable with one called "qualifier".

diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
index 2489d5b..2e0e4f8 100644
--- a/gas/config/tc-aarch64.c
+++ b/gas/config/tc-aarch64.c
@@ -690,15 +690,21 @@ aarch64_check_reg_type (const reg_entry *reg, aarch64_reg_type type)
     }
 }
 
-/* Parse a register and return PARSE_FAIL if the register is not of type R_Z_SP.
-   Return the register number otherwise.  *ISREG32 is set to one if the
-   register is 32-bit wide; *ISREGZERO is set to one if the register is
-   of type Z_32 or Z_64.
+/* Try to parse a base or offset register.  ACCEPT_SP says whether {W}SP
+   should be considered valid and ACCEPT_RZ says whether zero registers
+   should be considered valid.
+
+   Return the register number on success, setting *QUALIFIER to the
+   register qualifier and *ISREGZERO to whether the register is a zero
+   register.  Return PARSE_FAIL otherwise.
+
    Note that this function does not issue any diagnostics.  */
 
 static int
-aarch64_reg_parse_32_64 (char **ccp, int reject_sp, int reject_rz,
-			 int *isreg32, int *isregzero)
+aarch64_reg_parse_32_64 (char **ccp, bfd_boolean accept_sp,
+			 bfd_boolean accept_rz,
+			 aarch64_opnd_qualifier_t *qualifier,
+			 bfd_boolean *isregzero)
 {
   char *str = *ccp;
   const reg_entry *reg = parse_reg (&str);
@@ -713,22 +719,28 @@ aarch64_reg_parse_32_64 (char **ccp, int reject_sp, int reject_rz,
     {
     case REG_TYPE_SP_32:
     case REG_TYPE_SP_64:
-      if (reject_sp)
+      if (!accept_sp)
 	return PARSE_FAIL;
-      *isreg32 = reg->type == REG_TYPE_SP_32;
-      *isregzero = 0;
+      *qualifier = (reg->type == REG_TYPE_SP_32
+		    ? AARCH64_OPND_QLF_W
+		    : AARCH64_OPND_QLF_X);
+      *isregzero = FALSE;
       break;
     case REG_TYPE_R_32:
     case REG_TYPE_R_64:
-      *isreg32 = reg->type == REG_TYPE_R_32;
-      *isregzero = 0;
+      *qualifier = (reg->type == REG_TYPE_R_32
+		    ? AARCH64_OPND_QLF_W
+		    : AARCH64_OPND_QLF_X);
+      *isregzero = FALSE;
       break;
     case REG_TYPE_Z_32:
     case REG_TYPE_Z_64:
-      if (reject_rz)
+      if (!accept_rz)
 	return PARSE_FAIL;
-      *isreg32 = reg->type == REG_TYPE_Z_32;
-      *isregzero = 1;
+      *qualifier = (reg->type == REG_TYPE_Z_32
+		    ? AARCH64_OPND_QLF_W
+		    : AARCH64_OPND_QLF_X);
+      *isregzero = TRUE;
       break;
     default:
       return PARSE_FAIL;
@@ -3033,12 +3045,13 @@ parse_shifter_operand (char **str, aarch64_opnd_info *operand,
 		       enum parse_shift_mode mode)
 {
   int reg;
-  int isreg32, isregzero;
+  aarch64_opnd_qualifier_t qualifier;
+  bfd_boolean isregzero;
   enum aarch64_operand_class opd_class
     = aarch64_get_operand_class (operand->type);
 
-  if ((reg =
-       aarch64_reg_parse_32_64 (str, 0, 0, &isreg32, &isregzero)) != PARSE_FAIL)
+  if ((reg = aarch64_reg_parse_32_64 (str, TRUE, TRUE, &qualifier,
+				      &isregzero)) != PARSE_FAIL)
     {
       if (opd_class == AARCH64_OPND_CLASS_IMMEDIATE)
 	{
@@ -3053,7 +3066,7 @@ parse_shifter_operand (char **str, aarch64_opnd_info *operand,
 	}
 
       operand->reg.regno = reg;
-      operand->qualifier = isreg32 ? AARCH64_OPND_QLF_W : AARCH64_OPND_QLF_X;
+      operand->qualifier = qualifier;
 
       /* Accept optional shift operation on register.  */
       if (! skip_past_comma (str))
@@ -3193,7 +3206,9 @@ parse_address_main (char **str, aarch64_opnd_info *operand, int reloc,
 {
   char *p = *str;
   int reg;
-  int isreg32, isregzero;
+  aarch64_opnd_qualifier_t base_qualifier;
+  aarch64_opnd_qualifier_t offset_qualifier;
+  bfd_boolean isregzero;
   expressionS *exp = &inst.reloc.exp;
 
   if (! skip_past_char (&p, '['))
@@ -3271,8 +3286,8 @@ parse_address_main (char **str, aarch64_opnd_info *operand, int reloc,
   /* [ */
 
   /* Accept SP and reject ZR */
-  reg = aarch64_reg_parse_32_64 (&p, 0, 1, &isreg32, &isregzero);
-  if (reg == PARSE_FAIL || isreg32)
+  reg = aarch64_reg_parse_32_64 (&p, TRUE, FALSE, &base_qualifier, &isregzero);
+  if (reg == PARSE_FAIL || base_qualifier == AARCH64_OPND_QLF_W)
     {
       set_syntax_error (_(get_reg_expected_msg (REG_TYPE_R_64)));
       return FALSE;
@@ -3286,7 +3301,8 @@ parse_address_main (char **str, aarch64_opnd_info *operand, int reloc,
       operand->addr.preind = 1;
 
       /* Reject SP and accept ZR */
-      reg = aarch64_reg_parse_32_64 (&p, 1, 0, &isreg32, &isregzero);
+      reg = aarch64_reg_parse_32_64 (&p, FALSE, TRUE, &offset_qualifier,
+				     &isregzero);
       if (reg != PARSE_FAIL)
 	{
 	  /* [Xn,Rm  */
@@ -3309,13 +3325,13 @@ parse_address_main (char **str, aarch64_opnd_info *operand, int reloc,
 	      || operand->shifter.kind == AARCH64_MOD_LSL
 	      || operand->shifter.kind == AARCH64_MOD_SXTX)
 	    {
-	      if (isreg32)
+	      if (offset_qualifier == AARCH64_OPND_QLF_W)
 		{
 		  set_syntax_error (_("invalid use of 32-bit register offset"));
 		  return FALSE;
 		}
 	    }
-	  else if (!isreg32)
+	  else if (offset_qualifier == AARCH64_OPND_QLF_X)
 	    {
 	      set_syntax_error (_("invalid use of 64-bit register offset"));
 	      return FALSE;
@@ -3399,11 +3415,12 @@ parse_address_main (char **str, aarch64_opnd_info *operand, int reloc,
 	}
 
       if (accept_reg_post_index
-	  && (reg = aarch64_reg_parse_32_64 (&p, 1, 1, &isreg32,
+	  && (reg = aarch64_reg_parse_32_64 (&p, FALSE, FALSE,
+					     &offset_qualifier,
 					     &isregzero)) != PARSE_FAIL)
 	{
 	  /* [Xn],Xm */
-	  if (isreg32)
+	  if (offset_qualifier == AARCH64_OPND_QLF_W)
 	    {
 	      set_syntax_error (_("invalid 32-bit register offset"));
 	      return FALSE;
@@ -3723,19 +3740,16 @@ parse_sys_ins_reg (char **str, struct hash_control *sys_ins_regs)
       }								\
   } while (0)
 
-#define po_int_reg_or_fail(reject_sp, reject_rz) do {		\
-    val = aarch64_reg_parse_32_64 (&str, reject_sp, reject_rz,	\
-                                   &isreg32, &isregzero);	\
+#define po_int_reg_or_fail(accept_sp, accept_rz) do {		\
+    val = aarch64_reg_parse_32_64 (&str, accept_sp, accept_rz,	\
+                                   &qualifier, &isregzero);	\
     if (val == PARSE_FAIL)					\
       {								\
 	set_default_error ();					\
 	goto failure;						\
       }								\
     info->reg.regno = val;					\
-    if (isreg32)						\
-      info->qualifier = AARCH64_OPND_QLF_W;			\
-    else							\
-      info->qualifier = AARCH64_OPND_QLF_X;			\
+    info->qualifier = qualifier;				\
   } while (0)
 
 #define po_imm_nc_or_fail() do {				\
@@ -4993,10 +5007,11 @@ parse_operands (char *str, const aarch64_opcode *opcode)
   for (i = 0; operands[i] != AARCH64_OPND_NIL; i++)
     {
       int64_t val;
-      int isreg32, isregzero;
+      bfd_boolean isregzero;
       int comma_skipped_p = 0;
       aarch64_reg_type rtype;
       struct vector_type_el vectype;
+      aarch64_opnd_qualifier_t qualifier;
       aarch64_opnd_info *info = &inst.base.operands[i];
 
       DEBUG_TRACE ("parse operand %d", i);
@@ -5032,12 +5047,12 @@ parse_operands (char *str, const aarch64_opcode *opcode)
 	case AARCH64_OPND_Ra:
 	case AARCH64_OPND_Rt_SYS:
 	case AARCH64_OPND_PAIRREG:
-	  po_int_reg_or_fail (1, 0);
+	  po_int_reg_or_fail (FALSE, TRUE);
 	  break;
 
 	case AARCH64_OPND_Rd_SP:
 	case AARCH64_OPND_Rn_SP:
-	  po_int_reg_or_fail (0, 1);
+	  po_int_reg_or_fail (TRUE, FALSE);
 	  break;
 
 	case AARCH64_OPND_Rm_EXT:

^ permalink raw reply	[flat|nested] 76+ messages in thread

* [AArch64][SVE 12/32] Make more use of bfd_boolean
  2016-08-23  9:05 [AArch64][SVE 00/32] Add support for the ARMv8-A Scalable Vector Extension Richard Sandiford
                   ` (10 preceding siblings ...)
  2016-08-23  9:12 ` [AArch64][SVE 11/32] Tweak aarch64_reg_parse_32_64 interface Richard Sandiford
@ 2016-08-23  9:13 ` Richard Sandiford
  2016-08-25 13:39   ` Richard Earnshaw (lists)
  2016-08-23  9:14 ` [AArch64][SVE 13/32] Add an F_STRICT flag Richard Sandiford
                   ` (20 subsequent siblings)
  32 siblings, 1 reply; 76+ messages in thread
From: Richard Sandiford @ 2016-08-23  9:13 UTC (permalink / raw)
  To: binutils

Following on from the previous patch, which converted the
aarch64_reg_parse_32_64 parameters to bfd_booleans, this one
does the same for parse_address_main and parse_address.
It also documents the parameters.

This isn't an attempt to convert the whole file to use bfd_booleans
more often.  It's simply trying to avoid inconsistencies with new
SVE parameters.

OK to install?

Thanks,
Richard


gas/
	* config/tc-aarch64.c (parse_address_main): Turn reloc and
	accept_reg_post_index into bfd_booleans.  Add commentary.
	(parse_address_reloc): Update accordingly.  Add commentary.
	(parse_address): Likewise.  Also change accept_reg_post_index
	into a bfd_boolean here.
	(parse_operands): Update calls accordingly.

diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
index 2e0e4f8..165ab9a 100644
--- a/gas/config/tc-aarch64.c
+++ b/gas/config/tc-aarch64.c
@@ -3197,12 +3197,17 @@ parse_shifter_operand_reloc (char **str, aarch64_opnd_info *operand,
 
    The shift/extension information, if any, will be stored in .shifter.
 
-   It is the caller's responsibility to check for addressing modes not
-   supported by the instruction, and to set inst.reloc.type.  */
+   RELOC says whether relocation operators should be accepted
+   and ACCEPT_REG_POST_INDEX says whether post-indexed register
+   addressing should be accepted.
+
+   In all other cases, it is the caller's responsibility to check whether
+   the addressing mode is supported by the instruction.  It is also the
+   caller's responsibility to set inst.reloc.type.  */
 
 static bfd_boolean
-parse_address_main (char **str, aarch64_opnd_info *operand, int reloc,
-		    int accept_reg_post_index)
+parse_address_main (char **str, aarch64_opnd_info *operand, bfd_boolean reloc,
+		    bfd_boolean accept_reg_post_index)
 {
   char *p = *str;
   int reg;
@@ -3455,19 +3460,26 @@ parse_address_main (char **str, aarch64_opnd_info *operand, int reloc,
   return TRUE;
 }
 
-/* Return TRUE on success; otherwise return FALSE.  */
+/* Parse an address that cannot contain relocation operators.
+   Look for and parse "[Xn], (Xm|#m)" as post-indexed addressing
+   if ACCEPT_REG_POST_INDEX is true.
+
+   Return TRUE on success.  */
 static bfd_boolean
 parse_address (char **str, aarch64_opnd_info *operand,
-	       int accept_reg_post_index)
+	       bfd_boolean accept_reg_post_index)
 {
-  return parse_address_main (str, operand, 0, accept_reg_post_index);
+  return parse_address_main (str, operand, FALSE, accept_reg_post_index);
 }
 
-/* Return TRUE on success; otherwise return FALSE.  */
+/* Parse an address that can contain relocation operators.  Do not
+   accept post-indexed addressing.
+
+   Return TRUE on success.  */
 static bfd_boolean
 parse_address_reloc (char **str, aarch64_opnd_info *operand)
 {
-  return parse_address_main (str, operand, 1, 0);
+  return parse_address_main (str, operand, TRUE, FALSE);
 }
 
 /* Parse an operand for a MOVZ, MOVN or MOVK instruction.
@@ -5534,7 +5546,7 @@ parse_operands (char *str, const aarch64_opcode *opcode)
 
 	case AARCH64_OPND_ADDR_REGOFF:
 	  /* [<Xn|SP>, <R><m>{, <extend> {<amount>}}]  */
-	  po_misc_or_fail (parse_address (&str, info, 0));
+	  po_misc_or_fail (parse_address (&str, info, FALSE));
 	  if (info->addr.pcrel || !info->addr.offset.is_reg
 	      || !info->addr.preind || info->addr.postind
 	      || info->addr.writeback)
@@ -5553,7 +5565,7 @@ parse_operands (char *str, const aarch64_opcode *opcode)
 	  break;
 
 	case AARCH64_OPND_ADDR_SIMM7:
-	  po_misc_or_fail (parse_address (&str, info, 0));
+	  po_misc_or_fail (parse_address (&str, info, FALSE));
 	  if (info->addr.pcrel || info->addr.offset.is_reg
 	      || (!info->addr.preind && !info->addr.postind))
 	    {
@@ -5609,7 +5621,7 @@ parse_operands (char *str, const aarch64_opcode *opcode)
 
 	case AARCH64_OPND_SIMD_ADDR_POST:
 	  /* [<Xn|SP>], <Xm|#<amount>>  */
-	  po_misc_or_fail (parse_address (&str, info, 1));
+	  po_misc_or_fail (parse_address (&str, info, TRUE));
 	  if (!info->addr.postind || !info->addr.writeback)
 	    {
 	      set_syntax_error (_("invalid addressing mode"));

^ permalink raw reply	[flat|nested] 76+ messages in thread

* [AArch64][SVE 13/32] Add an F_STRICT flag
  2016-08-23  9:05 [AArch64][SVE 00/32] Add support for the ARMv8-A Scalable Vector Extension Richard Sandiford
                   ` (11 preceding siblings ...)
  2016-08-23  9:13 ` [AArch64][SVE 12/32] Make more use of bfd_boolean Richard Sandiford
@ 2016-08-23  9:14 ` Richard Sandiford
  2016-08-25 13:45   ` Richard Earnshaw (lists)
  2016-08-23  9:15 ` [AArch64][SVE 15/32] Add {insert,extract}_all_fields helpers Richard Sandiford
                   ` (19 subsequent siblings)
  32 siblings, 1 reply; 76+ messages in thread
From: Richard Sandiford @ 2016-08-23  9:14 UTC (permalink / raw)
  To: binutils

SVE predicate operands can appear in three forms:

1. unsuffixed: "Pn"
2. with a predication type: "Pn/[ZM]"
3. with a size suffix: "Pn.[BHSD]"

No variation is allowed: unsuffixed operands cannot have a (redundant)
suffix, and the suffixes can never be dropped.  Unsuffixed Pn are used
in LDR and STR, but they are also used for Pg operands in cases where
the result is scalar and where there is therefore no choice to be made
between "merging" and "zeroing".  This means that some Pg operands have
suffixes and others don't.

It would be possible to use context-sensitive parsing to handle
this difference.  The tc-aarch64.c code would then raise an error
if the wrong kind of suffix is used for a particular instruction.

However, we get much more user-friendly error messages if we parse
all three forms for all SVE instructions and record the suffix as a
qualifier.  The normal qualifier matching code can then report cases
where the wrong kind of suffix is used.  This is a slight extension
of existing usage, which really only checks for the wrong choice of
suffix within a particular kind of suffix.

The only catch is that a "NIL" entry in the qualifier list
specifically means "no suffix should be present" (case 1 above).
NIL isn't a wildcard here.  It also means that an instruction that
requires all-NIL qualifiers can fail to match (because a suffix was
supplied when it shouldn't have been); this requires a slight change
to find_best_match.

This patch adds an F_STRICT flag to select this behaviour.
The flag will be set for all SVE instructions.  The behaviour
for other instructions doesn't change.

OK to install?

Thanks,
Richard


include/opcode/
	* aarch64.h (F_STRICT): New flag.

opcodes/
	* aarch64-opc.c (match_operands_qualifier): Handle F_STRICT.

gas/
	* config/tc-aarch64.c (find_best_match): Simplify, allowing an
	instruction with all-NIL qualifiers to fail to match.

diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
index 165ab9a..9591704 100644
--- a/gas/config/tc-aarch64.c
+++ b/gas/config/tc-aarch64.c
@@ -4182,7 +4182,7 @@ find_best_match (const aarch64_inst *instr,
     }
 
   max_num_matched = 0;
-  idx = -1;
+  idx = 0;
 
   /* For each pattern.  */
   for (i = 0; i < AARCH64_MAX_QLF_SEQ_NUM; ++i, ++qualifiers_list)
@@ -4194,9 +4194,6 @@ find_best_match (const aarch64_inst *instr,
       if (empty_qualifier_sequence_p (qualifiers) == TRUE)
 	{
 	  DEBUG_TRACE_IF (i == 0, "empty list of qualifier sequence");
-	  if (i != 0 && idx == -1)
-	    /* If nothing has been matched, return the 1st sequence.  */
-	    idx = 0;
 	  break;
 	}
 
diff --git a/include/opcode/aarch64.h b/include/opcode/aarch64.h
index 1e38749..24a2ddb 100644
--- a/include/opcode/aarch64.h
+++ b/include/opcode/aarch64.h
@@ -598,7 +598,9 @@ extern aarch64_opcode aarch64_opcode_table[];
 #define F_OD(X) (((X) & 0x7) << 24)
 /* Instruction has the field of 'sz'.  */
 #define F_LSE_SZ (1 << 27)
-/* Next bit is 28.  */
+/* Require an exact qualifier match, even for NIL qualifiers.  */
+#define F_STRICT (1ULL << 28)
+/* Next bit is 29.  */
 
 static inline bfd_boolean
 alias_opcode_p (const aarch64_opcode *opcode)
diff --git a/opcodes/aarch64-opc.c b/opcodes/aarch64-opc.c
index 322b991..d870fd6 100644
--- a/opcodes/aarch64-opc.c
+++ b/opcodes/aarch64-opc.c
@@ -854,7 +854,7 @@ aarch64_find_best_match (const aarch64_inst *inst,
 static int
 match_operands_qualifier (aarch64_inst *inst, bfd_boolean update_p)
 {
-  int i;
+  int i, nops;
   aarch64_opnd_qualifier_seq_t qualifiers;
 
   if (!aarch64_find_best_match (inst, inst->opcode->qualifiers_list, -1,
@@ -864,6 +864,15 @@ match_operands_qualifier (aarch64_inst *inst, bfd_boolean update_p)
       return 0;
     }
 
+  if (inst->opcode->flags & F_STRICT)
+    {
+      /* Require an exact qualifier match, even for NIL qualifiers.  */
+      nops = aarch64_num_of_operands (inst->opcode);
+      for (i = 0; i < nops; ++i)
+	if (inst->operands[i].qualifier != qualifiers[i])
+	  return FALSE;
+    }
+
   /* Update the qualifiers.  */
   if (update_p == TRUE)
     for (i = 0; i < AARCH64_MAX_OPND_NUM; ++i)

^ permalink raw reply	[flat|nested] 76+ messages in thread

* [AArch64][SVE 14/32] Make aarch64_logical_immediate_p take an element size
  2016-08-23  9:05 [AArch64][SVE 00/32] Add support for the ARMv8-A Scalable Vector Extension Richard Sandiford
                   ` (13 preceding siblings ...)
  2016-08-23  9:15 ` [AArch64][SVE 15/32] Add {insert,extract}_all_fields helpers Richard Sandiford
@ 2016-08-23  9:15 ` Richard Sandiford
  2016-08-25 13:48   ` Richard Earnshaw (lists)
  2016-08-23  9:16 ` [AArch64][SVE 18/32] Tidy definition of aarch64-opc.c:int_reg Richard Sandiford
                   ` (17 subsequent siblings)
  32 siblings, 1 reply; 76+ messages in thread
From: Richard Sandiford @ 2016-08-23  9:15 UTC (permalink / raw)
  To: binutils

SVE supports logical immediate operations on 8-bit, 16-bit and 32-bit
elements, treating them as aliases of operations on 64-bit elements in
which the immediate is replicated.  This patch therefore replaces the
"32-bit/64-bit" input to aarch64_logical_immediate_p with a more
general "number of bytes" input.

OK to install?

Thanks,
Richard


opcodes/
	* aarch64-opc.c (aarch64_logical_immediate_p): Replace is32
	with an esize parameter.
	(operand_general_constraint_met_p): Update accordingly.
	Fix misindented code.
	* aarch64-asm.c (aarch64_ins_limm): Update call to
	aarch64_logical_immediate_p.

diff --git a/opcodes/aarch64-asm.c b/opcodes/aarch64-asm.c
index 2430be5..8fbd66f 100644
--- a/opcodes/aarch64-asm.c
+++ b/opcodes/aarch64-asm.c
@@ -436,11 +436,11 @@ aarch64_ins_limm (const aarch64_operand *self, const aarch64_opnd_info *info,
 {
   aarch64_insn value;
   uint64_t imm = info->imm.value;
-  int is32 = aarch64_get_qualifier_esize (inst->operands[0].qualifier) == 4;
+  int esize = aarch64_get_qualifier_esize (inst->operands[0].qualifier);
 
   if (inst->opcode->op == OP_BIC)
     imm = ~imm;
-  if (aarch64_logical_immediate_p (imm, is32, &value) == FALSE)
+  if (aarch64_logical_immediate_p (imm, esize, &value) == FALSE)
     /* The constraint check should have guaranteed this wouldn't happen.  */
     assert (0);
 
diff --git a/opcodes/aarch64-opc.c b/opcodes/aarch64-opc.c
index d870fd6..84da821 100644
--- a/opcodes/aarch64-opc.c
+++ b/opcodes/aarch64-opc.c
@@ -1062,16 +1062,18 @@ build_immediate_table (void)
    be accepted by logical (immediate) instructions
    e.g. ORR <Xd|SP>, <Xn>, #<imm>.
 
-   IS32 indicates whether or not VALUE is a 32-bit immediate.
+   ESIZE is the number of bytes in the decoded immediate value.
    If ENCODING is not NULL, on the return of TRUE, the standard encoding for
    VALUE will be returned in *ENCODING.  */
 
 bfd_boolean
-aarch64_logical_immediate_p (uint64_t value, int is32, aarch64_insn *encoding)
+aarch64_logical_immediate_p (uint64_t value, int esize, aarch64_insn *encoding)
 {
   simd_imm_encoding imm_enc;
   const simd_imm_encoding *imm_encoding;
   static bfd_boolean initialized = FALSE;
+  uint64_t upper;
+  int i;
 
   DEBUG_TRACE ("enter with 0x%" PRIx64 "(%" PRIi64 "), is32: %d", value,
 	       value, is32);
@@ -1082,17 +1084,16 @@ aarch64_logical_immediate_p (uint64_t value, int is32, aarch64_insn *encoding)
       initialized = TRUE;
     }
 
-  if (is32)
-    {
-      /* Allow all zeros or all ones in top 32-bits, so that
-	 constant expressions like ~1 are permitted.  */
-      if (value >> 32 != 0 && value >> 32 != 0xffffffff)
-	return FALSE;
+  /* Allow all zeros or all ones in top bits, so that
+     constant expressions like ~1 are permitted.  */
+  upper = (uint64_t) -1 << (esize * 4) << (esize * 4);
+  if ((value & ~upper) != value && (value | upper) != value)
+    return FALSE;
 
-      /* Replicate the 32 lower bits to the 32 upper bits.  */
-      value &= 0xffffffff;
-      value |= value << 32;
-    }
+  /* Replicate to a full 64-bit value.  */
+  value &= ~upper;
+  for (i = esize * 8; i < 64; i *= 2)
+    value |= (value << i);
 
   imm_enc.imm = value;
   imm_encoding = (const simd_imm_encoding *)
@@ -1645,7 +1646,7 @@ operand_general_constraint_met_p (const aarch64_opnd_info *opnds, int idx,
 
 	case AARCH64_OPND_IMM_MOV:
 	    {
-	      int is32 = aarch64_get_qualifier_esize (opnds[0].qualifier) == 4;
+	      int esize = aarch64_get_qualifier_esize (opnds[0].qualifier);
 	      imm = opnd->imm.value;
 	      assert (idx == 1);
 	      switch (opcode->op)
@@ -1654,7 +1655,7 @@ operand_general_constraint_met_p (const aarch64_opnd_info *opnds, int idx,
 		  imm = ~imm;
 		  /* Fall through...  */
 		case OP_MOV_IMM_WIDE:
-		  if (!aarch64_wide_constant_p (imm, is32, NULL))
+		  if (!aarch64_wide_constant_p (imm, esize == 4, NULL))
 		    {
 		      set_other_error (mismatch_detail, idx,
 				       _("immediate out of range"));
@@ -1662,7 +1663,7 @@ operand_general_constraint_met_p (const aarch64_opnd_info *opnds, int idx,
 		    }
 		  break;
 		case OP_MOV_IMM_LOG:
-		  if (!aarch64_logical_immediate_p (imm, is32, NULL))
+		  if (!aarch64_logical_immediate_p (imm, esize, NULL))
 		    {
 		      set_other_error (mismatch_detail, idx,
 				       _("immediate out of range"));
@@ -1707,18 +1708,18 @@ operand_general_constraint_met_p (const aarch64_opnd_info *opnds, int idx,
 	  break;
 
 	case AARCH64_OPND_LIMM:
-	    {
-	      int is32 = opnds[0].qualifier == AARCH64_OPND_QLF_W;
-	      uint64_t uimm = opnd->imm.value;
-	      if (opcode->op == OP_BIC)
-		uimm = ~uimm;
-	      if (aarch64_logical_immediate_p (uimm, is32, NULL) == FALSE)
-		{
-		  set_other_error (mismatch_detail, idx,
-				   _("immediate out of range"));
-		  return 0;
-		}
-	    }
+	  {
+	    int esize = aarch64_get_qualifier_esize (opnds[0].qualifier);
+	    uint64_t uimm = opnd->imm.value;
+	    if (opcode->op == OP_BIC)
+	      uimm = ~uimm;
+	    if (aarch64_logical_immediate_p (uimm, esize, NULL) == FALSE)
+	      {
+		set_other_error (mismatch_detail, idx,
+				 _("immediate out of range"));
+		return 0;
+	      }
+	  }
 	  break;
 
 	case AARCH64_OPND_IMM0:

^ permalink raw reply	[flat|nested] 76+ messages in thread

* [AArch64][SVE 15/32] Add {insert,extract}_all_fields helpers
  2016-08-23  9:05 [AArch64][SVE 00/32] Add support for the ARMv8-A Scalable Vector Extension Richard Sandiford
                   ` (12 preceding siblings ...)
  2016-08-23  9:14 ` [AArch64][SVE 13/32] Add an F_STRICT flag Richard Sandiford
@ 2016-08-23  9:15 ` Richard Sandiford
  2016-08-25 13:50   ` Richard Earnshaw (lists)
  2016-08-23  9:15 ` [AArch64][SVE 14/32] Make aarch64_logical_immediate_p take an element size Richard Sandiford
                   ` (18 subsequent siblings)
  32 siblings, 1 reply; 76+ messages in thread
From: Richard Sandiford @ 2016-08-23  9:15 UTC (permalink / raw)
  To: binutils

Several of the SVE operands use the aarch64_operand fields array
to store the fields that make up the operand, rather than hard-coding
the names in the C code.  This patch adds helpers for inserting and
extracting those fields.
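
For example (purely illustrative, reusing the existing b5:b40 split from
TBZ), extract_all_fields is equivalent to concatenating the listed fields,
with the last field supplying the least significant bits:

/* What extract_all_fields does for a hypothetical operand whose fields
   array is { FLD_b5, FLD_b40, FLD_NIL }: b5 provides the top bit and
   b40 the low five bits of the extracted value.  */
static aarch64_insn
extract_b5_b40 (aarch64_insn code)
{
  aarch64_insn value;

  value = extract_field (FLD_b5, code, 0);
  value <<= fields[FLD_b40].width;
  value |= extract_field (FLD_b40, code, 0);
  return value;
}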

OK to install?

Thanks,
Richard


opcodes/
	* aarch64-asm.c: Include libiberty.h.
	(insert_fields): New function.
	(aarch64_ins_imm): Use it.
	* aarch64-dis.c (extract_fields): New function.
	(aarch64_ext_imm): Use it.

diff --git a/opcodes/aarch64-asm.c b/opcodes/aarch64-asm.c
index 8fbd66f..3b0a383 100644
--- a/opcodes/aarch64-asm.c
+++ b/opcodes/aarch64-asm.c
@@ -20,6 +20,7 @@
 
 #include "sysdep.h"
 #include <stdarg.h>
+#include "libiberty.h"
 #include "aarch64-asm.h"
 
 /* Utilities.  */
@@ -55,6 +56,25 @@ insert_fields (aarch64_insn *code, aarch64_insn value, aarch64_insn mask, ...)
   va_end (va);
 }
 
+/* Insert a raw field value VALUE into all fields in SELF->fields.
+   The least significant bit goes in the final field.  */
+
+static void
+insert_all_fields (const aarch64_operand *self, aarch64_insn *code,
+		   aarch64_insn value)
+{
+  unsigned int i;
+  enum aarch64_field_kind kind;
+
+  for (i = ARRAY_SIZE (self->fields); i-- > 0; )
+    if (self->fields[i] != FLD_NIL)
+      {
+	kind = self->fields[i];
+	insert_field (kind, code, value, 0);
+	value >>= fields[kind].width;
+      }
+}
+
 /* Operand inserters.  */
 
 /* Insert register number.  */
@@ -318,17 +338,11 @@ aarch64_ins_imm (const aarch64_operand *self, const aarch64_opnd_info *info,
 		 const aarch64_inst *inst ATTRIBUTE_UNUSED)
 {
   int64_t imm;
-  /* Maximum of two fields to insert.  */
-  assert (self->fields[2] == FLD_NIL);
 
   imm = info->imm.value;
   if (operand_need_shift_by_two (self))
     imm >>= 2;
-  if (self->fields[1] == FLD_NIL)
-    insert_field (self->fields[0], code, imm, 0);
-  else
-    /* e.g. TBZ b5:b40.  */
-    insert_fields (code, imm, 0, 2, self->fields[1], self->fields[0]);
+  insert_all_fields (self, code, imm);
   return NULL;
 }
 
diff --git a/opcodes/aarch64-dis.c b/opcodes/aarch64-dis.c
index 9ffc713..67daa66 100644
--- a/opcodes/aarch64-dis.c
+++ b/opcodes/aarch64-dis.c
@@ -145,6 +145,26 @@ extract_fields (aarch64_insn code, aarch64_insn mask, ...)
   return value;
 }
 
+/* Extract the value of all fields in SELF->fields from instruction CODE.
+   The least significant bit comes from the final field.  */
+
+static aarch64_insn
+extract_all_fields (const aarch64_operand *self, aarch64_insn code)
+{
+  aarch64_insn value;
+  unsigned int i;
+  enum aarch64_field_kind kind;
+
+  value = 0;
+  for (i = 0; i < ARRAY_SIZE (self->fields) && self->fields[i] != FLD_NIL; ++i)
+    {
+      kind = self->fields[i];
+      value <<= fields[kind].width;
+      value |= extract_field (kind, code, 0);
+    }
+  return value;
+}
+
 /* Sign-extend bit I of VALUE.  */
 static inline int32_t
 sign_extend (aarch64_insn value, unsigned i)
@@ -575,14 +595,8 @@ aarch64_ext_imm (const aarch64_operand *self, aarch64_opnd_info *info,
 		 const aarch64_inst *inst ATTRIBUTE_UNUSED)
 {
   int64_t imm;
-  /* Maximum of two fields to extract.  */
-  assert (self->fields[2] == FLD_NIL);
 
-  if (self->fields[1] == FLD_NIL)
-    imm = extract_field (self->fields[0], code, 0);
-  else
-    /* e.g. TBZ b5:b40.  */
-    imm = extract_fields (code, 0, 2, self->fields[0], self->fields[1]);
+  imm = extract_all_fields (self, code);
 
   if (info->type == AARCH64_OPND_FPIMM)
     info->imm.is_fp = 1;

^ permalink raw reply	[flat|nested] 76+ messages in thread

* [AArch64][SVE 17/32] Add a prefix parameter to print_register_list
  2016-08-23  9:05 [AArch64][SVE 00/32] Add support for the ARMv8-A Scalable Vector Extension Richard Sandiford
                   ` (16 preceding siblings ...)
  2016-08-23  9:16 ` [AArch64][SVE 16/32] Use specific insert/extract methods for fpimm Richard Sandiford
@ 2016-08-23  9:16 ` Richard Sandiford
  2016-08-25 13:53   ` Richard Earnshaw (lists)
  2016-08-23  9:17 ` [AArch64][SVE 19/32] Refactor address-printing code Richard Sandiford
                   ` (14 subsequent siblings)
  32 siblings, 1 reply; 76+ messages in thread
From: Richard Sandiford @ 2016-08-23  9:16 UTC (permalink / raw)
  To: binutils

This patch generalises the interface to print_register_list so
that it can print register lists involving SVE z registers as
well as AdvSIMD v ones.
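
For instance, assuming BUF, SIZE and OPND as in aarch64_print_operand,
the existing AdvSIMD caller and the later SVE callers differ only in the
prefix (the "z" call is only an illustration of the intended use; it is
added by later patches in the series):

print_register_list (buf, size, opnd, "v");   /* e.g. "{v0.4s, v1.4s}" */
print_register_list (buf, size, opnd, "z");   /* e.g. "{z0.s, z1.s}" */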

OK to install?

Thanks,
Richard


opcodes/
	* aarch64-opc.c (print_register_list): Add a prefix parameter.
	(aarch64_print_operand): Update accordingly.

diff --git a/opcodes/aarch64-opc.c b/opcodes/aarch64-opc.c
index 84da821..6eac70a 100644
--- a/opcodes/aarch64-opc.c
+++ b/opcodes/aarch64-opc.c
@@ -2261,9 +2261,11 @@ expand_fp_imm (int size, uint32_t imm8)
 }
 
 /* Produce the string representation of the register list operand *OPND
-   in the buffer pointed by BUF of size SIZE.  */
+   in the buffer pointed by BUF of size SIZE.  PREFIX is the part of
+   the register name that comes before the register number, such as "v".  */
 static void
-print_register_list (char *buf, size_t size, const aarch64_opnd_info *opnd)
+print_register_list (char *buf, size_t size, const aarch64_opnd_info *opnd,
+		     const char *prefix)
 {
   const int num_regs = opnd->reglist.num_regs;
   const int first_reg = opnd->reglist.first_regno;
@@ -2284,8 +2286,8 @@ print_register_list (char *buf, size_t size, const aarch64_opnd_info *opnd)
      more than two registers in the list, and the register numbers
      are monotonically increasing in increments of one.  */
   if (num_regs > 2 && last_reg > first_reg)
-    snprintf (buf, size, "{v%d.%s-v%d.%s}%s", first_reg, qlf_name,
-	      last_reg, qlf_name, tb);
+    snprintf (buf, size, "{%s%d.%s-%s%d.%s}%s", prefix, first_reg, qlf_name,
+	      prefix, last_reg, qlf_name, tb);
   else
     {
       const int reg0 = first_reg;
@@ -2296,20 +2298,21 @@ print_register_list (char *buf, size_t size, const aarch64_opnd_info *opnd)
       switch (num_regs)
 	{
 	case 1:
-	  snprintf (buf, size, "{v%d.%s}%s", reg0, qlf_name, tb);
+	  snprintf (buf, size, "{%s%d.%s}%s", prefix, reg0, qlf_name, tb);
 	  break;
 	case 2:
-	  snprintf (buf, size, "{v%d.%s, v%d.%s}%s", reg0, qlf_name,
-		    reg1, qlf_name, tb);
+	  snprintf (buf, size, "{%s%d.%s, %s%d.%s}%s", prefix, reg0, qlf_name,
+		    prefix, reg1, qlf_name, tb);
 	  break;
 	case 3:
-	  snprintf (buf, size, "{v%d.%s, v%d.%s, v%d.%s}%s", reg0, qlf_name,
-		    reg1, qlf_name, reg2, qlf_name, tb);
+	  snprintf (buf, size, "{%s%d.%s, %s%d.%s, %s%d.%s}%s",
+		    prefix, reg0, qlf_name, prefix, reg1, qlf_name,
+		    prefix, reg2, qlf_name, tb);
 	  break;
 	case 4:
-	  snprintf (buf, size, "{v%d.%s, v%d.%s, v%d.%s, v%d.%s}%s",
-		    reg0, qlf_name, reg1, qlf_name, reg2, qlf_name,
-		    reg3, qlf_name, tb);
+	  snprintf (buf, size, "{%s%d.%s, %s%d.%s, %s%d.%s, %s%d.%s}%s",
+		    prefix, reg0, qlf_name, prefix, reg1, qlf_name,
+		    prefix, reg2, qlf_name, prefix, reg3, qlf_name, tb);
 	  break;
 	}
     }
@@ -2513,7 +2516,7 @@ aarch64_print_operand (char *buf, size_t size, bfd_vma pc,
     case AARCH64_OPND_LVt:
     case AARCH64_OPND_LVt_AL:
     case AARCH64_OPND_LEt:
-      print_register_list (buf, size, opnd);
+      print_register_list (buf, size, opnd, "v");
       break;
 
     case AARCH64_OPND_Cn:

^ permalink raw reply	[flat|nested] 76+ messages in thread

* [AArch64][SVE 16/32] Use specific insert/extract methods for fpimm
  2016-08-23  9:05 [AArch64][SVE 00/32] Add support for the ARMv8-A Scalable Vector Extension Richard Sandiford
                   ` (15 preceding siblings ...)
  2016-08-23  9:16 ` [AArch64][SVE 18/32] Tidy definition of aarch64-opc.c:int_reg Richard Sandiford
@ 2016-08-23  9:16 ` Richard Sandiford
  2016-08-25 13:52   ` Richard Earnshaw (lists)
  2016-08-23  9:16 ` [AArch64][SVE 17/32] Add a prefix parameter to print_register_list Richard Sandiford
                   ` (15 subsequent siblings)
  32 siblings, 1 reply; 76+ messages in thread
From: Richard Sandiford @ 2016-08-23  9:16 UTC (permalink / raw)
  To: binutils

FPIMM used the normal "imm" insert/extract methods, with a specific
test for FPIMM in the extract method.  SVE needs to use the same
extractors, so rather than add extra checks for specific operand types,
it seemed cleaner to use a separate insert/extract method.

OK to install?

Thanks,
Richard


opcodes/
	* aarch64-tbl.h (AARCH64_OPERANDS): Use fpimm rather than imm
	for FPIMM.
	* aarch64-asm.h (ins_fpimm): New inserter.
	* aarch64-asm.c (aarch64_ins_fpimm): New function.
	* aarch64-asm-2.c: Regenerate.
	* aarch64-dis.h (ext_fpimm): New extractor.
	* aarch64-dis.c (aarch64_ext_imm): Remove fpimm test.
	(aarch64_ext_fpimm): New function.
	* aarch64-dis-2.c: Regenerate.

diff --git a/opcodes/aarch64-asm-2.c b/opcodes/aarch64-asm-2.c
index 605bf08..439dd3d 100644
--- a/opcodes/aarch64-asm-2.c
+++ b/opcodes/aarch64-asm-2.c
@@ -500,7 +500,6 @@ aarch64_insert_operand (const aarch64_operand *self,
     case 34:
       return aarch64_ins_ldst_elemlist (self, info, code, inst);
     case 37:
-    case 46:
     case 47:
     case 48:
     case 49:
@@ -525,6 +524,8 @@ aarch64_insert_operand (const aarch64_operand *self,
     case 41:
     case 42:
       return aarch64_ins_advsimd_imm_modified (self, info, code, inst);
+    case 46:
+      return aarch64_ins_fpimm (self, info, code, inst);
     case 59:
       return aarch64_ins_limm (self, info, code, inst);
     case 60:
diff --git a/opcodes/aarch64-asm.c b/opcodes/aarch64-asm.c
index 3b0a383..f291495 100644
--- a/opcodes/aarch64-asm.c
+++ b/opcodes/aarch64-asm.c
@@ -417,6 +417,16 @@ aarch64_ins_advsimd_imm_modified (const aarch64_operand *self ATTRIBUTE_UNUSED,
   return NULL;
 }
 
+/* Insert fields for an 8-bit floating-point immediate.  */
+const char *
+aarch64_ins_fpimm (const aarch64_operand *self, const aarch64_opnd_info *info,
+		   aarch64_insn *code,
+		   const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  insert_all_fields (self, code, info->imm.value);
+  return NULL;
+}
+
 /* Insert #<fbits> for the immediate operand in fp fix-point instructions,
    e.g.  SCVTF <Dd>, <Wn>, #<fbits>.  */
 const char *
diff --git a/opcodes/aarch64-asm.h b/opcodes/aarch64-asm.h
index ad9183d..3211aff 100644
--- a/opcodes/aarch64-asm.h
+++ b/opcodes/aarch64-asm.h
@@ -50,6 +50,7 @@ AARCH64_DECL_OPD_INSERTER (ins_advsimd_imm_shift);
 AARCH64_DECL_OPD_INSERTER (ins_imm);
 AARCH64_DECL_OPD_INSERTER (ins_imm_half);
 AARCH64_DECL_OPD_INSERTER (ins_advsimd_imm_modified);
+AARCH64_DECL_OPD_INSERTER (ins_fpimm);
 AARCH64_DECL_OPD_INSERTER (ins_fbits);
 AARCH64_DECL_OPD_INSERTER (ins_aimm);
 AARCH64_DECL_OPD_INSERTER (ins_limm);
diff --git a/opcodes/aarch64-dis-2.c b/opcodes/aarch64-dis-2.c
index 8e85dbf..a86a84d 100644
--- a/opcodes/aarch64-dis-2.c
+++ b/opcodes/aarch64-dis-2.c
@@ -10450,7 +10450,6 @@ aarch64_extract_operand (const aarch64_operand *self,
     case 34:
       return aarch64_ext_ldst_elemlist (self, info, code, inst);
     case 37:
-    case 46:
     case 47:
     case 48:
     case 49:
@@ -10478,6 +10477,8 @@ aarch64_extract_operand (const aarch64_operand *self,
       return aarch64_ext_advsimd_imm_modified (self, info, code, inst);
     case 43:
       return aarch64_ext_shll_imm (self, info, code, inst);
+    case 46:
+      return aarch64_ext_fpimm (self, info, code, inst);
     case 59:
       return aarch64_ext_limm (self, info, code, inst);
     case 60:
diff --git a/opcodes/aarch64-dis.c b/opcodes/aarch64-dis.c
index 67daa66..4c3b521 100644
--- a/opcodes/aarch64-dis.c
+++ b/opcodes/aarch64-dis.c
@@ -598,9 +598,6 @@ aarch64_ext_imm (const aarch64_operand *self, aarch64_opnd_info *info,
 
   imm = extract_all_fields (self, code);
 
-  if (info->type == AARCH64_OPND_FPIMM)
-    info->imm.is_fp = 1;
-
   if (operand_need_sign_extension (self))
     imm = sign_extend (imm, get_operand_fields_width (self) - 1);
 
@@ -695,6 +692,17 @@ aarch64_ext_advsimd_imm_modified (const aarch64_operand *self ATTRIBUTE_UNUSED,
   return 1;
 }
 
+/* Decode an 8-bit floating-point immediate.  */
+int
+aarch64_ext_fpimm (const aarch64_operand *self, aarch64_opnd_info *info,
+		   const aarch64_insn code,
+		   const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  info->imm.value = extract_all_fields (self, code);
+  info->imm.is_fp = 1;
+  return 1;
+}
+
 /* Decode scale for e.g. SCVTF <Dd>, <Wn>, #<fbits>.  */
 int
 aarch64_ext_fbits (const aarch64_operand *self ATTRIBUTE_UNUSED,
diff --git a/opcodes/aarch64-dis.h b/opcodes/aarch64-dis.h
index 9be5d7f..1f10157 100644
--- a/opcodes/aarch64-dis.h
+++ b/opcodes/aarch64-dis.h
@@ -72,6 +72,7 @@ AARCH64_DECL_OPD_EXTRACTOR (ext_shll_imm);
 AARCH64_DECL_OPD_EXTRACTOR (ext_imm);
 AARCH64_DECL_OPD_EXTRACTOR (ext_imm_half);
 AARCH64_DECL_OPD_EXTRACTOR (ext_advsimd_imm_modified);
+AARCH64_DECL_OPD_EXTRACTOR (ext_fpimm);
 AARCH64_DECL_OPD_EXTRACTOR (ext_fbits);
 AARCH64_DECL_OPD_EXTRACTOR (ext_aimm);
 AARCH64_DECL_OPD_EXTRACTOR (ext_limm);
diff --git a/opcodes/aarch64-tbl.h b/opcodes/aarch64-tbl.h
index b82678f..9a831e4 100644
--- a/opcodes/aarch64-tbl.h
+++ b/opcodes/aarch64-tbl.h
@@ -2738,7 +2738,7 @@ struct aarch64_opcode aarch64_opcode_table[] =
       "an immediate shift amount of 8, 16 or 32")			\
     X(IMMEDIATE, 0, 0, "IMM0", 0, F(), "0")				\
     X(IMMEDIATE, 0, 0, "FPIMM0", 0, F(), "0.0")				\
-    Y(IMMEDIATE, imm, "FPIMM", 0, F(FLD_imm8),				\
+    Y(IMMEDIATE, fpimm, "FPIMM", 0, F(FLD_imm8),			\
       "an 8-bit floating-point constant")				\
     Y(IMMEDIATE, imm, "IMMR", 0, F(FLD_immr),				\
       "the right rotate amount")					\

^ permalink raw reply	[flat|nested] 76+ messages in thread

* [AArch64][SVE 18/32] Tidy definition of aarch64-opc.c:int_reg
  2016-08-23  9:05 [AArch64][SVE 00/32] Add support for the ARMv8-A Scalable Vector Extension Richard Sandiford
                   ` (14 preceding siblings ...)
  2016-08-23  9:15 ` [AArch64][SVE 14/32] Make aarch64_logical_immediate_p take an element size Richard Sandiford
@ 2016-08-23  9:16 ` Richard Sandiford
  2016-08-25 13:55   ` Richard Earnshaw (lists)
  2016-08-23  9:16 ` [AArch64][SVE 16/32] Use specific insert/extract methods for fpimm Richard Sandiford
                   ` (16 subsequent siblings)
  32 siblings, 1 reply; 76+ messages in thread
From: Richard Sandiford @ 2016-08-23  9:16 UTC (permalink / raw)
  To: binutils

Use a macro to define 31 regular registers followed by a supplied
value for 0b11111.  The SVE code will also use this for vector base
and offset registers.
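
For illustration, assuming the BANK and R64 definitions from the patch are
in scope (and a made-up array name), one bank expands to the same 32-entry
row that was previously written out by hand:

#define R64(X) "x" #X
/* Expands to { "x0", "x1", ..., "x30", "sp" }.  */
static const char *x_regs[32] = BANK (R64, "sp");
#undef R64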

OK to install?

Thanks,
Richard


opcodes/
	* aarch64-opc.c (BANK): New macro.
	(R32, R64): Take a register number as an argument.
	(int_reg): Use BANK.

diff --git a/opcodes/aarch64-opc.c b/opcodes/aarch64-opc.c
index 6eac70a..3f9be62 100644
--- a/opcodes/aarch64-opc.c
+++ b/opcodes/aarch64-opc.c
@@ -2149,32 +2149,25 @@ aarch64_operand_index (const enum aarch64_opnd *operands, enum aarch64_opnd oper
   return -1;
 }
 \f
+/* R0...R30, followed by FOR31.  */
+#define BANK(R, FOR31) \
+  { R  (0), R  (1), R  (2), R  (3), R  (4), R  (5), R  (6), R  (7), \
+    R  (8), R  (9), R (10), R (11), R (12), R (13), R (14), R (15), \
+    R (16), R (17), R (18), R (19), R (20), R (21), R (22), R (23), \
+    R (24), R (25), R (26), R (27), R (28), R (29), R (30),  FOR31 }
 /* [0][0]  32-bit integer regs with sp   Wn
    [0][1]  64-bit integer regs with sp   Xn  sf=1
    [1][0]  32-bit integer regs with #0   Wn
    [1][1]  64-bit integer regs with #0   Xn  sf=1 */
 static const char *int_reg[2][2][32] = {
-#define R32 "w"
-#define R64 "x"
-  { { R32  "0", R32  "1", R32  "2", R32  "3", R32  "4", R32  "5", R32  "6", R32  "7",
-      R32  "8", R32  "9", R32 "10", R32 "11", R32 "12", R32 "13", R32 "14", R32 "15",
-      R32 "16", R32 "17", R32 "18", R32 "19", R32 "20", R32 "21", R32 "22", R32 "23",
-      R32 "24", R32 "25", R32 "26", R32 "27", R32 "28", R32 "29", R32 "30",    "wsp" },
-    { R64  "0", R64  "1", R64  "2", R64  "3", R64  "4", R64  "5", R64  "6", R64  "7",
-      R64  "8", R64  "9", R64 "10", R64 "11", R64 "12", R64 "13", R64 "14", R64 "15",
-      R64 "16", R64 "17", R64 "18", R64 "19", R64 "20", R64 "21", R64 "22", R64 "23",
-      R64 "24", R64 "25", R64 "26", R64 "27", R64 "28", R64 "29", R64 "30",     "sp" } },
-  { { R32  "0", R32  "1", R32  "2", R32  "3", R32  "4", R32  "5", R32  "6", R32  "7",
-      R32  "8", R32  "9", R32 "10", R32 "11", R32 "12", R32 "13", R32 "14", R32 "15",
-      R32 "16", R32 "17", R32 "18", R32 "19", R32 "20", R32 "21", R32 "22", R32 "23",
-      R32 "24", R32 "25", R32 "26", R32 "27", R32 "28", R32 "29", R32 "30", R32 "zr" },
-    { R64  "0", R64  "1", R64  "2", R64  "3", R64  "4", R64  "5", R64  "6", R64  "7",
-      R64  "8", R64  "9", R64 "10", R64 "11", R64 "12", R64 "13", R64 "14", R64 "15",
-      R64 "16", R64 "17", R64 "18", R64 "19", R64 "20", R64 "21", R64 "22", R64 "23",
-      R64 "24", R64 "25", R64 "26", R64 "27", R64 "28", R64 "29", R64 "30", R64 "zr" } }
+#define R32(X) "w" #X
+#define R64(X) "x" #X
+  { BANK (R32, "wsp"), BANK (R64, "sp") },
+  { BANK (R32, "wzr"), BANK (R64, "xzr") }
 #undef R64
 #undef R32
 };
+#undef BANK
 
 /* Return the integer register name.
    if SP_REG_P is not 0, R31 is an SP reg, other R31 is the zero reg.  */

^ permalink raw reply	[flat|nested] 76+ messages in thread

* [AArch64][SVE 19/32] Refactor address-printing code
  2016-08-23  9:05 [AArch64][SVE 00/32] Add support for the ARMv8-A Scalable Vector Extension Richard Sandiford
                   ` (17 preceding siblings ...)
  2016-08-23  9:16 ` [AArch64][SVE 17/32] Add a prefix parameter to print_register_list Richard Sandiford
@ 2016-08-23  9:17 ` Richard Sandiford
  2016-08-25 13:57   ` Richard Earnshaw (lists)
  2016-08-23  9:18 ` [AArch64][SVE 20/32] Add support for tied operands Richard Sandiford
                   ` (13 subsequent siblings)
  32 siblings, 1 reply; 76+ messages in thread
From: Richard Sandiford @ 2016-08-23  9:17 UTC (permalink / raw)
  To: binutils

SVE adds addresses in which the base or offset is a vector register.
The addresses otherwise have the same kind of form as normal AArch64
addresses, including things like SXTW with or without a shift, UXTW
with or without a shift, and LSL.

This patch therefore refactors the address-printing code so that it
can cope with both scalar and vector registers.
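
The existing AArch64 callers then just pass the scalar register names
explicitly; for example, mirroring the calls in the patch below and
assuming BUF, SIZE and OPND as in aarch64_print_operand:

/* Register + immediate, e.g. "[x1,#16]" or "[x1],#16".  */
print_immediate_offset_address
  (buf, size, opnd, get_64bit_int_reg_name (opnd->addr.base_regno, 1));

/* Register + register, e.g. "[x1,w2,uxtw #2]".  */
print_register_offset_address
  (buf, size, opnd, get_64bit_int_reg_name (opnd->addr.base_regno, 1),
   get_offset_int_reg_name (opnd));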

OK to install?

Thanks,
Richard


opcodes/
	* aarch64-opc.c (get_offset_int_reg_name): New function.
	(print_immediate_offset_address): Likewise.
	(print_register_offset_address): Take the base and offset
	registers as parameters.
	(aarch64_print_operand): Update caller accordingly.  Use
	print_immediate_offset_address.

diff --git a/opcodes/aarch64-opc.c b/opcodes/aarch64-opc.c
index 3f9be62..7a73c7e 100644
--- a/opcodes/aarch64-opc.c
+++ b/opcodes/aarch64-opc.c
@@ -2189,6 +2189,27 @@ get_64bit_int_reg_name (int regno, int sp_reg_p)
   return int_reg[has_zr][1][regno];
 }
 
+/* Get the name of the integer offset register in OPND, using the shift type
+   to decide whether it's a word or doubleword.  */
+
+static inline const char *
+get_offset_int_reg_name (const aarch64_opnd_info *opnd)
+{
+  switch (opnd->shifter.kind)
+    {
+    case AARCH64_MOD_UXTW:
+    case AARCH64_MOD_SXTW:
+      return get_int_reg_name (opnd->addr.offset.regno, AARCH64_OPND_QLF_W, 0);
+
+    case AARCH64_MOD_LSL:
+    case AARCH64_MOD_SXTX:
+      return get_int_reg_name (opnd->addr.offset.regno, AARCH64_OPND_QLF_X, 0);
+
+    default:
+      abort ();
+    }
+}
+
 /* Types for expanding an encoded 8-bit value to a floating-point value.  */
 
 typedef union
@@ -2311,28 +2332,43 @@ print_register_list (char *buf, size_t size, const aarch64_opnd_info *opnd,
     }
 }
 
+/* Print the register+immediate address in OPND to BUF, which has SIZE
+   characters.  BASE is the name of the base register.  */
+
+static void
+print_immediate_offset_address (char *buf, size_t size,
+				const aarch64_opnd_info *opnd,
+				const char *base)
+{
+  if (opnd->addr.writeback)
+    {
+      if (opnd->addr.preind)
+	snprintf (buf, size, "[%s,#%d]!", base, opnd->addr.offset.imm);
+      else
+	snprintf (buf, size, "[%s],#%d", base, opnd->addr.offset.imm);
+    }
+  else
+    {
+      if (opnd->addr.offset.imm)
+	snprintf (buf, size, "[%s,#%d]", base, opnd->addr.offset.imm);
+      else
+	snprintf (buf, size, "[%s]", base);
+    }
+}
+
 /* Produce the string representation of the register offset address operand
-   *OPND in the buffer pointed by BUF of size SIZE.  */
+   *OPND in the buffer pointed by BUF of size SIZE.  BASE and OFFSET are
+   the names of the base and offset registers.  */
 static void
 print_register_offset_address (char *buf, size_t size,
-			       const aarch64_opnd_info *opnd)
+			       const aarch64_opnd_info *opnd,
+			       const char *base, const char *offset)
 {
   char tb[16];			/* Temporary buffer.  */
-  bfd_boolean lsl_p = FALSE;	/* Is LSL shift operator?  */
-  bfd_boolean wm_p = FALSE;	/* Should Rm be Wm?  */
   bfd_boolean print_extend_p = TRUE;
   bfd_boolean print_amount_p = TRUE;
   const char *shift_name = aarch64_operand_modifiers[opnd->shifter.kind].name;
 
-  switch (opnd->shifter.kind)
-    {
-    case AARCH64_MOD_UXTW: wm_p = TRUE; break;
-    case AARCH64_MOD_LSL : lsl_p = TRUE; break;
-    case AARCH64_MOD_SXTW: wm_p = TRUE; break;
-    case AARCH64_MOD_SXTX: break;
-    default: assert (0);
-    }
-
   if (!opnd->shifter.amount && (opnd->qualifier != AARCH64_OPND_QLF_S_B
 				|| !opnd->shifter.amount_present))
     {
@@ -2341,7 +2377,7 @@ print_register_offset_address (char *buf, size_t size,
       print_amount_p = FALSE;
       /* Likewise, no need to print the shift operator LSL in such a
 	 situation.  */
-      if (lsl_p)
+      if (opnd->shifter.kind == AARCH64_MOD_LSL)
 	print_extend_p = FALSE;
     }
 
@@ -2356,12 +2392,7 @@ print_register_offset_address (char *buf, size_t size,
   else
     tb[0] = '\0';
 
-  snprintf (buf, size, "[%s,%s%s]",
-	    get_64bit_int_reg_name (opnd->addr.base_regno, 1),
-	    get_int_reg_name (opnd->addr.offset.regno,
-			      wm_p ? AARCH64_OPND_QLF_W : AARCH64_OPND_QLF_X,
-			      0 /* sp_reg_p */),
-	    tb);
+  snprintf (buf, size, "[%s,%s%s]", base, offset, tb);
 }
 
 /* Generate the string representation of the operand OPNDS[IDX] for OPCODE
@@ -2668,27 +2699,16 @@ aarch64_print_operand (char *buf, size_t size, bfd_vma pc,
       break;
 
     case AARCH64_OPND_ADDR_REGOFF:
-      print_register_offset_address (buf, size, opnd);
+      print_register_offset_address
+	(buf, size, opnd, get_64bit_int_reg_name (opnd->addr.base_regno, 1),
+	 get_offset_int_reg_name (opnd));
       break;
 
     case AARCH64_OPND_ADDR_SIMM7:
     case AARCH64_OPND_ADDR_SIMM9:
     case AARCH64_OPND_ADDR_SIMM9_2:
-      name = get_64bit_int_reg_name (opnd->addr.base_regno, 1);
-      if (opnd->addr.writeback)
-	{
-	  if (opnd->addr.preind)
-	    snprintf (buf, size, "[%s,#%d]!", name, opnd->addr.offset.imm);
-	  else
-	    snprintf (buf, size, "[%s],#%d", name, opnd->addr.offset.imm);
-	}
-      else
-	{
-	  if (opnd->addr.offset.imm)
-	    snprintf (buf, size, "[%s,#%d]", name, opnd->addr.offset.imm);
-	  else
-	    snprintf (buf, size, "[%s]", name);
-	}
+      print_immediate_offset_address
+	(buf, size, opnd, get_64bit_int_reg_name (opnd->addr.base_regno, 1));
       break;
 
     case AARCH64_OPND_ADDR_UIMM12:

^ permalink raw reply	[flat|nested] 76+ messages in thread

* [AArch64][SVE 21/32] Add Zn and Pn registers
  2016-08-23  9:05 [AArch64][SVE 00/32] Add support for the ARMv8-A Scalable Vector Extension Richard Sandiford
                   ` (19 preceding siblings ...)
  2016-08-23  9:18 ` [AArch64][SVE 20/32] Add support for tied operands Richard Sandiford
@ 2016-08-23  9:18 ` Richard Sandiford
  2016-08-25 14:07   ` Richard Earnshaw (lists)
  2016-08-23  9:19 ` [AArch64][SVE 22/32] Add qualifiers for merging and zeroing predication Richard Sandiford
                   ` (11 subsequent siblings)
  32 siblings, 1 reply; 76+ messages in thread
From: Richard Sandiford @ 2016-08-23  9:18 UTC (permalink / raw)
  To: binutils

This patch adds the Zn and Pn registers, and associated fields and
operands.
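
One detail worth calling out is the indexed form Zn[MM] used by DUP: the
index and element size share a 7-bit "triangular" encoding in tszh:imm5.
A rough sketch of the decoding, mirroring aarch64_ext_sve_index below
(the helper name is made up):

/* VAL is the 7-bit tszh:imm5 value.  The position of the lowest set bit
   encodes the element size (discarded here); the bits above it give the
   index.  Returns 0 for an unallocated encoding.  */
static int
decode_sve_index (unsigned int val, unsigned int *index)
{
  if ((val & 15) == 0)
    return 0;
  while ((val & 1) == 0)
    val >>= 1;
  *index = val >> 1;
  return 1;
}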

OK to install?

Thanks,
Richard


include/opcode/
	* aarch64.h (AARCH64_OPND_CLASS_SVE_REG): New aarch64_operand_class.
	(AARCH64_OPND_CLASS_PRED_REG): Likewise.
	(AARCH64_OPND_SVE_Pd, AARCH64_OPND_SVE_Pg3, AARCH64_OPND_SVE_Pg4_5)
	(AARCH64_OPND_SVE_Pg4_10, AARCH64_OPND_SVE_Pg4_16)
	(AARCH64_OPND_SVE_Pm, AARCH64_OPND_SVE_Pn, AARCH64_OPND_SVE_Pt)
	(AARCH64_OPND_SVE_Za_5, AARCH64_OPND_SVE_Za_16, AARCH64_OPND_SVE_Zd)
	(AARCH64_OPND_SVE_Zm_5, AARCH64_OPND_SVE_Zm_16, AARCH64_OPND_SVE_Zn)
	(AARCH64_OPND_SVE_Zn_INDEX, AARCH64_OPND_SVE_ZnxN)
	(AARCH64_OPND_SVE_Zt, AARCH64_OPND_SVE_ZtxN): New aarch64_opnds.

opcodes/
	* aarch64-tbl.h (AARCH64_OPERANDS): Add entries for new SVE operands.
	* aarch64-opc.h (FLD_SVE_Pd, FLD_SVE_Pg3, FLD_SVE_Pg4_5)
	(FLD_SVE_Pg4_10, FLD_SVE_Pg4_16, FLD_SVE_Pm, FLD_SVE_Pn, FLD_SVE_Pt)
	(FLD_SVE_Za_5, FLD_SVE_Za_16, FLD_SVE_Zd, FLD_SVE_Zm_5, FLD_SVE_Zm_16)
	(FLD_SVE_Zn, FLD_SVE_Zt, FLD_SVE_tzsh): New aarch64_field_kinds.
	* aarch64-opc.c (fields): Add corresponding entries here.
	(operand_general_constraint_met_p): Check that SVE register lists
	have the correct length.  Check the ranges of SVE index registers.
	Check for cases where p8-p15 are used in 3-bit predicate fields.
	(aarch64_print_operand): Handle the new SVE operands.
	* aarch64-opc-2.c: Regenerate.
	* aarch64-asm.h (ins_sve_index, ins_sve_reglist): New inserters.
	* aarch64-asm.c (aarch64_ins_sve_index): New function.
	(aarch64_ins_sve_reglist): Likewise.
	* aarch64-asm-2.c: Regenerate.
	* aarch64-dis.h (ext_sve_index, ext_sve_reglist): New extractors.
	* aarch64-dis.c (aarch64_ext_sve_index): New function.
	(aarch64_ext_sve_reglist): Likewise.
	* aarch64-dis-2.c: Regenerate.

gas/
	* config/tc-aarch64.c (NTA_HASVARWIDTH): New macro.
	(AARCH64_REG_TYPES): Add ZN and PN.
	(get_reg_expected_msg): Handle them.
	(aarch64_check_reg_type): Likewise.  Update comment for
	REG_TYPE_R_Z_BHSDQ_V.
	(parse_vector_type_for_operand): Add a reg_type parameter.
	Skip the width for Zn and Pn registers.
	(parse_typed_reg): Extend vector handling to Zn and Pn.  Update the
	call to parse_vector_type_for_operand.  Set NTA_HASVARWIDTH for Zn and Pn,
	expecting the width to be 0.
	(parse_vector_reg_list): Restrict error about [BHSD]nn operands to
	REG_TYPE_VN.
	(vectype_to_qualifier): Use S_[BHSD] qualifiers for NTA_HASVARWIDTH.
	(parse_operands): Handle the new Zn and Pn operands.
	(REGSET16): New macro, split out from...
	(REGSET31): ...here.
	(reg_names): Add Zn and Pn entries.

diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
index 37f7d26..53e602f 100644
--- a/gas/config/tc-aarch64.c
+++ b/gas/config/tc-aarch64.c
@@ -87,8 +87,9 @@ enum vector_el_type
 };
 
 /* Bits for DEFINED field in vector_type_el.  */
-#define NTA_HASTYPE  1
-#define NTA_HASINDEX 2
+#define NTA_HASTYPE     1
+#define NTA_HASINDEX    2
+#define NTA_HASVARWIDTH 4
 
 struct vector_type_el
 {
@@ -265,6 +266,8 @@ struct reloc_entry
   BASIC_REG_TYPE(FP_Q)	/* q[0-31] */	\
   BASIC_REG_TYPE(CN)	/* c[0-7]  */	\
   BASIC_REG_TYPE(VN)	/* v[0-31] */	\
+  BASIC_REG_TYPE(ZN)	/* z[0-31] */	\
+  BASIC_REG_TYPE(PN)	/* p[0-15] */	\
   /* Typecheck: any 64-bit int reg         (inc SP exc XZR) */		\
   MULTI_REG_TYPE(R64_SP, REG_TYPE(R_64) | REG_TYPE(SP_64))		\
   /* Typecheck: any int                    (inc {W}SP inc [WX]ZR) */	\
@@ -378,6 +381,12 @@ get_reg_expected_msg (aarch64_reg_type reg_type)
     case REG_TYPE_VN:		/* any V reg  */
       msg = N_("vector register expected");
       break;
+    case REG_TYPE_ZN:
+      msg = N_("SVE vector register expected");
+      break;
+    case REG_TYPE_PN:
+      msg = N_("SVE predicate register expected");
+      break;
     default:
       as_fatal (_("invalid register type %d"), reg_type);
     }
@@ -678,12 +687,15 @@ aarch64_check_reg_type (const reg_entry *reg, aarch64_reg_type type)
     {
     case REG_TYPE_R64_SP:	/* 64-bit integer reg (inc SP exc XZR).  */
     case REG_TYPE_R_Z_SP:	/* Integer reg (inc {X}SP inc [WX]ZR).  */
-    case REG_TYPE_R_Z_BHSDQ_V:	/* Any register apart from Cn.  */
+    case REG_TYPE_R_Z_BHSDQ_V:	/* Any register apart from Zn, Pn or Cn.  */
     case REG_TYPE_BHSDQ:	/* Any [BHSDQ]P FP or SIMD scalar register.  */
     case REG_TYPE_VN:		/* Vector register.  */
       gas_assert (reg->type < REG_TYPE_MAX && type < REG_TYPE_MAX);
       return ((reg_type_masks[reg->type] & reg_type_masks[type])
 	      == reg_type_masks[reg->type]);
+    case REG_TYPE_ZN:
+    case REG_TYPE_PN:
+      return reg->type == type;
     default:
       as_fatal ("unhandled type %d", type);
       abort ();
@@ -751,15 +763,16 @@ aarch64_reg_parse_32_64 (char **ccp, bfd_boolean accept_sp,
   return reg->number;
 }
 
-/* Parse the qualifier of a SIMD vector register or a SIMD vector element.
-   Fill in *PARSED_TYPE and return TRUE if the parsing succeeds;
-   otherwise return FALSE.
+/* Parse the qualifier of a vector register or vector element of type
+   REG_TYPE.  Fill in *PARSED_TYPE and return TRUE if the parsing
+   succeeds; otherwise return FALSE.
 
    Accept only one occurrence of:
    8b 16b 2h 4h 8h 2s 4s 1d 2d
    b h s d q  */
 static bfd_boolean
-parse_vector_type_for_operand (struct vector_type_el *parsed_type, char **str)
+parse_vector_type_for_operand (aarch64_reg_type reg_type,
+			       struct vector_type_el *parsed_type, char **str)
 {
   char *ptr = *str;
   unsigned width;
@@ -769,7 +782,7 @@ parse_vector_type_for_operand (struct vector_type_el *parsed_type, char **str)
   /* skip '.' */
   ptr++;
 
-  if (!ISDIGIT (*ptr))
+  if (reg_type == REG_TYPE_ZN || reg_type == REG_TYPE_PN || !ISDIGIT (*ptr))
     {
       width = 0;
       goto elt_size;
@@ -876,15 +889,23 @@ parse_typed_reg (char **ccp, aarch64_reg_type type, aarch64_reg_type *rtype,
     }
   type = reg->type;
 
-  if (type == REG_TYPE_VN && *str == '.')
+  if ((type == REG_TYPE_VN || type == REG_TYPE_ZN || type == REG_TYPE_PN)
+      && *str == '.')
     {
-      if (!parse_vector_type_for_operand (&parsetype, &str))
+      if (!parse_vector_type_for_operand (type, &parsetype, &str))
 	return PARSE_FAIL;
 
       /* Register if of the form Vn.[bhsdq].  */
       is_typed_vecreg = TRUE;
 
-      if (parsetype.width == 0)
+      if (type == REG_TYPE_ZN || type == REG_TYPE_PN)
+	{
+	  /* The width is always variable; we don't allow an integer width
+	     to be specified.  */
+	  gas_assert (parsetype.width == 0);
+	  atype.defined |= NTA_HASVARWIDTH | NTA_HASTYPE;
+	}
+      else if (parsetype.width == 0)
 	/* Expect index. In the new scheme we cannot have
 	   Vn.[bhsdq] represent a scalar. Therefore any
 	   Vn.[bhsdq] should have an index following it.
@@ -1061,7 +1082,7 @@ parse_vector_reg_list (char **ccp, aarch64_reg_type type,
 	  continue;
 	}
       /* reject [bhsd]n */
-      if (typeinfo.defined == 0)
+      if (type == REG_TYPE_VN && typeinfo.defined == 0)
 	{
 	  set_first_syntax_error (_("invalid scalar register in list"));
 	  error = TRUE;
@@ -4687,7 +4708,7 @@ vectype_to_qualifier (const struct vector_type_el *vectype)
 
   gas_assert (vectype->type >= NT_b && vectype->type <= NT_q);
 
-  if (vectype->defined & NTA_HASINDEX)
+  if (vectype->defined & (NTA_HASINDEX | NTA_HASVARWIDTH))
     /* Vector element register.  */
     return AARCH64_OPND_QLF_S_B + vectype->type;
   else
@@ -5027,6 +5048,7 @@ parse_operands (char *str, const aarch64_opcode *opcode)
       struct vector_type_el vectype;
       aarch64_opnd_qualifier_t qualifier;
       aarch64_opnd_info *info = &inst.base.operands[i];
+      aarch64_reg_type reg_type;
 
       DEBUG_TRACE ("parse operand %d", i);
 
@@ -5109,22 +5131,54 @@ parse_operands (char *str, const aarch64_opcode *opcode)
 	  info->qualifier = AARCH64_OPND_QLF_S_B + (rtype - REG_TYPE_FP_B);
 	  break;
 
+	case AARCH64_OPND_SVE_Pd:
+	case AARCH64_OPND_SVE_Pg3:
+	case AARCH64_OPND_SVE_Pg4_5:
+	case AARCH64_OPND_SVE_Pg4_10:
+	case AARCH64_OPND_SVE_Pg4_16:
+	case AARCH64_OPND_SVE_Pm:
+	case AARCH64_OPND_SVE_Pn:
+	case AARCH64_OPND_SVE_Pt:
+	  reg_type = REG_TYPE_PN;
+	  goto vector_reg;
+
+	case AARCH64_OPND_SVE_Za_5:
+	case AARCH64_OPND_SVE_Za_16:
+	case AARCH64_OPND_SVE_Zd:
+	case AARCH64_OPND_SVE_Zm_5:
+	case AARCH64_OPND_SVE_Zm_16:
+	case AARCH64_OPND_SVE_Zn:
+	case AARCH64_OPND_SVE_Zt:
+	  reg_type = REG_TYPE_ZN;
+	  goto vector_reg;
+
 	case AARCH64_OPND_Vd:
 	case AARCH64_OPND_Vn:
 	case AARCH64_OPND_Vm:
-	  val = aarch64_reg_parse (&str, REG_TYPE_VN, NULL, &vectype);
+	  reg_type = REG_TYPE_VN;
+	vector_reg:
+	  val = aarch64_reg_parse (&str, reg_type, NULL, &vectype);
 	  if (val == PARSE_FAIL)
 	    {
-	      first_error (_(get_reg_expected_msg (REG_TYPE_VN)));
+	      first_error (_(get_reg_expected_msg (reg_type)));
 	      goto failure;
 	    }
 	  if (vectype.defined & NTA_HASINDEX)
 	    goto failure;
 
 	  info->reg.regno = val;
-	  info->qualifier = vectype_to_qualifier (&vectype);
-	  if (info->qualifier == AARCH64_OPND_QLF_NIL)
-	    goto failure;
+	  if ((reg_type == REG_TYPE_PN || reg_type == REG_TYPE_ZN)
+	      && vectype.type == NT_invtype)
+	    /* Unqualified Pn and Zn registers are allowed in certain
+	       contexts.  Rely on F_STRICT qualifier checking to catch
+	       invalid uses.  */
+	    info->qualifier = AARCH64_OPND_QLF_NIL;
+	  else
+	    {
+	      info->qualifier = vectype_to_qualifier (&vectype);
+	      if (info->qualifier == AARCH64_OPND_QLF_NIL)
+		goto failure;
+	    }
 	  break;
 
 	case AARCH64_OPND_VdD1:
@@ -5149,13 +5203,19 @@ parse_operands (char *str, const aarch64_opcode *opcode)
 	  info->qualifier = AARCH64_OPND_QLF_S_D;
 	  break;
 
+	case AARCH64_OPND_SVE_Zn_INDEX:
+	  reg_type = REG_TYPE_ZN;
+	  goto vector_reg_index;
+
 	case AARCH64_OPND_Ed:
 	case AARCH64_OPND_En:
 	case AARCH64_OPND_Em:
-	  val = aarch64_reg_parse (&str, REG_TYPE_VN, NULL, &vectype);
+	  reg_type = REG_TYPE_VN;
+	vector_reg_index:
+	  val = aarch64_reg_parse (&str, reg_type, NULL, &vectype);
 	  if (val == PARSE_FAIL)
 	    {
-	      first_error (_(get_reg_expected_msg (REG_TYPE_VN)));
+	      first_error (_(get_reg_expected_msg (reg_type)));
 	      goto failure;
 	    }
 	  if (vectype.type == NT_invtype || !(vectype.defined & NTA_HASINDEX))
@@ -5168,20 +5228,43 @@ parse_operands (char *str, const aarch64_opcode *opcode)
 	    goto failure;
 	  break;
 
+	case AARCH64_OPND_SVE_ZnxN:
+	case AARCH64_OPND_SVE_ZtxN:
+	  reg_type = REG_TYPE_ZN;
+	  goto vector_reg_list;
+
 	case AARCH64_OPND_LVn:
 	case AARCH64_OPND_LVt:
 	case AARCH64_OPND_LVt_AL:
 	case AARCH64_OPND_LEt:
-	  if ((val = parse_vector_reg_list (&str, REG_TYPE_VN,
-					    &vectype)) == PARSE_FAIL)
-	    goto failure;
-	  if (! reg_list_valid_p (val, /* accept_alternate */ 0))
+	  reg_type = REG_TYPE_VN;
+	vector_reg_list:
+	  if (reg_type == REG_TYPE_ZN
+	      && get_opcode_dependent_value (opcode) == 1
+	      && *str != '{')
 	    {
-	      set_fatal_syntax_error (_("invalid register list"));
-	      goto failure;
+	      val = aarch64_reg_parse (&str, reg_type, NULL, &vectype);
+	      if (val == PARSE_FAIL)
+		{
+		  first_error (_(get_reg_expected_msg (reg_type)));
+		  goto failure;
+		}
+	      info->reglist.first_regno = val;
+	      info->reglist.num_regs = 1;
+	    }
+	  else
+	    {
+	      val = parse_vector_reg_list (&str, reg_type, &vectype);
+	      if (val == PARSE_FAIL)
+		goto failure;
+	      if (! reg_list_valid_p (val, /* accept_alternate */ 0))
+		{
+		  set_fatal_syntax_error (_("invalid register list"));
+		  goto failure;
+		}
+	      info->reglist.first_regno = (val >> 2) & 0x1f;
+	      info->reglist.num_regs = (val & 0x3) + 1;
 	    }
-	  info->reglist.first_regno = (val >> 2) & 0x1f;
-	  info->reglist.num_regs = (val & 0x3) + 1;
 	  if (operands[i] == AARCH64_OPND_LEt)
 	    {
 	      if (!(vectype.defined & NTA_HASINDEX))
@@ -5189,8 +5272,17 @@ parse_operands (char *str, const aarch64_opcode *opcode)
 	      info->reglist.has_index = 1;
 	      info->reglist.index = vectype.index;
 	    }
-	  else if (!(vectype.defined & NTA_HASTYPE))
-	    goto failure;
+	  else
+	    {
+	      if (vectype.defined & NTA_HASINDEX)
+		goto failure;
+	      if (!(vectype.defined & NTA_HASTYPE))
+		{
+		  if (reg_type == REG_TYPE_ZN)
+		    set_fatal_syntax_error (_("missing type suffix"));
+		  goto failure;
+		}
+	    }
 	  info->qualifier = vectype_to_qualifier (&vectype);
 	  if (info->qualifier == AARCH64_OPND_QLF_NIL)
 	    goto failure;
@@ -6185,11 +6277,13 @@ aarch64_canonicalize_symbol_name (char *name)
 
 #define REGDEF(s,n,t) { #s, n, REG_TYPE_##t, TRUE }
 #define REGNUM(p,n,t) REGDEF(p##n, n, t)
-#define REGSET31(p,t) \
+#define REGSET16(p,t) \
   REGNUM(p, 0,t), REGNUM(p, 1,t), REGNUM(p, 2,t), REGNUM(p, 3,t), \
   REGNUM(p, 4,t), REGNUM(p, 5,t), REGNUM(p, 6,t), REGNUM(p, 7,t), \
   REGNUM(p, 8,t), REGNUM(p, 9,t), REGNUM(p,10,t), REGNUM(p,11,t), \
-  REGNUM(p,12,t), REGNUM(p,13,t), REGNUM(p,14,t), REGNUM(p,15,t), \
+  REGNUM(p,12,t), REGNUM(p,13,t), REGNUM(p,14,t), REGNUM(p,15,t)
+#define REGSET31(p,t) \
+  REGSET16(p, t), \
   REGNUM(p,16,t), REGNUM(p,17,t), REGNUM(p,18,t), REGNUM(p,19,t), \
   REGNUM(p,20,t), REGNUM(p,21,t), REGNUM(p,22,t), REGNUM(p,23,t), \
   REGNUM(p,24,t), REGNUM(p,25,t), REGNUM(p,26,t), REGNUM(p,27,t), \
@@ -6229,10 +6323,18 @@ static const reg_entry reg_names[] = {
 
   /* FP/SIMD registers.  */
   REGSET (v, VN), REGSET (V, VN),
+
+  /* SVE vector registers.  */
+  REGSET (z, ZN), REGSET (Z, ZN),
+
+  /* SVE predicate registers.  */
+  REGSET16 (p, PN), REGSET16 (P, PN)
 };
 
 #undef REGDEF
 #undef REGNUM
+#undef REGSET16
+#undef REGSET31
 #undef REGSET
 
 #define N 1
diff --git a/include/opcode/aarch64.h b/include/opcode/aarch64.h
index d39f10d..b0eb617 100644
--- a/include/opcode/aarch64.h
+++ b/include/opcode/aarch64.h
@@ -120,6 +120,8 @@ enum aarch64_operand_class
   AARCH64_OPND_CLASS_SISD_REG,
   AARCH64_OPND_CLASS_SIMD_REGLIST,
   AARCH64_OPND_CLASS_CP_REG,
+  AARCH64_OPND_CLASS_SVE_REG,
+  AARCH64_OPND_CLASS_PRED_REG,
   AARCH64_OPND_CLASS_ADDRESS,
   AARCH64_OPND_CLASS_IMMEDIATE,
   AARCH64_OPND_CLASS_SYSTEM,
@@ -241,6 +243,25 @@ enum aarch64_opnd
   AARCH64_OPND_BARRIER_ISB,	/* Barrier operand for ISB.  */
   AARCH64_OPND_PRFOP,		/* Prefetch operation.  */
   AARCH64_OPND_BARRIER_PSB,	/* Barrier operand for PSB.  */
+
+  AARCH64_OPND_SVE_Pd,		/* SVE p0-p15 in Pd.  */
+  AARCH64_OPND_SVE_Pg3,		/* SVE p0-p7 in Pg.  */
+  AARCH64_OPND_SVE_Pg4_5,	/* SVE p0-p15 in Pg, bits [8,5].  */
+  AARCH64_OPND_SVE_Pg4_10,	/* SVE p0-p15 in Pg, bits [13,10].  */
+  AARCH64_OPND_SVE_Pg4_16,	/* SVE p0-p15 in Pg, bits [19,16].  */
+  AARCH64_OPND_SVE_Pm,		/* SVE p0-p15 in Pm.  */
+  AARCH64_OPND_SVE_Pn,		/* SVE p0-p15 in Pn.  */
+  AARCH64_OPND_SVE_Pt,		/* SVE p0-p15 in Pt.  */
+  AARCH64_OPND_SVE_Za_5,	/* SVE vector register in Za, bits [9,5].  */
+  AARCH64_OPND_SVE_Za_16,	/* SVE vector register in Za, bits [20,16].  */
+  AARCH64_OPND_SVE_Zd,		/* SVE vector register in Zd.  */
+  AARCH64_OPND_SVE_Zm_5,	/* SVE vector register in Zm, bits [9,5].  */
+  AARCH64_OPND_SVE_Zm_16,	/* SVE vector register in Zm, bits [20,16].  */
+  AARCH64_OPND_SVE_Zn,		/* SVE vector register in Zn.  */
+  AARCH64_OPND_SVE_Zn_INDEX,	/* Indexed SVE vector register, for DUP.  */
+  AARCH64_OPND_SVE_ZnxN,	/* SVE vector register list in Zn.  */
+  AARCH64_OPND_SVE_Zt,		/* SVE vector register in Zt.  */
+  AARCH64_OPND_SVE_ZtxN,	/* SVE vector register list in Zt.  */
 };
 
 /* Qualifier constrains an operand.  It either specifies a variant of an
diff --git a/opcodes/aarch64-asm-2.c b/opcodes/aarch64-asm-2.c
index 439dd3d..9c797b2 100644
--- a/opcodes/aarch64-asm-2.c
+++ b/opcodes/aarch64-asm-2.c
@@ -480,6 +480,21 @@ aarch64_insert_operand (const aarch64_operand *self,
     case 27:
     case 35:
     case 36:
+    case 89:
+    case 90:
+    case 91:
+    case 92:
+    case 93:
+    case 94:
+    case 95:
+    case 96:
+    case 97:
+    case 98:
+    case 99:
+    case 100:
+    case 101:
+    case 102:
+    case 105:
       return aarch64_ins_regno (self, info, code, inst);
     case 12:
       return aarch64_ins_reg_extended (self, info, code, inst);
@@ -566,6 +581,11 @@ aarch64_insert_operand (const aarch64_operand *self,
       return aarch64_ins_prfop (self, info, code, inst);
     case 88:
       return aarch64_ins_hint (self, info, code, inst);
+    case 103:
+      return aarch64_ins_sve_index (self, info, code, inst);
+    case 104:
+    case 106:
+      return aarch64_ins_sve_reglist (self, info, code, inst);
     default: assert (0); abort ();
     }
 }
diff --git a/opcodes/aarch64-asm.c b/opcodes/aarch64-asm.c
index f291495..c045f9e 100644
--- a/opcodes/aarch64-asm.c
+++ b/opcodes/aarch64-asm.c
@@ -745,6 +745,33 @@ aarch64_ins_reg_shifted (const aarch64_operand *self ATTRIBUTE_UNUSED,
   return NULL;
 }
 
+/* Encode Zn[MM], where MM has a 7-bit triangular encoding.  The fields
+   array specifies which field to use for Zn.  MM is encoded in the
+   concatenation of imm5 and SVE_tszh, with imm5 being the less
+   significant part.  */
+const char *
+aarch64_ins_sve_index (const aarch64_operand *self,
+		       const aarch64_opnd_info *info, aarch64_insn *code,
+		       const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  unsigned int esize = aarch64_get_qualifier_esize (info->qualifier);
+  insert_field (self->fields[0], code, info->reglane.regno, 0);
+  insert_fields (code, (info->reglane.index * 2 + 1) * esize, 0,
+		 2, FLD_imm5, FLD_SVE_tszh);
+  return NULL;
+}
+
+/* Encode {Zn.<T> - Zm.<T>}.  The fields array specifies which field
+   to use for Zn.  */
+const char *
+aarch64_ins_sve_reglist (const aarch64_operand *self,
+			 const aarch64_opnd_info *info, aarch64_insn *code,
+			 const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  insert_field (self->fields[0], code, info->reglist.first_regno, 0);
+  return NULL;
+}
+
 /* Miscellaneous encoding functions.  */
 
 /* Encode size[0], i.e. bit 22, for
diff --git a/opcodes/aarch64-asm.h b/opcodes/aarch64-asm.h
index 3211aff..ede366c 100644
--- a/opcodes/aarch64-asm.h
+++ b/opcodes/aarch64-asm.h
@@ -69,6 +69,8 @@ AARCH64_DECL_OPD_INSERTER (ins_hint);
 AARCH64_DECL_OPD_INSERTER (ins_prfop);
 AARCH64_DECL_OPD_INSERTER (ins_reg_extended);
 AARCH64_DECL_OPD_INSERTER (ins_reg_shifted);
+AARCH64_DECL_OPD_INSERTER (ins_sve_index);
+AARCH64_DECL_OPD_INSERTER (ins_sve_reglist);
 
 #undef AARCH64_DECL_OPD_INSERTER
 
diff --git a/opcodes/aarch64-dis-2.c b/opcodes/aarch64-dis-2.c
index a86a84d..6ea010b 100644
--- a/opcodes/aarch64-dis-2.c
+++ b/opcodes/aarch64-dis-2.c
@@ -10426,6 +10426,21 @@ aarch64_extract_operand (const aarch64_operand *self,
     case 27:
     case 35:
     case 36:
+    case 89:
+    case 90:
+    case 91:
+    case 92:
+    case 93:
+    case 94:
+    case 95:
+    case 96:
+    case 97:
+    case 98:
+    case 99:
+    case 100:
+    case 101:
+    case 102:
+    case 105:
       return aarch64_ext_regno (self, info, code, inst);
     case 8:
       return aarch64_ext_regrt_sysins (self, info, code, inst);
@@ -10519,6 +10534,11 @@ aarch64_extract_operand (const aarch64_operand *self,
       return aarch64_ext_prfop (self, info, code, inst);
     case 88:
       return aarch64_ext_hint (self, info, code, inst);
+    case 103:
+      return aarch64_ext_sve_index (self, info, code, inst);
+    case 104:
+    case 106:
+      return aarch64_ext_sve_reglist (self, info, code, inst);
     default: assert (0); abort ();
     }
 }
diff --git a/opcodes/aarch64-dis.c b/opcodes/aarch64-dis.c
index 4c3b521..ab93234 100644
--- a/opcodes/aarch64-dis.c
+++ b/opcodes/aarch64-dis.c
@@ -1185,6 +1185,40 @@ aarch64_ext_reg_shifted (const aarch64_operand *self ATTRIBUTE_UNUSED,
 
   return 1;
 }
+
+/* Decode Zn[MM], where MM has a 7-bit triangular encoding.  The fields
+   array specifies which field to use for Zn.  MM is encoded in the
+   concatenation of imm5 and SVE_tszh, with imm5 being the less
+   significant part.  */
+int
+aarch64_ext_sve_index (const aarch64_operand *self,
+		       aarch64_opnd_info *info, aarch64_insn code,
+		       const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  int val;
+
+  info->reglane.regno = extract_field (self->fields[0], code, 0);
+  val = extract_fields (code, 0, 2, FLD_SVE_tszh, FLD_imm5);
+  if ((val & 15) == 0)
+    return 0;
+  while ((val & 1) == 0)
+    val /= 2;
+  info->reglane.index = val / 2;
+  return 1;
+}
+
+/* Decode {Zn.<T> - Zm.<T>}.  The fields array specifies which field
+   to use for Zn.  The opcode-dependent value specifies the number
+   of registers in the list.  */
+int
+aarch64_ext_sve_reglist (const aarch64_operand *self,
+			 aarch64_opnd_info *info, aarch64_insn code,
+			 const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  info->reglist.first_regno = extract_field (self->fields[0], code, 0);
+  info->reglist.num_regs = get_opcode_dependent_value (inst->opcode);
+  return 1;
+}
 \f
 /* Bitfields that are commonly used to encode certain operands' information
    may be partially used as part of the base opcode in some instructions.
diff --git a/opcodes/aarch64-dis.h b/opcodes/aarch64-dis.h
index 1f10157..5efb904 100644
--- a/opcodes/aarch64-dis.h
+++ b/opcodes/aarch64-dis.h
@@ -91,6 +91,8 @@ AARCH64_DECL_OPD_EXTRACTOR (ext_hint);
 AARCH64_DECL_OPD_EXTRACTOR (ext_prfop);
 AARCH64_DECL_OPD_EXTRACTOR (ext_reg_extended);
 AARCH64_DECL_OPD_EXTRACTOR (ext_reg_shifted);
+AARCH64_DECL_OPD_EXTRACTOR (ext_sve_index);
+AARCH64_DECL_OPD_EXTRACTOR (ext_sve_reglist);
 
 #undef AARCH64_DECL_OPD_EXTRACTOR
 
diff --git a/opcodes/aarch64-opc-2.c b/opcodes/aarch64-opc-2.c
index b53bb5c..f8a7079 100644
--- a/opcodes/aarch64-opc-2.c
+++ b/opcodes/aarch64-opc-2.c
@@ -113,6 +113,24 @@ const struct aarch64_operand aarch64_operands[] =
   {AARCH64_OPND_CLASS_SYSTEM, "BARRIER_ISB", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {}, "the ISB option name SY or an optional 4-bit unsigned immediate"},
   {AARCH64_OPND_CLASS_SYSTEM, "PRFOP", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {}, "a prefetch operation specifier"},
   {AARCH64_OPND_CLASS_SYSTEM, "BARRIER_PSB", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {}, "the PSB option name CSYNC"},
+  {AARCH64_OPND_CLASS_PRED_REG, "SVE_Pd", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Pd}, "an SVE predicate register"},
+  {AARCH64_OPND_CLASS_PRED_REG, "SVE_Pg3", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Pg3}, "an SVE predicate register"},
+  {AARCH64_OPND_CLASS_PRED_REG, "SVE_Pg4_5", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Pg4_5}, "an SVE predicate register"},
+  {AARCH64_OPND_CLASS_PRED_REG, "SVE_Pg4_10", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Pg4_10}, "an SVE predicate register"},
+  {AARCH64_OPND_CLASS_PRED_REG, "SVE_Pg4_16", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Pg4_16}, "an SVE predicate register"},
+  {AARCH64_OPND_CLASS_PRED_REG, "SVE_Pm", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Pm}, "an SVE predicate register"},
+  {AARCH64_OPND_CLASS_PRED_REG, "SVE_Pn", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Pn}, "an SVE predicate register"},
+  {AARCH64_OPND_CLASS_PRED_REG, "SVE_Pt", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Pt}, "an SVE predicate register"},
+  {AARCH64_OPND_CLASS_SVE_REG, "SVE_Za_5", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Za_5}, "an SVE vector register"},
+  {AARCH64_OPND_CLASS_SVE_REG, "SVE_Za_16", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Za_16}, "an SVE vector register"},
+  {AARCH64_OPND_CLASS_SVE_REG, "SVE_Zd", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zd}, "an SVE vector register"},
+  {AARCH64_OPND_CLASS_SVE_REG, "SVE_Zm_5", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zm_5}, "an SVE vector register"},
+  {AARCH64_OPND_CLASS_SVE_REG, "SVE_Zm_16", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zm_16}, "an SVE vector register"},
+  {AARCH64_OPND_CLASS_SVE_REG, "SVE_Zn", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zn}, "an SVE vector register"},
+  {AARCH64_OPND_CLASS_SVE_REG, "SVE_Zn_INDEX", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zn}, "an indexed SVE vector register"},
+  {AARCH64_OPND_CLASS_SVE_REG, "SVE_ZnxN", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zn}, "a list of SVE vector registers"},
+  {AARCH64_OPND_CLASS_SVE_REG, "SVE_Zt", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zt}, "an SVE vector register"},
+  {AARCH64_OPND_CLASS_SVE_REG, "SVE_ZtxN", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zt}, "a list of SVE vector registers"},
   {AARCH64_OPND_CLASS_NIL, "", 0, {0}, "DUMMY"},
 };
 
diff --git a/opcodes/aarch64-opc.c b/opcodes/aarch64-opc.c
index 30501fc..56a0169 100644
--- a/opcodes/aarch64-opc.c
+++ b/opcodes/aarch64-opc.c
@@ -199,6 +199,22 @@ const aarch64_field fields[] =
     { 31,  1 },	/* b5: in the test bit and branch instructions.  */
     { 19,  5 },	/* b40: in the test bit and branch instructions.  */
     { 10,  6 },	/* scale: in the fixed-point scalar to fp converting inst.  */
+    {  0,  4 }, /* SVE_Pd: p0-p15, bits [3,0].  */
+    { 10,  3 }, /* SVE_Pg3: p0-p7, bits [12,10].  */
+    {  5,  4 }, /* SVE_Pg4_5: p0-p15, bits [8,5].  */
+    { 10,  4 }, /* SVE_Pg4_10: p0-p15, bits [13,10].  */
+    { 16,  4 }, /* SVE_Pg4_16: p0-p15, bits [19,16].  */
+    { 16,  4 }, /* SVE_Pm: p0-p15, bits [19,16].  */
+    {  5,  4 }, /* SVE_Pn: p0-p15, bits [8,5].  */
+    {  0,  4 }, /* SVE_Pt: p0-p15, bits [3,0].  */
+    {  5,  5 }, /* SVE_Za_5: SVE vector register, bits [9,5].  */
+    { 16,  5 }, /* SVE_Za_16: SVE vector register, bits [20,16].  */
+    {  0,  5 }, /* SVE_Zd: SVE vector register, bits [4,0].  */
+    {  5,  5 }, /* SVE_Zm_5: SVE vector register, bits [9,5].  */
+    { 16,  5 }, /* SVE_Zm_16: SVE vector register, bits [20,16]. */
+    {  5,  5 }, /* SVE_Zn: SVE vector register, bits [9,5].  */
+    {  0,  5 }, /* SVE_Zt: SVE vector register, bits [4,0].  */
+    { 22,  2 }, /* SVE_tszh: triangular size select high, bits [23,22].  */
 };
 
 enum aarch64_operand_class
@@ -1332,6 +1348,43 @@ operand_general_constraint_met_p (const aarch64_opnd_info *opnds, int idx,
 	}
       break;
 
+    case AARCH64_OPND_CLASS_SVE_REG:
+      switch (type)
+	{
+	case AARCH64_OPND_SVE_Zn_INDEX:
+	  size = aarch64_get_qualifier_esize (opnd->qualifier);
+	  if (!value_in_range_p (opnd->reglane.index, 0, 64 / size - 1))
+	    {
+	      set_elem_idx_out_of_range_error (mismatch_detail, idx,
+					       0, 64 / size - 1);
+	      return 0;
+	    }
+	  break;
+
+	case AARCH64_OPND_SVE_ZnxN:
+	case AARCH64_OPND_SVE_ZtxN:
+	  if (opnd->reglist.num_regs != get_opcode_dependent_value (opcode))
+	    {
+	      set_other_error (mismatch_detail, idx,
+			       _("invalid register list"));
+	      return 0;
+	    }
+	  break;
+
+	default:
+	  break;
+	}
+      break;
+
+    case AARCH64_OPND_CLASS_PRED_REG:
+      if (opnd->reg.regno >= 8
+	  && get_operand_fields_width (get_operand_from_code (type)) == 3)
+	{
+	  set_other_error (mismatch_detail, idx, _("p0-p7 expected"));
+	  return 0;
+	}
+      break;
+
     case AARCH64_OPND_CLASS_COND:
       if (type == AARCH64_OPND_COND1
 	  && (opnds[idx].cond->value & 0xe) == 0xe)
@@ -2560,6 +2613,46 @@ aarch64_print_operand (char *buf, size_t size, bfd_vma pc,
       print_register_list (buf, size, opnd, "v");
       break;
 
+    case AARCH64_OPND_SVE_Pd:
+    case AARCH64_OPND_SVE_Pg3:
+    case AARCH64_OPND_SVE_Pg4_5:
+    case AARCH64_OPND_SVE_Pg4_10:
+    case AARCH64_OPND_SVE_Pg4_16:
+    case AARCH64_OPND_SVE_Pm:
+    case AARCH64_OPND_SVE_Pn:
+    case AARCH64_OPND_SVE_Pt:
+      if (opnd->qualifier == AARCH64_OPND_QLF_NIL)
+	snprintf (buf, size, "p%d", opnd->reg.regno);
+      else
+	snprintf (buf, size, "p%d.%s", opnd->reg.regno,
+		  aarch64_get_qualifier_name (opnd->qualifier));
+      break;
+
+    case AARCH64_OPND_SVE_Za_5:
+    case AARCH64_OPND_SVE_Za_16:
+    case AARCH64_OPND_SVE_Zd:
+    case AARCH64_OPND_SVE_Zm_5:
+    case AARCH64_OPND_SVE_Zm_16:
+    case AARCH64_OPND_SVE_Zn:
+    case AARCH64_OPND_SVE_Zt:
+      if (opnd->qualifier == AARCH64_OPND_QLF_NIL)
+	snprintf (buf, size, "z%d", opnd->reg.regno);
+      else
+	snprintf (buf, size, "z%d.%s", opnd->reg.regno,
+		  aarch64_get_qualifier_name (opnd->qualifier));
+      break;
+
+    case AARCH64_OPND_SVE_ZnxN:
+    case AARCH64_OPND_SVE_ZtxN:
+      print_register_list (buf, size, opnd, "z");
+      break;
+
+    case AARCH64_OPND_SVE_Zn_INDEX:
+      snprintf (buf, size, "z%d.%s[%" PRIi64 "]", opnd->reglane.regno,
+		aarch64_get_qualifier_name (opnd->qualifier),
+		opnd->reglane.index);
+      break;
+
     case AARCH64_OPND_Cn:
     case AARCH64_OPND_Cm:
       snprintf (buf, size, "C%d", opnd->reg.regno);
diff --git a/opcodes/aarch64-opc.h b/opcodes/aarch64-opc.h
index 08494c6..cc3dbef 100644
--- a/opcodes/aarch64-opc.h
+++ b/opcodes/aarch64-opc.h
@@ -91,6 +91,22 @@ enum aarch64_field_kind
   FLD_b5,
   FLD_b40,
   FLD_scale,
+  FLD_SVE_Pd,
+  FLD_SVE_Pg3,
+  FLD_SVE_Pg4_5,
+  FLD_SVE_Pg4_10,
+  FLD_SVE_Pg4_16,
+  FLD_SVE_Pm,
+  FLD_SVE_Pn,
+  FLD_SVE_Pt,
+  FLD_SVE_Za_5,
+  FLD_SVE_Za_16,
+  FLD_SVE_Zd,
+  FLD_SVE_Zm_5,
+  FLD_SVE_Zm_16,
+  FLD_SVE_Zn,
+  FLD_SVE_Zt,
+  FLD_SVE_tszh,
 };
 
 /* Field description.  */
diff --git a/opcodes/aarch64-tbl.h b/opcodes/aarch64-tbl.h
index 8f1c9b2..9dbe0c0 100644
--- a/opcodes/aarch64-tbl.h
+++ b/opcodes/aarch64-tbl.h
@@ -2819,4 +2819,40 @@ struct aarch64_opcode aarch64_opcode_table[] =
     Y(SYSTEM, prfop, "PRFOP", 0, F(),					\
       "a prefetch operation specifier")					\
     Y (SYSTEM, hint, "BARRIER_PSB", 0, F (),				\
-      "the PSB option name CSYNC")
+      "the PSB option name CSYNC")					\
+    Y(PRED_REG, regno, "SVE_Pd", 0, F(FLD_SVE_Pd),			\
+      "an SVE predicate register")					\
+    Y(PRED_REG, regno, "SVE_Pg3", 0, F(FLD_SVE_Pg3),			\
+      "an SVE predicate register")					\
+    Y(PRED_REG, regno, "SVE_Pg4_5", 0, F(FLD_SVE_Pg4_5),		\
+      "an SVE predicate register")					\
+    Y(PRED_REG, regno, "SVE_Pg4_10", 0, F(FLD_SVE_Pg4_10),		\
+      "an SVE predicate register")					\
+    Y(PRED_REG, regno, "SVE_Pg4_16", 0, F(FLD_SVE_Pg4_16),		\
+      "an SVE predicate register")					\
+    Y(PRED_REG, regno, "SVE_Pm", 0, F(FLD_SVE_Pm),			\
+      "an SVE predicate register")					\
+    Y(PRED_REG, regno, "SVE_Pn", 0, F(FLD_SVE_Pn),			\
+      "an SVE predicate register")					\
+    Y(PRED_REG, regno, "SVE_Pt", 0, F(FLD_SVE_Pt),			\
+      "an SVE predicate register")					\
+    Y(SVE_REG, regno, "SVE_Za_5", 0, F(FLD_SVE_Za_5),			\
+      "an SVE vector register")						\
+    Y(SVE_REG, regno, "SVE_Za_16", 0, F(FLD_SVE_Za_16),			\
+      "an SVE vector register")						\
+    Y(SVE_REG, regno, "SVE_Zd", 0, F(FLD_SVE_Zd),			\
+      "an SVE vector register")						\
+    Y(SVE_REG, regno, "SVE_Zm_5", 0, F(FLD_SVE_Zm_5),			\
+      "an SVE vector register")						\
+    Y(SVE_REG, regno, "SVE_Zm_16", 0, F(FLD_SVE_Zm_16),			\
+      "an SVE vector register")						\
+    Y(SVE_REG, regno, "SVE_Zn", 0, F(FLD_SVE_Zn),			\
+      "an SVE vector register")						\
+    Y(SVE_REG, sve_index, "SVE_Zn_INDEX", 0, F(FLD_SVE_Zn),		\
+      "an indexed SVE vector register")					\
+    Y(SVE_REG, sve_reglist, "SVE_ZnxN", 0, F(FLD_SVE_Zn),		\
+      "a list of SVE vector registers")					\
+    Y(SVE_REG, regno, "SVE_Zt", 0, F(FLD_SVE_Zt),			\
+      "an SVE vector register")						\
+    Y(SVE_REG, sve_reglist, "SVE_ZtxN", 0, F(FLD_SVE_Zt),		\
+      "a list of SVE vector registers")

^ permalink raw reply	[flat|nested] 76+ messages in thread

* [AArch64][SVE 20/32] Add support for tied operands
  2016-08-23  9:05 [AArch64][SVE 00/32] Add support for the ARMv8-A Scalable Vector Extension Richard Sandiford
                   ` (18 preceding siblings ...)
  2016-08-23  9:17 ` [AArch64][SVE 19/32] Refactor address-printing code Richard Sandiford
@ 2016-08-23  9:18 ` Richard Sandiford
  2016-08-25 13:59   ` Richard Earnshaw (lists)
  2016-08-23  9:18 ` [AArch64][SVE 21/32] Add Zn and Pn registers Richard Sandiford
                   ` (12 subsequent siblings)
  32 siblings, 1 reply; 76+ messages in thread
From: Richard Sandiford @ 2016-08-23  9:18 UTC (permalink / raw)
  To: binutils

SVE has some instructions in which the same register appears twice
in the assembly string, once as an input and once as an output.
This patch adds a general mechanism for that.
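
For example (an illustrative instruction rather than one quoted from the
patch), the predicated SVE integer add reads and writes its first operand,
so the same Z register must be written as both the destination and the tied
source:

	ADD	Z0.S, P0/M, Z0.S, Z1.S

Using a different register in the tied position is now reported with the new
AARCH64_OPDE_UNTIED_OPERAND error.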

The patch needs to add new information to the instruction entries.
One option would have been to extend the flags field of the opcode
to 64 bits (since we already rely on 64-bit integers being available
on the host).  However, the *_INSN macros mean that it's easy to add
new information as top-level fields without affecting the existing
table entries too much.  Going for that option seemed to give slightly
neater code.

OK to install?

Thanks,
Richard


include/opcode/
	* aarch64.h (aarch64_opcode): Add a tied_operand field.
	(AARCH64_OPDE_UNTIED_OPERAND): New aarch64_operand_error_kind.

opcodes/
	* aarch64-tbl.h (CORE_INSN, __FP_INSN, SIMD_INSN, CRYP_INSN)
	(_CRC_INSN, _LSE_INSN, _LOR_INSN, RDMA_INSN, FF16_INSN, SF16_INSN)
	(V8_2_INSN, aarch64_opcode_table): Initialize tied_operand field.
	* aarch64-opc.c (aarch64_match_operands_constraint): Check for
	tied operands.

gas/
	* config/tc-aarch64.c (output_operand_error_record): Handle
	AARCH64_OPDE_UNTIED_OPERAND.

diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
index 9591704..37f7d26 100644
--- a/gas/config/tc-aarch64.c
+++ b/gas/config/tc-aarch64.c
@@ -4419,6 +4419,11 @@ output_operand_error_record (const operand_error_record *record, char *str)
 	}
       break;
 
+    case AARCH64_OPDE_UNTIED_OPERAND:
+      as_bad (_("operand %d must be the same register as operand 1 -- `%s'"),
+	      detail->index + 1, str);
+      break;
+
     case AARCH64_OPDE_OUT_OF_RANGE:
       if (detail->data[0] != detail->data[1])
 	as_bad (_("%s out of range %d to %d at operand %d -- `%s'"),
diff --git a/include/opcode/aarch64.h b/include/opcode/aarch64.h
index 24a2ddb..d39f10d 100644
--- a/include/opcode/aarch64.h
+++ b/include/opcode/aarch64.h
@@ -539,6 +539,10 @@ struct aarch64_opcode
   /* Flags providing information about this instruction */
   uint32_t flags;
 
+  /* If nonzero, this operand and operand 0 are both registers and
+     are required to have the same register number.  */
+  unsigned char tied_operand;
+
   /* If non-NULL, a function to verify that a given instruction is valid.  */
   bfd_boolean (* verifier) (const struct aarch64_opcode *, const aarch64_insn);
 };
@@ -872,6 +876,10 @@ typedef struct aarch64_inst aarch64_inst;
      No syntax error, but the operands are not a valid combination, e.g.
      FMOV D0,S0
 
+   AARCH64_OPDE_UNTIED_OPERAND
+     The asm failed to use the same register for a destination operand
+     and a tied source operand.
+
    AARCH64_OPDE_OUT_OF_RANGE
      Error about some immediate value out of a valid range.
 
@@ -908,6 +916,7 @@ enum aarch64_operand_error_kind
   AARCH64_OPDE_SYNTAX_ERROR,
   AARCH64_OPDE_FATAL_SYNTAX_ERROR,
   AARCH64_OPDE_INVALID_VARIANT,
+  AARCH64_OPDE_UNTIED_OPERAND,
   AARCH64_OPDE_OUT_OF_RANGE,
   AARCH64_OPDE_UNALIGNED,
   AARCH64_OPDE_REG_LIST,
diff --git a/opcodes/aarch64-opc.c b/opcodes/aarch64-opc.c
index 7a73c7e..30501fc 100644
--- a/opcodes/aarch64-opc.c
+++ b/opcodes/aarch64-opc.c
@@ -2058,6 +2058,23 @@ aarch64_match_operands_constraint (aarch64_inst *inst,
 
   DEBUG_TRACE ("enter");
 
+  /* Check for cases where a source register needs to be the same as the
+     destination register.  Do this before matching qualifiers since if
+     an instruction has both invalid tying and invalid qualifiers,
+     the error about qualifiers would suggest several alternative
+     instructions that also have invalid tying.  */
+  i = inst->opcode->tied_operand;
+  if (i > 0 && (inst->operands[0].reg.regno != inst->operands[i].reg.regno))
+    {
+      if (mismatch_detail)
+	{
+	  mismatch_detail->kind = AARCH64_OPDE_UNTIED_OPERAND;
+	  mismatch_detail->index = i;
+	  mismatch_detail->error = NULL;
+	}
+      return 0;
+    }
+
   /* Match operands' qualifier.
      *INST has already had qualifier establish for some, if not all, of
      its operands; we need to find out whether these established
diff --git a/opcodes/aarch64-tbl.h b/opcodes/aarch64-tbl.h
index 9a831e4..8f1c9b2 100644
--- a/opcodes/aarch64-tbl.h
+++ b/opcodes/aarch64-tbl.h
@@ -1393,27 +1393,27 @@ static const aarch64_feature_set aarch64_feature_stat_profile =
 #define ARMV8_2		&aarch64_feature_v8_2
 
 #define CORE_INSN(NAME,OPCODE,MASK,CLASS,OP,OPS,QUALS,FLAGS) \
-  { NAME, OPCODE, MASK, CLASS, OP, CORE, OPS, QUALS, FLAGS, NULL }
+  { NAME, OPCODE, MASK, CLASS, OP, CORE, OPS, QUALS, FLAGS, 0, NULL }
 #define __FP_INSN(NAME,OPCODE,MASK,CLASS,OP,OPS,QUALS,FLAGS) \
-  { NAME, OPCODE, MASK, CLASS, OP, FP, OPS, QUALS, FLAGS, NULL }
+  { NAME, OPCODE, MASK, CLASS, OP, FP, OPS, QUALS, FLAGS, 0, NULL }
 #define SIMD_INSN(NAME,OPCODE,MASK,CLASS,OP,OPS,QUALS,FLAGS) \
-  { NAME, OPCODE, MASK, CLASS, OP, SIMD, OPS, QUALS, FLAGS, NULL }
+  { NAME, OPCODE, MASK, CLASS, OP, SIMD, OPS, QUALS, FLAGS, 0, NULL }
 #define CRYP_INSN(NAME,OPCODE,MASK,CLASS,OPS,QUALS,FLAGS) \
-  { NAME, OPCODE, MASK, CLASS, 0, CRYPTO, OPS, QUALS, FLAGS, NULL }
+  { NAME, OPCODE, MASK, CLASS, 0, CRYPTO, OPS, QUALS, FLAGS, 0, NULL }
 #define _CRC_INSN(NAME,OPCODE,MASK,CLASS,OPS,QUALS,FLAGS) \
-  { NAME, OPCODE, MASK, CLASS, 0, CRC, OPS, QUALS, FLAGS, NULL }
+  { NAME, OPCODE, MASK, CLASS, 0, CRC, OPS, QUALS, FLAGS, 0, NULL }
 #define _LSE_INSN(NAME,OPCODE,MASK,CLASS,OPS,QUALS,FLAGS) \
-  { NAME, OPCODE, MASK, CLASS, 0, LSE, OPS, QUALS, FLAGS, NULL }
+  { NAME, OPCODE, MASK, CLASS, 0, LSE, OPS, QUALS, FLAGS, 0, NULL }
 #define _LOR_INSN(NAME,OPCODE,MASK,CLASS,OPS,QUALS,FLAGS) \
-  { NAME, OPCODE, MASK, CLASS, 0, LOR, OPS, QUALS, FLAGS, NULL }
+  { NAME, OPCODE, MASK, CLASS, 0, LOR, OPS, QUALS, FLAGS, 0, NULL }
 #define RDMA_INSN(NAME,OPCODE,MASK,CLASS,OPS,QUALS,FLAGS) \
-  { NAME, OPCODE, MASK, CLASS, 0, RDMA, OPS, QUALS, FLAGS, NULL }
+  { NAME, OPCODE, MASK, CLASS, 0, RDMA, OPS, QUALS, FLAGS, 0, NULL }
 #define FF16_INSN(NAME,OPCODE,MASK,CLASS,OPS,QUALS,FLAGS) \
-  { NAME, OPCODE, MASK, CLASS, 0, FP_F16, OPS, QUALS, FLAGS, NULL }
+  { NAME, OPCODE, MASK, CLASS, 0, FP_F16, OPS, QUALS, FLAGS, 0, NULL }
 #define SF16_INSN(NAME,OPCODE,MASK,CLASS,OPS,QUALS,FLAGS)		\
-  { NAME, OPCODE, MASK, CLASS, 0, SIMD_F16, OPS, QUALS, FLAGS, NULL }
+  { NAME, OPCODE, MASK, CLASS, 0, SIMD_F16, OPS, QUALS, FLAGS, 0, NULL }
 #define V8_2_INSN(NAME,OPCODE,MASK,CLASS,OP,OPS,QUALS,FLAGS) \
-  { NAME, OPCODE, MASK, CLASS, OP, ARMV8_2, OPS, QUALS, FLAGS, NULL }
+  { NAME, OPCODE, MASK, CLASS, OP, ARMV8_2, OPS, QUALS, FLAGS, 0, NULL }
 
 struct aarch64_opcode aarch64_opcode_table[] =
 {
@@ -2389,13 +2389,13 @@ struct aarch64_opcode aarch64_opcode_table[] =
   CORE_INSN ("ldp", 0x29400000, 0x7ec00000, ldstpair_off, 0, OP3 (Rt, Rt2, ADDR_SIMM7), QL_LDST_PAIR_R, F_SF),
   CORE_INSN ("stp", 0x2d000000, 0x3fc00000, ldstpair_off, 0, OP3 (Ft, Ft2, ADDR_SIMM7), QL_LDST_PAIR_FP, 0),
   CORE_INSN ("ldp", 0x2d400000, 0x3fc00000, ldstpair_off, 0, OP3 (Ft, Ft2, ADDR_SIMM7), QL_LDST_PAIR_FP, 0),
-  {"ldpsw", 0x69400000, 0xffc00000, ldstpair_off, 0, CORE, OP3 (Rt, Rt2, ADDR_SIMM7), QL_LDST_PAIR_X32, 0, VERIFIER (ldpsw)},
+  {"ldpsw", 0x69400000, 0xffc00000, ldstpair_off, 0, CORE, OP3 (Rt, Rt2, ADDR_SIMM7), QL_LDST_PAIR_X32, 0, 0, VERIFIER (ldpsw)},
   /* Load/store register pair (indexed).  */
   CORE_INSN ("stp", 0x28800000, 0x7ec00000, ldstpair_indexed, 0, OP3 (Rt, Rt2, ADDR_SIMM7), QL_LDST_PAIR_R, F_SF),
   CORE_INSN ("ldp", 0x28c00000, 0x7ec00000, ldstpair_indexed, 0, OP3 (Rt, Rt2, ADDR_SIMM7), QL_LDST_PAIR_R, F_SF),
   CORE_INSN ("stp", 0x2c800000, 0x3ec00000, ldstpair_indexed, 0, OP3 (Ft, Ft2, ADDR_SIMM7), QL_LDST_PAIR_FP, 0),
   CORE_INSN ("ldp", 0x2cc00000, 0x3ec00000, ldstpair_indexed, 0, OP3 (Ft, Ft2, ADDR_SIMM7), QL_LDST_PAIR_FP, 0),
-  {"ldpsw", 0x68c00000, 0xfec00000, ldstpair_indexed, 0, CORE, OP3 (Rt, Rt2, ADDR_SIMM7), QL_LDST_PAIR_X32, 0, VERIFIER (ldpsw)},
+  {"ldpsw", 0x68c00000, 0xfec00000, ldstpair_indexed, 0, CORE, OP3 (Rt, Rt2, ADDR_SIMM7), QL_LDST_PAIR_X32, 0, 0, VERIFIER (ldpsw)},
   /* Load register (literal).  */
   CORE_INSN ("ldr",   0x18000000, 0xbf000000, loadlit, OP_LDR_LIT,   OP2 (Rt, ADDR_PCREL19),    QL_R_PCREL, F_GPRSIZE_IN_Q),
   CORE_INSN ("ldr",   0x1c000000, 0x3f000000, loadlit, OP_LDRV_LIT,  OP2 (Ft, ADDR_PCREL19),    QL_FP_PCREL, 0),
@@ -2613,8 +2613,8 @@ struct aarch64_opcode aarch64_opcode_table[] =
   CORE_INSN ("wfi", 0xd503207f, 0xffffffff, ic_system, 0, OP0 (), {}, F_ALIAS),
   CORE_INSN ("sev", 0xd503209f, 0xffffffff, ic_system, 0, OP0 (), {}, F_ALIAS),
   CORE_INSN ("sevl",0xd50320bf, 0xffffffff, ic_system, 0, OP0 (), {}, F_ALIAS),
-  {"esb", 0xd503221f, 0xffffffff, ic_system, 0, RAS, OP0 (), {}, F_ALIAS, NULL},
-  {"psb", 0xd503223f, 0xffffffff, ic_system, 0, STAT_PROFILE, OP1 (BARRIER_PSB), {}, F_ALIAS, NULL},
+  {"esb", 0xd503221f, 0xffffffff, ic_system, 0, RAS, OP0 (), {}, F_ALIAS, 0, NULL},
+  {"psb", 0xd503223f, 0xffffffff, ic_system, 0, STAT_PROFILE, OP1 (BARRIER_PSB), {}, F_ALIAS, 0, NULL},
   CORE_INSN ("clrex", 0xd503305f, 0xfffff0ff, ic_system, 0, OP1 (UIMM4), {}, F_OPD0_OPT | F_DEFAULT (0xF)),
   CORE_INSN ("dsb", 0xd503309f, 0xfffff0ff, ic_system, 0, OP1 (BARRIER), {}, 0),
   CORE_INSN ("dmb", 0xd50330bf, 0xfffff0ff, ic_system, 0, OP1 (BARRIER), {}, 0),
@@ -2648,7 +2648,7 @@ struct aarch64_opcode aarch64_opcode_table[] =
   CORE_INSN ("bgt", 0x5400000c, 0xff00001f, condbranch, 0, OP1 (ADDR_PCREL19), QL_PCREL_NIL, F_ALIAS | F_PSEUDO),
   CORE_INSN ("ble", 0x5400000d, 0xff00001f, condbranch, 0, OP1 (ADDR_PCREL19), QL_PCREL_NIL, F_ALIAS | F_PSEUDO),
 
-  {0, 0, 0, 0, 0, 0, {}, {}, 0, NULL},
+  {0, 0, 0, 0, 0, 0, {}, {}, 0, 0, NULL},
 };
 
 #ifdef AARCH64_OPERANDS

^ permalink raw reply	[flat|nested] 76+ messages in thread

* [AArch64][SVE 22/32] Add qualifiers for merging and zeroing predication
  2016-08-23  9:05 [AArch64][SVE 00/32] Add support for the ARMv8-A Scalable Vector Extension Richard Sandiford
                   ` (20 preceding siblings ...)
  2016-08-23  9:18 ` [AArch64][SVE 21/32] Add Zn and Pn registers Richard Sandiford
@ 2016-08-23  9:19 ` Richard Sandiford
  2016-08-25 14:08   ` Richard Earnshaw (lists)
  2016-08-23  9:20 ` [AArch64][SVE 23/32] Add SVE pattern and prfop operands Richard Sandiford
                   ` (10 subsequent siblings)
  32 siblings, 1 reply; 76+ messages in thread
From: Richard Sandiford @ 2016-08-23  9:19 UTC (permalink / raw)
  To: binutils

This patch adds qualifiers to represent /z and /m suffixes on
predicate registers.
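
For illustration (these particular lines are not taken from the patch), the
suffix selects between the zeroing and merging forms of the governing
predicate:

	CMPEQ	P0.S, P1/Z, Z0.S, #0	// zeroing predication
	ADD	Z0.S, P2/M, Z0.S, Z1.S	// merging predication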

OK to install?

Thanks,
Richard


include/opcode/
	* aarch64.h (AARCH64_OPND_QLF_P_Z): New aarch64_opnd_qualifier.
	(AARCH64_OPND_QLF_P_M): Likewise.

opcodes/
	* aarch64-opc.c (aarch64_opnd_qualifiers): Add entries for
	AARCH64_OPND_QLF_P_[ZM].
	(aarch64_print_operand): Print /z and /m where appropriate.

gas/
	* config/tc-aarch64.c (vector_el_type): Add NT_zero and NT_merge.
	(parse_vector_type_for_operand): Assert that the skipped character
	is a '.'.
	(parse_predication_for_operand): New function.
	(parse_typed_reg): Parse /z and /m suffixes for predicate registers.
	(vectype_to_qualifier): Handle NT_zero and NT_merge.

diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
index 53e602f..ed4933b 100644
--- a/gas/config/tc-aarch64.c
+++ b/gas/config/tc-aarch64.c
@@ -83,7 +83,9 @@ enum vector_el_type
   NT_h,
   NT_s,
   NT_d,
-  NT_q
+  NT_q,
+  NT_zero,
+  NT_merge
 };
 
 /* Bits for DEFINED field in vector_type_el.  */
@@ -780,6 +782,7 @@ parse_vector_type_for_operand (aarch64_reg_type reg_type,
   enum vector_el_type type;
 
   /* skip '.' */
+  gas_assert (*ptr == '.');
   ptr++;
 
   if (reg_type == REG_TYPE_ZN || reg_type == REG_TYPE_PN || !ISDIGIT (*ptr))
@@ -846,6 +849,38 @@ elt_size:
   return TRUE;
 }
 
+/* *STR contains an SVE zero/merge predication suffix.  Parse it into
+   *PARSED_TYPE and point *STR at the end of the suffix.  */
+
+static bfd_boolean
+parse_predication_for_operand (struct vector_type_el *parsed_type, char **str)
+{
+  char *ptr = *str;
+
+  /* Skip '/'.  */
+  gas_assert (*ptr == '/');
+  ptr++;
+  switch (TOLOWER (*ptr))
+    {
+    case 'z':
+      parsed_type->type = NT_zero;
+      break;
+    case 'm':
+      parsed_type->type = NT_merge;
+      break;
+    default:
+      if (*ptr != '\0' && *ptr != ',')
+	first_error_fmt (_("unexpected character `%c' in predication type"),
+			 *ptr);
+      else
+	first_error (_("missing predication type"));
+      return FALSE;
+    }
+  parsed_type->width = 0;
+  *str = ptr + 1;
+  return TRUE;
+}
+
 /* Parse a register of the type TYPE.
 
    Return PARSE_FAIL if the string pointed by *CCP is not a valid register
@@ -890,10 +925,18 @@ parse_typed_reg (char **ccp, aarch64_reg_type type, aarch64_reg_type *rtype,
   type = reg->type;
 
   if ((type == REG_TYPE_VN || type == REG_TYPE_ZN || type == REG_TYPE_PN)
-      && *str == '.')
+      && (*str == '.' || (type == REG_TYPE_PN && *str == '/')))
     {
-      if (!parse_vector_type_for_operand (type, &parsetype, &str))
-	return PARSE_FAIL;
+      if (*str == '.')
+	{
+	  if (!parse_vector_type_for_operand (type, &parsetype, &str))
+	    return PARSE_FAIL;
+	}
+      else
+	{
+	  if (!parse_predication_for_operand (&parsetype, &str))
+	    return PARSE_FAIL;
+	}
 
       /* Register if of the form Vn.[bhsdq].  */
       is_typed_vecreg = TRUE;
@@ -4706,6 +4749,11 @@ vectype_to_qualifier (const struct vector_type_el *vectype)
   if (!vectype->defined || vectype->type == NT_invtype)
     goto vectype_conversion_fail;
 
+  if (vectype->type == NT_zero)
+    return AARCH64_OPND_QLF_P_Z;
+  if (vectype->type == NT_merge)
+    return AARCH64_OPND_QLF_P_M;
+
   gas_assert (vectype->type >= NT_b && vectype->type <= NT_q);
 
   if (vectype->defined & (NTA_HASINDEX | NTA_HASVARWIDTH))
diff --git a/include/opcode/aarch64.h b/include/opcode/aarch64.h
index b0eb617..8eae0b9 100644
--- a/include/opcode/aarch64.h
+++ b/include/opcode/aarch64.h
@@ -315,6 +315,9 @@ enum aarch64_opnd_qualifier
   AARCH64_OPND_QLF_V_2D,
   AARCH64_OPND_QLF_V_1Q,
 
+  AARCH64_OPND_QLF_P_Z,
+  AARCH64_OPND_QLF_P_M,
+
   /* Constraint on value.  */
   AARCH64_OPND_QLF_imm_0_7,
   AARCH64_OPND_QLF_imm_0_15,
diff --git a/opcodes/aarch64-opc.c b/opcodes/aarch64-opc.c
index 56a0169..41c058f 100644
--- a/opcodes/aarch64-opc.c
+++ b/opcodes/aarch64-opc.c
@@ -603,6 +603,9 @@ struct operand_qualifier_data aarch64_opnd_qualifiers[] =
   {8, 2, 0x7, "2d", OQK_OPD_VARIANT},
   {16, 1, 0x8, "1q", OQK_OPD_VARIANT},
 
+  {0, 0, 0, "z", OQK_OPD_VARIANT},
+  {0, 0, 0, "m", OQK_OPD_VARIANT},
+
   /* Qualifiers constraining the value range.
      First 3 fields:
      Lower bound, higher bound, unused.  */
@@ -2623,6 +2626,10 @@ aarch64_print_operand (char *buf, size_t size, bfd_vma pc,
     case AARCH64_OPND_SVE_Pt:
       if (opnd->qualifier == AARCH64_OPND_QLF_NIL)
 	snprintf (buf, size, "p%d", opnd->reg.regno);
+      else if (opnd->qualifier == AARCH64_OPND_QLF_P_Z
+	       || opnd->qualifier == AARCH64_OPND_QLF_P_M)
+	snprintf (buf, size, "p%d/%s", opnd->reg.regno,
+		  aarch64_get_qualifier_name (opnd->qualifier));
       else
 	snprintf (buf, size, "p%d.%s", opnd->reg.regno,
 		  aarch64_get_qualifier_name (opnd->qualifier));

^ permalink raw reply	[flat|nested] 76+ messages in thread

* [AArch64][SVE 23/32] Add SVE pattern and prfop operands
  2016-08-23  9:05 [AArch64][SVE 00/32] Add support for the ARMv8-A Scalable Vector Extension Richard Sandiford
                   ` (21 preceding siblings ...)
  2016-08-23  9:19 ` [AArch64][SVE 22/32] Add qualifiers for merging and zeroing predication Richard Sandiford
@ 2016-08-23  9:20 ` Richard Sandiford
  2016-08-25 14:12   ` Richard Earnshaw (lists)
  2016-08-23  9:21 ` [AArch64][SVE 25/32] Add support for SVE addressing modes Richard Sandiford
                   ` (9 subsequent siblings)
  32 siblings, 1 reply; 76+ messages in thread
From: Richard Sandiford @ 2016-08-23  9:20 UTC (permalink / raw)
  To: binutils

The SVE instructions have two enumerated operands: one to select a
vector pattern and another to select a prefetch operation.  The latter
is a cut-down version of the base AArch64 prefetch operation.

Both types of operand can also be specified as raw enum values such as #31.
Reserved values can only be specified this way.
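
For example (illustrative lines, not taken from the patch):

	PTRUE	P0.S, VL8		// pattern given by name
	PRFB	PLDL1KEEP, P0, [X0]	// prefetch operation given by name
	PTRUE	P0.S, #14		// reserved pattern value, raw form only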

If it hadn't been for the pattern operand, I would have been tempted
to use the existing parsing for prefetch operations and add extra
checks for SVE.  However, since the patterns needed new enum parsing
code anyway, it seemed cleaner to reuse it for the prefetches too.

Because of the small number of enum values, I don't think we'd gain
anything by using hash tables.

OK to install?

Thanks,
Richard


include/opcode/
	* aarch64.h (AARCH64_OPND_SVE_PATTERN): New aarch64_opnd.
	(AARCH64_OPND_SVE_PRFOP): Likewise.
	(aarch64_sve_pattern_array): Declare.
	(aarch64_sve_prfop_array): Likewise.

opcodes/
	* aarch64-tbl.h (AARCH64_OPERANDS): Add entries for
	AARCH64_OPND_SVE_PATTERN and AARCH64_OPND_SVE_PRFOP.
	* aarch64-opc.h (FLD_SVE_pattern): New aarch64_field_kind.
	(FLD_SVE_prfop): Likewise.
	* aarch64-opc.c: Include libiberty.h.
	(aarch64_sve_pattern_array): New variable.
	(aarch64_sve_prfop_array): Likewise.
	(fields): Add entries for FLD_SVE_pattern and FLD_SVE_prfop.
	(aarch64_print_operand): Handle AARCH64_OPND_SVE_PATTERN and
	AARCH64_OPND_SVE_PRFOP.
	* aarch64-asm-2.c: Regenerate.
	* aarch64-dis-2.c: Likewise.
	* aarch64-opc-2.c: Likewise.

gas/
	* config/tc-aarch64.c (parse_enum_string): New function.
	(po_enum_or_fail): New macro.
	(parse_operands): Handle AARCH64_OPND_SVE_PATTERN and
	AARCH64_OPND_SVE_PRFOP.

diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
index ed4933b..9d1e3ec 100644
--- a/gas/config/tc-aarch64.c
+++ b/gas/config/tc-aarch64.c
@@ -3634,6 +3634,52 @@ parse_adrp (char **str)
 
 /* Miscellaneous. */
 
+/* Parse a symbolic operand such as "pow2" at *STR.  ARRAY is an array
+   of SIZE tokens in which index I gives the token for field value I,
+   or is null if field value I is invalid.  REG_TYPE says which register
+   names should be treated as registers rather than as symbolic immediates.
+
+   Return true on success, moving *STR past the operand and storing the
+   field value in *VAL.  */
+
+static int
+parse_enum_string (char **str, int64_t *val, const char *const *array,
+		   size_t size, aarch64_reg_type reg_type)
+{
+  expressionS exp;
+  char *p, *q;
+  size_t i;
+
+  /* Match C-like tokens.  */
+  p = q = *str;
+  while (ISALNUM (*q))
+    q++;
+
+  for (i = 0; i < size; ++i)
+    if (array[i]
+	&& strncasecmp (array[i], p, q - p) == 0
+	&& array[i][q - p] == 0)
+      {
+	*val = i;
+	*str = q;
+	return TRUE;
+      }
+
+  if (!parse_immediate_expression (&p, &exp, reg_type))
+    return FALSE;
+
+  if (exp.X_op == O_constant
+      && (uint64_t) exp.X_add_number < size)
+    {
+      *val = exp.X_add_number;
+      *str = p;
+      return TRUE;
+    }
+
+  /* Use the default error for this operand.  */
+  return FALSE;
+}
+
 /* Parse an option for a preload instruction.  Returns the encoding for the
    option, or PARSE_FAIL.  */
 
@@ -3844,6 +3890,12 @@ parse_sys_ins_reg (char **str, struct hash_control *sys_ins_regs)
       }								\
   } while (0)
 
+#define po_enum_or_fail(array) do {				\
+    if (!parse_enum_string (&str, &val, array,			\
+			    ARRAY_SIZE (array), imm_reg_type))	\
+      goto failure;						\
+  } while (0)
+
 #define po_misc_or_fail(expr) do {				\
     if (!expr)							\
       goto failure;						\
@@ -4857,6 +4909,8 @@ process_omitted_operand (enum aarch64_opnd type, const aarch64_opcode *opcode,
     case AARCH64_OPND_WIDTH:
     case AARCH64_OPND_UIMM7:
     case AARCH64_OPND_NZCV:
+    case AARCH64_OPND_SVE_PATTERN:
+    case AARCH64_OPND_SVE_PRFOP:
       operand->imm.value = default_value;
       break;
 
@@ -5365,6 +5419,16 @@ parse_operands (char *str, const aarch64_opcode *opcode)
 	  info->imm.value = val;
 	  break;
 
+	case AARCH64_OPND_SVE_PATTERN:
+	  po_enum_or_fail (aarch64_sve_pattern_array);
+	  info->imm.value = val;
+	  break;
+
+	case AARCH64_OPND_SVE_PRFOP:
+	  po_enum_or_fail (aarch64_sve_prfop_array);
+	  info->imm.value = val;
+	  break;
+
 	case AARCH64_OPND_UIMM7:
 	  po_imm_or_fail (0, 127);
 	  info->imm.value = val;
diff --git a/include/opcode/aarch64.h b/include/opcode/aarch64.h
index 8eae0b9..dd191cf 100644
--- a/include/opcode/aarch64.h
+++ b/include/opcode/aarch64.h
@@ -244,6 +244,8 @@ enum aarch64_opnd
   AARCH64_OPND_PRFOP,		/* Prefetch operation.  */
   AARCH64_OPND_BARRIER_PSB,	/* Barrier operand for PSB.  */
 
+  AARCH64_OPND_SVE_PATTERN,	/* SVE vector pattern enumeration.  */
+  AARCH64_OPND_SVE_PRFOP,	/* SVE prefetch operation.  */
   AARCH64_OPND_SVE_Pd,		/* SVE p0-p15 in Pd.  */
   AARCH64_OPND_SVE_Pg3,		/* SVE p0-p7 in Pg.  */
   AARCH64_OPND_SVE_Pg4_5,	/* SVE p0-p15 in Pg, bits [8,5].  */
@@ -1037,6 +1039,9 @@ aarch64_verbose (const char *, ...) __attribute__ ((format (printf, 1, 2)));
 #define DEBUG_TRACE_IF(C, M, ...) ;
 #endif /* DEBUG_AARCH64 */
 
+extern const char *const aarch64_sve_pattern_array[32];
+extern const char *const aarch64_sve_prfop_array[16];
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/opcodes/aarch64-asm-2.c b/opcodes/aarch64-asm-2.c
index 9c797b2..0a6e476 100644
--- a/opcodes/aarch64-asm-2.c
+++ b/opcodes/aarch64-asm-2.c
@@ -480,8 +480,6 @@ aarch64_insert_operand (const aarch64_operand *self,
     case 27:
     case 35:
     case 36:
-    case 89:
-    case 90:
     case 91:
     case 92:
     case 93:
@@ -494,7 +492,9 @@ aarch64_insert_operand (const aarch64_operand *self,
     case 100:
     case 101:
     case 102:
-    case 105:
+    case 103:
+    case 104:
+    case 107:
       return aarch64_ins_regno (self, info, code, inst);
     case 12:
       return aarch64_ins_reg_extended (self, info, code, inst);
@@ -531,6 +531,8 @@ aarch64_insert_operand (const aarch64_operand *self,
     case 68:
     case 69:
     case 70:
+    case 89:
+    case 90:
       return aarch64_ins_imm (self, info, code, inst);
     case 38:
     case 39:
@@ -581,10 +583,10 @@ aarch64_insert_operand (const aarch64_operand *self,
       return aarch64_ins_prfop (self, info, code, inst);
     case 88:
       return aarch64_ins_hint (self, info, code, inst);
-    case 103:
+    case 105:
       return aarch64_ins_sve_index (self, info, code, inst);
-    case 104:
     case 106:
+    case 108:
       return aarch64_ins_sve_reglist (self, info, code, inst);
     default: assert (0); abort ();
     }
diff --git a/opcodes/aarch64-dis-2.c b/opcodes/aarch64-dis-2.c
index 6ea010b..9f936f0 100644
--- a/opcodes/aarch64-dis-2.c
+++ b/opcodes/aarch64-dis-2.c
@@ -10426,8 +10426,6 @@ aarch64_extract_operand (const aarch64_operand *self,
     case 27:
     case 35:
     case 36:
-    case 89:
-    case 90:
     case 91:
     case 92:
     case 93:
@@ -10440,7 +10438,9 @@ aarch64_extract_operand (const aarch64_operand *self,
     case 100:
     case 101:
     case 102:
-    case 105:
+    case 103:
+    case 104:
+    case 107:
       return aarch64_ext_regno (self, info, code, inst);
     case 8:
       return aarch64_ext_regrt_sysins (self, info, code, inst);
@@ -10482,6 +10482,8 @@ aarch64_extract_operand (const aarch64_operand *self,
     case 68:
     case 69:
     case 70:
+    case 89:
+    case 90:
       return aarch64_ext_imm (self, info, code, inst);
     case 38:
     case 39:
@@ -10534,10 +10536,10 @@ aarch64_extract_operand (const aarch64_operand *self,
       return aarch64_ext_prfop (self, info, code, inst);
     case 88:
       return aarch64_ext_hint (self, info, code, inst);
-    case 103:
+    case 105:
       return aarch64_ext_sve_index (self, info, code, inst);
-    case 104:
     case 106:
+    case 108:
       return aarch64_ext_sve_reglist (self, info, code, inst);
     default: assert (0); abort ();
     }
diff --git a/opcodes/aarch64-opc-2.c b/opcodes/aarch64-opc-2.c
index f8a7079..3905053 100644
--- a/opcodes/aarch64-opc-2.c
+++ b/opcodes/aarch64-opc-2.c
@@ -113,6 +113,8 @@ const struct aarch64_operand aarch64_operands[] =
   {AARCH64_OPND_CLASS_SYSTEM, "BARRIER_ISB", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {}, "the ISB option name SY or an optional 4-bit unsigned immediate"},
   {AARCH64_OPND_CLASS_SYSTEM, "PRFOP", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {}, "a prefetch operation specifier"},
   {AARCH64_OPND_CLASS_SYSTEM, "BARRIER_PSB", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {}, "the PSB option name CSYNC"},
+  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_PATTERN", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_pattern}, "an enumeration value such as POW2"},
+  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_PRFOP", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_prfop}, "an enumeration value such as PLDL1KEEP"},
   {AARCH64_OPND_CLASS_PRED_REG, "SVE_Pd", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Pd}, "an SVE predicate register"},
   {AARCH64_OPND_CLASS_PRED_REG, "SVE_Pg3", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Pg3}, "an SVE predicate register"},
   {AARCH64_OPND_CLASS_PRED_REG, "SVE_Pg4_5", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Pg4_5}, "an SVE predicate register"},
diff --git a/opcodes/aarch64-opc.c b/opcodes/aarch64-opc.c
index 41c058f..934c14d 100644
--- a/opcodes/aarch64-opc.c
+++ b/opcodes/aarch64-opc.c
@@ -27,6 +27,7 @@
 #include <inttypes.h>
 
 #include "opintl.h"
+#include "libiberty.h"
 
 #include "aarch64-opc.h"
 
@@ -34,6 +35,70 @@
 int debug_dump = FALSE;
 #endif /* DEBUG_AARCH64 */
 
+/* The enumeration strings associated with each value of a 5-bit SVE
+   pattern operand.  A null entry indicates a reserved meaning.  */
+const char *const aarch64_sve_pattern_array[32] = {
+  /* 0-7.  */
+  "pow2",
+  "vl1",
+  "vl2",
+  "vl3",
+  "vl4",
+  "vl5",
+  "vl6",
+  "vl7",
+  /* 8-15.  */
+  "vl8",
+  "vl16",
+  "vl32",
+  "vl64",
+  "vl128",
+  "vl256",
+  0,
+  0,
+  /* 16-23.  */
+  0,
+  0,
+  0,
+  0,
+  0,
+  0,
+  0,
+  0,
+  /* 24-31.  */
+  0,
+  0,
+  0,
+  0,
+  0,
+  "mul4",
+  "mul3",
+  "all"
+};
+
+/* The enumeration strings associated with each value of a 4-bit SVE
+   prefetch operand.  A null entry indicates a reserved meaning.  */
+const char *const aarch64_sve_prfop_array[16] = {
+  /* 0-7.  */
+  "pldl1keep",
+  "pldl1strm",
+  "pldl2keep",
+  "pldl2strm",
+  "pldl3keep",
+  "pldl3strm",
+  0,
+  0,
+  /* 8-15.  */
+  "pstl1keep",
+  "pstl1strm",
+  "pstl2keep",
+  "pstl2strm",
+  "pstl3keep",
+  "pstl3strm",
+  0,
+  0
+};
+
 /* Helper functions to determine which operand to be used to encode/decode
    the size:Q fields for AdvSIMD instructions.  */
 
@@ -214,6 +279,8 @@ const aarch64_field fields[] =
     { 16,  5 }, /* SVE_Zm_16: SVE vector register, bits [20,16]. */
     {  5,  5 }, /* SVE_Zn: SVE vector register, bits [9,5].  */
     {  0,  5 }, /* SVE_Zt: SVE vector register, bits [4,0].  */
+    {  5,  5 }, /* SVE_pattern: vector pattern enumeration.  */
+    {  0,  4 }, /* SVE_prfop: prefetch operation for SVE PRF[BHWD].  */
     { 22,  2 }, /* SVE_tszh: triangular size select high, bits [23,22].  */
 };
 
@@ -2489,7 +2556,7 @@ aarch64_print_operand (char *buf, size_t size, bfd_vma pc,
   const char *name = NULL;
   const aarch64_opnd_info *opnd = opnds + idx;
   enum aarch64_modifier_kind kind;
-  uint64_t addr;
+  uint64_t addr, enum_value;
 
   buf[0] = '\0';
   if (pcrel_p)
@@ -2681,6 +2748,27 @@ aarch64_print_operand (char *buf, size_t size, bfd_vma pc,
       snprintf (buf, size, "#%" PRIi64, opnd->imm.value);
       break;
 
+    case AARCH64_OPND_SVE_PATTERN:
+      if (optional_operand_p (opcode, idx)
+	  && opnd->imm.value == get_optional_operand_default_value (opcode))
+	break;
+      enum_value = opnd->imm.value;
+      assert (enum_value < ARRAY_SIZE (aarch64_sve_pattern_array));
+      if (aarch64_sve_pattern_array[enum_value])
+	snprintf (buf, size, "%s", aarch64_sve_pattern_array[enum_value]);
+      else
+	snprintf (buf, size, "#%" PRIi64, opnd->imm.value);
+      break;
+
+    case AARCH64_OPND_SVE_PRFOP:
+      enum_value = opnd->imm.value;
+      assert (enum_value < ARRAY_SIZE (aarch64_sve_prfop_array));
+      if (aarch64_sve_prfop_array[enum_value])
+	snprintf (buf, size, "%s", aarch64_sve_prfop_array[enum_value]);
+      else
+	snprintf (buf, size, "#%" PRIi64, opnd->imm.value);
+      break;
+
     case AARCH64_OPND_IMM_MOV:
       switch (aarch64_get_qualifier_esize (opnds[0].qualifier))
 	{
diff --git a/opcodes/aarch64-opc.h b/opcodes/aarch64-opc.h
index cc3dbef..b54f35e 100644
--- a/opcodes/aarch64-opc.h
+++ b/opcodes/aarch64-opc.h
@@ -106,6 +106,8 @@ enum aarch64_field_kind
   FLD_SVE_Zm_16,
   FLD_SVE_Zn,
   FLD_SVE_Zt,
+  FLD_SVE_pattern,
+  FLD_SVE_prfop,
   FLD_SVE_tszh,
 };
 
diff --git a/opcodes/aarch64-tbl.h b/opcodes/aarch64-tbl.h
index 9dbe0c0..73415f7 100644
--- a/opcodes/aarch64-tbl.h
+++ b/opcodes/aarch64-tbl.h
@@ -2820,6 +2820,10 @@ struct aarch64_opcode aarch64_opcode_table[] =
       "a prefetch operation specifier")					\
     Y (SYSTEM, hint, "BARRIER_PSB", 0, F (),				\
       "the PSB option name CSYNC")					\
+    Y(IMMEDIATE, imm, "SVE_PATTERN", 0, F(FLD_SVE_pattern),		\
+      "an enumeration value such as POW2")				\
+    Y(IMMEDIATE, imm, "SVE_PRFOP", 0, F(FLD_SVE_prfop),			\
+      "an enumeration value such as PLDL1KEEP")				\
     Y(PRED_REG, regno, "SVE_Pd", 0, F(FLD_SVE_Pd),			\
       "an SVE predicate register")					\
     Y(PRED_REG, regno, "SVE_Pg3", 0, F(FLD_SVE_Pg3),			\

^ permalink raw reply	[flat|nested] 76+ messages in thread

* [AArch64][SVE 24/32] Add AARCH64_OPND_SVE_PATTERN_SCALED
  2016-08-23  9:05 [AArch64][SVE 00/32] Add support for the ARMv8-A Scalable Vector Extension Richard Sandiford
                   ` (23 preceding siblings ...)
  2016-08-23  9:21 ` [AArch64][SVE 25/32] Add support for SVE addressing modes Richard Sandiford
@ 2016-08-23  9:21 ` Richard Sandiford
  2016-08-25 14:28   ` Richard Earnshaw (lists)
  2016-08-23  9:23 ` [AArch64][SVE 26/32] Add SVE MUL VL addressing modes Richard Sandiford
                   ` (7 subsequent siblings)
  32 siblings, 1 reply; 76+ messages in thread
From: Richard Sandiford @ 2016-08-23  9:21 UTC (permalink / raw)
  To: binutils

Some SVE instructions count the number of elements in a given vector
pattern and allow a scale factor of [1, 16] to be applied to the result.
This scale factor is written ", MUL #n", where "MUL" is a new operator.
E.g.:

	UQINCD	X0, POW2, MUL #2

This patch adds support for this kind of operand.

All existing operators were shifts of some kind, so there was a natural
range of [0, 63] regardless of context.  This was then narrowed further
by later checks (e.g. to [0, 31] when used for 32-bit values).

In contrast, MUL doesn't really have a natural context-independent range.
Rather than pick one arbitrarily, it seemed better to make the "shift"
amount a full 64-bit value and leave the range test to the usual
operand-checking code.  I've rearranged the fields of aarch64_opnd_info
so that this doesn't increase the size of the structure (although I don't
think its size is critical anyway).
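
So, as an illustration (these lines are not from the patch's testsuite):

	UQINCD	X0, POW2		// multiplier defaults to MUL #1
	UQINCD	X0, POW2, MUL #16	// largest accepted multiplier

A multiplier outside [1, 16] is rejected through the new
set_multiplier_out_of_range_error path rather than at parse time.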

OK to install?

Thanks,
Richard


include/opcode/
	* aarch64.h (AARCH64_OPND_SVE_PATTERN_SCALED): New aarch64_opnd.
	(AARCH64_MOD_MUL): New aarch64_modifier_kind.
	(aarch64_opnd_info): Make shifter.amount an int64_t and
	rearrange the fields.

opcodes/
	* aarch64-tbl.h (AARCH64_OPERANDS): Add an entry for
	AARCH64_OPND_SVE_PATTERN_SCALED.
	* aarch64-opc.h (FLD_SVE_imm4): New aarch64_field_kind.
	* aarch64-opc.c (fields): Add a corresponding entry.
	(set_multiplier_out_of_range_error): New function.
	(aarch64_operand_modifiers): Add entry for AARCH64_MOD_MUL.
	(operand_general_constraint_met_p): Handle
	AARCH64_OPND_SVE_PATTERN_SCALED.
	(print_register_offset_address): Use PRIi64 to print the
	shift amount.
	(aarch64_print_operand): Likewise.  Handle
	AARCH64_OPND_SVE_PATTERN_SCALED.
	* aarch64-opc-2.c: Regenerate.
	* aarch64-asm.h (ins_sve_scale): New inserter.
	* aarch64-asm.c (aarch64_ins_sve_scale): New function.
	* aarch64-asm-2.c: Regenerate.
	* aarch64-dis.h (ext_sve_scale): New extractor.
	* aarch64-dis.c (aarch64_ext_sve_scale): New function.
	* aarch64-dis-2.c: Regenerate.

gas/
	* config/tc-aarch64.c (SHIFTED_MUL): New parse_shift_mode.
	(parse_shift): Handle it.  Reject AARCH64_MOD_MUL for all other
	shift modes.  Skip range tests for AARCH64_MOD_MUL.
	(process_omitted_operand): Handle AARCH64_OPND_SVE_PATTERN_SCALED.
	(parse_operands): Likewise.

diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
index 9d1e3ec..079f1c9 100644
--- a/gas/config/tc-aarch64.c
+++ b/gas/config/tc-aarch64.c
@@ -2912,6 +2912,7 @@ enum parse_shift_mode
   SHIFTED_LOGIC_IMM,		/* "rn{,lsl|lsr|asl|asr|ror #n}" or
 				   "#imm"  */
   SHIFTED_LSL,			/* bare "lsl #n"  */
+  SHIFTED_MUL,			/* bare "mul #n"  */
   SHIFTED_LSL_MSL,		/* "lsl|msl #n"  */
   SHIFTED_REG_OFFSET		/* [su]xtw|sxtx {#n} or lsl #n  */
 };
@@ -2953,6 +2954,13 @@ parse_shift (char **str, aarch64_opnd_info *operand, enum parse_shift_mode mode)
       return FALSE;
     }
 
+  if (kind == AARCH64_MOD_MUL
+      && mode != SHIFTED_MUL)
+    {
+      set_syntax_error (_("invalid use of 'MUL'"));
+      return FALSE;
+    }
+
   switch (mode)
     {
     case SHIFTED_LOGIC_IMM:
@@ -2979,6 +2987,14 @@ parse_shift (char **str, aarch64_opnd_info *operand, enum parse_shift_mode mode)
 	}
       break;
 
+    case SHIFTED_MUL:
+      if (kind != AARCH64_MOD_MUL)
+	{
+	  set_syntax_error (_("only 'MUL' is permitted"));
+	  return FALSE;
+	}
+      break;
+
     case SHIFTED_REG_OFFSET:
       if (kind != AARCH64_MOD_UXTW && kind != AARCH64_MOD_LSL
 	  && kind != AARCH64_MOD_SXTW && kind != AARCH64_MOD_SXTX)
@@ -3031,7 +3047,11 @@ parse_shift (char **str, aarch64_opnd_info *operand, enum parse_shift_mode mode)
       set_syntax_error (_("constant shift amount required"));
       return FALSE;
     }
-  else if (exp.X_add_number < 0 || exp.X_add_number > 63)
+  /* For parsing purposes, MUL #n has no inherent range.  The range
+     depends on the operand and will be checked by operand-specific
+     routines.  */
+  else if (kind != AARCH64_MOD_MUL
+	   && (exp.X_add_number < 0 || exp.X_add_number > 63))
     {
       set_fatal_syntax_error (_("shift amount out of range 0 to 63"));
       return FALSE;
@@ -4914,6 +4934,12 @@ process_omitted_operand (enum aarch64_opnd type, const aarch64_opcode *opcode,
       operand->imm.value = default_value;
       break;
 
+    case AARCH64_OPND_SVE_PATTERN_SCALED:
+      operand->imm.value = default_value;
+      operand->shifter.kind = AARCH64_MOD_MUL;
+      operand->shifter.amount = 1;
+      break;
+
     case AARCH64_OPND_EXCEPTION:
       inst.reloc.type = BFD_RELOC_UNUSED;
       break;
@@ -5424,6 +5450,20 @@ parse_operands (char *str, const aarch64_opcode *opcode)
 	  info->imm.value = val;
 	  break;
 
+	case AARCH64_OPND_SVE_PATTERN_SCALED:
+	  po_enum_or_fail (aarch64_sve_pattern_array);
+	  info->imm.value = val;
+	  if (skip_past_comma (&str)
+	      && !parse_shift (&str, info, SHIFTED_MUL))
+	    goto failure;
+	  if (!info->shifter.operator_present)
+	    {
+	      gas_assert (info->shifter.kind == AARCH64_MOD_NONE);
+	      info->shifter.kind = AARCH64_MOD_MUL;
+	      info->shifter.amount = 1;
+	    }
+	  break;
+
 	case AARCH64_OPND_SVE_PRFOP:
 	  po_enum_or_fail (aarch64_sve_prfop_array);
 	  info->imm.value = val;
diff --git a/include/opcode/aarch64.h b/include/opcode/aarch64.h
index dd191cf..49b4413 100644
--- a/include/opcode/aarch64.h
+++ b/include/opcode/aarch64.h
@@ -245,6 +245,7 @@ enum aarch64_opnd
   AARCH64_OPND_BARRIER_PSB,	/* Barrier operand for PSB.  */
 
   AARCH64_OPND_SVE_PATTERN,	/* SVE vector pattern enumeration.  */
+  AARCH64_OPND_SVE_PATTERN_SCALED, /* Likewise, with additional MUL factor.  */
   AARCH64_OPND_SVE_PRFOP,	/* SVE prefetch operation.  */
   AARCH64_OPND_SVE_Pd,		/* SVE p0-p15 in Pd.  */
   AARCH64_OPND_SVE_Pg3,		/* SVE p0-p7 in Pg.  */
@@ -745,6 +746,7 @@ enum aarch64_modifier_kind
   AARCH64_MOD_SXTH,
   AARCH64_MOD_SXTW,
   AARCH64_MOD_SXTX,
+  AARCH64_MOD_MUL,
 };
 
 bfd_boolean
@@ -836,10 +838,10 @@ struct aarch64_opnd_info
   struct
     {
       enum aarch64_modifier_kind kind;
-      int amount;
       unsigned operator_present: 1;	/* Only valid during encoding.  */
       /* Value of the 'S' field in ld/st reg offset; used only in decoding.  */
       unsigned amount_present: 1;
+      int64_t amount;
     } shifter;
 
   unsigned skip:1;	/* Operand is not completed if there is a fixup needed
diff --git a/opcodes/aarch64-asm-2.c b/opcodes/aarch64-asm-2.c
index 0a6e476..039b9be 100644
--- a/opcodes/aarch64-asm-2.c
+++ b/opcodes/aarch64-asm-2.c
@@ -480,7 +480,6 @@ aarch64_insert_operand (const aarch64_operand *self,
     case 27:
     case 35:
     case 36:
-    case 91:
     case 92:
     case 93:
     case 94:
@@ -494,7 +493,8 @@ aarch64_insert_operand (const aarch64_operand *self,
     case 102:
     case 103:
     case 104:
-    case 107:
+    case 105:
+    case 108:
       return aarch64_ins_regno (self, info, code, inst);
     case 12:
       return aarch64_ins_reg_extended (self, info, code, inst);
@@ -532,7 +532,7 @@ aarch64_insert_operand (const aarch64_operand *self,
     case 69:
     case 70:
     case 89:
-    case 90:
+    case 91:
       return aarch64_ins_imm (self, info, code, inst);
     case 38:
     case 39:
@@ -583,10 +583,12 @@ aarch64_insert_operand (const aarch64_operand *self,
       return aarch64_ins_prfop (self, info, code, inst);
     case 88:
       return aarch64_ins_hint (self, info, code, inst);
-    case 105:
-      return aarch64_ins_sve_index (self, info, code, inst);
+    case 90:
+      return aarch64_ins_sve_scale (self, info, code, inst);
     case 106:
-    case 108:
+      return aarch64_ins_sve_index (self, info, code, inst);
+    case 107:
+    case 109:
       return aarch64_ins_sve_reglist (self, info, code, inst);
     default: assert (0); abort ();
     }
diff --git a/opcodes/aarch64-asm.c b/opcodes/aarch64-asm.c
index c045f9e..117a3c6 100644
--- a/opcodes/aarch64-asm.c
+++ b/opcodes/aarch64-asm.c
@@ -772,6 +772,19 @@ aarch64_ins_sve_reglist (const aarch64_operand *self,
   return NULL;
 }
 
+/* Encode <pattern>{, MUL #<amount>}.  The fields array specifies which
+   fields to use for <pattern>.  <amount> - 1 is encoded in the SVE_imm4
+   field.  */
+const char *
+aarch64_ins_sve_scale (const aarch64_operand *self,
+		       const aarch64_opnd_info *info, aarch64_insn *code,
+		       const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  insert_all_fields (self, code, info->imm.value);
+  insert_field (FLD_SVE_imm4, code, info->shifter.amount - 1, 0);
+  return NULL;
+}
+
 /* Miscellaneous encoding functions.  */
 
 /* Encode size[0], i.e. bit 22, for
diff --git a/opcodes/aarch64-asm.h b/opcodes/aarch64-asm.h
index ede366c..ac5faeb 100644
--- a/opcodes/aarch64-asm.h
+++ b/opcodes/aarch64-asm.h
@@ -71,6 +71,7 @@ AARCH64_DECL_OPD_INSERTER (ins_reg_extended);
 AARCH64_DECL_OPD_INSERTER (ins_reg_shifted);
 AARCH64_DECL_OPD_INSERTER (ins_sve_index);
 AARCH64_DECL_OPD_INSERTER (ins_sve_reglist);
+AARCH64_DECL_OPD_INSERTER (ins_sve_scale);
 
 #undef AARCH64_DECL_OPD_INSERTER
 
diff --git a/opcodes/aarch64-dis-2.c b/opcodes/aarch64-dis-2.c
index 9f936f0..124385d 100644
--- a/opcodes/aarch64-dis-2.c
+++ b/opcodes/aarch64-dis-2.c
@@ -10426,7 +10426,6 @@ aarch64_extract_operand (const aarch64_operand *self,
     case 27:
     case 35:
     case 36:
-    case 91:
     case 92:
     case 93:
     case 94:
@@ -10440,7 +10439,8 @@ aarch64_extract_operand (const aarch64_operand *self,
     case 102:
     case 103:
     case 104:
-    case 107:
+    case 105:
+    case 108:
       return aarch64_ext_regno (self, info, code, inst);
     case 8:
       return aarch64_ext_regrt_sysins (self, info, code, inst);
@@ -10483,7 +10483,7 @@ aarch64_extract_operand (const aarch64_operand *self,
     case 69:
     case 70:
     case 89:
-    case 90:
+    case 91:
       return aarch64_ext_imm (self, info, code, inst);
     case 38:
     case 39:
@@ -10536,10 +10536,12 @@ aarch64_extract_operand (const aarch64_operand *self,
       return aarch64_ext_prfop (self, info, code, inst);
     case 88:
       return aarch64_ext_hint (self, info, code, inst);
-    case 105:
-      return aarch64_ext_sve_index (self, info, code, inst);
+    case 90:
+      return aarch64_ext_sve_scale (self, info, code, inst);
     case 106:
-    case 108:
+      return aarch64_ext_sve_index (self, info, code, inst);
+    case 107:
+    case 109:
       return aarch64_ext_sve_reglist (self, info, code, inst);
     default: assert (0); abort ();
     }
diff --git a/opcodes/aarch64-dis.c b/opcodes/aarch64-dis.c
index ab93234..1d00c0a 100644
--- a/opcodes/aarch64-dis.c
+++ b/opcodes/aarch64-dis.c
@@ -1219,6 +1219,26 @@ aarch64_ext_sve_reglist (const aarch64_operand *self,
   info->reglist.num_regs = get_opcode_dependent_value (inst->opcode);
   return 1;
 }
+
+/* Decode <pattern>{, MUL #<amount>}.  The fields array specifies which
+   fields to use for <pattern>.  <amount> - 1 is encoded in the SVE_imm4
+   field.  */
+int
+aarch64_ext_sve_scale (const aarch64_operand *self,
+		       aarch64_opnd_info *info, aarch64_insn code,
+		       const aarch64_inst *inst)
+{
+  int val;
+
+  if (!aarch64_ext_imm (self, info, code, inst))
+    return 0;
+  val = extract_field (FLD_SVE_imm4, code, 0);
+  info->shifter.kind = AARCH64_MOD_MUL;
+  info->shifter.amount = val + 1;
+  info->shifter.operator_present = (val != 0);
+  info->shifter.amount_present = (val != 0);
+  return 1;
+}
 \f
 /* Bitfields that are commonly used to encode certain operands' information
    may be partially used as part of the base opcode in some instructions.
diff --git a/opcodes/aarch64-dis.h b/opcodes/aarch64-dis.h
index 5efb904..92f5ad4 100644
--- a/opcodes/aarch64-dis.h
+++ b/opcodes/aarch64-dis.h
@@ -93,6 +93,7 @@ AARCH64_DECL_OPD_EXTRACTOR (ext_reg_extended);
 AARCH64_DECL_OPD_EXTRACTOR (ext_reg_shifted);
 AARCH64_DECL_OPD_EXTRACTOR (ext_sve_index);
 AARCH64_DECL_OPD_EXTRACTOR (ext_sve_reglist);
+AARCH64_DECL_OPD_EXTRACTOR (ext_sve_scale);
 
 #undef AARCH64_DECL_OPD_EXTRACTOR
 
diff --git a/opcodes/aarch64-opc-2.c b/opcodes/aarch64-opc-2.c
index 3905053..8f221b8 100644
--- a/opcodes/aarch64-opc-2.c
+++ b/opcodes/aarch64-opc-2.c
@@ -114,6 +114,7 @@ const struct aarch64_operand aarch64_operands[] =
   {AARCH64_OPND_CLASS_SYSTEM, "PRFOP", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {}, "a prefetch operation specifier"},
   {AARCH64_OPND_CLASS_SYSTEM, "BARRIER_PSB", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {}, "the PSB option name CSYNC"},
   {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_PATTERN", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_pattern}, "an enumeration value such as POW2"},
+  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_PATTERN_SCALED", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_pattern}, "an enumeration value such as POW2"},
   {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_PRFOP", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_prfop}, "an enumeration value such as PLDL1KEEP"},
   {AARCH64_OPND_CLASS_PRED_REG, "SVE_Pd", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Pd}, "an SVE predicate register"},
   {AARCH64_OPND_CLASS_PRED_REG, "SVE_Pg3", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Pg3}, "an SVE predicate register"},
diff --git a/opcodes/aarch64-opc.c b/opcodes/aarch64-opc.c
index 934c14d..326b94e 100644
--- a/opcodes/aarch64-opc.c
+++ b/opcodes/aarch64-opc.c
@@ -279,6 +279,7 @@ const aarch64_field fields[] =
     { 16,  5 }, /* SVE_Zm_16: SVE vector register, bits [20,16]. */
     {  5,  5 }, /* SVE_Zn: SVE vector register, bits [9,5].  */
     {  0,  5 }, /* SVE_Zt: SVE vector register, bits [4,0].  */
+    { 16,  4 }, /* SVE_imm4: 4-bit immediate field.  */
     {  5,  5 }, /* SVE_pattern: vector pattern enumeration.  */
     {  0,  4 }, /* SVE_prfop: prefetch operation for SVE PRF[BHWD].  */
     { 22,  2 }, /* SVE_tszh: triangular size select high, bits [23,22].  */
@@ -359,6 +360,7 @@ const struct aarch64_name_value_pair aarch64_operand_modifiers [] =
     {"sxth", 0x5},
     {"sxtw", 0x6},
     {"sxtx", 0x7},
+    {"mul", 0x0},
     {NULL, 0},
 };
 
@@ -1303,6 +1305,18 @@ set_sft_amount_out_of_range_error (aarch64_operand_error *mismatch_detail,
 			  _("shift amount"));
 }
 
+/* Report that the MUL modifier in operand IDX should be in the range
+   [LOWER_BOUND, UPPER_BOUND].  */
+static inline void
+set_multiplier_out_of_range_error (aarch64_operand_error *mismatch_detail,
+				   int idx, int lower_bound, int upper_bound)
+{
+  if (mismatch_detail == NULL)
+    return;
+  set_out_of_range_error (mismatch_detail, idx, lower_bound, upper_bound,
+			  _("multiplier"));
+}
+
 static inline void
 set_unaligned_error (aarch64_operand_error *mismatch_detail, int idx,
 		     int alignment)
@@ -2001,6 +2015,15 @@ operand_general_constraint_met_p (const aarch64_opnd_info *opnds, int idx,
 	    }
 	  break;
 
+	case AARCH64_OPND_SVE_PATTERN_SCALED:
+	  assert (opnd->shifter.kind == AARCH64_MOD_MUL);
+	  if (!value_in_range_p (opnd->shifter.amount, 1, 16))
+	    {
+	      set_multiplier_out_of_range_error (mismatch_detail, idx, 1, 16);
+	      return 0;
+	    }
+	  break;
+
 	default:
 	  break;
 	}
@@ -2525,7 +2548,8 @@ print_register_offset_address (char *buf, size_t size,
   if (print_extend_p)
     {
       if (print_amount_p)
-	snprintf (tb, sizeof (tb), ",%s #%d", shift_name, opnd->shifter.amount);
+	snprintf (tb, sizeof (tb), ",%s #%" PRIi64, shift_name,
+		  opnd->shifter.amount);
       else
 	snprintf (tb, sizeof (tb), ",%s", shift_name);
     }
@@ -2620,7 +2644,7 @@ aarch64_print_operand (char *buf, size_t size, bfd_vma pc,
 	    }
 	}
       if (opnd->shifter.amount)
-	snprintf (buf, size, "%s, %s #%d",
+	snprintf (buf, size, "%s, %s #%" PRIi64,
 		  get_int_reg_name (opnd->reg.regno, opnd->qualifier, 0),
 		  aarch64_operand_modifiers[kind].name,
 		  opnd->shifter.amount);
@@ -2637,7 +2661,7 @@ aarch64_print_operand (char *buf, size_t size, bfd_vma pc,
 	snprintf (buf, size, "%s",
 		  get_int_reg_name (opnd->reg.regno, opnd->qualifier, 0));
       else
-	snprintf (buf, size, "%s, %s #%d",
+	snprintf (buf, size, "%s, %s #%" PRIi64,
 		  get_int_reg_name (opnd->reg.regno, opnd->qualifier, 0),
 		  aarch64_operand_modifiers[opnd->shifter.kind].name,
 		  opnd->shifter.amount);
@@ -2760,6 +2784,26 @@ aarch64_print_operand (char *buf, size_t size, bfd_vma pc,
 	snprintf (buf, size, "#%" PRIi64, opnd->imm.value);
       break;
 
+    case AARCH64_OPND_SVE_PATTERN_SCALED:
+      if (optional_operand_p (opcode, idx)
+	  && !opnd->shifter.operator_present
+	  && opnd->imm.value == get_optional_operand_default_value (opcode))
+	break;
+      enum_value = opnd->imm.value;
+      assert (enum_value < ARRAY_SIZE (aarch64_sve_pattern_array));
+      if (aarch64_sve_pattern_array[opnd->imm.value])
+	snprintf (buf, size, "%s", aarch64_sve_pattern_array[opnd->imm.value]);
+      else
+	snprintf (buf, size, "#%" PRIi64, opnd->imm.value);
+      if (opnd->shifter.operator_present)
+	{
+	  size_t len = strlen (buf);
+	  snprintf (buf + len, size - len, ", %s #%" PRIi64,
+		    aarch64_operand_modifiers[opnd->shifter.kind].name,
+		    opnd->shifter.amount);
+	}
+      break;
+
     case AARCH64_OPND_SVE_PRFOP:
       enum_value = opnd->imm.value;
       assert (enum_value < ARRAY_SIZE (aarch64_sve_prfop_array));
@@ -2794,7 +2838,7 @@ aarch64_print_operand (char *buf, size_t size, bfd_vma pc,
     case AARCH64_OPND_AIMM:
     case AARCH64_OPND_HALF:
       if (opnd->shifter.amount)
-	snprintf (buf, size, "#0x%" PRIx64 ", lsl #%d", opnd->imm.value,
+	snprintf (buf, size, "#0x%" PRIx64 ", lsl #%" PRIi64, opnd->imm.value,
 		  opnd->shifter.amount);
       else
 	snprintf (buf, size, "#0x%" PRIx64, opnd->imm.value);
@@ -2806,7 +2850,7 @@ aarch64_print_operand (char *buf, size_t size, bfd_vma pc,
 	  || opnd->shifter.kind == AARCH64_MOD_NONE)
 	snprintf (buf, size, "#0x%" PRIx64, opnd->imm.value);
       else
-	snprintf (buf, size, "#0x%" PRIx64 ", %s #%d", opnd->imm.value,
+	snprintf (buf, size, "#0x%" PRIx64 ", %s #%" PRIi64, opnd->imm.value,
 		  aarch64_operand_modifiers[opnd->shifter.kind].name,
 		  opnd->shifter.amount);
       break;
diff --git a/opcodes/aarch64-opc.h b/opcodes/aarch64-opc.h
index b54f35e..3406f6e 100644
--- a/opcodes/aarch64-opc.h
+++ b/opcodes/aarch64-opc.h
@@ -106,6 +106,7 @@ enum aarch64_field_kind
   FLD_SVE_Zm_16,
   FLD_SVE_Zn,
   FLD_SVE_Zt,
+  FLD_SVE_imm4,
   FLD_SVE_pattern,
   FLD_SVE_prfop,
   FLD_SVE_tszh,
diff --git a/opcodes/aarch64-tbl.h b/opcodes/aarch64-tbl.h
index 73415f7..491235f 100644
--- a/opcodes/aarch64-tbl.h
+++ b/opcodes/aarch64-tbl.h
@@ -2822,6 +2822,8 @@ struct aarch64_opcode aarch64_opcode_table[] =
       "the PSB option name CSYNC")					\
     Y(IMMEDIATE, imm, "SVE_PATTERN", 0, F(FLD_SVE_pattern),		\
       "an enumeration value such as POW2")				\
+    Y(IMMEDIATE, sve_scale, "SVE_PATTERN_SCALED", 0,			\
+      F(FLD_SVE_pattern), "an enumeration value such as POW2")		\
     Y(IMMEDIATE, imm, "SVE_PRFOP", 0, F(FLD_SVE_prfop),			\
       "an enumeration value such as PLDL1KEEP")				\
     Y(PRED_REG, regno, "SVE_Pd", 0, F(FLD_SVE_Pd),			\

^ permalink raw reply	[flat|nested] 76+ messages in thread

* [AArch64][SVE 25/32] Add support for SVE addressing modes
  2016-08-23  9:05 [AArch64][SVE 00/32] Add support for the ARMv8-A Scalable Vector Extension Richard Sandiford
                   ` (22 preceding siblings ...)
  2016-08-23  9:20 ` [AArch64][SVE 23/32] Add SVE pattern and prfop operands Richard Sandiford
@ 2016-08-23  9:21 ` Richard Sandiford
  2016-08-25 14:38   ` Richard Earnshaw (lists)
  2016-08-23  9:21 ` [AArch64][SVE 24/32] Add AARCH64_OPND_SVE_PATTERN_SCALED Richard Sandiford
                   ` (8 subsequent siblings)
  32 siblings, 1 reply; 76+ messages in thread
From: Richard Sandiford @ 2016-08-23  9:21 UTC (permalink / raw)
  To: binutils

This patch adds most of the new SVE addressing modes and associated
operands.  A follow-on patch adds MUL VL, since handling it separately
makes the changes easier to read.

The patch also introduces a new "operand-dependent data" field to the
operand flags, based closely on the existing one for opcode flags.
For SVE this new field needs only 2 bits, but it could be widened
in future if necessary.
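
As a rough sketch of how that field is packed and read back (this mirrors
the OPD_F_OD_LSB/OPD_F_OD_MASK/OPD_F_NO_ZR definitions added to
opcodes/aarch64-opc.h below; the standalone main and the flags-only
signature of get_operand_specific_data are only for illustration -- the
real function takes a const aarch64_operand * and reads its flags member):

    #include <assert.h>
    #include <stdint.h>

    /* Mirror of the new operand flag definitions in opcodes/aarch64-opc.h.  */
    #define OPD_F_OD_MASK  0x00000060  /* Operand-dependent data.  */
    #define OPD_F_OD_LSB   5
    #define OPD_F_NO_ZR    0x00000080  /* ZR index not allowed.  */

    /* Simplified stand-in for get_operand_specific_data.  */
    static unsigned int
    get_operand_specific_data (uint32_t flags)
    {
      return (flags & OPD_F_OD_MASK) >> OPD_F_OD_LSB;
    }

    int
    main (void)
    {
      /* An SVE_ADDR_RX_LSL3-style operand: shift amount 3, XZR not allowed.  */
      uint32_t flags = (3u << OPD_F_OD_LSB) | OPD_F_NO_ZR;

      assert (get_operand_specific_data (flags) == 3);
      assert ((flags & OPD_F_NO_ZR) != 0);
      return 0;
    }

In the patch the value is read back with get_operand_specific_data
(&aarch64_operands[type]) or get_operand_specific_data (self), e.g. to
scale the immediate in aarch64_ins_sve_addr_ri_u6 and to supply the LSL
amount in aarch64_ext_sve_addr_rr_lsl.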

OK to install?

Thanks,
Richard


include/opcode/
	* aarch64.h (AARCH64_OPND_SVE_ADDR_RI_U6): New aarch64_opnd.
	(AARCH64_OPND_SVE_ADDR_RI_U6x2, AARCH64_OPND_SVE_ADDR_RI_U6x4)
	(AARCH64_OPND_SVE_ADDR_RI_U6x8, AARCH64_OPND_SVE_ADDR_RR)
	(AARCH64_OPND_SVE_ADDR_RR_LSL1, AARCH64_OPND_SVE_ADDR_RR_LSL2)
	(AARCH64_OPND_SVE_ADDR_RR_LSL3, AARCH64_OPND_SVE_ADDR_RX)
	(AARCH64_OPND_SVE_ADDR_RX_LSL1, AARCH64_OPND_SVE_ADDR_RX_LSL2)
	(AARCH64_OPND_SVE_ADDR_RX_LSL3, AARCH64_OPND_SVE_ADDR_RZ)
	(AARCH64_OPND_SVE_ADDR_RZ_LSL1, AARCH64_OPND_SVE_ADDR_RZ_LSL2)
	(AARCH64_OPND_SVE_ADDR_RZ_LSL3, AARCH64_OPND_SVE_ADDR_RZ_XTW_14)
	(AARCH64_OPND_SVE_ADDR_RZ_XTW_22, AARCH64_OPND_SVE_ADDR_RZ_XTW1_14)
	(AARCH64_OPND_SVE_ADDR_RZ_XTW1_22, AARCH64_OPND_SVE_ADDR_RZ_XTW2_14)
	(AARCH64_OPND_SVE_ADDR_RZ_XTW2_22, AARCH64_OPND_SVE_ADDR_RZ_XTW3_14)
	(AARCH64_OPND_SVE_ADDR_RZ_XTW3_22, AARCH64_OPND_SVE_ADDR_ZI_U5)
	(AARCH64_OPND_SVE_ADDR_ZI_U5x2, AARCH64_OPND_SVE_ADDR_ZI_U5x4)
	(AARCH64_OPND_SVE_ADDR_ZI_U5x8, AARCH64_OPND_SVE_ADDR_ZZ_LSL)
	(AARCH64_OPND_SVE_ADDR_ZZ_SXTW, AARCH64_OPND_SVE_ADDR_ZZ_UXTW):
	Likewise.

opcodes/
	* aarch64-tbl.h (AARCH64_OPERANDS): Add entries for the new SVE
	address operands.
	* aarch64-opc.h (FLD_SVE_imm6, FLD_SVE_msz, FLD_SVE_xs_14)
	(FLD_SVE_xs_22): New aarch64_field_kinds.
	(OPD_F_OD_MASK, OPD_F_OD_LSB, OPD_F_NO_ZR): New flags.
	(get_operand_specific_data): New function.
	* aarch64-opc.c (fields): Add entries for FLD_SVE_imm6, FLD_SVE_msz,
	FLD_SVE_xs_14 and FLD_SVE_xs_22.
	(operand_general_constraint_met_p): Handle the new SVE address
	operands.
	(sve_reg): New array.
	(get_addr_sve_reg_name): New function.
	(aarch64_print_operand): Handle the new SVE address operands.
	* aarch64-opc-2.c: Regenerate.
	* aarch64-asm.h (ins_sve_addr_ri_u6, ins_sve_addr_rr_lsl)
	(ins_sve_addr_rz_xtw, ins_sve_addr_zi_u5, ins_sve_addr_zz_lsl)
	(ins_sve_addr_zz_sxtw, ins_sve_addr_zz_uxtw): New inserters.
	* aarch64-asm.c (aarch64_ins_sve_addr_ri_u6): New function.
	(aarch64_ins_sve_addr_rr_lsl): Likewise.
	(aarch64_ins_sve_addr_rz_xtw): Likewise.
	(aarch64_ins_sve_addr_zi_u5): Likewise.
	(aarch64_ext_sve_addr_zz): Likewise.
	(aarch64_ins_sve_addr_zz_lsl): Likewise.
	(aarch64_ins_sve_addr_zz_sxtw): Likewise.
	(aarch64_ins_sve_addr_zz_uxtw): Likewise.
	* aarch64-asm-2.c: Regenerate.
	* aarch64-dis.h (ext_sve_addr_ri_u6, ext_sve_addr_rr_lsl)
	(ext_sve_addr_rz_xtw, ext_sve_addr_zi_u5, ext_sve_addr_zz_lsl)
	(ext_sve_addr_zz_sxtw, ext_sve_addr_zz_uxtw): New extractors.
	* aarch64-dis.c (aarch64_ext_sve_addr_reg_imm): New function.
	(aarch64_ext_sve_addr_ri_u6): Likewise.
	(aarch64_ext_sve_addr_rr_lsl): Likewise.
	(aarch64_ext_sve_addr_rz_xtw): Likewise.
	(aarch64_ext_sve_addr_zi_u5): Likewise.
	(aarch64_ext_sve_addr_zz): Likewise.
	(aarch64_ext_sve_addr_zz_lsl): Likewise.
	(aarch64_ext_sve_addr_zz_sxtw): Likewise.
	(aarch64_ext_sve_addr_zz_uxtw): Likewise.
	* aarch64-dis-2.c: Regenerate.

gas/
	* config/tc-aarch64.c (aarch64_addr_reg_parse): New function,
	split out from aarch64_reg_parse_32_64.  Handle Z registers too.
	(aarch64_reg_parse_32_64): Call it.
	(parse_address_main): Add base_qualifier, offset_qualifier
	and accept_sve parameters.  Handle SVE base and offset registers.
	(parse_address): Update call to parse_address_main.
	(parse_address_reloc): Likewise.
	(parse_sve_address): New function.
	(parse_operands): Parse the new SVE address operands.

diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
index 079f1c9..f9d89ce 100644
--- a/gas/config/tc-aarch64.c
+++ b/gas/config/tc-aarch64.c
@@ -705,7 +705,8 @@ aarch64_check_reg_type (const reg_entry *reg, aarch64_reg_type type)
 }
 
 /* Try to parse a base or offset register.  ACCEPT_SP says whether {W}SP
-   should be considered valid and ACCEPT_RZ says whether zero registers
+   should be considered valid, ACCEPT_RZ says whether zero registers
+   should be considered valid, and ACCEPT_SVE says whether SVE registers
    should be considered valid.
 
    Return the register number on success, setting *QUALIFIER to the
@@ -715,10 +716,10 @@ aarch64_check_reg_type (const reg_entry *reg, aarch64_reg_type type)
    Note that this function does not issue any diagnostics.  */
 
 static int
-aarch64_reg_parse_32_64 (char **ccp, bfd_boolean accept_sp,
-			 bfd_boolean accept_rz,
-			 aarch64_opnd_qualifier_t *qualifier,
-			 bfd_boolean *isregzero)
+aarch64_addr_reg_parse (char **ccp, bfd_boolean accept_sp,
+			bfd_boolean accept_rz, bfd_boolean accept_sve,
+			aarch64_opnd_qualifier_t *qualifier,
+			bfd_boolean *isregzero)
 {
   char *str = *ccp;
   const reg_entry *reg = parse_reg (&str);
@@ -726,9 +727,6 @@ aarch64_reg_parse_32_64 (char **ccp, bfd_boolean accept_sp,
   if (reg == NULL)
     return PARSE_FAIL;
 
-  if (! aarch64_check_reg_type (reg, REG_TYPE_R_Z_SP))
-    return PARSE_FAIL;
-
   switch (reg->type)
     {
     case REG_TYPE_SP_32:
@@ -756,6 +754,23 @@ aarch64_reg_parse_32_64 (char **ccp, bfd_boolean accept_sp,
 		    : AARCH64_OPND_QLF_X);
       *isregzero = TRUE;
       break;
+    case REG_TYPE_ZN:
+      if (!accept_sve || str[0] != '.')
+	return PARSE_FAIL;
+      switch (TOLOWER (str[1]))
+	{
+	case 's':
+	  *qualifier = AARCH64_OPND_QLF_S_S;
+	  break;
+	case 'd':
+	  *qualifier = AARCH64_OPND_QLF_S_D;
+	  break;
+	default:
+	  return PARSE_FAIL;
+	}
+      str += 2;
+      *isregzero = FALSE;
+      break;
     default:
       return PARSE_FAIL;
     }
@@ -765,6 +780,26 @@ aarch64_reg_parse_32_64 (char **ccp, bfd_boolean accept_sp,
   return reg->number;
 }
 
+/* Try to parse a scalar base or offset register.  ACCEPT_SP says whether
+   {W}SP should be considered valid and ACCEPT_RZ says whether zero
+   registers should be considered valid.
+
+   Return the register number on success, setting *QUALIFIER to the
+   register qualifier and *ISREGZERO to whether the register is a zero
+   register.  Return PARSE_FAIL otherwise.
+
+   Note that this function does not issue any diagnostics.  */
+
+static int
+aarch64_reg_parse_32_64 (char **ccp, bfd_boolean accept_sp,
+			 bfd_boolean accept_rz,
+			 aarch64_opnd_qualifier_t *qualifier,
+			 bfd_boolean *isregzero)
+{
+  return aarch64_addr_reg_parse (ccp, accept_sp, accept_rz, FALSE,
+				 qualifier, isregzero);
+}
+
 /* Parse the qualifier of a vector register or vector element of type
    REG_TYPE.  Fill in *PARSED_TYPE and return TRUE if the parsing
    succeeds; otherwise return FALSE.
@@ -3240,8 +3275,8 @@ parse_shifter_operand_reloc (char **str, aarch64_opnd_info *operand,
    The A64 instruction set has the following addressing modes:
 
    Offset
-     [base]			// in SIMD ld/st structure
-     [base{,#0}]		// in ld/st exclusive
+     [base]			 // in SIMD ld/st structure
+     [base{,#0}]		 // in ld/st exclusive
      [base{,#imm}]
      [base,Xm{,LSL #imm}]
      [base,Xm,SXTX {#imm}]
@@ -3250,10 +3285,18 @@ parse_shifter_operand_reloc (char **str, aarch64_opnd_info *operand,
      [base,#imm]!
    Post-indexed
      [base],#imm
-     [base],Xm			// in SIMD ld/st structure
+     [base],Xm			 // in SIMD ld/st structure
    PC-relative (literal)
      label
-     =immediate
+   SVE:
+     [base,Zm.D{,LSL #imm}]
+     [base,Zm.S,(S|U)XTW {#imm}]
+     [base,Zm.D,(S|U)XTW {#imm}] // ignores top 32 bits of Zm.D elements
+     [Zn.S,#imm]
+     [Zn.D,#imm]
+     [Zn.S,Zm.S{,LSL #imm}]      // }
+     [Zn.D,Zm.D{,LSL #imm}]      // } in ADR
+     [Zn.D,Zm.D,(S|U)XTW {#imm}] // }
 
    (As a convenience, the notation "=immediate" is permitted in conjunction
    with the pc-relative literal load instructions to automatically place an
@@ -3280,26 +3323,37 @@ parse_shifter_operand_reloc (char **str, aarch64_opnd_info *operand,
      .pcrel=1; .preind=1; .postind=0; .writeback=0
 
    The shift/extension information, if any, will be stored in .shifter.
+   The base and offset qualifiers will be stored in *BASE_QUALIFIER and
+   *OFFSET_QUALIFIER respectively, with NIL being used if there's no
+   corresponding register.
 
    RELOC says whether relocation operators should be accepted
    and ACCEPT_REG_POST_INDEX says whether post-indexed register
    addressing should be accepted.
 
+   Likewise ACCEPT_SVE says whether the SVE addressing modes should be
+   accepted.  We use context-dependent parsing for this case because
+   (for compatibility) we should accept symbolic constants like z0 and
+   z0.s in base AArch64 code.
+
    In all other cases, it is the caller's responsibility to check whether
    the addressing mode is supported by the instruction.  It is also the
    caller's responsibility to set inst.reloc.type.  */
 
 static bfd_boolean
-parse_address_main (char **str, aarch64_opnd_info *operand, bfd_boolean reloc,
-		    bfd_boolean accept_reg_post_index)
+parse_address_main (char **str, aarch64_opnd_info *operand,
+		    aarch64_opnd_qualifier_t *base_qualifier,
+		    aarch64_opnd_qualifier_t *offset_qualifier,
+		    bfd_boolean reloc, bfd_boolean accept_reg_post_index,
+		    bfd_boolean accept_sve)
 {
   char *p = *str;
   int reg;
-  aarch64_opnd_qualifier_t base_qualifier;
-  aarch64_opnd_qualifier_t offset_qualifier;
   bfd_boolean isregzero;
   expressionS *exp = &inst.reloc.exp;
 
+  *base_qualifier = AARCH64_OPND_QLF_NIL;
+  *offset_qualifier = AARCH64_OPND_QLF_NIL;
   if (! skip_past_char (&p, '['))
     {
       /* =immediate or label.  */
@@ -3375,8 +3429,14 @@ parse_address_main (char **str, aarch64_opnd_info *operand, bfd_boolean reloc,
   /* [ */
 
   /* Accept SP and reject ZR */
-  reg = aarch64_reg_parse_32_64 (&p, TRUE, FALSE, &base_qualifier, &isregzero);
-  if (reg == PARSE_FAIL || base_qualifier == AARCH64_OPND_QLF_W)
+  reg = aarch64_addr_reg_parse (&p, TRUE, FALSE, accept_sve, base_qualifier,
+				&isregzero);
+  if (reg == PARSE_FAIL)
+    {
+      set_syntax_error (_("base register expected"));
+      return FALSE;
+    }
+  else if (*base_qualifier == AARCH64_OPND_QLF_W)
     {
       set_syntax_error (_(get_reg_expected_msg (REG_TYPE_R_64)));
       return FALSE;
@@ -3390,8 +3450,8 @@ parse_address_main (char **str, aarch64_opnd_info *operand, bfd_boolean reloc,
       operand->addr.preind = 1;
 
       /* Reject SP and accept ZR */
-      reg = aarch64_reg_parse_32_64 (&p, FALSE, TRUE, &offset_qualifier,
-				     &isregzero);
+      reg = aarch64_addr_reg_parse (&p, FALSE, TRUE, accept_sve,
+				    offset_qualifier, &isregzero);
       if (reg != PARSE_FAIL)
 	{
 	  /* [Xn,Rm  */
@@ -3414,13 +3474,19 @@ parse_address_main (char **str, aarch64_opnd_info *operand, bfd_boolean reloc,
 	      || operand->shifter.kind == AARCH64_MOD_LSL
 	      || operand->shifter.kind == AARCH64_MOD_SXTX)
 	    {
-	      if (offset_qualifier == AARCH64_OPND_QLF_W)
+	      if (*offset_qualifier == AARCH64_OPND_QLF_W)
 		{
 		  set_syntax_error (_("invalid use of 32-bit register offset"));
 		  return FALSE;
 		}
+	      if (aarch64_get_qualifier_esize (*base_qualifier)
+		  != aarch64_get_qualifier_esize (*offset_qualifier))
+		{
+		  set_syntax_error (_("offset has different size from base"));
+		  return FALSE;
+		}
 	    }
-	  else if (offset_qualifier == AARCH64_OPND_QLF_X)
+	  else if (*offset_qualifier == AARCH64_OPND_QLF_X)
 	    {
 	      set_syntax_error (_("invalid use of 64-bit register offset"));
 	      return FALSE;
@@ -3465,12 +3531,20 @@ parse_address_main (char **str, aarch64_opnd_info *operand, bfd_boolean reloc,
 	      inst.reloc.type = entry->ldst_type;
 	      inst.reloc.pc_rel = entry->pc_rel;
 	    }
-	  else if (! my_get_expression (exp, &p, GE_OPT_PREFIX, 1))
+	  else
 	    {
-	      set_syntax_error (_("invalid expression in the address"));
-	      return FALSE;
+	      if (! my_get_expression (exp, &p, GE_OPT_PREFIX, 1))
+		{
+		  set_syntax_error (_("invalid expression in the address"));
+		  return FALSE;
+		}
+	      /* [Xn,<expr>  */
+	      if (accept_sve && exp->X_op != O_constant)
+		{
+		  set_syntax_error (_("constant offset required"));
+		  return FALSE;
+		}
 	    }
-	  /* [Xn,<expr>  */
 	}
     }
 
@@ -3505,11 +3579,11 @@ parse_address_main (char **str, aarch64_opnd_info *operand, bfd_boolean reloc,
 
       if (accept_reg_post_index
 	  && (reg = aarch64_reg_parse_32_64 (&p, FALSE, FALSE,
-					     &offset_qualifier,
+					     offset_qualifier,
 					     &isregzero)) != PARSE_FAIL)
 	{
 	  /* [Xn],Xm */
-	  if (offset_qualifier == AARCH64_OPND_QLF_W)
+	  if (*offset_qualifier == AARCH64_OPND_QLF_W)
 	    {
 	      set_syntax_error (_("invalid 32-bit register offset"));
 	      return FALSE;
@@ -3544,7 +3618,8 @@ parse_address_main (char **str, aarch64_opnd_info *operand, bfd_boolean reloc,
   return TRUE;
 }
 
-/* Parse an address that cannot contain relocation operators.
+/* Parse a base AArch64 address, i.e. one that cannot contain SVE base
+   registers or SVE offset registers.  Do not allow relocation operators.
    Look for and parse "[Xn], (Xm|#m)" as post-indexed addressing
    if ACCEPT_REG_POST_INDEX is true.
 
@@ -3553,17 +3628,34 @@ static bfd_boolean
 parse_address (char **str, aarch64_opnd_info *operand,
 	       bfd_boolean accept_reg_post_index)
 {
-  return parse_address_main (str, operand, FALSE, accept_reg_post_index);
+  aarch64_opnd_qualifier_t base_qualifier, offset_qualifier;
+  return parse_address_main (str, operand, &base_qualifier, &offset_qualifier,
+			     FALSE, accept_reg_post_index, FALSE);
 }
 
-/* Parse an address that can contain relocation operators.  Do not
-   accept post-indexed addressing.
+/* Parse a base AArch64 address, i.e. one that cannot contain SVE base
+   registers or SVE offset registers.  Allow relocation operators but
+   disallow post-indexed addressing.
 
    Return TRUE on success.  */
 static bfd_boolean
 parse_address_reloc (char **str, aarch64_opnd_info *operand)
 {
-  return parse_address_main (str, operand, TRUE, FALSE);
+  aarch64_opnd_qualifier_t base_qualifier, offset_qualifier;
+  return parse_address_main (str, operand, &base_qualifier, &offset_qualifier,
+			     TRUE, FALSE, FALSE);
+}
+
+/* Parse an address in which SVE vector registers are allowed.
+   The arguments have the same meaning as for parse_address_main.
+   Return TRUE on success.  */
+static bfd_boolean
+parse_sve_address (char **str, aarch64_opnd_info *operand,
+		   aarch64_opnd_qualifier_t *base_qualifier,
+		   aarch64_opnd_qualifier_t *offset_qualifier)
+{
+  return parse_address_main (str, operand, base_qualifier, offset_qualifier,
+			     FALSE, FALSE, TRUE);
 }
 
 /* Parse an operand for a MOVZ, MOVN or MOVK instruction.
@@ -5174,7 +5266,7 @@ parse_operands (char *str, const aarch64_opcode *opcode)
       int comma_skipped_p = 0;
       aarch64_reg_type rtype;
       struct vector_type_el vectype;
-      aarch64_opnd_qualifier_t qualifier;
+      aarch64_opnd_qualifier_t qualifier, base_qualifier, offset_qualifier;
       aarch64_opnd_info *info = &inst.base.operands[i];
       aarch64_reg_type reg_type;
 
@@ -5793,6 +5885,7 @@ parse_operands (char *str, const aarch64_opcode *opcode)
 	case AARCH64_OPND_ADDR_REGOFF:
 	  /* [<Xn|SP>, <R><m>{, <extend> {<amount>}}]  */
 	  po_misc_or_fail (parse_address (&str, info, FALSE));
+	regoff_addr:
 	  if (info->addr.pcrel || !info->addr.offset.is_reg
 	      || !info->addr.preind || info->addr.postind
 	      || info->addr.writeback)
@@ -5887,6 +5980,116 @@ parse_operands (char *str, const aarch64_opcode *opcode)
 	  /* No qualifier.  */
 	  break;
 
+	case AARCH64_OPND_SVE_ADDR_RI_U6:
+	case AARCH64_OPND_SVE_ADDR_RI_U6x2:
+	case AARCH64_OPND_SVE_ADDR_RI_U6x4:
+	case AARCH64_OPND_SVE_ADDR_RI_U6x8:
+	  /* [X<n>{, #imm}]
+	     but recognizing SVE registers.  */
+	  po_misc_or_fail (parse_sve_address (&str, info, &base_qualifier,
+					      &offset_qualifier));
+	  if (base_qualifier != AARCH64_OPND_QLF_X)
+	    {
+	      set_syntax_error (_("invalid addressing mode"));
+	      goto failure;
+	    }
+	sve_regimm:
+	  if (info->addr.pcrel || info->addr.offset.is_reg
+	      || !info->addr.preind || info->addr.writeback)
+	    {
+	      set_syntax_error (_("invalid addressing mode"));
+	      goto failure;
+	    }
+	  gas_assert (inst.reloc.exp.X_op == O_constant);
+	  info->addr.offset.imm = inst.reloc.exp.X_add_number;
+	  break;
+
+	case AARCH64_OPND_SVE_ADDR_RR:
+	case AARCH64_OPND_SVE_ADDR_RR_LSL1:
+	case AARCH64_OPND_SVE_ADDR_RR_LSL2:
+	case AARCH64_OPND_SVE_ADDR_RR_LSL3:
+	case AARCH64_OPND_SVE_ADDR_RX:
+	case AARCH64_OPND_SVE_ADDR_RX_LSL1:
+	case AARCH64_OPND_SVE_ADDR_RX_LSL2:
+	case AARCH64_OPND_SVE_ADDR_RX_LSL3:
+	  /* [<Xn|SP>, <R><m>{, lsl #<amount>}]
+	     but recognizing SVE registers.  */
+	  po_misc_or_fail (parse_sve_address (&str, info, &base_qualifier,
+					      &offset_qualifier));
+	  if (base_qualifier != AARCH64_OPND_QLF_X
+	      || offset_qualifier != AARCH64_OPND_QLF_X)
+	    {
+	      set_syntax_error (_("invalid addressing mode"));
+	      goto failure;
+	    }
+	  goto regoff_addr;
+
+	case AARCH64_OPND_SVE_ADDR_RZ:
+	case AARCH64_OPND_SVE_ADDR_RZ_LSL1:
+	case AARCH64_OPND_SVE_ADDR_RZ_LSL2:
+	case AARCH64_OPND_SVE_ADDR_RZ_LSL3:
+	case AARCH64_OPND_SVE_ADDR_RZ_XTW_14:
+	case AARCH64_OPND_SVE_ADDR_RZ_XTW_22:
+	case AARCH64_OPND_SVE_ADDR_RZ_XTW1_14:
+	case AARCH64_OPND_SVE_ADDR_RZ_XTW1_22:
+	case AARCH64_OPND_SVE_ADDR_RZ_XTW2_14:
+	case AARCH64_OPND_SVE_ADDR_RZ_XTW2_22:
+	case AARCH64_OPND_SVE_ADDR_RZ_XTW3_14:
+	case AARCH64_OPND_SVE_ADDR_RZ_XTW3_22:
+	  /* [<Xn|SP>, Z<m>.D{, LSL #<amount>}]
+	     [<Xn|SP>, Z<m>.<T>, <extend> {#<amount>}]  */
+	  po_misc_or_fail (parse_sve_address (&str, info, &base_qualifier,
+					      &offset_qualifier));
+	  if (base_qualifier != AARCH64_OPND_QLF_X
+	      || (offset_qualifier != AARCH64_OPND_QLF_S_S
+		  && offset_qualifier != AARCH64_OPND_QLF_S_D))
+	    {
+	      set_syntax_error (_("invalid addressing mode"));
+	      goto failure;
+	    }
+	  info->qualifier = offset_qualifier;
+	  goto regoff_addr;
+
+	case AARCH64_OPND_SVE_ADDR_ZI_U5:
+	case AARCH64_OPND_SVE_ADDR_ZI_U5x2:
+	case AARCH64_OPND_SVE_ADDR_ZI_U5x4:
+	case AARCH64_OPND_SVE_ADDR_ZI_U5x8:
+	  /* [Z<n>.<T>{, #imm}]  */
+	  po_misc_or_fail (parse_sve_address (&str, info, &base_qualifier,
+					      &offset_qualifier));
+	  if (base_qualifier != AARCH64_OPND_QLF_S_S
+	      && base_qualifier != AARCH64_OPND_QLF_S_D)
+	    {
+	      set_syntax_error (_("invalid addressing mode"));
+	      goto failure;
+	    }
+	  info->qualifier = base_qualifier;
+	  goto sve_regimm;
+
+	case AARCH64_OPND_SVE_ADDR_ZZ_LSL:
+	case AARCH64_OPND_SVE_ADDR_ZZ_SXTW:
+	case AARCH64_OPND_SVE_ADDR_ZZ_UXTW:
+	  /* [Z<n>.<T>, Z<m>.<T>{, LSL #<amount>}]
+	     [Z<n>.D, Z<m>.D, <extend> {#<amount>}]
+
+	     We don't reject:
+
+	     [Z<n>.S, Z<m>.S, <extend> {#<amount>}]
+
+	     here since we get better error messages by leaving it to
+	     the qualifier checking routines.  */
+	  po_misc_or_fail (parse_sve_address (&str, info, &base_qualifier,
+					      &offset_qualifier));
+	  if ((base_qualifier != AARCH64_OPND_QLF_S_S
+	       && base_qualifier != AARCH64_OPND_QLF_S_D)
+	      || offset_qualifier != base_qualifier)
+	    {
+	      set_syntax_error (_("invalid addressing mode"));
+	      goto failure;
+	    }
+	  info->qualifier = base_qualifier;
+	  goto regoff_addr;
+
 	case AARCH64_OPND_SYSREG:
 	  if ((val = parse_sys_reg (&str, aarch64_sys_regs_hsh, 1, 0))
 	      == PARSE_FAIL)
diff --git a/include/opcode/aarch64.h b/include/opcode/aarch64.h
index 49b4413..e61ac9c 100644
--- a/include/opcode/aarch64.h
+++ b/include/opcode/aarch64.h
@@ -244,6 +244,45 @@ enum aarch64_opnd
   AARCH64_OPND_PRFOP,		/* Prefetch operation.  */
   AARCH64_OPND_BARRIER_PSB,	/* Barrier operand for PSB.  */
 
+  AARCH64_OPND_SVE_ADDR_RI_U6,	    /* SVE [<Xn|SP>, #<uimm6>].  */
+  AARCH64_OPND_SVE_ADDR_RI_U6x2,    /* SVE [<Xn|SP>, #<uimm6>*2].  */
+  AARCH64_OPND_SVE_ADDR_RI_U6x4,    /* SVE [<Xn|SP>, #<uimm6>*4].  */
+  AARCH64_OPND_SVE_ADDR_RI_U6x8,    /* SVE [<Xn|SP>, #<uimm6>*8].  */
+  AARCH64_OPND_SVE_ADDR_RR,	    /* SVE [<Xn|SP>, <Xm|XZR>].  */
+  AARCH64_OPND_SVE_ADDR_RR_LSL1,    /* SVE [<Xn|SP>, <Xm|XZR>, LSL #1].  */
+  AARCH64_OPND_SVE_ADDR_RR_LSL2,    /* SVE [<Xn|SP>, <Xm|XZR>, LSL #2].  */
+  AARCH64_OPND_SVE_ADDR_RR_LSL3,    /* SVE [<Xn|SP>, <Xm|XZR>, LSL #3].  */
+  AARCH64_OPND_SVE_ADDR_RX,	    /* SVE [<Xn|SP>, <Xm>].  */
+  AARCH64_OPND_SVE_ADDR_RX_LSL1,    /* SVE [<Xn|SP>, <Xm>, LSL #1].  */
+  AARCH64_OPND_SVE_ADDR_RX_LSL2,    /* SVE [<Xn|SP>, <Xm>, LSL #2].  */
+  AARCH64_OPND_SVE_ADDR_RX_LSL3,    /* SVE [<Xn|SP>, <Xm>, LSL #3].  */
+  AARCH64_OPND_SVE_ADDR_RZ,	    /* SVE [<Xn|SP>, Zm.D].  */
+  AARCH64_OPND_SVE_ADDR_RZ_LSL1,    /* SVE [<Xn|SP>, Zm.D, LSL #1].  */
+  AARCH64_OPND_SVE_ADDR_RZ_LSL2,    /* SVE [<Xn|SP>, Zm.D, LSL #2].  */
+  AARCH64_OPND_SVE_ADDR_RZ_LSL3,    /* SVE [<Xn|SP>, Zm.D, LSL #3].  */
+  AARCH64_OPND_SVE_ADDR_RZ_XTW_14,  /* SVE [<Xn|SP>, Zm.<T>, (S|U)XTW].
+				       Bit 14 controls S/U choice.  */
+  AARCH64_OPND_SVE_ADDR_RZ_XTW_22,  /* SVE [<Xn|SP>, Zm.<T>, (S|U)XTW].
+				       Bit 22 controls S/U choice.  */
+  AARCH64_OPND_SVE_ADDR_RZ_XTW1_14, /* SVE [<Xn|SP>, Zm.<T>, (S|U)XTW #1].
+				       Bit 14 controls S/U choice.  */
+  AARCH64_OPND_SVE_ADDR_RZ_XTW1_22, /* SVE [<Xn|SP>, Zm.<T>, (S|U)XTW #1].
+				       Bit 22 controls S/U choice.  */
+  AARCH64_OPND_SVE_ADDR_RZ_XTW2_14, /* SVE [<Xn|SP>, Zm.<T>, (S|U)XTW #2].
+				       Bit 14 controls S/U choice.  */
+  AARCH64_OPND_SVE_ADDR_RZ_XTW2_22, /* SVE [<Xn|SP>, Zm.<T>, (S|U)XTW #2].
+				       Bit 22 controls S/U choice.  */
+  AARCH64_OPND_SVE_ADDR_RZ_XTW3_14, /* SVE [<Xn|SP>, Zm.<T>, (S|U)XTW #3].
+				       Bit 14 controls S/U choice.  */
+  AARCH64_OPND_SVE_ADDR_RZ_XTW3_22, /* SVE [<Xn|SP>, Zm.<T>, (S|U)XTW #3].
+				       Bit 22 controls S/U choice.  */
+  AARCH64_OPND_SVE_ADDR_ZI_U5,	    /* SVE [Zn.<T>, #<uimm5>].  */
+  AARCH64_OPND_SVE_ADDR_ZI_U5x2,    /* SVE [Zn.<T>, #<uimm5>*2].  */
+  AARCH64_OPND_SVE_ADDR_ZI_U5x4,    /* SVE [Zn.<T>, #<uimm5>*4].  */
+  AARCH64_OPND_SVE_ADDR_ZI_U5x8,    /* SVE [Zn.<T>, #<uimm5>*8].  */
+  AARCH64_OPND_SVE_ADDR_ZZ_LSL,     /* SVE [Zn.<T>, Zm,<T>, LSL #<msz>].  */
+  AARCH64_OPND_SVE_ADDR_ZZ_SXTW,    /* SVE [Zn.<T>, Zm,<T>, SXTW #<msz>].  */
+  AARCH64_OPND_SVE_ADDR_ZZ_UXTW,    /* SVE [Zn.<T>, Zm,<T>, UXTW #<msz>].  */
   AARCH64_OPND_SVE_PATTERN,	/* SVE vector pattern enumeration.  */
   AARCH64_OPND_SVE_PATTERN_SCALED, /* Likewise, with additional MUL factor.  */
   AARCH64_OPND_SVE_PRFOP,	/* SVE prefetch operation.  */
diff --git a/opcodes/aarch64-asm-2.c b/opcodes/aarch64-asm-2.c
index 039b9be..47a414c 100644
--- a/opcodes/aarch64-asm-2.c
+++ b/opcodes/aarch64-asm-2.c
@@ -480,21 +480,21 @@ aarch64_insert_operand (const aarch64_operand *self,
     case 27:
     case 35:
     case 36:
-    case 92:
-    case 93:
-    case 94:
-    case 95:
-    case 96:
-    case 97:
-    case 98:
-    case 99:
-    case 100:
-    case 101:
-    case 102:
-    case 103:
-    case 104:
-    case 105:
-    case 108:
+    case 123:
+    case 124:
+    case 125:
+    case 126:
+    case 127:
+    case 128:
+    case 129:
+    case 130:
+    case 131:
+    case 132:
+    case 133:
+    case 134:
+    case 135:
+    case 136:
+    case 139:
       return aarch64_ins_regno (self, info, code, inst);
     case 12:
       return aarch64_ins_reg_extended (self, info, code, inst);
@@ -531,8 +531,8 @@ aarch64_insert_operand (const aarch64_operand *self,
     case 68:
     case 69:
     case 70:
-    case 89:
-    case 91:
+    case 120:
+    case 122:
       return aarch64_ins_imm (self, info, code, inst);
     case 38:
     case 39:
@@ -583,12 +583,50 @@ aarch64_insert_operand (const aarch64_operand *self,
       return aarch64_ins_prfop (self, info, code, inst);
     case 88:
       return aarch64_ins_hint (self, info, code, inst);
+    case 89:
     case 90:
-      return aarch64_ins_sve_scale (self, info, code, inst);
+    case 91:
+    case 92:
+      return aarch64_ins_sve_addr_ri_u6 (self, info, code, inst);
+    case 93:
+    case 94:
+    case 95:
+    case 96:
+    case 97:
+    case 98:
+    case 99:
+    case 100:
+    case 101:
+    case 102:
+    case 103:
+    case 104:
+      return aarch64_ins_sve_addr_rr_lsl (self, info, code, inst);
+    case 105:
     case 106:
-      return aarch64_ins_sve_index (self, info, code, inst);
     case 107:
+    case 108:
     case 109:
+    case 110:
+    case 111:
+    case 112:
+      return aarch64_ins_sve_addr_rz_xtw (self, info, code, inst);
+    case 113:
+    case 114:
+    case 115:
+    case 116:
+      return aarch64_ins_sve_addr_zi_u5 (self, info, code, inst);
+    case 117:
+      return aarch64_ins_sve_addr_zz_lsl (self, info, code, inst);
+    case 118:
+      return aarch64_ins_sve_addr_zz_sxtw (self, info, code, inst);
+    case 119:
+      return aarch64_ins_sve_addr_zz_uxtw (self, info, code, inst);
+    case 121:
+      return aarch64_ins_sve_scale (self, info, code, inst);
+    case 137:
+      return aarch64_ins_sve_index (self, info, code, inst);
+    case 138:
+    case 140:
       return aarch64_ins_sve_reglist (self, info, code, inst);
     default: assert (0); abort ();
     }
diff --git a/opcodes/aarch64-asm.c b/opcodes/aarch64-asm.c
index 117a3c6..0d3b2c7 100644
--- a/opcodes/aarch64-asm.c
+++ b/opcodes/aarch64-asm.c
@@ -745,6 +745,114 @@ aarch64_ins_reg_shifted (const aarch64_operand *self ATTRIBUTE_UNUSED,
   return NULL;
 }
 
+/* Encode an SVE address [X<n>, #<SVE_imm6> << <shift>], where <SVE_imm6>
+   is a 6-bit unsigned number and where <shift> is SELF's operand-dependent
+   value.  fields[0] specifies the base register field.  */
+const char *
+aarch64_ins_sve_addr_ri_u6 (const aarch64_operand *self,
+			    const aarch64_opnd_info *info, aarch64_insn *code,
+			    const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  int factor = 1 << get_operand_specific_data (self);
+  insert_field (self->fields[0], code, info->addr.base_regno, 0);
+  insert_field (FLD_SVE_imm6, code, info->addr.offset.imm / factor, 0);
+  return NULL;
+}
+
+/* Encode an SVE address [X<n>, X<m>{, LSL #<shift>}], where <shift>
+   is SELF's operand-dependent value.  fields[0] specifies the base
+   register field and fields[1] specifies the offset register field.  */
+const char *
+aarch64_ins_sve_addr_rr_lsl (const aarch64_operand *self,
+			     const aarch64_opnd_info *info, aarch64_insn *code,
+			     const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  insert_field (self->fields[0], code, info->addr.base_regno, 0);
+  insert_field (self->fields[1], code, info->addr.offset.regno, 0);
+  return NULL;
+}
+
+/* Encode an SVE address [X<n>, Z<m>.<T>, (S|U)XTW {#<shift>}], where
+   <shift> is SELF's operand-dependent value.  fields[0] specifies the
+   base register field, fields[1] specifies the offset register field and
+   fields[2] is a single-bit field that selects SXTW over UXTW.  */
+const char *
+aarch64_ins_sve_addr_rz_xtw (const aarch64_operand *self,
+			     const aarch64_opnd_info *info, aarch64_insn *code,
+			     const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  insert_field (self->fields[0], code, info->addr.base_regno, 0);
+  insert_field (self->fields[1], code, info->addr.offset.regno, 0);
+  if (info->shifter.kind == AARCH64_MOD_UXTW)
+    insert_field (self->fields[2], code, 0, 0);
+  else
+    insert_field (self->fields[2], code, 1, 0);
+  return NULL;
+}
+
+/* Encode an SVE address [Z<n>.<T>, #<imm5> << <shift>], where <imm5> is a
+   5-bit unsigned number and where <shift> is SELF's operand-dependent value.
+   fields[0] specifies the base register field.  */
+const char *
+aarch64_ins_sve_addr_zi_u5 (const aarch64_operand *self,
+			    const aarch64_opnd_info *info, aarch64_insn *code,
+			    const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  int factor = 1 << get_operand_specific_data (self);
+  insert_field (self->fields[0], code, info->addr.base_regno, 0);
+  insert_field (FLD_imm5, code, info->addr.offset.imm / factor, 0);
+  return NULL;
+}
+
+/* Encode an SVE address [Z<n>.<T>, Z<m>.<T>{, <modifier> {#<msz>}}],
+   where <modifier> is fixed by the instruction and where <msz> is a
+   2-bit unsigned number.  fields[0] specifies the base register field
+   and fields[1] specifies the offset register field.  */
+static const char *
+aarch64_ext_sve_addr_zz (const aarch64_operand *self,
+			 const aarch64_opnd_info *info, aarch64_insn *code)
+{
+  insert_field (self->fields[0], code, info->addr.base_regno, 0);
+  insert_field (self->fields[1], code, info->addr.offset.regno, 0);
+  insert_field (FLD_SVE_msz, code, info->shifter.amount, 0);
+  return NULL;
+}
+
+/* Encode an SVE address [Z<n>.<T>, Z<m>.<T>{, LSL #<msz>}], where
+   <msz> is a 2-bit unsigned number.  fields[0] specifies the base register
+   field and fields[1] specifies the offset register field.  */
+const char *
+aarch64_ins_sve_addr_zz_lsl (const aarch64_operand *self,
+			     const aarch64_opnd_info *info, aarch64_insn *code,
+			     const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  return aarch64_ext_sve_addr_zz (self, info, code);
+}
+
+/* Encode an SVE address [Z<n>.<T>, Z<m>.<T>, SXTW {#<msz>}], where
+   <msz> is a 2-bit unsigned number.  fields[0] specifies the base register
+   field and fields[1] specifies the offset register field.  */
+const char *
+aarch64_ins_sve_addr_zz_sxtw (const aarch64_operand *self,
+			      const aarch64_opnd_info *info,
+			      aarch64_insn *code,
+			      const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  return aarch64_ext_sve_addr_zz (self, info, code);
+}
+
+/* Encode an SVE address [Z<n>.<T>, Z<m>.<T>, UXTW {#<msz>}], where
+   <msz> is a 2-bit unsigned number.  fields[0] specifies the base register
+   field and fields[1] specifies the offset register field.  */
+const char *
+aarch64_ins_sve_addr_zz_uxtw (const aarch64_operand *self,
+			      const aarch64_opnd_info *info,
+			      aarch64_insn *code,
+			      const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  return aarch64_ext_sve_addr_zz (self, info, code);
+}
+
 /* Encode Zn[MM], where MM has a 7-bit triangular encoding.  The fields
    array specifies which field to use for Zn.  MM is encoded in the
    concatenation of imm5 and SVE_tszh, with imm5 being the less
diff --git a/opcodes/aarch64-asm.h b/opcodes/aarch64-asm.h
index ac5faeb..b81cfa1 100644
--- a/opcodes/aarch64-asm.h
+++ b/opcodes/aarch64-asm.h
@@ -69,6 +69,13 @@ AARCH64_DECL_OPD_INSERTER (ins_hint);
 AARCH64_DECL_OPD_INSERTER (ins_prfop);
 AARCH64_DECL_OPD_INSERTER (ins_reg_extended);
 AARCH64_DECL_OPD_INSERTER (ins_reg_shifted);
+AARCH64_DECL_OPD_INSERTER (ins_sve_addr_ri_u6);
+AARCH64_DECL_OPD_INSERTER (ins_sve_addr_rr_lsl);
+AARCH64_DECL_OPD_INSERTER (ins_sve_addr_rz_xtw);
+AARCH64_DECL_OPD_INSERTER (ins_sve_addr_zi_u5);
+AARCH64_DECL_OPD_INSERTER (ins_sve_addr_zz_lsl);
+AARCH64_DECL_OPD_INSERTER (ins_sve_addr_zz_sxtw);
+AARCH64_DECL_OPD_INSERTER (ins_sve_addr_zz_uxtw);
 AARCH64_DECL_OPD_INSERTER (ins_sve_index);
 AARCH64_DECL_OPD_INSERTER (ins_sve_reglist);
 AARCH64_DECL_OPD_INSERTER (ins_sve_scale);
diff --git a/opcodes/aarch64-dis-2.c b/opcodes/aarch64-dis-2.c
index 124385d..3dd714f 100644
--- a/opcodes/aarch64-dis-2.c
+++ b/opcodes/aarch64-dis-2.c
@@ -10426,21 +10426,21 @@ aarch64_extract_operand (const aarch64_operand *self,
     case 27:
     case 35:
     case 36:
-    case 92:
-    case 93:
-    case 94:
-    case 95:
-    case 96:
-    case 97:
-    case 98:
-    case 99:
-    case 100:
-    case 101:
-    case 102:
-    case 103:
-    case 104:
-    case 105:
-    case 108:
+    case 123:
+    case 124:
+    case 125:
+    case 126:
+    case 127:
+    case 128:
+    case 129:
+    case 130:
+    case 131:
+    case 132:
+    case 133:
+    case 134:
+    case 135:
+    case 136:
+    case 139:
       return aarch64_ext_regno (self, info, code, inst);
     case 8:
       return aarch64_ext_regrt_sysins (self, info, code, inst);
@@ -10482,8 +10482,8 @@ aarch64_extract_operand (const aarch64_operand *self,
     case 68:
     case 69:
     case 70:
-    case 89:
-    case 91:
+    case 120:
+    case 122:
       return aarch64_ext_imm (self, info, code, inst);
     case 38:
     case 39:
@@ -10536,12 +10536,50 @@ aarch64_extract_operand (const aarch64_operand *self,
       return aarch64_ext_prfop (self, info, code, inst);
     case 88:
       return aarch64_ext_hint (self, info, code, inst);
+    case 89:
     case 90:
-      return aarch64_ext_sve_scale (self, info, code, inst);
+    case 91:
+    case 92:
+      return aarch64_ext_sve_addr_ri_u6 (self, info, code, inst);
+    case 93:
+    case 94:
+    case 95:
+    case 96:
+    case 97:
+    case 98:
+    case 99:
+    case 100:
+    case 101:
+    case 102:
+    case 103:
+    case 104:
+      return aarch64_ext_sve_addr_rr_lsl (self, info, code, inst);
+    case 105:
     case 106:
-      return aarch64_ext_sve_index (self, info, code, inst);
     case 107:
+    case 108:
     case 109:
+    case 110:
+    case 111:
+    case 112:
+      return aarch64_ext_sve_addr_rz_xtw (self, info, code, inst);
+    case 113:
+    case 114:
+    case 115:
+    case 116:
+      return aarch64_ext_sve_addr_zi_u5 (self, info, code, inst);
+    case 117:
+      return aarch64_ext_sve_addr_zz_lsl (self, info, code, inst);
+    case 118:
+      return aarch64_ext_sve_addr_zz_sxtw (self, info, code, inst);
+    case 119:
+      return aarch64_ext_sve_addr_zz_uxtw (self, info, code, inst);
+    case 121:
+      return aarch64_ext_sve_scale (self, info, code, inst);
+    case 137:
+      return aarch64_ext_sve_index (self, info, code, inst);
+    case 138:
+    case 140:
       return aarch64_ext_sve_reglist (self, info, code, inst);
     default: assert (0); abort ();
     }
diff --git a/opcodes/aarch64-dis.c b/opcodes/aarch64-dis.c
index 1d00c0a..ed77b4d 100644
--- a/opcodes/aarch64-dis.c
+++ b/opcodes/aarch64-dis.c
@@ -1186,6 +1186,152 @@ aarch64_ext_reg_shifted (const aarch64_operand *self ATTRIBUTE_UNUSED,
   return 1;
 }
 
+/* Decode an SVE address [<base>, #<offset> << <shift>], where <offset>
+   is given by the OFFSET parameter and where <shift> is SELF's operand-
+   dependent value.  fields[0] specifies the base register field <base>.  */
+static int
+aarch64_ext_sve_addr_reg_imm (const aarch64_operand *self,
+			      aarch64_opnd_info *info, aarch64_insn code,
+			      int64_t offset)
+{
+  info->addr.base_regno = extract_field (self->fields[0], code, 0);
+  info->addr.offset.imm = offset * (1 << get_operand_specific_data (self));
+  info->addr.offset.is_reg = FALSE;
+  info->addr.writeback = FALSE;
+  info->addr.preind = TRUE;
+  info->shifter.operator_present = FALSE;
+  info->shifter.amount_present = FALSE;
+  return 1;
+}
+
+/* Decode an SVE address [X<n>, #<SVE_imm6> << <shift>], where <SVE_imm6>
+   is a 6-bit unsigned number and where <shift> is SELF's operand-dependent
+   value.  fields[0] specifies the base register field.  */
+int
+aarch64_ext_sve_addr_ri_u6 (const aarch64_operand *self,
+			    aarch64_opnd_info *info, aarch64_insn code,
+			    const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  int offset = extract_field (FLD_SVE_imm6, code, 0);
+  return aarch64_ext_sve_addr_reg_imm (self, info, code, offset);
+}
+
+/* Decode an SVE address [X<n>, X<m>{, LSL #<shift>}], where <shift>
+   is SELF's operand-dependent value.  fields[0] specifies the base
+   register field and fields[1] specifies the offset register field.  */
+int
+aarch64_ext_sve_addr_rr_lsl (const aarch64_operand *self,
+			     aarch64_opnd_info *info, aarch64_insn code,
+			     const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  int index;
+
+  index = extract_field (self->fields[1], code, 0);
+  if (index == 31 && (self->flags & OPD_F_NO_ZR) != 0)
+    return 0;
+
+  info->addr.base_regno = extract_field (self->fields[0], code, 0);
+  info->addr.offset.regno = index;
+  info->addr.offset.is_reg = TRUE;
+  info->addr.writeback = FALSE;
+  info->addr.preind = TRUE;
+  info->shifter.kind = AARCH64_MOD_LSL;
+  info->shifter.amount = get_operand_specific_data (self);
+  info->shifter.operator_present = (info->shifter.amount != 0);
+  info->shifter.amount_present = (info->shifter.amount != 0);
+  return 1;
+}
+
+/* Decode an SVE address [X<n>, Z<m>.<T>, (S|U)XTW {#<shift>}], where
+   <shift> is SELF's operand-dependent value.  fields[0] specifies the
+   base register field, fields[1] specifies the offset register field and
+   fields[2] is a single-bit field that selects SXTW over UXTW.  */
+int
+aarch64_ext_sve_addr_rz_xtw (const aarch64_operand *self,
+			     aarch64_opnd_info *info, aarch64_insn code,
+			     const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  info->addr.base_regno = extract_field (self->fields[0], code, 0);
+  info->addr.offset.regno = extract_field (self->fields[1], code, 0);
+  info->addr.offset.is_reg = TRUE;
+  info->addr.writeback = FALSE;
+  info->addr.preind = TRUE;
+  if (extract_field (self->fields[2], code, 0))
+    info->shifter.kind = AARCH64_MOD_SXTW;
+  else
+    info->shifter.kind = AARCH64_MOD_UXTW;
+  info->shifter.amount = get_operand_specific_data (self);
+  info->shifter.operator_present = TRUE;
+  info->shifter.amount_present = (info->shifter.amount != 0);
+  return 1;
+}
+
+/* Decode an SVE address [Z<n>.<T>, #<imm5> << <shift>], where <imm5> is a
+   5-bit unsigned number and where <shift> is SELF's operand-dependent value.
+   fields[0] specifies the base register field.  */
+int
+aarch64_ext_sve_addr_zi_u5 (const aarch64_operand *self,
+			    aarch64_opnd_info *info, aarch64_insn code,
+			    const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  int offset = extract_field (FLD_imm5, code, 0);
+  return aarch64_ext_sve_addr_reg_imm (self, info, code, offset);
+}
+
+/* Decode an SVE address [Z<n>.<T>, Z<m>.<T>{, <modifier> {#<msz>}}],
+   where <modifier> is given by KIND and where <msz> is a 2-bit unsigned
+   number.  fields[0] specifies the base register field and fields[1]
+   specifies the offset register field.  */
+static int
+aarch64_ext_sve_addr_zz (const aarch64_operand *self, aarch64_opnd_info *info,
+			 aarch64_insn code, enum aarch64_modifier_kind kind)
+{
+  info->addr.base_regno = extract_field (self->fields[0], code, 0);
+  info->addr.offset.regno = extract_field (self->fields[1], code, 0);
+  info->addr.offset.is_reg = TRUE;
+  info->addr.writeback = FALSE;
+  info->addr.preind = TRUE;
+  info->shifter.kind = kind;
+  info->shifter.amount = extract_field (FLD_SVE_msz, code, 0);
+  info->shifter.operator_present = (kind != AARCH64_MOD_LSL
+				    || info->shifter.amount != 0);
+  info->shifter.amount_present = (info->shifter.amount != 0);
+  return 1;
+}
+
+/* Decode an SVE address [Z<n>.<T>, Z<m>.<T>{, LSL #<msz>}], where
+   <msz> is a 2-bit unsigned number.  fields[0] specifies the base register
+   field and fields[1] specifies the offset register field.  */
+int
+aarch64_ext_sve_addr_zz_lsl (const aarch64_operand *self,
+			     aarch64_opnd_info *info, aarch64_insn code,
+			     const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  return aarch64_ext_sve_addr_zz (self, info, code, AARCH64_MOD_LSL);
+}
+
+/* Decode an SVE address [Z<n>.<T>, Z<m>.<T>, SXTW {#<msz>}], where
+   <msz> is a 2-bit unsigned number.  fields[0] specifies the base register
+   field and fields[1] specifies the offset register field.  */
+int
+aarch64_ext_sve_addr_zz_sxtw (const aarch64_operand *self,
+			      aarch64_opnd_info *info, aarch64_insn code,
+			      const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  return aarch64_ext_sve_addr_zz (self, info, code, AARCH64_MOD_SXTW);
+}
+
+/* Decode an SVE address [Z<n>.<T>, Z<m>.<T>, UXTW {#<msz>}], where
+   <msz> is a 2-bit unsigned number.  fields[0] specifies the base register
+   field and fields[1] specifies the offset register field.  */
+int
+aarch64_ext_sve_addr_zz_uxtw (const aarch64_operand *self,
+			      aarch64_opnd_info *info, aarch64_insn code,
+			      const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  return aarch64_ext_sve_addr_zz (self, info, code, AARCH64_MOD_UXTW);
+}
+
 /* Decode Zn[MM], where MM has a 7-bit triangular encoding.  The fields
    array specifies which field to use for Zn.  MM is encoded in the
    concatenation of imm5 and SVE_tszh, with imm5 being the less
diff --git a/opcodes/aarch64-dis.h b/opcodes/aarch64-dis.h
index 92f5ad4..0ce2d89 100644
--- a/opcodes/aarch64-dis.h
+++ b/opcodes/aarch64-dis.h
@@ -91,6 +91,13 @@ AARCH64_DECL_OPD_EXTRACTOR (ext_hint);
 AARCH64_DECL_OPD_EXTRACTOR (ext_prfop);
 AARCH64_DECL_OPD_EXTRACTOR (ext_reg_extended);
 AARCH64_DECL_OPD_EXTRACTOR (ext_reg_shifted);
+AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_ri_u6);
+AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_rr_lsl);
+AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_rz_xtw);
+AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_zi_u5);
+AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_zz_lsl);
+AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_zz_sxtw);
+AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_zz_uxtw);
 AARCH64_DECL_OPD_EXTRACTOR (ext_sve_index);
 AARCH64_DECL_OPD_EXTRACTOR (ext_sve_reglist);
 AARCH64_DECL_OPD_EXTRACTOR (ext_sve_scale);
diff --git a/opcodes/aarch64-opc-2.c b/opcodes/aarch64-opc-2.c
index 8f221b8..ed2b70b 100644
--- a/opcodes/aarch64-opc-2.c
+++ b/opcodes/aarch64-opc-2.c
@@ -113,6 +113,37 @@ const struct aarch64_operand aarch64_operands[] =
   {AARCH64_OPND_CLASS_SYSTEM, "BARRIER_ISB", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {}, "the ISB option name SY or an optional 4-bit unsigned immediate"},
   {AARCH64_OPND_CLASS_SYSTEM, "PRFOP", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {}, "a prefetch operation specifier"},
   {AARCH64_OPND_CLASS_SYSTEM, "BARRIER_PSB", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {}, "the PSB option name CSYNC"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RI_U6", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn}, "an address with a 6-bit unsigned offset"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RI_U6x2", 1 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn}, "an address with a 6-bit unsigned offset, multiplied by 2"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RI_U6x4", 2 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn}, "an address with a 6-bit unsigned offset, multiplied by 4"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RI_U6x8", 3 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn}, "an address with a 6-bit unsigned offset, multiplied by 8"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RR", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_Rm}, "an address with a scalar register offset"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RR_LSL1", 1 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_Rm}, "an address with a scalar register offset"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RR_LSL2", 2 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_Rm}, "an address with a scalar register offset"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RR_LSL3", 3 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_Rm}, "an address with a scalar register offset"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RX", (0 << OPD_F_OD_LSB) | OPD_F_NO_ZR | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_Rm}, "an address with a scalar register offset"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RX_LSL1", (1 << OPD_F_OD_LSB) | OPD_F_NO_ZR | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_Rm}, "an address with a scalar register offset"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RX_LSL2", (2 << OPD_F_OD_LSB) | OPD_F_NO_ZR | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_Rm}, "an address with a scalar register offset"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RX_LSL3", (3 << OPD_F_OD_LSB) | OPD_F_NO_ZR | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_Rm}, "an address with a scalar register offset"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RZ", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_SVE_Zm_16}, "an address with a vector register offset"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RZ_LSL1", 1 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_SVE_Zm_16}, "an address with a vector register offset"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RZ_LSL2", 2 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_SVE_Zm_16}, "an address with a vector register offset"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RZ_LSL3", 3 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_SVE_Zm_16}, "an address with a vector register offset"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RZ_XTW_14", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_14}, "an address with a vector register offset"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RZ_XTW_22", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_22}, "an address with a vector register offset"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RZ_XTW1_14", 1 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_14}, "an address with a vector register offset"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RZ_XTW1_22", 1 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_22}, "an address with a vector register offset"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RZ_XTW2_14", 2 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_14}, "an address with a vector register offset"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RZ_XTW2_22", 2 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_22}, "an address with a vector register offset"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RZ_XTW3_14", 3 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_14}, "an address with a vector register offset"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RZ_XTW3_22", 3 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_22}, "an address with a vector register offset"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_ZI_U5", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zn}, "an address with a 5-bit unsigned offset"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_ZI_U5x2", 1 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zn}, "an address with a 5-bit unsigned offset, multiplied by 2"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_ZI_U5x4", 2 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zn}, "an address with a 5-bit unsigned offset, multiplied by 4"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_ZI_U5x8", 3 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zn}, "an address with a 5-bit unsigned offset, multiplied by 8"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_ZZ_LSL", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zn,FLD_SVE_Zm_16}, "an address with a vector register offset"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_ZZ_SXTW", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zn,FLD_SVE_Zm_16}, "an address with a vector register offset"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_ZZ_UXTW", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zn,FLD_SVE_Zm_16}, "an address with a vector register offset"},
   {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_PATTERN", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_pattern}, "an enumeration value such as POW2"},
   {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_PATTERN_SCALED", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_pattern}, "an enumeration value such as POW2"},
   {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_PRFOP", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_prfop}, "an enumeration value such as PLDL1KEEP"},
diff --git a/opcodes/aarch64-opc.c b/opcodes/aarch64-opc.c
index 326b94e..6617e28 100644
--- a/opcodes/aarch64-opc.c
+++ b/opcodes/aarch64-opc.c
@@ -280,9 +280,13 @@ const aarch64_field fields[] =
     {  5,  5 }, /* SVE_Zn: SVE vector register, bits [9,5].  */
     {  0,  5 }, /* SVE_Zt: SVE vector register, bits [4,0].  */
     { 16,  4 }, /* SVE_imm4: 4-bit immediate field.  */
+    { 16,  6 }, /* SVE_imm6: 6-bit immediate field.  */
+    { 10,  2 }, /* SVE_msz: 2-bit shift amount for ADR.  */
     {  5,  5 }, /* SVE_pattern: vector pattern enumeration.  */
     {  0,  4 }, /* SVE_prfop: prefetch operation for SVE PRF[BHWD].  */
     { 22,  2 }, /* SVE_tszh: triangular size select high, bits [23,22].  */
+    { 14,  1 }, /* SVE_xs_14: UXTW/SXTW select (bit 14).  */
+    { 22,  1 }  /* SVE_xs_22: UXTW/SXTW select (bit 22).  */
 };
 
 enum aarch64_operand_class
@@ -1368,9 +1372,9 @@ operand_general_constraint_met_p (const aarch64_opnd_info *opnds, int idx,
 				  const aarch64_opcode *opcode,
 				  aarch64_operand_error *mismatch_detail)
 {
-  unsigned num;
+  unsigned num, modifiers;
   unsigned char size;
-  int64_t imm;
+  int64_t imm, min_value, max_value;
   const aarch64_opnd_info *opnd = opnds + idx;
   aarch64_opnd_qualifier_t qualifier = opnd->qualifier;
 
@@ -1662,6 +1666,113 @@ operand_general_constraint_met_p (const aarch64_opnd_info *opnds, int idx,
 	    }
 	  break;
 
+	case AARCH64_OPND_SVE_ADDR_RI_U6:
+	case AARCH64_OPND_SVE_ADDR_RI_U6x2:
+	case AARCH64_OPND_SVE_ADDR_RI_U6x4:
+	case AARCH64_OPND_SVE_ADDR_RI_U6x8:
+	  min_value = 0;
+	  max_value = 63;
+	sve_imm_offset:
+	  assert (!opnd->addr.offset.is_reg);
+	  assert (opnd->addr.preind);
+	  num = 1 << get_operand_specific_data (&aarch64_operands[type]);
+	  min_value *= num;
+	  max_value *= num;
+	  if (opnd->shifter.operator_present
+	      || opnd->shifter.amount_present)
+	    {
+	      set_other_error (mismatch_detail, idx,
+			       _("invalid addressing mode"));
+	      return 0;
+	    }
+	  if (!value_in_range_p (opnd->addr.offset.imm, min_value, max_value))
+	    {
+	      set_offset_out_of_range_error (mismatch_detail, idx,
+					     min_value, max_value);
+	      return 0;
+	    }
+	  if (!value_aligned_p (opnd->addr.offset.imm, num))
+	    {
+	      set_unaligned_error (mismatch_detail, idx, num);
+	      return 0;
+	    }
+	  break;
+
+	case AARCH64_OPND_SVE_ADDR_RR:
+	case AARCH64_OPND_SVE_ADDR_RR_LSL1:
+	case AARCH64_OPND_SVE_ADDR_RR_LSL2:
+	case AARCH64_OPND_SVE_ADDR_RR_LSL3:
+	case AARCH64_OPND_SVE_ADDR_RX:
+	case AARCH64_OPND_SVE_ADDR_RX_LSL1:
+	case AARCH64_OPND_SVE_ADDR_RX_LSL2:
+	case AARCH64_OPND_SVE_ADDR_RX_LSL3:
+	case AARCH64_OPND_SVE_ADDR_RZ:
+	case AARCH64_OPND_SVE_ADDR_RZ_LSL1:
+	case AARCH64_OPND_SVE_ADDR_RZ_LSL2:
+	case AARCH64_OPND_SVE_ADDR_RZ_LSL3:
+	  modifiers = 1 << AARCH64_MOD_LSL;
+	sve_rr_operand:
+	  assert (opnd->addr.offset.is_reg);
+	  assert (opnd->addr.preind);
+	  if ((aarch64_operands[type].flags & OPD_F_NO_ZR) != 0
+	      && opnd->addr.offset.regno == 31)
+	    {
+	      set_other_error (mismatch_detail, idx,
+			       _("index register xzr is not allowed"));
+	      return 0;
+	    }
+	  if (((1 << opnd->shifter.kind) & modifiers) == 0
+	      || (opnd->shifter.amount
+		  != get_operand_specific_data (&aarch64_operands[type])))
+	    {
+	      set_other_error (mismatch_detail, idx,
+			       _("invalid addressing mode"));
+	      return 0;
+	    }
+	  break;
+
+	case AARCH64_OPND_SVE_ADDR_RZ_XTW_14:
+	case AARCH64_OPND_SVE_ADDR_RZ_XTW_22:
+	case AARCH64_OPND_SVE_ADDR_RZ_XTW1_14:
+	case AARCH64_OPND_SVE_ADDR_RZ_XTW1_22:
+	case AARCH64_OPND_SVE_ADDR_RZ_XTW2_14:
+	case AARCH64_OPND_SVE_ADDR_RZ_XTW2_22:
+	case AARCH64_OPND_SVE_ADDR_RZ_XTW3_14:
+	case AARCH64_OPND_SVE_ADDR_RZ_XTW3_22:
+	  modifiers = (1 << AARCH64_MOD_SXTW) | (1 << AARCH64_MOD_UXTW);
+	  goto sve_rr_operand;
+
+	case AARCH64_OPND_SVE_ADDR_ZI_U5:
+	case AARCH64_OPND_SVE_ADDR_ZI_U5x2:
+	case AARCH64_OPND_SVE_ADDR_ZI_U5x4:
+	case AARCH64_OPND_SVE_ADDR_ZI_U5x8:
+	  min_value = 0;
+	  max_value = 31;
+	  goto sve_imm_offset;
+
+	case AARCH64_OPND_SVE_ADDR_ZZ_LSL:
+	  modifiers = 1 << AARCH64_MOD_LSL;
+	sve_zz_operand:
+	  assert (opnd->addr.offset.is_reg);
+	  assert (opnd->addr.preind);
+	  if (((1 << opnd->shifter.kind) & modifiers) == 0
+	      || opnd->shifter.amount < 0
+	      || opnd->shifter.amount > 3)
+	    {
+	      set_other_error (mismatch_detail, idx,
+			       _("invalid addressing mode"));
+	      return 0;
+	    }
+	  break;
+
+	case AARCH64_OPND_SVE_ADDR_ZZ_SXTW:
+	  modifiers = (1 << AARCH64_MOD_SXTW);
+	  goto sve_zz_operand;
+
+	case AARCH64_OPND_SVE_ADDR_ZZ_UXTW:
+	  modifiers = 1 << AARCH64_MOD_UXTW;
+	  goto sve_zz_operand;
+
 	default:
 	  break;
 	}
@@ -2330,6 +2441,17 @@ static const char *int_reg[2][2][32] = {
 #undef R64
 #undef R32
 };
+
+/* Names of the SVE vector registers, first with .S suffixes,
+   then with .D suffixes.  */
+
+static const char *sve_reg[2][32] = {
+#define ZS(X) "z" #X ".s"
+#define ZD(X) "z" #X ".d"
+  BANK (ZS, ZS (31)), BANK (ZD, ZD (31))
+#undef ZD
+#undef ZS
+};
 #undef BANK
 
 /* Return the integer register name.
@@ -2373,6 +2495,17 @@ get_offset_int_reg_name (const aarch64_opnd_info *opnd)
     }
 }
 
+/* Get the name of the SVE vector offset register in OPND, using the operand
+   qualifier to decide whether the suffix should be .S or .D.  */
+
+static inline const char *
+get_addr_sve_reg_name (int regno, aarch64_opnd_qualifier_t qualifier)
+{
+  assert (qualifier == AARCH64_OPND_QLF_S_S
+	  || qualifier == AARCH64_OPND_QLF_S_D);
+  return sve_reg[qualifier == AARCH64_OPND_QLF_S_D][regno];
+}
+
 /* Types for expanding an encoded 8-bit value to a floating-point value.  */
 
 typedef union
@@ -2948,18 +3081,65 @@ aarch64_print_operand (char *buf, size_t size, bfd_vma pc,
       break;
 
     case AARCH64_OPND_ADDR_REGOFF:
+    case AARCH64_OPND_SVE_ADDR_RR:
+    case AARCH64_OPND_SVE_ADDR_RR_LSL1:
+    case AARCH64_OPND_SVE_ADDR_RR_LSL2:
+    case AARCH64_OPND_SVE_ADDR_RR_LSL3:
+    case AARCH64_OPND_SVE_ADDR_RX:
+    case AARCH64_OPND_SVE_ADDR_RX_LSL1:
+    case AARCH64_OPND_SVE_ADDR_RX_LSL2:
+    case AARCH64_OPND_SVE_ADDR_RX_LSL3:
       print_register_offset_address
 	(buf, size, opnd, get_64bit_int_reg_name (opnd->addr.base_regno, 1),
 	 get_offset_int_reg_name (opnd));
       break;
 
+    case AARCH64_OPND_SVE_ADDR_RZ:
+    case AARCH64_OPND_SVE_ADDR_RZ_LSL1:
+    case AARCH64_OPND_SVE_ADDR_RZ_LSL2:
+    case AARCH64_OPND_SVE_ADDR_RZ_LSL3:
+    case AARCH64_OPND_SVE_ADDR_RZ_XTW_14:
+    case AARCH64_OPND_SVE_ADDR_RZ_XTW_22:
+    case AARCH64_OPND_SVE_ADDR_RZ_XTW1_14:
+    case AARCH64_OPND_SVE_ADDR_RZ_XTW1_22:
+    case AARCH64_OPND_SVE_ADDR_RZ_XTW2_14:
+    case AARCH64_OPND_SVE_ADDR_RZ_XTW2_22:
+    case AARCH64_OPND_SVE_ADDR_RZ_XTW3_14:
+    case AARCH64_OPND_SVE_ADDR_RZ_XTW3_22:
+      print_register_offset_address
+	(buf, size, opnd, get_64bit_int_reg_name (opnd->addr.base_regno, 1),
+	 get_addr_sve_reg_name (opnd->addr.offset.regno, opnd->qualifier));
+      break;
+
     case AARCH64_OPND_ADDR_SIMM7:
     case AARCH64_OPND_ADDR_SIMM9:
     case AARCH64_OPND_ADDR_SIMM9_2:
+    case AARCH64_OPND_SVE_ADDR_RI_U6:
+    case AARCH64_OPND_SVE_ADDR_RI_U6x2:
+    case AARCH64_OPND_SVE_ADDR_RI_U6x4:
+    case AARCH64_OPND_SVE_ADDR_RI_U6x8:
       print_immediate_offset_address
 	(buf, size, opnd, get_64bit_int_reg_name (opnd->addr.base_regno, 1));
       break;
 
+    case AARCH64_OPND_SVE_ADDR_ZI_U5:
+    case AARCH64_OPND_SVE_ADDR_ZI_U5x2:
+    case AARCH64_OPND_SVE_ADDR_ZI_U5x4:
+    case AARCH64_OPND_SVE_ADDR_ZI_U5x8:
+      print_immediate_offset_address
+	(buf, size, opnd,
+	 get_addr_sve_reg_name (opnd->addr.base_regno, opnd->qualifier));
+      break;
+
+    case AARCH64_OPND_SVE_ADDR_ZZ_LSL:
+    case AARCH64_OPND_SVE_ADDR_ZZ_SXTW:
+    case AARCH64_OPND_SVE_ADDR_ZZ_UXTW:
+      print_register_offset_address
+	(buf, size, opnd,
+	 get_addr_sve_reg_name (opnd->addr.base_regno, opnd->qualifier),
+	 get_addr_sve_reg_name (opnd->addr.offset.regno, opnd->qualifier));
+      break;
+
     case AARCH64_OPND_ADDR_UIMM12:
       name = get_64bit_int_reg_name (opnd->addr.base_regno, 1);
       if (opnd->addr.offset.imm)
diff --git a/opcodes/aarch64-opc.h b/opcodes/aarch64-opc.h
index 3406f6e..e823146 100644
--- a/opcodes/aarch64-opc.h
+++ b/opcodes/aarch64-opc.h
@@ -107,9 +107,13 @@ enum aarch64_field_kind
   FLD_SVE_Zn,
   FLD_SVE_Zt,
   FLD_SVE_imm4,
+  FLD_SVE_imm6,
+  FLD_SVE_msz,
   FLD_SVE_pattern,
   FLD_SVE_prfop,
   FLD_SVE_tszh,
+  FLD_SVE_xs_14,
+  FLD_SVE_xs_22,
 };
 
 /* Field description.  */
@@ -156,6 +160,9 @@ extern const aarch64_operand aarch64_operands[];
 						   value by 2 to get the value
 						   of an immediate operand.  */
 #define OPD_F_MAYBE_SP		0x00000010	/* May potentially be SP.  */
+#define OPD_F_OD_MASK		0x00000060	/* Operand-dependent data.  */
+#define OPD_F_OD_LSB		5
+#define OPD_F_NO_ZR		0x00000080	/* ZR index not allowed.  */
 
 static inline bfd_boolean
 operand_has_inserter (const aarch64_operand *operand)
@@ -187,6 +194,13 @@ operand_maybe_stack_pointer (const aarch64_operand *operand)
   return (operand->flags & OPD_F_MAYBE_SP) ? TRUE : FALSE;
 }
 
+/* Return the value of the operand-specific data field (OPD_F_OD_MASK).  */
+static inline unsigned int
+get_operand_specific_data (const aarch64_operand *operand)
+{
+  return (operand->flags & OPD_F_OD_MASK) >> OPD_F_OD_LSB;
+}
+
 /* Return the total width of the operand *OPERAND.  */
 static inline unsigned
 get_operand_fields_width (const aarch64_operand *operand)
diff --git a/opcodes/aarch64-tbl.h b/opcodes/aarch64-tbl.h
index 491235f..ef32e19 100644
--- a/opcodes/aarch64-tbl.h
+++ b/opcodes/aarch64-tbl.h
@@ -2820,6 +2820,93 @@ struct aarch64_opcode aarch64_opcode_table[] =
       "a prefetch operation specifier")					\
     Y (SYSTEM, hint, "BARRIER_PSB", 0, F (),				\
       "the PSB option name CSYNC")					\
+    Y(ADDRESS, sve_addr_ri_u6, "SVE_ADDR_RI_U6", 0 << OPD_F_OD_LSB,	\
+      F(FLD_Rn), "an address with a 6-bit unsigned offset")		\
+    Y(ADDRESS, sve_addr_ri_u6, "SVE_ADDR_RI_U6x2", 1 << OPD_F_OD_LSB,	\
+      F(FLD_Rn),							\
+      "an address with a 6-bit unsigned offset, multiplied by 2")	\
+    Y(ADDRESS, sve_addr_ri_u6, "SVE_ADDR_RI_U6x4", 2 << OPD_F_OD_LSB,	\
+      F(FLD_Rn),							\
+      "an address with a 6-bit unsigned offset, multiplied by 4")	\
+    Y(ADDRESS, sve_addr_ri_u6, "SVE_ADDR_RI_U6x8", 3 << OPD_F_OD_LSB,	\
+      F(FLD_Rn),							\
+      "an address with a 6-bit unsigned offset, multiplied by 8")	\
+    Y(ADDRESS, sve_addr_rr_lsl, "SVE_ADDR_RR", 0 << OPD_F_OD_LSB,	\
+      F(FLD_Rn,FLD_Rm), "an address with a scalar register offset")	\
+    Y(ADDRESS, sve_addr_rr_lsl, "SVE_ADDR_RR_LSL1", 1 << OPD_F_OD_LSB,	\
+      F(FLD_Rn,FLD_Rm), "an address with a scalar register offset")	\
+    Y(ADDRESS, sve_addr_rr_lsl, "SVE_ADDR_RR_LSL2", 2 << OPD_F_OD_LSB,	\
+      F(FLD_Rn,FLD_Rm), "an address with a scalar register offset")	\
+    Y(ADDRESS, sve_addr_rr_lsl, "SVE_ADDR_RR_LSL3", 3 << OPD_F_OD_LSB,	\
+      F(FLD_Rn,FLD_Rm), "an address with a scalar register offset")	\
+    Y(ADDRESS, sve_addr_rr_lsl, "SVE_ADDR_RX",				\
+      (0 << OPD_F_OD_LSB) | OPD_F_NO_ZR, F(FLD_Rn,FLD_Rm),		\
+      "an address with a scalar register offset")			\
+    Y(ADDRESS, sve_addr_rr_lsl, "SVE_ADDR_RX_LSL1",			\
+      (1 << OPD_F_OD_LSB) | OPD_F_NO_ZR, F(FLD_Rn,FLD_Rm),		\
+      "an address with a scalar register offset")			\
+    Y(ADDRESS, sve_addr_rr_lsl, "SVE_ADDR_RX_LSL2",			\
+      (2 << OPD_F_OD_LSB) | OPD_F_NO_ZR, F(FLD_Rn,FLD_Rm),		\
+      "an address with a scalar register offset")			\
+    Y(ADDRESS, sve_addr_rr_lsl, "SVE_ADDR_RX_LSL3",			\
+      (3 << OPD_F_OD_LSB) | OPD_F_NO_ZR, F(FLD_Rn,FLD_Rm),		\
+      "an address with a scalar register offset")			\
+    Y(ADDRESS, sve_addr_rr_lsl, "SVE_ADDR_RZ", 0 << OPD_F_OD_LSB,	\
+      F(FLD_Rn,FLD_SVE_Zm_16),						\
+      "an address with a vector register offset")			\
+    Y(ADDRESS, sve_addr_rr_lsl, "SVE_ADDR_RZ_LSL1", 1 << OPD_F_OD_LSB,	\
+      F(FLD_Rn,FLD_SVE_Zm_16),						\
+      "an address with a vector register offset")			\
+    Y(ADDRESS, sve_addr_rr_lsl, "SVE_ADDR_RZ_LSL2", 2 << OPD_F_OD_LSB,	\
+      F(FLD_Rn,FLD_SVE_Zm_16),						\
+      "an address with a vector register offset")			\
+    Y(ADDRESS, sve_addr_rr_lsl, "SVE_ADDR_RZ_LSL3", 3 << OPD_F_OD_LSB,	\
+      F(FLD_Rn,FLD_SVE_Zm_16),						\
+      "an address with a vector register offset")			\
+    Y(ADDRESS, sve_addr_rz_xtw, "SVE_ADDR_RZ_XTW_14",			\
+      0 << OPD_F_OD_LSB, F(FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_14),		\
+      "an address with a vector register offset")			\
+    Y(ADDRESS, sve_addr_rz_xtw, "SVE_ADDR_RZ_XTW_22",			\
+      0 << OPD_F_OD_LSB, F(FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_22),		\
+      "an address with a vector register offset")			\
+    Y(ADDRESS, sve_addr_rz_xtw, "SVE_ADDR_RZ_XTW1_14",			\
+      1 << OPD_F_OD_LSB, F(FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_14),		\
+      "an address with a vector register offset")			\
+    Y(ADDRESS, sve_addr_rz_xtw, "SVE_ADDR_RZ_XTW1_22",			\
+      1 << OPD_F_OD_LSB, F(FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_22),		\
+      "an address with a vector register offset")			\
+    Y(ADDRESS, sve_addr_rz_xtw, "SVE_ADDR_RZ_XTW2_14",			\
+      2 << OPD_F_OD_LSB, F(FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_14),		\
+      "an address with a vector register offset")			\
+    Y(ADDRESS, sve_addr_rz_xtw, "SVE_ADDR_RZ_XTW2_22",			\
+      2 << OPD_F_OD_LSB, F(FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_22),		\
+      "an address with a vector register offset")			\
+    Y(ADDRESS, sve_addr_rz_xtw, "SVE_ADDR_RZ_XTW3_14",			\
+      3 << OPD_F_OD_LSB, F(FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_14),		\
+      "an address with a vector register offset")			\
+    Y(ADDRESS, sve_addr_rz_xtw, "SVE_ADDR_RZ_XTW3_22",			\
+      3 << OPD_F_OD_LSB, F(FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_22),		\
+      "an address with a vector register offset")			\
+    Y(ADDRESS, sve_addr_zi_u5, "SVE_ADDR_ZI_U5", 0 << OPD_F_OD_LSB,	\
+      F(FLD_SVE_Zn), "an address with a 5-bit unsigned offset")		\
+    Y(ADDRESS, sve_addr_zi_u5, "SVE_ADDR_ZI_U5x2", 1 << OPD_F_OD_LSB,	\
+      F(FLD_SVE_Zn),							\
+      "an address with a 5-bit unsigned offset, multiplied by 2")	\
+    Y(ADDRESS, sve_addr_zi_u5, "SVE_ADDR_ZI_U5x4", 2 << OPD_F_OD_LSB,	\
+      F(FLD_SVE_Zn),							\
+      "an address with a 5-bit unsigned offset, multiplied by 4")	\
+    Y(ADDRESS, sve_addr_zi_u5, "SVE_ADDR_ZI_U5x8", 3 << OPD_F_OD_LSB,	\
+      F(FLD_SVE_Zn),							\
+      "an address with a 5-bit unsigned offset, multiplied by 8")	\
+    Y(ADDRESS, sve_addr_zz_lsl, "SVE_ADDR_ZZ_LSL", 0,			\
+      F(FLD_SVE_Zn,FLD_SVE_Zm_16),					\
+      "an address with a vector register offset")			\
+    Y(ADDRESS, sve_addr_zz_sxtw, "SVE_ADDR_ZZ_SXTW", 0,			\
+      F(FLD_SVE_Zn,FLD_SVE_Zm_16),					\
+      "an address with a vector register offset")			\
+    Y(ADDRESS, sve_addr_zz_uxtw, "SVE_ADDR_ZZ_UXTW", 0,			\
+      F(FLD_SVE_Zn,FLD_SVE_Zm_16),					\
+      "an address with a vector register offset")			\
     Y(IMMEDIATE, imm, "SVE_PATTERN", 0, F(FLD_SVE_pattern),		\
       "an enumeration value such as POW2")				\
     Y(IMMEDIATE, sve_scale, "SVE_PATTERN_SCALED", 0,			\

^ permalink raw reply	[flat|nested] 76+ messages in thread

* [AArch64][SVE 26/32] Add SVE MUL VL addressing modes
  2016-08-23  9:05 [AArch64][SVE 00/32] Add support for the ARMv8-A Scalable Vector Extension Richard Sandiford
                   ` (24 preceding siblings ...)
  2016-08-23  9:21 ` [AArch64][SVE 24/32] Add AARCH64_OPND_SVE_PATTERN_SCALED Richard Sandiford
@ 2016-08-23  9:23 ` Richard Sandiford
  2016-08-25 14:44   ` Richard Earnshaw (lists)
  2016-08-23  9:24 ` [AArch64][SVE 27/32] Add SVE integer immediate operands Richard Sandiford
                   ` (6 subsequent siblings)
  32 siblings, 1 reply; 76+ messages in thread
From: Richard Sandiford @ 2016-08-23  9:23 UTC (permalink / raw)
  To: binutils

This patch adds support for addresses of the form:

   [<base>, #<offset>, MUL VL]

This involves adding a new AARCH64_MOD_MUL_VL modifier, which is
why I split it out from the other addressing modes.

For LD2, LD3 and LD4, the offset must be a multiple of the structure
size, so for LD3 the possible values are 0, 3, 6, ....  The patch
therefore extends value_aligned_p to handle non-power-of-2 alignments.
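
To illustrate the generalized check: it reduces to a plain remainder test,
which also copes with the negative offsets allowed by the signed ranges.
A minimal standalone sketch (not part of the patch; the main function is
only there to show the behaviour):

   #include <stdio.h>
   #include <stdint.h>

   /* Return nonzero if VALUE is a multiple of ALIGN.  This works for
      non-power-of-two alignments such as the 3 and 4 needed by LD3 and
      LD4, and for negative values, since C99 '%' truncates towards zero.  */
   static int
   value_aligned_p (int64_t value, int align)
   {
     return (value % align) == 0;
   }

   int
   main (void)
   {
     /* For LD3 the stride is 3, so ..., -6, -3, 0, 3, 6, ... are valid.  */
     printf ("%d %d %d\n",
             value_aligned_p (6, 3),    /* 1 */
             value_aligned_p (-6, 3),   /* 1 */
             value_aligned_p (7, 3));   /* 0 */
     return 0;
   }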

OK to install?

Thanks,
Richard


include/opcode/
	* aarch64.h (AARCH64_OPND_SVE_ADDR_RI_S4xVL): New aarch64_opnd.
	(AARCH64_OPND_SVE_ADDR_RI_S4x2xVL, AARCH64_OPND_SVE_ADDR_RI_S4x3xVL)
	(AARCH64_OPND_SVE_ADDR_RI_S4x4xVL, AARCH64_OPND_SVE_ADDR_RI_S6xVL)
	(AARCH64_OPND_SVE_ADDR_RI_S9xVL): Likewise.
	(AARCH64_MOD_MUL_VL): New aarch64_modifier_kind.

opcodes/
	* aarch64-tbl.h (AARCH64_OPERANDS): Add entries for new MUL VL
	operands.
	* aarch64-opc.c (aarch64_operand_modifiers): Initialize
	the AARCH64_MOD_MUL_VL entry.
	(value_aligned_p): Cope with non-power-of-two alignments.
	(operand_general_constraint_met_p): Handle the new MUL VL addresses.
	(print_immediate_offset_address): Likewise.
	(aarch64_print_operand): Likewise.
	* aarch64-opc-2.c: Regenerate.
	* aarch64-asm.h (ins_sve_addr_ri_s4xvl, ins_sve_addr_ri_s6xvl)
	(ins_sve_addr_ri_s9xvl): New inserters.
	* aarch64-asm.c (aarch64_ins_sve_addr_ri_s4xvl): New function.
	(aarch64_ins_sve_addr_ri_s6xvl): Likewise.
	(aarch64_ins_sve_addr_ri_s9xvl): Likewise.
	* aarch64-asm-2.c: Regenerate.
	* aarch64-dis.h (ext_sve_addr_ri_s4xvl, ext_sve_addr_ri_s6xvl)
	(ext_sve_addr_ri_s9xvl): New extractors.
	* aarch64-dis.c (aarch64_ext_sve_addr_reg_mul_vl): New function.
	(aarch64_ext_sve_addr_ri_s4xvl): Likewise.
	(aarch64_ext_sve_addr_ri_s6xvl): Likewise.
	(aarch64_ext_sve_addr_ri_s9xvl): Likewise.
	* aarch64-dis-2.c: Regenerate.

gas/
	* config/tc-aarch64.c (SHIFTED_MUL_VL): New parse_shift_mode.
	(parse_shift): Handle it.
	(parse_address_main): Handle the new MUL VL addresses.
	(parse_operands): Likewise.

diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
index f9d89ce..37fce5b 100644
--- a/gas/config/tc-aarch64.c
+++ b/gas/config/tc-aarch64.c
@@ -2949,6 +2949,7 @@ enum parse_shift_mode
   SHIFTED_LSL,			/* bare "lsl #n"  */
   SHIFTED_MUL,			/* bare "mul #n"  */
   SHIFTED_LSL_MSL,		/* "lsl|msl #n"  */
+  SHIFTED_MUL_VL,		/* "mul vl"  */
   SHIFTED_REG_OFFSET		/* [su]xtw|sxtx {#n} or lsl #n  */
 };
 
@@ -2990,7 +2991,8 @@ parse_shift (char **str, aarch64_opnd_info *operand, enum parse_shift_mode mode)
     }
 
   if (kind == AARCH64_MOD_MUL
-      && mode != SHIFTED_MUL)
+      && mode != SHIFTED_MUL
+      && mode != SHIFTED_MUL_VL)
     {
       set_syntax_error (_("invalid use of 'MUL'"));
       return FALSE;
@@ -3030,6 +3032,20 @@ parse_shift (char **str, aarch64_opnd_info *operand, enum parse_shift_mode mode)
 	}
       break;
 
+    case SHIFTED_MUL_VL:
+      if (kind == AARCH64_MOD_MUL)
+	{
+	  skip_whitespace (p);
+	  if (strncasecmp (p, "vl", 2) == 0 && !ISALPHA (p[2]))
+	    {
+	      p += 2;
+	      kind = AARCH64_MOD_MUL_VL;
+	      break;
+	    }
+	}
+      set_syntax_error (_("only 'MUL VL' is permitted"));
+      return FALSE;
+
     case SHIFTED_REG_OFFSET:
       if (kind != AARCH64_MOD_UXTW && kind != AARCH64_MOD_LSL
 	  && kind != AARCH64_MOD_SXTW && kind != AARCH64_MOD_SXTX)
@@ -3057,7 +3073,7 @@ parse_shift (char **str, aarch64_opnd_info *operand, enum parse_shift_mode mode)
 
   /* Parse shift amount.  */
   exp_has_prefix = 0;
-  if (mode == SHIFTED_REG_OFFSET && *p == ']')
+  if ((mode == SHIFTED_REG_OFFSET && *p == ']') || kind == AARCH64_MOD_MUL_VL)
     exp.X_op = O_absent;
   else
     {
@@ -3068,7 +3084,11 @@ parse_shift (char **str, aarch64_opnd_info *operand, enum parse_shift_mode mode)
 	}
       my_get_expression (&exp, &p, GE_NO_PREFIX, 0);
     }
-  if (exp.X_op == O_absent)
+  if (kind == AARCH64_MOD_MUL_VL)
+    /* For consistency, give MUL VL the same shift amount as an implicit
+       MUL #1.  */
+    operand->shifter.amount = 1;
+  else if (exp.X_op == O_absent)
     {
       if (aarch64_extend_operator_p (kind) == FALSE || exp_has_prefix)
 	{
@@ -3289,6 +3309,7 @@ parse_shifter_operand_reloc (char **str, aarch64_opnd_info *operand,
    PC-relative (literal)
      label
    SVE:
+     [base,#imm,MUL VL]
      [base,Zm.D{,LSL #imm}]
      [base,Zm.S,(S|U)XTW {#imm}]
      [base,Zm.D,(S|U)XTW {#imm}] // ignores top 32 bits of Zm.D elements
@@ -3334,7 +3355,9 @@ parse_shifter_operand_reloc (char **str, aarch64_opnd_info *operand,
    Likewise ACCEPT_SVE says whether the SVE addressing modes should be
    accepted.  We use context-dependent parsing for this case because
    (for compatibility) we should accept symbolic constants like z0 and
-   z0.s in base AArch64 code.
+   z0.s in base AArch64 code.  Also, the error message "only 'MUL VL'
+   is permitted" is likely to be confusing in non-SVE addresses, where
+   no immediate modifiers are permitted.
 
    In all other cases, it is the caller's responsibility to check whether
    the addressing mode is supported by the instruction.  It is also the
@@ -3544,6 +3567,10 @@ parse_address_main (char **str, aarch64_opnd_info *operand,
 		  set_syntax_error (_("constant offset required"));
 		  return FALSE;
 		}
+	      if (accept_sve && skip_past_comma (&p))
+		/* [Xn,<expr>,MUL VL]  */
+		if (! parse_shift (&p, operand, SHIFTED_MUL_VL))
+		  return FALSE;
 	    }
 	}
     }
@@ -3619,9 +3646,9 @@ parse_address_main (char **str, aarch64_opnd_info *operand,
 }
 
 /* Parse a base AArch64 address, i.e. one that cannot contain SVE base
-   registers or SVE offset registers.  Do not allow relocation operators.
-   Look for and parse "[Xn], (Xm|#m)" as post-indexed addressing
-   if ACCEPT_REG_POST_INDEX is true.
+   registers, SVE offset registers, or MUL VL.  Do not allow relocation
+   operators.  Look for and parse "[Xn], (Xm|#m)" as post-indexed
+   addressing if ACCEPT_REG_POST_INDEX is true.
 
    Return TRUE on success.  */
 static bfd_boolean
@@ -3634,8 +3661,8 @@ parse_address (char **str, aarch64_opnd_info *operand,
 }
 
 /* Parse a base AArch64 address, i.e. one that cannot contain SVE base
-   registers or SVE offset registers.  Allow relocation operators but
-   disallow post-indexed addressing.
+   registers, SVE offset registers, or MUL VL.  Allow relocation operators
+   but disallow post-indexed addressing.
 
    Return TRUE on success.  */
 static bfd_boolean
@@ -3646,7 +3673,7 @@ parse_address_reloc (char **str, aarch64_opnd_info *operand)
 			     TRUE, FALSE, FALSE);
 }
 
-/* Parse an address in which SVE vector registers are allowed.
+/* Parse an address in which SVE vector registers and MUL VL are allowed.
    The arguments have the same meaning as for parse_address_main.
    Return TRUE on success.  */
 static bfd_boolean
@@ -5980,11 +6007,18 @@ parse_operands (char *str, const aarch64_opcode *opcode)
 	  /* No qualifier.  */
 	  break;
 
+	case AARCH64_OPND_SVE_ADDR_RI_S4xVL:
+	case AARCH64_OPND_SVE_ADDR_RI_S4x2xVL:
+	case AARCH64_OPND_SVE_ADDR_RI_S4x3xVL:
+	case AARCH64_OPND_SVE_ADDR_RI_S4x4xVL:
+	case AARCH64_OPND_SVE_ADDR_RI_S6xVL:
+	case AARCH64_OPND_SVE_ADDR_RI_S9xVL:
 	case AARCH64_OPND_SVE_ADDR_RI_U6:
 	case AARCH64_OPND_SVE_ADDR_RI_U6x2:
 	case AARCH64_OPND_SVE_ADDR_RI_U6x4:
 	case AARCH64_OPND_SVE_ADDR_RI_U6x8:
-	  /* [X<n>{, #imm}]
+	  /* [X<n>{, #imm, MUL VL}]
+	     [X<n>{, #imm}]
 	     but recognizing SVE registers.  */
 	  po_misc_or_fail (parse_sve_address (&str, info, &base_qualifier,
 					      &offset_qualifier));
diff --git a/include/opcode/aarch64.h b/include/opcode/aarch64.h
index e61ac9c..837d6bd 100644
--- a/include/opcode/aarch64.h
+++ b/include/opcode/aarch64.h
@@ -244,6 +244,12 @@ enum aarch64_opnd
   AARCH64_OPND_PRFOP,		/* Prefetch operation.  */
   AARCH64_OPND_BARRIER_PSB,	/* Barrier operand for PSB.  */
 
+  AARCH64_OPND_SVE_ADDR_RI_S4xVL,   /* SVE [<Xn|SP>, #<simm4>, MUL VL].  */
+  AARCH64_OPND_SVE_ADDR_RI_S4x2xVL, /* SVE [<Xn|SP>, #<simm4>*2, MUL VL].  */
+  AARCH64_OPND_SVE_ADDR_RI_S4x3xVL, /* SVE [<Xn|SP>, #<simm4>*3, MUL VL].  */
+  AARCH64_OPND_SVE_ADDR_RI_S4x4xVL, /* SVE [<Xn|SP>, #<simm4>*4, MUL VL].  */
+  AARCH64_OPND_SVE_ADDR_RI_S6xVL,   /* SVE [<Xn|SP>, #<simm6>, MUL VL].  */
+  AARCH64_OPND_SVE_ADDR_RI_S9xVL,   /* SVE [<Xn|SP>, #<simm9>, MUL VL].  */
   AARCH64_OPND_SVE_ADDR_RI_U6,	    /* SVE [<Xn|SP>, #<uimm6>].  */
   AARCH64_OPND_SVE_ADDR_RI_U6x2,    /* SVE [<Xn|SP>, #<uimm6>*2].  */
   AARCH64_OPND_SVE_ADDR_RI_U6x4,    /* SVE [<Xn|SP>, #<uimm6>*4].  */
@@ -786,6 +792,7 @@ enum aarch64_modifier_kind
   AARCH64_MOD_SXTW,
   AARCH64_MOD_SXTX,
   AARCH64_MOD_MUL,
+  AARCH64_MOD_MUL_VL,
 };
 
 bfd_boolean
diff --git a/opcodes/aarch64-asm-2.c b/opcodes/aarch64-asm-2.c
index 47a414c..da590ca 100644
--- a/opcodes/aarch64-asm-2.c
+++ b/opcodes/aarch64-asm-2.c
@@ -480,12 +480,6 @@ aarch64_insert_operand (const aarch64_operand *self,
     case 27:
     case 35:
     case 36:
-    case 123:
-    case 124:
-    case 125:
-    case 126:
-    case 127:
-    case 128:
     case 129:
     case 130:
     case 131:
@@ -494,7 +488,13 @@ aarch64_insert_operand (const aarch64_operand *self,
     case 134:
     case 135:
     case 136:
+    case 137:
+    case 138:
     case 139:
+    case 140:
+    case 141:
+    case 142:
+    case 145:
       return aarch64_ins_regno (self, info, code, inst);
     case 12:
       return aarch64_ins_reg_extended (self, info, code, inst);
@@ -531,8 +531,8 @@ aarch64_insert_operand (const aarch64_operand *self,
     case 68:
     case 69:
     case 70:
-    case 120:
-    case 122:
+    case 126:
+    case 128:
       return aarch64_ins_imm (self, info, code, inst);
     case 38:
     case 39:
@@ -587,46 +587,55 @@ aarch64_insert_operand (const aarch64_operand *self,
     case 90:
     case 91:
     case 92:
-      return aarch64_ins_sve_addr_ri_u6 (self, info, code, inst);
+      return aarch64_ins_sve_addr_ri_s4xvl (self, info, code, inst);
     case 93:
+      return aarch64_ins_sve_addr_ri_s6xvl (self, info, code, inst);
     case 94:
+      return aarch64_ins_sve_addr_ri_s9xvl (self, info, code, inst);
     case 95:
     case 96:
     case 97:
     case 98:
+      return aarch64_ins_sve_addr_ri_u6 (self, info, code, inst);
     case 99:
     case 100:
     case 101:
     case 102:
     case 103:
     case 104:
-      return aarch64_ins_sve_addr_rr_lsl (self, info, code, inst);
     case 105:
     case 106:
     case 107:
     case 108:
     case 109:
     case 110:
+      return aarch64_ins_sve_addr_rr_lsl (self, info, code, inst);
     case 111:
     case 112:
-      return aarch64_ins_sve_addr_rz_xtw (self, info, code, inst);
     case 113:
     case 114:
     case 115:
     case 116:
-      return aarch64_ins_sve_addr_zi_u5 (self, info, code, inst);
     case 117:
-      return aarch64_ins_sve_addr_zz_lsl (self, info, code, inst);
     case 118:
-      return aarch64_ins_sve_addr_zz_sxtw (self, info, code, inst);
+      return aarch64_ins_sve_addr_rz_xtw (self, info, code, inst);
     case 119:
-      return aarch64_ins_sve_addr_zz_uxtw (self, info, code, inst);
+    case 120:
     case 121:
+    case 122:
+      return aarch64_ins_sve_addr_zi_u5 (self, info, code, inst);
+    case 123:
+      return aarch64_ins_sve_addr_zz_lsl (self, info, code, inst);
+    case 124:
+      return aarch64_ins_sve_addr_zz_sxtw (self, info, code, inst);
+    case 125:
+      return aarch64_ins_sve_addr_zz_uxtw (self, info, code, inst);
+    case 127:
       return aarch64_ins_sve_scale (self, info, code, inst);
-    case 137:
+    case 143:
       return aarch64_ins_sve_index (self, info, code, inst);
-    case 138:
-    case 140:
+    case 144:
+    case 146:
       return aarch64_ins_sve_reglist (self, info, code, inst);
     default: assert (0); abort ();
     }
diff --git a/opcodes/aarch64-asm.c b/opcodes/aarch64-asm.c
index 0d3b2c7..944a9eb 100644
--- a/opcodes/aarch64-asm.c
+++ b/opcodes/aarch64-asm.c
@@ -745,6 +745,56 @@ aarch64_ins_reg_shifted (const aarch64_operand *self ATTRIBUTE_UNUSED,
   return NULL;
 }
 
+/* Encode an SVE address [<base>, #<simm4>*<factor>, MUL VL],
+   where <simm4> is a 4-bit signed value and where <factor> is 1 plus
+   SELF's operand-dependent value.  fields[0] specifies the field that
+   holds <base>.  <simm4> is encoded in the SVE_imm4 field.  */
+const char *
+aarch64_ins_sve_addr_ri_s4xvl (const aarch64_operand *self,
+			       const aarch64_opnd_info *info,
+			       aarch64_insn *code,
+			       const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  int factor = 1 + get_operand_specific_data (self);
+  insert_field (self->fields[0], code, info->addr.base_regno, 0);
+  insert_field (FLD_SVE_imm4, code, info->addr.offset.imm / factor, 0);
+  return NULL;
+}
+
+/* Encode an SVE address [<base>, #<simm6>*<factor>, MUL VL],
+   where <simm6> is a 6-bit signed value and where <factor> is 1 plus
+   SELF's operand-dependent value.  fields[0] specifies the field that
+   holds <base>.  <simm6> is encoded in the SVE_imm6 field.  */
+const char *
+aarch64_ins_sve_addr_ri_s6xvl (const aarch64_operand *self,
+			       const aarch64_opnd_info *info,
+			       aarch64_insn *code,
+			       const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  int factor = 1 + get_operand_specific_data (self);
+  insert_field (self->fields[0], code, info->addr.base_regno, 0);
+  insert_field (FLD_SVE_imm6, code, info->addr.offset.imm / factor, 0);
+  return NULL;
+}
+
+/* Encode an SVE address [<base>, #<simm9>*<factor>, MUL VL],
+   where <simm9> is a 9-bit signed value and where <factor> is 1 plus
+   SELF's operand-dependent value.  fields[0] specifies the field that
+   holds <base>.  <simm9> is encoded in the concatenation of the SVE_imm6
+   and imm3 fields, with imm3 being the less-significant part.  */
+const char *
+aarch64_ins_sve_addr_ri_s9xvl (const aarch64_operand *self,
+			       const aarch64_opnd_info *info,
+			       aarch64_insn *code,
+			       const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  int factor = 1 + get_operand_specific_data (self);
+  insert_field (self->fields[0], code, info->addr.base_regno, 0);
+  insert_fields (code, info->addr.offset.imm / factor, 0,
+		 2, FLD_imm3, FLD_SVE_imm6);
+  return NULL;
+}
+
 /* Encode an SVE address [X<n>, #<SVE_imm6> << <shift>], where <SVE_imm6>
    is a 6-bit unsigned number and where <shift> is SELF's operand-dependent
    value.  fields[0] specifies the base register field.  */
diff --git a/opcodes/aarch64-asm.h b/opcodes/aarch64-asm.h
index b81cfa1..5e13de0 100644
--- a/opcodes/aarch64-asm.h
+++ b/opcodes/aarch64-asm.h
@@ -69,6 +69,9 @@ AARCH64_DECL_OPD_INSERTER (ins_hint);
 AARCH64_DECL_OPD_INSERTER (ins_prfop);
 AARCH64_DECL_OPD_INSERTER (ins_reg_extended);
 AARCH64_DECL_OPD_INSERTER (ins_reg_shifted);
+AARCH64_DECL_OPD_INSERTER (ins_sve_addr_ri_s4xvl);
+AARCH64_DECL_OPD_INSERTER (ins_sve_addr_ri_s6xvl);
+AARCH64_DECL_OPD_INSERTER (ins_sve_addr_ri_s9xvl);
 AARCH64_DECL_OPD_INSERTER (ins_sve_addr_ri_u6);
 AARCH64_DECL_OPD_INSERTER (ins_sve_addr_rr_lsl);
 AARCH64_DECL_OPD_INSERTER (ins_sve_addr_rz_xtw);
diff --git a/opcodes/aarch64-dis-2.c b/opcodes/aarch64-dis-2.c
index 3dd714f..48d6ce7 100644
--- a/opcodes/aarch64-dis-2.c
+++ b/opcodes/aarch64-dis-2.c
@@ -10426,12 +10426,6 @@ aarch64_extract_operand (const aarch64_operand *self,
     case 27:
     case 35:
     case 36:
-    case 123:
-    case 124:
-    case 125:
-    case 126:
-    case 127:
-    case 128:
     case 129:
     case 130:
     case 131:
@@ -10440,7 +10434,13 @@ aarch64_extract_operand (const aarch64_operand *self,
     case 134:
     case 135:
     case 136:
+    case 137:
+    case 138:
     case 139:
+    case 140:
+    case 141:
+    case 142:
+    case 145:
       return aarch64_ext_regno (self, info, code, inst);
     case 8:
       return aarch64_ext_regrt_sysins (self, info, code, inst);
@@ -10482,8 +10482,8 @@ aarch64_extract_operand (const aarch64_operand *self,
     case 68:
     case 69:
     case 70:
-    case 120:
-    case 122:
+    case 126:
+    case 128:
       return aarch64_ext_imm (self, info, code, inst);
     case 38:
     case 39:
@@ -10540,46 +10540,55 @@ aarch64_extract_operand (const aarch64_operand *self,
     case 90:
     case 91:
     case 92:
-      return aarch64_ext_sve_addr_ri_u6 (self, info, code, inst);
+      return aarch64_ext_sve_addr_ri_s4xvl (self, info, code, inst);
     case 93:
+      return aarch64_ext_sve_addr_ri_s6xvl (self, info, code, inst);
     case 94:
+      return aarch64_ext_sve_addr_ri_s9xvl (self, info, code, inst);
     case 95:
     case 96:
     case 97:
     case 98:
+      return aarch64_ext_sve_addr_ri_u6 (self, info, code, inst);
     case 99:
     case 100:
     case 101:
     case 102:
     case 103:
     case 104:
-      return aarch64_ext_sve_addr_rr_lsl (self, info, code, inst);
     case 105:
     case 106:
     case 107:
     case 108:
     case 109:
     case 110:
+      return aarch64_ext_sve_addr_rr_lsl (self, info, code, inst);
     case 111:
     case 112:
-      return aarch64_ext_sve_addr_rz_xtw (self, info, code, inst);
     case 113:
     case 114:
     case 115:
     case 116:
-      return aarch64_ext_sve_addr_zi_u5 (self, info, code, inst);
     case 117:
-      return aarch64_ext_sve_addr_zz_lsl (self, info, code, inst);
     case 118:
-      return aarch64_ext_sve_addr_zz_sxtw (self, info, code, inst);
+      return aarch64_ext_sve_addr_rz_xtw (self, info, code, inst);
     case 119:
-      return aarch64_ext_sve_addr_zz_uxtw (self, info, code, inst);
+    case 120:
     case 121:
+    case 122:
+      return aarch64_ext_sve_addr_zi_u5 (self, info, code, inst);
+    case 123:
+      return aarch64_ext_sve_addr_zz_lsl (self, info, code, inst);
+    case 124:
+      return aarch64_ext_sve_addr_zz_sxtw (self, info, code, inst);
+    case 125:
+      return aarch64_ext_sve_addr_zz_uxtw (self, info, code, inst);
+    case 127:
       return aarch64_ext_sve_scale (self, info, code, inst);
-    case 137:
+    case 143:
       return aarch64_ext_sve_index (self, info, code, inst);
-    case 138:
-    case 140:
+    case 144:
+    case 146:
       return aarch64_ext_sve_reglist (self, info, code, inst);
     default: assert (0); abort ();
     }
diff --git a/opcodes/aarch64-dis.c b/opcodes/aarch64-dis.c
index ed77b4d..ba6befd 100644
--- a/opcodes/aarch64-dis.c
+++ b/opcodes/aarch64-dis.c
@@ -1186,6 +1186,78 @@ aarch64_ext_reg_shifted (const aarch64_operand *self ATTRIBUTE_UNUSED,
   return 1;
 }
 
+/* Decode an SVE address [<base>, #<offset>*<factor>, MUL VL],
+   where <offset> is given by the OFFSET parameter and where <factor> is
+   1 plus SELF's operand-dependent value.  fields[0] specifies the field
+   that holds <base>.  */
+static int
+aarch64_ext_sve_addr_reg_mul_vl (const aarch64_operand *self,
+				 aarch64_opnd_info *info, aarch64_insn code,
+				 int64_t offset)
+{
+  info->addr.base_regno = extract_field (self->fields[0], code, 0);
+  info->addr.offset.imm = offset * (1 + get_operand_specific_data (self));
+  info->addr.offset.is_reg = FALSE;
+  info->addr.writeback = FALSE;
+  info->addr.preind = TRUE;
+  if (offset != 0)
+    info->shifter.kind = AARCH64_MOD_MUL_VL;
+  info->shifter.amount = 1;
+  info->shifter.operator_present = (info->addr.offset.imm != 0);
+  info->shifter.amount_present = FALSE;
+  return 1;
+}
+
+/* Decode an SVE address [<base>, #<simm4>*<factor>, MUL VL],
+   where <simm4> is a 4-bit signed value and where <factor> is 1 plus
+   SELF's operand-dependent value.  fields[0] specifies the field that
+   holds <base>.  <simm4> is encoded in the SVE_imm4 field.  */
+int
+aarch64_ext_sve_addr_ri_s4xvl (const aarch64_operand *self,
+			       aarch64_opnd_info *info, aarch64_insn code,
+			       const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  int offset;
+
+  offset = extract_field (FLD_SVE_imm4, code, 0);
+  offset = ((offset + 8) & 15) - 8;
+  return aarch64_ext_sve_addr_reg_mul_vl (self, info, code, offset);
+}
+
+/* Decode an SVE address [<base>, #<simm6>*<factor>, MUL VL],
+   where <simm6> is a 6-bit signed value and where <factor> is 1 plus
+   SELF's operand-dependent value.  fields[0] specifies the field that
+   holds <base>.  <simm6> is encoded in the SVE_imm6 field.  */
+int
+aarch64_ext_sve_addr_ri_s6xvl (const aarch64_operand *self,
+			       aarch64_opnd_info *info, aarch64_insn code,
+			       const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  int offset;
+
+  offset = extract_field (FLD_SVE_imm6, code, 0);
+  offset = (((offset + 32) & 63) - 32);
+  return aarch64_ext_sve_addr_reg_mul_vl (self, info, code, offset);
+}
+
+/* Decode an SVE address [<base>, #<simm9>*<factor>, MUL VL],
+   where <simm9> is a 9-bit signed value and where <factor> is 1 plus
+   SELF's operand-dependent value.  fields[0] specifies the field that
+   holds <base>.  <simm9> is encoded in the concatenation of the SVE_imm6
+   and imm3 fields, with imm3 being the less-significant part.  */
+int
+aarch64_ext_sve_addr_ri_s9xvl (const aarch64_operand *self,
+			       aarch64_opnd_info *info,
+			       aarch64_insn code,
+			       const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  int offset;
+
+  offset = extract_fields (code, 0, 2, FLD_SVE_imm6, FLD_imm3);
+  offset = (((offset + 256) & 511) - 256);
+  return aarch64_ext_sve_addr_reg_mul_vl (self, info, code, offset);
+}
+
 /* Decode an SVE address [<base>, #<offset> << <shift>], where <offset>
    is given by the OFFSET parameter and where <shift> is SELF's operand-
    dependent value.  fields[0] specifies the base register field <base>.  */
diff --git a/opcodes/aarch64-dis.h b/opcodes/aarch64-dis.h
index 0ce2d89..5619877 100644
--- a/opcodes/aarch64-dis.h
+++ b/opcodes/aarch64-dis.h
@@ -91,6 +91,9 @@ AARCH64_DECL_OPD_EXTRACTOR (ext_hint);
 AARCH64_DECL_OPD_EXTRACTOR (ext_prfop);
 AARCH64_DECL_OPD_EXTRACTOR (ext_reg_extended);
 AARCH64_DECL_OPD_EXTRACTOR (ext_reg_shifted);
+AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_ri_s4xvl);
+AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_ri_s6xvl);
+AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_ri_s9xvl);
 AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_ri_u6);
 AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_rr_lsl);
 AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_rz_xtw);
diff --git a/opcodes/aarch64-opc-2.c b/opcodes/aarch64-opc-2.c
index ed2b70b..a72f577 100644
--- a/opcodes/aarch64-opc-2.c
+++ b/opcodes/aarch64-opc-2.c
@@ -113,6 +113,12 @@ const struct aarch64_operand aarch64_operands[] =
   {AARCH64_OPND_CLASS_SYSTEM, "BARRIER_ISB", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {}, "the ISB option name SY or an optional 4-bit unsigned immediate"},
   {AARCH64_OPND_CLASS_SYSTEM, "PRFOP", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {}, "a prefetch operation specifier"},
   {AARCH64_OPND_CLASS_SYSTEM, "BARRIER_PSB", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {}, "the PSB option name CSYNC"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RI_S4xVL", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn}, "an address with a 4-bit signed offset, multiplied by VL"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RI_S4x2xVL", 1 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn}, "an address with a 4-bit signed offset, multiplied by 2*VL"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RI_S4x3xVL", 2 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn}, "an address with a 4-bit signed offset, multiplied by 3*VL"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RI_S4x4xVL", 3 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn}, "an address with a 4-bit signed offset, multiplied by 4*VL"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RI_S6xVL", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn}, "an address with a 6-bit signed offset, multiplied by VL"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RI_S9xVL", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn}, "an address with a 9-bit signed offset, multiplied by VL"},
   {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RI_U6", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn}, "an address with a 6-bit unsigned offset"},
   {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RI_U6x2", 1 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn}, "an address with a 6-bit unsigned offset, multiplied by 2"},
   {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RI_U6x4", 2 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn}, "an address with a 6-bit unsigned offset, multiplied by 4"},
diff --git a/opcodes/aarch64-opc.c b/opcodes/aarch64-opc.c
index 6617e28..d0959b5 100644
--- a/opcodes/aarch64-opc.c
+++ b/opcodes/aarch64-opc.c
@@ -365,6 +365,7 @@ const struct aarch64_name_value_pair aarch64_operand_modifiers [] =
     {"sxtw", 0x6},
     {"sxtx", 0x7},
     {"mul", 0x0},
+    {"mul vl", 0x0},
     {NULL, 0},
 };
 
@@ -486,10 +487,11 @@ value_in_range_p (int64_t value, int low, int high)
   return (value >= low && value <= high) ? 1 : 0;
 }
 
+/* Return true if VALUE is a multiple of ALIGN.  */
 static inline int
 value_aligned_p (int64_t value, int align)
 {
-  return ((value & (align - 1)) == 0) ? 1 : 0;
+  return (value % align) == 0;
 }
 
 /* A signed value fits in a field.  */
@@ -1666,6 +1668,49 @@ operand_general_constraint_met_p (const aarch64_opnd_info *opnds, int idx,
 	    }
 	  break;
 
+	case AARCH64_OPND_SVE_ADDR_RI_S4xVL:
+	case AARCH64_OPND_SVE_ADDR_RI_S4x2xVL:
+	case AARCH64_OPND_SVE_ADDR_RI_S4x3xVL:
+	case AARCH64_OPND_SVE_ADDR_RI_S4x4xVL:
+	  min_value = -8;
+	  max_value = 7;
+	sve_imm_offset_vl:
+	  assert (!opnd->addr.offset.is_reg);
+	  assert (opnd->addr.preind);
+	  num = 1 + get_operand_specific_data (&aarch64_operands[type]);
+	  min_value *= num;
+	  max_value *= num;
+	  if ((opnd->addr.offset.imm != 0 && !opnd->shifter.operator_present)
+	      || (opnd->shifter.operator_present
+		  && opnd->shifter.kind != AARCH64_MOD_MUL_VL))
+	    {
+	      set_other_error (mismatch_detail, idx,
+			       _("invalid addressing mode"));
+	      return 0;
+	    }
+	  if (!value_in_range_p (opnd->addr.offset.imm, min_value, max_value))
+	    {
+	      set_offset_out_of_range_error (mismatch_detail, idx,
+					     min_value, max_value);
+	      return 0;
+	    }
+	  if (!value_aligned_p (opnd->addr.offset.imm, num))
+	    {
+	      set_unaligned_error (mismatch_detail, idx, num);
+	      return 0;
+	    }
+	  break;
+
+	case AARCH64_OPND_SVE_ADDR_RI_S6xVL:
+	  min_value = -32;
+	  max_value = 31;
+	  goto sve_imm_offset_vl;
+
+	case AARCH64_OPND_SVE_ADDR_RI_S9xVL:
+	  min_value = -256;
+	  max_value = 255;
+	  goto sve_imm_offset_vl;
+
 	case AARCH64_OPND_SVE_ADDR_RI_U6:
 	case AARCH64_OPND_SVE_ADDR_RI_U6x2:
 	case AARCH64_OPND_SVE_ADDR_RI_U6x4:
@@ -2645,7 +2690,13 @@ print_immediate_offset_address (char *buf, size_t size,
     }
   else
     {
-      if (opnd->addr.offset.imm)
+      if (opnd->shifter.operator_present)
+	{
+	  assert (opnd->shifter.kind == AARCH64_MOD_MUL_VL);
+	  snprintf (buf, size, "[%s,#%d,mul vl]",
+		    base, opnd->addr.offset.imm);
+	}
+      else if (opnd->addr.offset.imm)
 	snprintf (buf, size, "[%s,#%d]", base, opnd->addr.offset.imm);
       else
 	snprintf (buf, size, "[%s]", base);
@@ -3114,6 +3165,12 @@ aarch64_print_operand (char *buf, size_t size, bfd_vma pc,
     case AARCH64_OPND_ADDR_SIMM7:
     case AARCH64_OPND_ADDR_SIMM9:
     case AARCH64_OPND_ADDR_SIMM9_2:
+    case AARCH64_OPND_SVE_ADDR_RI_S4xVL:
+    case AARCH64_OPND_SVE_ADDR_RI_S4x2xVL:
+    case AARCH64_OPND_SVE_ADDR_RI_S4x3xVL:
+    case AARCH64_OPND_SVE_ADDR_RI_S4x4xVL:
+    case AARCH64_OPND_SVE_ADDR_RI_S6xVL:
+    case AARCH64_OPND_SVE_ADDR_RI_S9xVL:
     case AARCH64_OPND_SVE_ADDR_RI_U6:
     case AARCH64_OPND_SVE_ADDR_RI_U6x2:
     case AARCH64_OPND_SVE_ADDR_RI_U6x4:
diff --git a/opcodes/aarch64-tbl.h b/opcodes/aarch64-tbl.h
index ef32e19..ac7ccf0 100644
--- a/opcodes/aarch64-tbl.h
+++ b/opcodes/aarch64-tbl.h
@@ -2820,6 +2820,24 @@ struct aarch64_opcode aarch64_opcode_table[] =
       "a prefetch operation specifier")					\
     Y (SYSTEM, hint, "BARRIER_PSB", 0, F (),				\
       "the PSB option name CSYNC")					\
+    Y(ADDRESS, sve_addr_ri_s4xvl, "SVE_ADDR_RI_S4xVL",			\
+      0 << OPD_F_OD_LSB, F(FLD_Rn),					\
+      "an address with a 4-bit signed offset, multiplied by VL")	\
+    Y(ADDRESS, sve_addr_ri_s4xvl, "SVE_ADDR_RI_S4x2xVL",		\
+      1 << OPD_F_OD_LSB, F(FLD_Rn),					\
+      "an address with a 4-bit signed offset, multiplied by 2*VL")	\
+    Y(ADDRESS, sve_addr_ri_s4xvl, "SVE_ADDR_RI_S4x3xVL",		\
+      2 << OPD_F_OD_LSB, F(FLD_Rn),					\
+      "an address with a 4-bit signed offset, multiplied by 3*VL")	\
+    Y(ADDRESS, sve_addr_ri_s4xvl, "SVE_ADDR_RI_S4x4xVL",		\
+      3 << OPD_F_OD_LSB, F(FLD_Rn),					\
+      "an address with a 4-bit signed offset, multiplied by 4*VL")	\
+    Y(ADDRESS, sve_addr_ri_s6xvl, "SVE_ADDR_RI_S6xVL",			\
+      0 << OPD_F_OD_LSB, F(FLD_Rn),					\
+      "an address with a 6-bit signed offset, multiplied by VL")	\
+    Y(ADDRESS, sve_addr_ri_s9xvl, "SVE_ADDR_RI_S9xVL",			\
+      0 << OPD_F_OD_LSB, F(FLD_Rn),					\
+      "an address with a 9-bit signed offset, multiplied by VL")	\
     Y(ADDRESS, sve_addr_ri_u6, "SVE_ADDR_RI_U6", 0 << OPD_F_OD_LSB,	\
       F(FLD_Rn), "an address with a 6-bit unsigned offset")		\
     Y(ADDRESS, sve_addr_ri_u6, "SVE_ADDR_RI_U6x2", 1 << OPD_F_OD_LSB,	\

^ permalink raw reply	[flat|nested] 76+ messages in thread

* [AArch64][SVE 27/32] Add SVE integer immediate operands
  2016-08-23  9:05 [AArch64][SVE 00/32] Add support for the ARMv8-A Scalable Vector Extension Richard Sandiford
                   ` (25 preceding siblings ...)
  2016-08-23  9:23 ` [AArch64][SVE 26/32] Add SVE MUL VL addressing modes Richard Sandiford
@ 2016-08-23  9:24 ` Richard Sandiford
  2016-08-25 14:51   ` Richard Earnshaw (lists)
  2016-08-23  9:25 ` [AArch64][SVE 29/32] Add new SVE core & FP register operands Richard Sandiford
                   ` (5 subsequent siblings)
  32 siblings, 1 reply; 76+ messages in thread
From: Richard Sandiford @ 2016-08-23  9:24 UTC (permalink / raw)
  To: binutils

This patch adds the new SVE integer immediate operands.  There are
three kinds:

- simple signed and unsigned ranges, but with new widths and positions.

- 13-bit logical immediates.  These have the same form as in base AArch64,
  but at a different bit position.

  In the case of the "MOV Zn.<T>, #<limm>" alias of DUPM, the logical
  immediate <limm> is not allowed to be a valid DUP immediate, since DUP
  is preferred over DUPM for constants that both instructions can handle.

- a new 9-bit arithmetic immediate, of the form "<imm8>{, LSL #8}".
  In some contexts the operand is signed and in others it's unsigned.
  As an extension, we allow shifted immediates to be written as a single
  integer, e.g. "#256" is equivalent to "#1, LSL #8".  We also use the
  shiftless form as the preferred disassembly, except for the special
  case of "#0, LSL #8" (a redundant encoding of 0).
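
As a rough illustration of the last point, here is a standalone sketch of
how a single integer can be split back into the <imm8>{, LSL #8} form.
It only covers the unsigned case, and split_sve_aimm is a made-up name
for the example rather than anything taken from the patch:

   #include <stdio.h>
   #include <stdint.h>

   /* Split VALUE into an 8-bit immediate and an LSL amount of 0 or 8.
      Return nonzero on success, zero if VALUE is not representable.  */
   static int
   split_sve_aimm (uint64_t value, unsigned int *imm8, unsigned int *shift)
   {
     if (value <= 0xff)
       {
         *imm8 = (unsigned int) value;
         *shift = 0;
         return 1;
       }
     if (value <= 0xff00 && (value & 0xff) == 0)
       {
         *imm8 = (unsigned int) (value >> 8);
         *shift = 8;
         return 1;
       }
     return 0;
   }

   int
   main (void)
   {
     unsigned int imm8, shift;

     /* "#256" is accepted and treated as "#1, LSL #8".  */
     if (split_sve_aimm (256, &imm8, &shift))
       printf ("#%u, LSL #%u\n", imm8, shift);
     return 0;
   }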

OK to install?

Thanks,
Richard


include/opcode/
	* aarch64.h (AARCH64_OPND_SIMM5): New aarch64_opnd.
	(AARCH64_OPND_SVE_AIMM, AARCH64_OPND_SVE_ASIMM)
	(AARCH64_OPND_SVE_INV_LIMM, AARCH64_OPND_SVE_LIMM)
	(AARCH64_OPND_SVE_LIMM_MOV, AARCH64_OPND_SVE_SHLIMM_PRED)
	(AARCH64_OPND_SVE_SHLIMM_UNPRED, AARCH64_OPND_SVE_SHRIMM_PRED)
	(AARCH64_OPND_SVE_SHRIMM_UNPRED, AARCH64_OPND_SVE_SIMM5)
	(AARCH64_OPND_SVE_SIMM5B, AARCH64_OPND_SVE_SIMM6)
	(AARCH64_OPND_SVE_SIMM8, AARCH64_OPND_SVE_UIMM3)
	(AARCH64_OPND_SVE_UIMM7, AARCH64_OPND_SVE_UIMM8)
	(AARCH64_OPND_SVE_UIMM8_53): Likewise.
	(aarch64_sve_dupm_mov_immediate_p): Declare.

opcodes/
	* aarch64-tbl.h (AARCH64_OPERANDS): Add entries for the new SVE
	integer immediate operands.
	* aarch64-opc.h (FLD_SVE_immN, FLD_SVE_imm3, FLD_SVE_imm5)
	(FLD_SVE_imm5b, FLD_SVE_imm7, FLD_SVE_imm8, FLD_SVE_imm9)
	(FLD_SVE_immr, FLD_SVE_imms, FLD_SVE_tszh): New aarch64_field_kinds.
	* aarch64-opc.c (fields): Add corresponding entries.
	(operand_general_constraint_met_p): Handle the new SVE integer
	immediate operands.
	(aarch64_print_operand): Likewise.
	(aarch64_sve_dupm_mov_immediate_p): New function.
	* aarch64-opc-2.c: Regenerate.
	* aarch64-asm.h (ins_inv_limm, ins_sve_aimm, ins_sve_asimm)
	(ins_sve_limm_mov, ins_sve_shlimm, ins_sve_shrimm): New inserters.
	* aarch64-asm.c (aarch64_ins_limm_1): New function, split out from...
	(aarch64_ins_limm): ...here.
	(aarch64_ins_inv_limm): New function.
	(aarch64_ins_sve_aimm): Likewise.
	(aarch64_ins_sve_asimm): Likewise.
	(aarch64_ins_sve_limm_mov): Likewise.
	(aarch64_ins_sve_shlimm): Likewise.
	(aarch64_ins_sve_shrimm): Likewise.
	* aarch64-asm-2.c: Regenerate.
	* aarch64-dis.h (ext_inv_limm, ext_sve_aimm, ext_sve_asimm)
	(ext_sve_limm_mov, ext_sve_shlimm, ext_sve_shrimm): New extractors.
	* aarch64-dis.c (decode_limm): New function, split out from...
	(aarch64_ext_limm): ...here.
	(aarch64_ext_inv_limm): New function.
	(decode_sve_aimm): Likewise.
	(aarch64_ext_sve_aimm): Likewise.
	(aarch64_ext_sve_asimm): Likewise.
	(aarch64_ext_sve_limm_mov): Likewise.
	(aarch64_top_bit): Likewise.
	(aarch64_ext_sve_shlimm): Likewise.
	(aarch64_ext_sve_shrimm): Likewise.
	* aarch64-dis-2.c: Regenerate.

gas/
	* config/tc-aarch64.c (parse_operands): Handle the new SVE integer
	immediate operands.

diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
index 37fce5b..cb39cf8 100644
--- a/gas/config/tc-aarch64.c
+++ b/gas/config/tc-aarch64.c
@@ -5553,6 +5553,7 @@ parse_operands (char *str, const aarch64_opcode *opcode)
 	  break;
 
 	case AARCH64_OPND_CCMP_IMM:
+	case AARCH64_OPND_SIMM5:
 	case AARCH64_OPND_FBITS:
 	case AARCH64_OPND_UIMM4:
 	case AARCH64_OPND_UIMM3_OP1:
@@ -5560,10 +5561,36 @@ parse_operands (char *str, const aarch64_opcode *opcode)
 	case AARCH64_OPND_IMM_VLSL:
 	case AARCH64_OPND_IMM:
 	case AARCH64_OPND_WIDTH:
+	case AARCH64_OPND_SVE_INV_LIMM:
+	case AARCH64_OPND_SVE_LIMM:
+	case AARCH64_OPND_SVE_LIMM_MOV:
+	case AARCH64_OPND_SVE_SHLIMM_PRED:
+	case AARCH64_OPND_SVE_SHLIMM_UNPRED:
+	case AARCH64_OPND_SVE_SHRIMM_PRED:
+	case AARCH64_OPND_SVE_SHRIMM_UNPRED:
+	case AARCH64_OPND_SVE_SIMM5:
+	case AARCH64_OPND_SVE_SIMM5B:
+	case AARCH64_OPND_SVE_SIMM6:
+	case AARCH64_OPND_SVE_SIMM8:
+	case AARCH64_OPND_SVE_UIMM3:
+	case AARCH64_OPND_SVE_UIMM7:
+	case AARCH64_OPND_SVE_UIMM8:
+	case AARCH64_OPND_SVE_UIMM8_53:
 	  po_imm_nc_or_fail ();
 	  info->imm.value = val;
 	  break;
 
+	case AARCH64_OPND_SVE_AIMM:
+	case AARCH64_OPND_SVE_ASIMM:
+	  po_imm_nc_or_fail ();
+	  info->imm.value = val;
+	  skip_whitespace (str);
+	  if (skip_past_comma (&str))
+	    po_misc_or_fail (parse_shift (&str, info, SHIFTED_LSL));
+	  else
+	    inst.base.operands[i].shifter.kind = AARCH64_MOD_LSL;
+	  break;
+
 	case AARCH64_OPND_SVE_PATTERN:
 	  po_enum_or_fail (aarch64_sve_pattern_array);
 	  info->imm.value = val;
diff --git a/include/opcode/aarch64.h b/include/opcode/aarch64.h
index 837d6bd..36e95b4 100644
--- a/include/opcode/aarch64.h
+++ b/include/opcode/aarch64.h
@@ -200,6 +200,7 @@ enum aarch64_opnd
   AARCH64_OPND_BIT_NUM,	/* Immediate.  */
   AARCH64_OPND_EXCEPTION,/* imm16 operand in exception instructions.  */
   AARCH64_OPND_CCMP_IMM,/* Immediate in conditional compare instructions.  */
+  AARCH64_OPND_SIMM5,	/* 5-bit signed immediate in the imm5 field.  */
   AARCH64_OPND_NZCV,	/* Flag bit specifier giving an alternative value for
 			   each condition flag.  */
 
@@ -289,6 +290,11 @@ enum aarch64_opnd
   AARCH64_OPND_SVE_ADDR_ZZ_LSL,     /* SVE [Zn.<T>, Zm.<T>, LSL #<msz>].  */
   AARCH64_OPND_SVE_ADDR_ZZ_SXTW,    /* SVE [Zn.<T>, Zm.<T>, SXTW #<msz>].  */
   AARCH64_OPND_SVE_ADDR_ZZ_UXTW,    /* SVE [Zn.<T>, Zm.<T>, UXTW #<msz>].  */
+  AARCH64_OPND_SVE_AIMM,	/* SVE unsigned arithmetic immediate.  */
+  AARCH64_OPND_SVE_ASIMM,	/* SVE signed arithmetic immediate.  */
+  AARCH64_OPND_SVE_INV_LIMM,	/* SVE inverted logical immediate.  */
+  AARCH64_OPND_SVE_LIMM,	/* SVE logical immediate.  */
+  AARCH64_OPND_SVE_LIMM_MOV,	/* SVE logical immediate for MOV.  */
   AARCH64_OPND_SVE_PATTERN,	/* SVE vector pattern enumeration.  */
   AARCH64_OPND_SVE_PATTERN_SCALED, /* Likewise, with additional MUL factor.  */
   AARCH64_OPND_SVE_PRFOP,	/* SVE prefetch operation.  */
@@ -300,6 +306,18 @@ enum aarch64_opnd
   AARCH64_OPND_SVE_Pm,		/* SVE p0-p15 in Pm.  */
   AARCH64_OPND_SVE_Pn,		/* SVE p0-p15 in Pn.  */
   AARCH64_OPND_SVE_Pt,		/* SVE p0-p15 in Pt.  */
+  AARCH64_OPND_SVE_SHLIMM_PRED,	  /* SVE shift left amount (predicated).  */
+  AARCH64_OPND_SVE_SHLIMM_UNPRED, /* SVE shift left amount (unpredicated).  */
+  AARCH64_OPND_SVE_SHRIMM_PRED,	  /* SVE shift right amount (predicated).  */
+  AARCH64_OPND_SVE_SHRIMM_UNPRED, /* SVE shift right amount (unpredicated).  */
+  AARCH64_OPND_SVE_SIMM5,	/* SVE signed 5-bit immediate.  */
+  AARCH64_OPND_SVE_SIMM5B,	/* SVE secondary signed 5-bit immediate.  */
+  AARCH64_OPND_SVE_SIMM6,	/* SVE signed 6-bit immediate.  */
+  AARCH64_OPND_SVE_SIMM8,	/* SVE signed 8-bit immediate.  */
+  AARCH64_OPND_SVE_UIMM3,	/* SVE unsigned 3-bit immediate.  */
+  AARCH64_OPND_SVE_UIMM7,	/* SVE unsigned 7-bit immediate.  */
+  AARCH64_OPND_SVE_UIMM8,	/* SVE unsigned 8-bit immediate.  */
+  AARCH64_OPND_SVE_UIMM8_53,	/* SVE split unsigned 8-bit immediate.  */
   AARCH64_OPND_SVE_Za_5,	/* SVE vector register in Za, bits [9,5].  */
   AARCH64_OPND_SVE_Za_16,	/* SVE vector register in Za, bits [20,16].  */
   AARCH64_OPND_SVE_Zd,		/* SVE vector register in Zd.  */
@@ -1065,6 +1083,9 @@ aarch64_get_operand_name (enum aarch64_opnd);
 extern const char *
 aarch64_get_operand_desc (enum aarch64_opnd);
 
+extern bfd_boolean
+aarch64_sve_dupm_mov_immediate_p (uint64_t, int);
+
 #ifdef DEBUG_AARCH64
 extern int debug_dump;
 
diff --git a/opcodes/aarch64-asm-2.c b/opcodes/aarch64-asm-2.c
index da590ca..491ea53 100644
--- a/opcodes/aarch64-asm-2.c
+++ b/opcodes/aarch64-asm-2.c
@@ -480,12 +480,6 @@ aarch64_insert_operand (const aarch64_operand *self,
     case 27:
     case 35:
     case 36:
-    case 129:
-    case 130:
-    case 131:
-    case 132:
-    case 133:
-    case 134:
     case 135:
     case 136:
     case 137:
@@ -494,7 +488,13 @@ aarch64_insert_operand (const aarch64_operand *self,
     case 140:
     case 141:
     case 142:
-    case 145:
+    case 155:
+    case 156:
+    case 157:
+    case 158:
+    case 159:
+    case 160:
+    case 163:
       return aarch64_ins_regno (self, info, code, inst);
     case 12:
       return aarch64_ins_reg_extended (self, info, code, inst);
@@ -527,12 +527,21 @@ aarch64_insert_operand (const aarch64_operand *self,
     case 56:
     case 57:
     case 58:
-    case 67:
+    case 59:
     case 68:
     case 69:
     case 70:
-    case 126:
-    case 128:
+    case 71:
+    case 132:
+    case 134:
+    case 147:
+    case 148:
+    case 149:
+    case 150:
+    case 151:
+    case 152:
+    case 153:
+    case 154:
       return aarch64_ins_imm (self, info, code, inst);
     case 38:
     case 39:
@@ -543,61 +552,61 @@ aarch64_insert_operand (const aarch64_operand *self,
       return aarch64_ins_advsimd_imm_modified (self, info, code, inst);
     case 46:
       return aarch64_ins_fpimm (self, info, code, inst);
-    case 59:
-      return aarch64_ins_limm (self, info, code, inst);
     case 60:
-      return aarch64_ins_aimm (self, info, code, inst);
+    case 130:
+      return aarch64_ins_limm (self, info, code, inst);
     case 61:
-      return aarch64_ins_imm_half (self, info, code, inst);
+      return aarch64_ins_aimm (self, info, code, inst);
     case 62:
+      return aarch64_ins_imm_half (self, info, code, inst);
+    case 63:
       return aarch64_ins_fbits (self, info, code, inst);
-    case 64:
     case 65:
+    case 66:
       return aarch64_ins_cond (self, info, code, inst);
-    case 71:
-    case 77:
-      return aarch64_ins_addr_simple (self, info, code, inst);
     case 72:
-      return aarch64_ins_addr_regoff (self, info, code, inst);
+    case 78:
+      return aarch64_ins_addr_simple (self, info, code, inst);
     case 73:
+      return aarch64_ins_addr_regoff (self, info, code, inst);
     case 74:
     case 75:
-      return aarch64_ins_addr_simm (self, info, code, inst);
     case 76:
+      return aarch64_ins_addr_simm (self, info, code, inst);
+    case 77:
       return aarch64_ins_addr_uimm12 (self, info, code, inst);
-    case 78:
-      return aarch64_ins_simd_addr_post (self, info, code, inst);
     case 79:
-      return aarch64_ins_sysreg (self, info, code, inst);
+      return aarch64_ins_simd_addr_post (self, info, code, inst);
     case 80:
-      return aarch64_ins_pstatefield (self, info, code, inst);
+      return aarch64_ins_sysreg (self, info, code, inst);
     case 81:
+      return aarch64_ins_pstatefield (self, info, code, inst);
     case 82:
     case 83:
     case 84:
-      return aarch64_ins_sysins_op (self, info, code, inst);
     case 85:
+      return aarch64_ins_sysins_op (self, info, code, inst);
     case 86:
-      return aarch64_ins_barrier (self, info, code, inst);
     case 87:
-      return aarch64_ins_prfop (self, info, code, inst);
+      return aarch64_ins_barrier (self, info, code, inst);
     case 88:
-      return aarch64_ins_hint (self, info, code, inst);
+      return aarch64_ins_prfop (self, info, code, inst);
     case 89:
+      return aarch64_ins_hint (self, info, code, inst);
     case 90:
     case 91:
     case 92:
-      return aarch64_ins_sve_addr_ri_s4xvl (self, info, code, inst);
     case 93:
-      return aarch64_ins_sve_addr_ri_s6xvl (self, info, code, inst);
+      return aarch64_ins_sve_addr_ri_s4xvl (self, info, code, inst);
     case 94:
-      return aarch64_ins_sve_addr_ri_s9xvl (self, info, code, inst);
+      return aarch64_ins_sve_addr_ri_s6xvl (self, info, code, inst);
     case 95:
+      return aarch64_ins_sve_addr_ri_s9xvl (self, info, code, inst);
     case 96:
     case 97:
     case 98:
-      return aarch64_ins_sve_addr_ri_u6 (self, info, code, inst);
     case 99:
+      return aarch64_ins_sve_addr_ri_u6 (self, info, code, inst);
     case 100:
     case 101:
     case 102:
@@ -609,8 +618,8 @@ aarch64_insert_operand (const aarch64_operand *self,
     case 108:
     case 109:
     case 110:
-      return aarch64_ins_sve_addr_rr_lsl (self, info, code, inst);
     case 111:
+      return aarch64_ins_sve_addr_rr_lsl (self, info, code, inst);
     case 112:
     case 113:
     case 114:
@@ -618,24 +627,39 @@ aarch64_insert_operand (const aarch64_operand *self,
     case 116:
     case 117:
     case 118:
-      return aarch64_ins_sve_addr_rz_xtw (self, info, code, inst);
     case 119:
+      return aarch64_ins_sve_addr_rz_xtw (self, info, code, inst);
     case 120:
     case 121:
     case 122:
-      return aarch64_ins_sve_addr_zi_u5 (self, info, code, inst);
     case 123:
-      return aarch64_ins_sve_addr_zz_lsl (self, info, code, inst);
+      return aarch64_ins_sve_addr_zi_u5 (self, info, code, inst);
     case 124:
-      return aarch64_ins_sve_addr_zz_sxtw (self, info, code, inst);
+      return aarch64_ins_sve_addr_zz_lsl (self, info, code, inst);
     case 125:
+      return aarch64_ins_sve_addr_zz_sxtw (self, info, code, inst);
+    case 126:
       return aarch64_ins_sve_addr_zz_uxtw (self, info, code, inst);
     case 127:
+      return aarch64_ins_sve_aimm (self, info, code, inst);
+    case 128:
+      return aarch64_ins_sve_asimm (self, info, code, inst);
+    case 129:
+      return aarch64_ins_inv_limm (self, info, code, inst);
+    case 131:
+      return aarch64_ins_sve_limm_mov (self, info, code, inst);
+    case 133:
       return aarch64_ins_sve_scale (self, info, code, inst);
     case 143:
-      return aarch64_ins_sve_index (self, info, code, inst);
     case 144:
+      return aarch64_ins_sve_shlimm (self, info, code, inst);
+    case 145:
     case 146:
+      return aarch64_ins_sve_shrimm (self, info, code, inst);
+    case 161:
+      return aarch64_ins_sve_index (self, info, code, inst);
+    case 162:
+    case 164:
       return aarch64_ins_sve_reglist (self, info, code, inst);
     default: assert (0); abort ();
     }
diff --git a/opcodes/aarch64-asm.c b/opcodes/aarch64-asm.c
index 944a9eb..61d0d95 100644
--- a/opcodes/aarch64-asm.c
+++ b/opcodes/aarch64-asm.c
@@ -452,17 +452,18 @@ aarch64_ins_aimm (const aarch64_operand *self, const aarch64_opnd_info *info,
   return NULL;
 }
 
-/* Insert logical/bitmask immediate for e.g. the last operand in
-     ORR <Wd|WSP>, <Wn>, #<imm>.  */
-const char *
-aarch64_ins_limm (const aarch64_operand *self, const aarch64_opnd_info *info,
-		  aarch64_insn *code, const aarch64_inst *inst ATTRIBUTE_UNUSED)
+/* Common routine shared by aarch64_ins{,_inv}_limm.  INVERT_P says whether
+   the operand should be inverted before encoding.  */
+static const char *
+aarch64_ins_limm_1 (const aarch64_operand *self,
+		    const aarch64_opnd_info *info, aarch64_insn *code,
+		    const aarch64_inst *inst, bfd_boolean invert_p)
 {
   aarch64_insn value;
   uint64_t imm = info->imm.value;
   int esize = aarch64_get_qualifier_esize (inst->operands[0].qualifier);
 
-  if (inst->opcode->op == OP_BIC)
+  if (invert_p)
     imm = ~imm;
   if (aarch64_logical_immediate_p (imm, esize, &value) == FALSE)
     /* The constraint check should have guaranteed this wouldn't happen.  */
@@ -473,6 +474,25 @@ aarch64_ins_limm (const aarch64_operand *self, const aarch64_opnd_info *info,
   return NULL;
 }
 
+/* Insert logical/bitmask immediate for e.g. the last operand in
+     ORR <Wd|WSP>, <Wn>, #<imm>.  */
+const char *
+aarch64_ins_limm (const aarch64_operand *self, const aarch64_opnd_info *info,
+		  aarch64_insn *code, const aarch64_inst *inst)
+{
+  return aarch64_ins_limm_1 (self, info, code, inst,
+			     inst->opcode->op == OP_BIC);
+}
+
+/* Insert a logical/bitmask immediate for the BIC alias of AND (etc.).  */
+const char *
+aarch64_ins_inv_limm (const aarch64_operand *self,
+		      const aarch64_opnd_info *info, aarch64_insn *code,
+		      const aarch64_inst *inst)
+{
+  return aarch64_ins_limm_1 (self, info, code, inst, TRUE);
+}
+
 /* Encode Ft for e.g. STR <Qt>, [<Xn|SP>, <R><m>{, <extend> {<amount>}}]
    or LDP <Qt1>, <Qt2>, [<Xn|SP>], #<imm>.  */
 const char *
@@ -903,6 +923,30 @@ aarch64_ins_sve_addr_zz_uxtw (const aarch64_operand *self,
   return aarch64_ext_sve_addr_zz (self, info, code);
 }
 
+/* Encode an SVE ADD/SUB immediate.  */
+const char *
+aarch64_ins_sve_aimm (const aarch64_operand *self,
+		      const aarch64_opnd_info *info, aarch64_insn *code,
+		      const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  if (info->shifter.amount == 8)
+    insert_all_fields (self, code, (info->imm.value & 0xff) | 256);
+  else if (info->imm.value != 0 && (info->imm.value & 0xff) == 0)
+    insert_all_fields (self, code, ((info->imm.value / 256) & 0xff) | 256);
+  else
+    insert_all_fields (self, code, info->imm.value & 0xff);
+  return NULL;
+}
+
+/* Encode an SVE CPY/DUP immediate.  */
+const char *
+aarch64_ins_sve_asimm (const aarch64_operand *self,
+		       const aarch64_opnd_info *info, aarch64_insn *code,
+		       const aarch64_inst *inst)
+{
+  return aarch64_ins_sve_aimm (self, info, code, inst);
+}
+
 /* Encode Zn[MM], where MM has a 7-bit triangular encoding.  The fields
    array specifies which field to use for Zn.  MM is encoded in the
    concatenation of imm5 and SVE_tszh, with imm5 being the less
@@ -919,6 +963,15 @@ aarch64_ins_sve_index (const aarch64_operand *self,
   return NULL;
 }
 
+/* Encode a logical/bitmask immediate for the MOV alias of SVE DUPM.  */
+const char *
+aarch64_ins_sve_limm_mov (const aarch64_operand *self,
+			  const aarch64_opnd_info *info, aarch64_insn *code,
+			  const aarch64_inst *inst)
+{
+  return aarch64_ins_limm (self, info, code, inst);
+}
+
 /* Encode {Zn.<T> - Zm.<T>}.  The fields array specifies which field
    to use for Zn.  */
 const char *
@@ -943,6 +996,38 @@ aarch64_ins_sve_scale (const aarch64_operand *self,
   return NULL;
 }
 
+/* Encode an SVE shift left immediate.  */
+const char *
+aarch64_ins_sve_shlimm (const aarch64_operand *self,
+			const aarch64_opnd_info *info, aarch64_insn *code,
+			const aarch64_inst *inst)
+{
+  const aarch64_opnd_info *prev_operand;
+  unsigned int esize;
+
+  assert (info->idx > 0);
+  prev_operand = &inst->operands[info->idx - 1];
+  esize = aarch64_get_qualifier_esize (prev_operand->qualifier);
+  insert_all_fields (self, code, 8 * esize + info->imm.value);
+  return NULL;
+}
+
+/* Encode an SVE shift right immediate.  */
+const char *
+aarch64_ins_sve_shrimm (const aarch64_operand *self,
+			const aarch64_opnd_info *info, aarch64_insn *code,
+			const aarch64_inst *inst)
+{
+  const aarch64_opnd_info *prev_operand;
+  unsigned int esize;
+
+  assert (info->idx > 0);
+  prev_operand = &inst->operands[info->idx - 1];
+  esize = aarch64_get_qualifier_esize (prev_operand->qualifier);
+  insert_all_fields (self, code, 16 * esize - info->imm.value);
+  return NULL;
+}
+
 /* Miscellaneous encoding functions.  */
 
 /* Encode size[0], i.e. bit 22, for
diff --git a/opcodes/aarch64-asm.h b/opcodes/aarch64-asm.h
index 5e13de0..bbd320e 100644
--- a/opcodes/aarch64-asm.h
+++ b/opcodes/aarch64-asm.h
@@ -54,6 +54,7 @@ AARCH64_DECL_OPD_INSERTER (ins_fpimm);
 AARCH64_DECL_OPD_INSERTER (ins_fbits);
 AARCH64_DECL_OPD_INSERTER (ins_aimm);
 AARCH64_DECL_OPD_INSERTER (ins_limm);
+AARCH64_DECL_OPD_INSERTER (ins_inv_limm);
 AARCH64_DECL_OPD_INSERTER (ins_ft);
 AARCH64_DECL_OPD_INSERTER (ins_addr_simple);
 AARCH64_DECL_OPD_INSERTER (ins_addr_regoff);
@@ -79,9 +80,14 @@ AARCH64_DECL_OPD_INSERTER (ins_sve_addr_zi_u5);
 AARCH64_DECL_OPD_INSERTER (ins_sve_addr_zz_lsl);
 AARCH64_DECL_OPD_INSERTER (ins_sve_addr_zz_sxtw);
 AARCH64_DECL_OPD_INSERTER (ins_sve_addr_zz_uxtw);
+AARCH64_DECL_OPD_INSERTER (ins_sve_aimm);
+AARCH64_DECL_OPD_INSERTER (ins_sve_asimm);
 AARCH64_DECL_OPD_INSERTER (ins_sve_index);
+AARCH64_DECL_OPD_INSERTER (ins_sve_limm_mov);
 AARCH64_DECL_OPD_INSERTER (ins_sve_reglist);
 AARCH64_DECL_OPD_INSERTER (ins_sve_scale);
+AARCH64_DECL_OPD_INSERTER (ins_sve_shlimm);
+AARCH64_DECL_OPD_INSERTER (ins_sve_shrimm);
 
 #undef AARCH64_DECL_OPD_INSERTER
 
diff --git a/opcodes/aarch64-dis-2.c b/opcodes/aarch64-dis-2.c
index 48d6ce7..4527456 100644
--- a/opcodes/aarch64-dis-2.c
+++ b/opcodes/aarch64-dis-2.c
@@ -10426,12 +10426,6 @@ aarch64_extract_operand (const aarch64_operand *self,
     case 27:
     case 35:
     case 36:
-    case 129:
-    case 130:
-    case 131:
-    case 132:
-    case 133:
-    case 134:
     case 135:
     case 136:
     case 137:
@@ -10440,7 +10434,13 @@ aarch64_extract_operand (const aarch64_operand *self,
     case 140:
     case 141:
     case 142:
-    case 145:
+    case 155:
+    case 156:
+    case 157:
+    case 158:
+    case 159:
+    case 160:
+    case 163:
       return aarch64_ext_regno (self, info, code, inst);
     case 8:
       return aarch64_ext_regrt_sysins (self, info, code, inst);
@@ -10477,13 +10477,22 @@ aarch64_extract_operand (const aarch64_operand *self,
     case 56:
     case 57:
     case 58:
-    case 66:
+    case 59:
     case 67:
     case 68:
     case 69:
     case 70:
-    case 126:
-    case 128:
+    case 71:
+    case 132:
+    case 134:
+    case 147:
+    case 148:
+    case 149:
+    case 150:
+    case 151:
+    case 152:
+    case 153:
+    case 154:
       return aarch64_ext_imm (self, info, code, inst);
     case 38:
     case 39:
@@ -10496,61 +10505,61 @@ aarch64_extract_operand (const aarch64_operand *self,
       return aarch64_ext_shll_imm (self, info, code, inst);
     case 46:
       return aarch64_ext_fpimm (self, info, code, inst);
-    case 59:
-      return aarch64_ext_limm (self, info, code, inst);
     case 60:
-      return aarch64_ext_aimm (self, info, code, inst);
+    case 130:
+      return aarch64_ext_limm (self, info, code, inst);
     case 61:
-      return aarch64_ext_imm_half (self, info, code, inst);
+      return aarch64_ext_aimm (self, info, code, inst);
     case 62:
+      return aarch64_ext_imm_half (self, info, code, inst);
+    case 63:
       return aarch64_ext_fbits (self, info, code, inst);
-    case 64:
     case 65:
+    case 66:
       return aarch64_ext_cond (self, info, code, inst);
-    case 71:
-    case 77:
-      return aarch64_ext_addr_simple (self, info, code, inst);
     case 72:
-      return aarch64_ext_addr_regoff (self, info, code, inst);
+    case 78:
+      return aarch64_ext_addr_simple (self, info, code, inst);
     case 73:
+      return aarch64_ext_addr_regoff (self, info, code, inst);
     case 74:
     case 75:
-      return aarch64_ext_addr_simm (self, info, code, inst);
     case 76:
+      return aarch64_ext_addr_simm (self, info, code, inst);
+    case 77:
       return aarch64_ext_addr_uimm12 (self, info, code, inst);
-    case 78:
-      return aarch64_ext_simd_addr_post (self, info, code, inst);
     case 79:
-      return aarch64_ext_sysreg (self, info, code, inst);
+      return aarch64_ext_simd_addr_post (self, info, code, inst);
     case 80:
-      return aarch64_ext_pstatefield (self, info, code, inst);
+      return aarch64_ext_sysreg (self, info, code, inst);
     case 81:
+      return aarch64_ext_pstatefield (self, info, code, inst);
     case 82:
     case 83:
     case 84:
-      return aarch64_ext_sysins_op (self, info, code, inst);
     case 85:
+      return aarch64_ext_sysins_op (self, info, code, inst);
     case 86:
-      return aarch64_ext_barrier (self, info, code, inst);
     case 87:
-      return aarch64_ext_prfop (self, info, code, inst);
+      return aarch64_ext_barrier (self, info, code, inst);
     case 88:
-      return aarch64_ext_hint (self, info, code, inst);
+      return aarch64_ext_prfop (self, info, code, inst);
     case 89:
+      return aarch64_ext_hint (self, info, code, inst);
     case 90:
     case 91:
     case 92:
-      return aarch64_ext_sve_addr_ri_s4xvl (self, info, code, inst);
     case 93:
-      return aarch64_ext_sve_addr_ri_s6xvl (self, info, code, inst);
+      return aarch64_ext_sve_addr_ri_s4xvl (self, info, code, inst);
     case 94:
-      return aarch64_ext_sve_addr_ri_s9xvl (self, info, code, inst);
+      return aarch64_ext_sve_addr_ri_s6xvl (self, info, code, inst);
     case 95:
+      return aarch64_ext_sve_addr_ri_s9xvl (self, info, code, inst);
     case 96:
     case 97:
     case 98:
-      return aarch64_ext_sve_addr_ri_u6 (self, info, code, inst);
     case 99:
+      return aarch64_ext_sve_addr_ri_u6 (self, info, code, inst);
     case 100:
     case 101:
     case 102:
@@ -10562,8 +10571,8 @@ aarch64_extract_operand (const aarch64_operand *self,
     case 108:
     case 109:
     case 110:
-      return aarch64_ext_sve_addr_rr_lsl (self, info, code, inst);
     case 111:
+      return aarch64_ext_sve_addr_rr_lsl (self, info, code, inst);
     case 112:
     case 113:
     case 114:
@@ -10571,24 +10580,39 @@ aarch64_extract_operand (const aarch64_operand *self,
     case 116:
     case 117:
     case 118:
-      return aarch64_ext_sve_addr_rz_xtw (self, info, code, inst);
     case 119:
+      return aarch64_ext_sve_addr_rz_xtw (self, info, code, inst);
     case 120:
     case 121:
     case 122:
-      return aarch64_ext_sve_addr_zi_u5 (self, info, code, inst);
     case 123:
-      return aarch64_ext_sve_addr_zz_lsl (self, info, code, inst);
+      return aarch64_ext_sve_addr_zi_u5 (self, info, code, inst);
     case 124:
-      return aarch64_ext_sve_addr_zz_sxtw (self, info, code, inst);
+      return aarch64_ext_sve_addr_zz_lsl (self, info, code, inst);
     case 125:
+      return aarch64_ext_sve_addr_zz_sxtw (self, info, code, inst);
+    case 126:
       return aarch64_ext_sve_addr_zz_uxtw (self, info, code, inst);
     case 127:
+      return aarch64_ext_sve_aimm (self, info, code, inst);
+    case 128:
+      return aarch64_ext_sve_asimm (self, info, code, inst);
+    case 129:
+      return aarch64_ext_inv_limm (self, info, code, inst);
+    case 131:
+      return aarch64_ext_sve_limm_mov (self, info, code, inst);
+    case 133:
       return aarch64_ext_sve_scale (self, info, code, inst);
     case 143:
-      return aarch64_ext_sve_index (self, info, code, inst);
     case 144:
+      return aarch64_ext_sve_shlimm (self, info, code, inst);
+    case 145:
     case 146:
+      return aarch64_ext_sve_shrimm (self, info, code, inst);
+    case 161:
+      return aarch64_ext_sve_index (self, info, code, inst);
+    case 162:
+    case 164:
       return aarch64_ext_sve_reglist (self, info, code, inst);
     default: assert (0); abort ();
     }
diff --git a/opcodes/aarch64-dis.c b/opcodes/aarch64-dis.c
index ba6befd..ed050cd 100644
--- a/opcodes/aarch64-dis.c
+++ b/opcodes/aarch64-dis.c
@@ -734,32 +734,21 @@ aarch64_ext_aimm (const aarch64_operand *self ATTRIBUTE_UNUSED,
   return 1;
 }
 
-/* Decode logical immediate for e.g. ORR <Wd|WSP>, <Wn>, #<imm>.  */
-
-int
-aarch64_ext_limm (const aarch64_operand *self ATTRIBUTE_UNUSED,
-		  aarch64_opnd_info *info, const aarch64_insn code,
-		  const aarch64_inst *inst ATTRIBUTE_UNUSED)
+/* Return true if VALUE is a valid logical immediate encoding, storing the
+   decoded value in *RESULT if so.  ESIZE is the number of bytes in the
+   decoded immediate.  */
+static int
+decode_limm (uint32_t esize, aarch64_insn value, int64_t *result)
 {
   uint64_t imm, mask;
-  uint32_t sf;
   uint32_t N, R, S;
   unsigned simd_size;
-  aarch64_insn value;
-
-  value = extract_fields (code, 0, 3, FLD_N, FLD_immr, FLD_imms);
-  assert (inst->operands[0].qualifier == AARCH64_OPND_QLF_W
-	  || inst->operands[0].qualifier == AARCH64_OPND_QLF_X);
-  sf = aarch64_get_qualifier_esize (inst->operands[0].qualifier) != 4;
 
   /* value is N:immr:imms.  */
   S = value & 0x3f;
   R = (value >> 6) & 0x3f;
   N = (value >> 12) & 0x1;
 
-  if (sf == 0 && N == 1)
-    return 0;
-
   /* The immediate value is S+1 bits to 1, left rotated by SIMDsize - R
      (in other words, right rotated by R), then replicated.  */
   if (N != 0)
@@ -782,6 +771,10 @@ aarch64_ext_limm (const aarch64_operand *self ATTRIBUTE_UNUSED,
       /* Top bits are IGNORED.  */
       R &= simd_size - 1;
     }
+
+  if (simd_size > esize * 8)
+    return 0;
+
   /* NOTE: if S = simd_size - 1 we get 0xf..f which is rejected.  */
   if (S == simd_size - 1)
     return 0;
@@ -803,8 +796,35 @@ aarch64_ext_limm (const aarch64_operand *self ATTRIBUTE_UNUSED,
     default: assert (0); return 0;
     }
 
-  info->imm.value = sf ? imm : imm & 0xffffffff;
+  *result = imm & ~((uint64_t) -1 << (esize * 4) << (esize * 4));
+
+  return 1;
+}
+
+/* Decode a logical immediate for e.g. ORR <Wd|WSP>, <Wn>, #<imm>.  */
+int
+aarch64_ext_limm (const aarch64_operand *self,
+		  aarch64_opnd_info *info, const aarch64_insn code,
+		  const aarch64_inst *inst)
+{
+  uint32_t esize;
+  aarch64_insn value;
+
+  value = extract_fields (code, 0, 3, self->fields[0], self->fields[1],
+			  self->fields[2]);
+  esize = aarch64_get_qualifier_esize (inst->operands[0].qualifier);
+  return decode_limm (esize, value, &info->imm.value);
+}
 
+/* Decode a logical immediate for the BIC alias of AND (etc.).  */
+int
+aarch64_ext_inv_limm (const aarch64_operand *self,
+		      aarch64_opnd_info *info, const aarch64_insn code,
+		      const aarch64_inst *inst)
+{
+  if (!aarch64_ext_limm (self, info, code, inst))
+    return 0;
+  info->imm.value = ~info->imm.value;
   return 1;
 }
 
@@ -1404,6 +1424,47 @@ aarch64_ext_sve_addr_zz_uxtw (const aarch64_operand *self,
   return aarch64_ext_sve_addr_zz (self, info, code, AARCH64_MOD_UXTW);
 }
 
+/* Finish decoding an SVE arithmetic immediate, given that INFO already
+   has the raw field value and that the low 8 bits decode to VALUE.  */
+static int
+decode_sve_aimm (aarch64_opnd_info *info, int64_t value)
+{
+  info->shifter.kind = AARCH64_MOD_LSL;
+  info->shifter.amount = 0;
+  if (info->imm.value & 0x100)
+    {
+      if (value == 0)
+	/* Decode 0x100 as #0, LSL #8.  */
+	info->shifter.amount = 8;
+      else
+	value *= 256;
+    }
+  info->shifter.operator_present = (info->shifter.amount != 0);
+  info->shifter.amount_present = (info->shifter.amount != 0);
+  info->imm.value = value;
+  return 1;
+}
+
+/* Decode an SVE ADD/SUB immediate.  */
+int
+aarch64_ext_sve_aimm (const aarch64_operand *self,
+		      aarch64_opnd_info *info, const aarch64_insn code,
+		      const aarch64_inst *inst)
+{
+  return (aarch64_ext_imm (self, info, code, inst)
+	  && decode_sve_aimm (info, (uint8_t) info->imm.value));
+}
+
+/* Decode an SVE CPY/DUP immediate.  */
+int
+aarch64_ext_sve_asimm (const aarch64_operand *self,
+		       aarch64_opnd_info *info, const aarch64_insn code,
+		       const aarch64_inst *inst)
+{
+  return (aarch64_ext_imm (self, info, code, inst)
+	  && decode_sve_aimm (info, (int8_t) info->imm.value));
+}
+
 /* Decode Zn[MM], where MM has a 7-bit triangular encoding.  The fields
    array specifies which field to use for Zn.  MM is encoded in the
    concatenation of imm5 and SVE_tszh, with imm5 being the less
@@ -1425,6 +1486,17 @@ aarch64_ext_sve_index (const aarch64_operand *self,
   return 1;
 }
 
+/* Decode a logical immediate for the MOV alias of SVE DUPM.  */
+int
+aarch64_ext_sve_limm_mov (const aarch64_operand *self,
+			  aarch64_opnd_info *info, const aarch64_insn code,
+			  const aarch64_inst *inst)
+{
+  int esize = aarch64_get_qualifier_esize (inst->operands[0].qualifier);
+  return (aarch64_ext_limm (self, info, code, inst)
+	  && aarch64_sve_dupm_mov_immediate_p (info->imm.value, esize));
+}
+
 /* Decode {Zn.<T> - Zm.<T>}.  The fields array specifies which field
    to use for Zn.  The opcode-dependent value specifies the number
    of registers in the list.  */
@@ -1457,6 +1529,44 @@ aarch64_ext_sve_scale (const aarch64_operand *self,
   info->shifter.amount_present = (val != 0);
   return 1;
 }
+
+/* Return the top set bit in VALUE, which is expected to be relatively
+   small.  */
+static uint64_t
+get_top_bit (uint64_t value)
+{
+  while ((value & -value) != value)
+    value -= value & -value;
+  return value;
+}
+
+/* Decode an SVE shift-left immediate.  */
+int
+aarch64_ext_sve_shlimm (const aarch64_operand *self,
+			aarch64_opnd_info *info, const aarch64_insn code,
+			const aarch64_inst *inst)
+{
+  if (!aarch64_ext_imm (self, info, code, inst)
+      || info->imm.value == 0)
+    return 0;
+
+  info->imm.value -= get_top_bit (info->imm.value);
+  return 1;
+}
+
+/* Decode an SVE shift-right immediate.  */
+int
+aarch64_ext_sve_shrimm (const aarch64_operand *self,
+			aarch64_opnd_info *info, const aarch64_insn code,
+			const aarch64_inst *inst)
+{
+  if (!aarch64_ext_imm (self, info, code, inst)
+      || info->imm.value == 0)
+    return 0;
+
+  info->imm.value = get_top_bit (info->imm.value) * 2 - info->imm.value;
+  return 1;
+}
 \f
 /* Bitfields that are commonly used to encode certain operands' information
    may be partially used as part of the base opcode in some instructions.
diff --git a/opcodes/aarch64-dis.h b/opcodes/aarch64-dis.h
index 5619877..10983d1 100644
--- a/opcodes/aarch64-dis.h
+++ b/opcodes/aarch64-dis.h
@@ -76,6 +76,7 @@ AARCH64_DECL_OPD_EXTRACTOR (ext_fpimm);
 AARCH64_DECL_OPD_EXTRACTOR (ext_fbits);
 AARCH64_DECL_OPD_EXTRACTOR (ext_aimm);
 AARCH64_DECL_OPD_EXTRACTOR (ext_limm);
+AARCH64_DECL_OPD_EXTRACTOR (ext_inv_limm);
 AARCH64_DECL_OPD_EXTRACTOR (ext_ft);
 AARCH64_DECL_OPD_EXTRACTOR (ext_addr_simple);
 AARCH64_DECL_OPD_EXTRACTOR (ext_addr_regoff);
@@ -101,9 +102,14 @@ AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_zi_u5);
 AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_zz_lsl);
 AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_zz_sxtw);
 AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_zz_uxtw);
+AARCH64_DECL_OPD_EXTRACTOR (ext_sve_aimm);
+AARCH64_DECL_OPD_EXTRACTOR (ext_sve_asimm);
 AARCH64_DECL_OPD_EXTRACTOR (ext_sve_index);
+AARCH64_DECL_OPD_EXTRACTOR (ext_sve_limm_mov);
 AARCH64_DECL_OPD_EXTRACTOR (ext_sve_reglist);
 AARCH64_DECL_OPD_EXTRACTOR (ext_sve_scale);
+AARCH64_DECL_OPD_EXTRACTOR (ext_sve_shlimm);
+AARCH64_DECL_OPD_EXTRACTOR (ext_sve_shrimm);
 
 #undef AARCH64_DECL_OPD_EXTRACTOR
 
diff --git a/opcodes/aarch64-opc-2.c b/opcodes/aarch64-opc-2.c
index a72f577..d86e7dc 100644
--- a/opcodes/aarch64-opc-2.c
+++ b/opcodes/aarch64-opc-2.c
@@ -82,6 +82,7 @@ const struct aarch64_operand aarch64_operands[] =
   {AARCH64_OPND_CLASS_IMMEDIATE, "BIT_NUM", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_b5, FLD_b40}, "the bit number to be tested"},
   {AARCH64_OPND_CLASS_IMMEDIATE, "EXCEPTION", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_imm16}, "a 16-bit unsigned immediate"},
   {AARCH64_OPND_CLASS_IMMEDIATE, "CCMP_IMM", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_imm5}, "a 5-bit unsigned immediate"},
+  {AARCH64_OPND_CLASS_IMMEDIATE, "SIMM5", OPD_F_SEXT | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_imm5}, "a 5-bit signed immediate"},
   {AARCH64_OPND_CLASS_IMMEDIATE, "NZCV", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_nzcv}, "a flag bit specifier giving an alternative value for each flag"},
   {AARCH64_OPND_CLASS_IMMEDIATE, "LIMM", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_N,FLD_immr,FLD_imms}, "Logical immediate"},
   {AARCH64_OPND_CLASS_IMMEDIATE, "AIMM", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_shift,FLD_imm12}, "a 12-bit unsigned immediate with optional left shift of 12 bits"},
@@ -150,6 +151,11 @@ const struct aarch64_operand aarch64_operands[] =
   {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_ZZ_LSL", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zn,FLD_SVE_Zm_16}, "an address with a vector register offset"},
   {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_ZZ_SXTW", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zn,FLD_SVE_Zm_16}, "an address with a vector register offset"},
   {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_ZZ_UXTW", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zn,FLD_SVE_Zm_16}, "an address with a vector register offset"},
+  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_AIMM", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_imm9}, "a 9-bit unsigned arithmetic operand"},
+  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_ASIMM", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_imm9}, "a 9-bit signed arithmetic operand"},
+  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_INV_LIMM", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_N,FLD_SVE_immr,FLD_SVE_imms}, "an inverted 13-bit logical immediate"},
+  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_LIMM", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_N,FLD_SVE_immr,FLD_SVE_imms}, "a 13-bit logical immediate"},
+  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_LIMM_MOV", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_N,FLD_SVE_immr,FLD_SVE_imms}, "a 13-bit logical move immediate"},
   {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_PATTERN", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_pattern}, "an enumeration value such as POW2"},
   {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_PATTERN_SCALED", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_pattern}, "an enumeration value such as POW2"},
   {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_PRFOP", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_prfop}, "an enumeration value such as PLDL1KEEP"},
@@ -161,6 +167,18 @@ const struct aarch64_operand aarch64_operands[] =
   {AARCH64_OPND_CLASS_PRED_REG, "SVE_Pm", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Pm}, "an SVE predicate register"},
   {AARCH64_OPND_CLASS_PRED_REG, "SVE_Pn", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Pn}, "an SVE predicate register"},
   {AARCH64_OPND_CLASS_PRED_REG, "SVE_Pt", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Pt}, "an SVE predicate register"},
+  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_SHLIMM_PRED", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_tszh,FLD_SVE_imm5}, "a shift-left immediate operand"},
+  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_SHLIMM_UNPRED", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_tszh,FLD_imm5}, "a shift-left immediate operand"},
+  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_SHRIMM_PRED", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_tszh,FLD_SVE_imm5}, "a shift-right immediate operand"},
+  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_SHRIMM_UNPRED", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_tszh,FLD_imm5}, "a shift-right immediate operand"},
+  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_SIMM5", OPD_F_SEXT | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_imm5}, "a 5-bit signed immediate"},
+  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_SIMM5B", OPD_F_SEXT | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_imm5b}, "a 5-bit signed immediate"},
+  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_SIMM6", OPD_F_SEXT | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_imms}, "a 6-bit signed immediate"},
+  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_SIMM8", OPD_F_SEXT | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_imm8}, "an 8-bit signed immediate"},
+  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_UIMM3", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_imm3}, "a 3-bit unsigned immediate"},
+  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_UIMM7", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_imm7}, "a 7-bit unsigned immediate"},
+  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_UIMM8", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_imm8}, "an 8-bit unsigned immediate"},
+  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_UIMM8_53", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_imm5,FLD_imm3}, "an 8-bit unsigned immediate"},
   {AARCH64_OPND_CLASS_SVE_REG, "SVE_Za_5", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Za_5}, "an SVE vector register"},
   {AARCH64_OPND_CLASS_SVE_REG, "SVE_Za_16", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Za_16}, "an SVE vector register"},
   {AARCH64_OPND_CLASS_SVE_REG, "SVE_Zd", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zd}, "an SVE vector register"},
diff --git a/opcodes/aarch64-opc.c b/opcodes/aarch64-opc.c
index d0959b5..dec7e06 100644
--- a/opcodes/aarch64-opc.c
+++ b/opcodes/aarch64-opc.c
@@ -264,6 +264,7 @@ const aarch64_field fields[] =
     { 31,  1 },	/* b5: in the test bit and branch instructions.  */
     { 19,  5 },	/* b40: in the test bit and branch instructions.  */
     { 10,  6 },	/* scale: in the fixed-point scalar to fp converting inst.  */
+    { 17,  1 }, /* SVE_N: SVE equivalent of N.  */
     {  0,  4 }, /* SVE_Pd: p0-p15, bits [3,0].  */
     { 10,  3 }, /* SVE_Pg3: p0-p7, bits [12,10].  */
     {  5,  4 }, /* SVE_Pg4_5: p0-p15, bits [8,5].  */
@@ -279,8 +280,16 @@ const aarch64_field fields[] =
     { 16,  5 }, /* SVE_Zm_16: SVE vector register, bits [20,16]. */
     {  5,  5 }, /* SVE_Zn: SVE vector register, bits [9,5].  */
     {  0,  5 }, /* SVE_Zt: SVE vector register, bits [4,0].  */
+    { 16,  3 }, /* SVE_imm3: 3-bit immediate field.  */
     { 16,  4 }, /* SVE_imm4: 4-bit immediate field.  */
+    {  5,  5 }, /* SVE_imm5: 5-bit immediate field.  */
+    { 16,  5 }, /* SVE_imm5b: secondary 5-bit immediate field.  */
     { 16,  6 }, /* SVE_imm6: 6-bit immediate field.  */
+    { 14,  7 }, /* SVE_imm7: 7-bit immediate field.  */
+    {  5,  8 }, /* SVE_imm8: 8-bit immediate field.  */
+    {  5,  9 }, /* SVE_imm9: 9-bit immediate field.  */
+    { 11,  6 }, /* SVE_immr: SVE equivalent of immr.  */
+    {  5,  6 }, /* SVE_imms: SVE equivalent of imms.  */
     { 10,  2 }, /* SVE_msz: 2-bit shift amount for ADR.  */
     {  5,  5 }, /* SVE_pattern: vector pattern enumeration.  */
     {  0,  4 }, /* SVE_prfop: prefetch operation for SVE PRF[BHWD].  */
@@ -1374,9 +1383,10 @@ operand_general_constraint_met_p (const aarch64_opnd_info *opnds, int idx,
 				  const aarch64_opcode *opcode,
 				  aarch64_operand_error *mismatch_detail)
 {
-  unsigned num, modifiers;
+  unsigned num, modifiers, shift;
   unsigned char size;
   int64_t imm, min_value, max_value;
+  uint64_t uvalue, mask;
   const aarch64_opnd_info *opnd = opnds + idx;
   aarch64_opnd_qualifier_t qualifier = opnd->qualifier;
 
@@ -1977,6 +1987,10 @@ operand_general_constraint_met_p (const aarch64_opnd_info *opnds, int idx,
 	case AARCH64_OPND_UIMM7:
 	case AARCH64_OPND_UIMM3_OP1:
 	case AARCH64_OPND_UIMM3_OP2:
+	case AARCH64_OPND_SVE_UIMM3:
+	case AARCH64_OPND_SVE_UIMM7:
+	case AARCH64_OPND_SVE_UIMM8:
+	case AARCH64_OPND_SVE_UIMM8_53:
 	  size = get_operand_fields_width (get_operand_from_code (type));
 	  assert (size < 32);
 	  if (!value_fit_unsigned_field_p (opnd->imm.value, size))
@@ -1987,6 +2001,22 @@ operand_general_constraint_met_p (const aarch64_opnd_info *opnds, int idx,
 	    }
 	  break;
 
+	case AARCH64_OPND_SIMM5:
+	case AARCH64_OPND_SVE_SIMM5:
+	case AARCH64_OPND_SVE_SIMM5B:
+	case AARCH64_OPND_SVE_SIMM6:
+	case AARCH64_OPND_SVE_SIMM8:
+	  size = get_operand_fields_width (get_operand_from_code (type));
+	  assert (size < 32);
+	  if (!value_fit_signed_field_p (opnd->imm.value, size))
+	    {
+	      set_imm_out_of_range_error (mismatch_detail, idx,
+					  -(1 << (size - 1)),
+					  (1 << (size - 1)) - 1);
+	      return 0;
+	    }
+	  break;
+
 	case AARCH64_OPND_WIDTH:
 	  assert (idx > 1 && opnds[idx-1].type == AARCH64_OPND_IMM
 		  && opnds[0].type == AARCH64_OPND_Rd);
@@ -2001,6 +2031,7 @@ operand_general_constraint_met_p (const aarch64_opnd_info *opnds, int idx,
 	  break;
 
 	case AARCH64_OPND_LIMM:
+	case AARCH64_OPND_SVE_LIMM:
 	  {
 	    int esize = aarch64_get_qualifier_esize (opnds[0].qualifier);
 	    uint64_t uimm = opnd->imm.value;
@@ -2171,6 +2202,90 @@ operand_general_constraint_met_p (const aarch64_opnd_info *opnds, int idx,
 	    }
 	  break;
 
+	case AARCH64_OPND_SVE_AIMM:
+	  min_value = 0;
+	sve_aimm:
+	  assert (opnd->shifter.kind == AARCH64_MOD_LSL);
+	  size = aarch64_get_qualifier_esize (opnds[0].qualifier);
+	  mask = ~((uint64_t) -1 << (size * 4) << (size * 4));
+	  uvalue = opnd->imm.value;
+	  shift = opnd->shifter.amount;
+	  if (size == 1)
+	    {
+	      if (shift != 0)
+		{
+		  set_other_error (mismatch_detail, idx,
+				   _("no shift amount allowed for"
+				     " 8-bit constants"));
+		  return 0;
+		}
+	    }
+	  else
+	    {
+	      if (shift != 0 && shift != 8)
+		{
+		  set_other_error (mismatch_detail, idx,
+				   _("shift amount should be 0 or 8"));
+		  return 0;
+		}
+	      if (shift == 0 && (uvalue & 0xff) == 0)
+		{
+		  shift = 8;
+		  uvalue = (int64_t) uvalue / 256;
+		}
+	    }
+	  mask >>= shift;
+	  if ((uvalue & mask) != uvalue && (uvalue | ~mask) != uvalue)
+	    {
+	      set_other_error (mismatch_detail, idx,
+			       _("immediate too big for element size"));
+	      return 0;
+	    }
+	  uvalue = (uvalue - min_value) & mask;
+	  if (uvalue > 0xff)
+	    {
+	      set_other_error (mismatch_detail, idx,
+			       _("invalid arithmetic immediate"));
+	      return 0;
+	    }
+	  break;
+
+	case AARCH64_OPND_SVE_ASIMM:
+	  min_value = -128;
+	  goto sve_aimm;
+
+	case AARCH64_OPND_SVE_INV_LIMM:
+	  {
+	    int esize = aarch64_get_qualifier_esize (opnds[0].qualifier);
+	    uint64_t uimm = ~opnd->imm.value;
+	    if (!aarch64_logical_immediate_p (uimm, esize, NULL))
+	      {
+		set_other_error (mismatch_detail, idx,
+				 _("immediate out of range"));
+		return 0;
+	      }
+	  }
+	  break;
+
+	case AARCH64_OPND_SVE_LIMM_MOV:
+	  {
+	    int esize = aarch64_get_qualifier_esize (opnds[0].qualifier);
+	    uint64_t uimm = opnd->imm.value;
+	    if (!aarch64_logical_immediate_p (uimm, esize, NULL))
+	      {
+		set_other_error (mismatch_detail, idx,
+				 _("immediate out of range"));
+		return 0;
+	      }
+	    if (!aarch64_sve_dupm_mov_immediate_p (uimm, esize))
+	      {
+		set_other_error (mismatch_detail, idx,
+				 _("invalid replicated MOV immediate"));
+		return 0;
+	      }
+	  }
+	  break;
+
 	case AARCH64_OPND_SVE_PATTERN_SCALED:
 	  assert (opnd->shifter.kind == AARCH64_MOD_MUL);
 	  if (!value_in_range_p (opnd->shifter.amount, 1, 16))
@@ -2180,6 +2295,27 @@ operand_general_constraint_met_p (const aarch64_opnd_info *opnds, int idx,
 	    }
 	  break;
 
+	case AARCH64_OPND_SVE_SHLIMM_PRED:
+	case AARCH64_OPND_SVE_SHLIMM_UNPRED:
+	  size = aarch64_get_qualifier_esize (opnds[idx - 1].qualifier);
+	  if (!value_in_range_p (opnd->imm.value, 0, 8 * size - 1))
+	    {
+	      set_imm_out_of_range_error (mismatch_detail, idx,
+					  0, 8 * size - 1);
+	      return 0;
+	    }
+	  break;
+
+	case AARCH64_OPND_SVE_SHRIMM_PRED:
+	case AARCH64_OPND_SVE_SHRIMM_UNPRED:
+	  size = aarch64_get_qualifier_esize (opnds[idx - 1].qualifier);
+	  if (!value_in_range_p (opnd->imm.value, 1, 8 * size))
+	    {
+	      set_imm_out_of_range_error (mismatch_detail, idx, 1, 8 * size);
+	      return 0;
+	    }
+	  break;
+
 	default:
 	  break;
 	}
@@ -2953,6 +3089,19 @@ aarch64_print_operand (char *buf, size_t size, bfd_vma pc,
     case AARCH64_OPND_IMMR:
     case AARCH64_OPND_IMMS:
     case AARCH64_OPND_FBITS:
+    case AARCH64_OPND_SIMM5:
+    case AARCH64_OPND_SVE_SHLIMM_PRED:
+    case AARCH64_OPND_SVE_SHLIMM_UNPRED:
+    case AARCH64_OPND_SVE_SHRIMM_PRED:
+    case AARCH64_OPND_SVE_SHRIMM_UNPRED:
+    case AARCH64_OPND_SVE_SIMM5:
+    case AARCH64_OPND_SVE_SIMM5B:
+    case AARCH64_OPND_SVE_SIMM6:
+    case AARCH64_OPND_SVE_SIMM8:
+    case AARCH64_OPND_SVE_UIMM3:
+    case AARCH64_OPND_SVE_UIMM7:
+    case AARCH64_OPND_SVE_UIMM8:
+    case AARCH64_OPND_SVE_UIMM8_53:
       snprintf (buf, size, "#%" PRIi64, opnd->imm.value);
       break;
 
@@ -3021,6 +3170,9 @@ aarch64_print_operand (char *buf, size_t size, bfd_vma pc,
     case AARCH64_OPND_LIMM:
     case AARCH64_OPND_AIMM:
     case AARCH64_OPND_HALF:
+    case AARCH64_OPND_SVE_INV_LIMM:
+    case AARCH64_OPND_SVE_LIMM:
+    case AARCH64_OPND_SVE_LIMM_MOV:
       if (opnd->shifter.amount)
 	snprintf (buf, size, "#0x%" PRIx64 ", lsl #%" PRIi64, opnd->imm.value,
 		  opnd->shifter.amount);
@@ -3039,6 +3191,15 @@ aarch64_print_operand (char *buf, size_t size, bfd_vma pc,
 		  opnd->shifter.amount);
       break;
 
+    case AARCH64_OPND_SVE_AIMM:
+    case AARCH64_OPND_SVE_ASIMM:
+      if (opnd->shifter.amount)
+	snprintf (buf, size, "#%" PRIi64 ", lsl #%" PRIi64, opnd->imm.value,
+		  opnd->shifter.amount);
+      else
+	snprintf (buf, size, "#%" PRIi64, opnd->imm.value);
+      break;
+
     case AARCH64_OPND_FPIMM:
     case AARCH64_OPND_SIMD_FPIMM:
       switch (aarch64_get_qualifier_esize (opnds[0].qualifier))
@@ -3967,6 +4128,33 @@ verify_ldpsw (const struct aarch64_opcode * opcode ATTRIBUTE_UNUSED,
   return TRUE;
 }
 
+/* Return true if VALUE cannot be moved into an SVE register using DUP
+   (with any element size, not just ESIZE) and if using DUPM would
+   therefore be OK.  ESIZE is the number of bytes in the immediate.  */
+
+bfd_boolean
+aarch64_sve_dupm_mov_immediate_p (uint64_t uvalue, int esize)
+{
+  int64_t svalue = uvalue;
+  uint64_t upper = (uint64_t) -1 << (esize * 4) << (esize * 4);
+
+  if ((uvalue & ~upper) != uvalue && (uvalue | upper) != uvalue)
+    return FALSE;
+  if (esize <= 4 || (uint32_t) uvalue == (uint32_t) (uvalue >> 32))
+    {
+      svalue = (int32_t) uvalue;
+      if (esize <= 2 || (uint16_t) uvalue == (uint16_t) (uvalue >> 16))
+	{
+	  svalue = (int16_t) uvalue;
+	  if (esize == 1 || (uint8_t) uvalue == (uint8_t) (uvalue >> 8))
+	    return FALSE;
+	}
+    }
+  if ((svalue & 0xff) == 0)
+    svalue /= 256;
+  return svalue < -128 || svalue >= 128;
+}
+
 /* Include the opcode description table as well as the operand description
    table.  */
 #define VERIFIER(x) verify_##x
diff --git a/opcodes/aarch64-opc.h b/opcodes/aarch64-opc.h
index e823146..087376e 100644
--- a/opcodes/aarch64-opc.h
+++ b/opcodes/aarch64-opc.h
@@ -91,6 +91,7 @@ enum aarch64_field_kind
   FLD_b5,
   FLD_b40,
   FLD_scale,
+  FLD_SVE_N,
   FLD_SVE_Pd,
   FLD_SVE_Pg3,
   FLD_SVE_Pg4_5,
@@ -106,8 +107,16 @@ enum aarch64_field_kind
   FLD_SVE_Zm_16,
   FLD_SVE_Zn,
   FLD_SVE_Zt,
+  FLD_SVE_imm3,
   FLD_SVE_imm4,
+  FLD_SVE_imm5,
+  FLD_SVE_imm5b,
   FLD_SVE_imm6,
+  FLD_SVE_imm7,
+  FLD_SVE_imm8,
+  FLD_SVE_imm9,
+  FLD_SVE_immr,
+  FLD_SVE_imms,
   FLD_SVE_msz,
   FLD_SVE_pattern,
   FLD_SVE_prfop,
diff --git a/opcodes/aarch64-tbl.h b/opcodes/aarch64-tbl.h
index ac7ccf0..d743e3b 100644
--- a/opcodes/aarch64-tbl.h
+++ b/opcodes/aarch64-tbl.h
@@ -2761,6 +2761,8 @@ struct aarch64_opcode aarch64_opcode_table[] =
       "a 16-bit unsigned immediate")					\
     Y(IMMEDIATE, imm, "CCMP_IMM", 0, F(FLD_imm5),			\
       "a 5-bit unsigned immediate")					\
+    Y(IMMEDIATE, imm, "SIMM5", OPD_F_SEXT, F(FLD_imm5),			\
+      "a 5-bit signed immediate")					\
     Y(IMMEDIATE, imm, "NZCV", 0, F(FLD_nzcv),				\
       "a flag bit specifier giving an alternative value for each flag")	\
     Y(IMMEDIATE, limm, "LIMM", 0, F(FLD_N,FLD_immr,FLD_imms),		\
@@ -2925,6 +2927,19 @@ struct aarch64_opcode aarch64_opcode_table[] =
     Y(ADDRESS, sve_addr_zz_uxtw, "SVE_ADDR_ZZ_UXTW", 0,			\
       F(FLD_SVE_Zn,FLD_SVE_Zm_16),					\
       "an address with a vector register offset")			\
+    Y(IMMEDIATE, sve_aimm, "SVE_AIMM", 0, F(FLD_SVE_imm9),		\
+      "a 9-bit unsigned arithmetic operand")				\
+    Y(IMMEDIATE, sve_asimm, "SVE_ASIMM", 0, F(FLD_SVE_imm9),		\
+      "a 9-bit signed arithmetic operand")				\
+    Y(IMMEDIATE, inv_limm, "SVE_INV_LIMM", 0,				\
+      F(FLD_SVE_N,FLD_SVE_immr,FLD_SVE_imms),				\
+      "an inverted 13-bit logical immediate")				\
+    Y(IMMEDIATE, limm, "SVE_LIMM", 0,					\
+      F(FLD_SVE_N,FLD_SVE_immr,FLD_SVE_imms),				\
+      "a 13-bit logical immediate")					\
+    Y(IMMEDIATE, sve_limm_mov, "SVE_LIMM_MOV", 0,			\
+      F(FLD_SVE_N,FLD_SVE_immr,FLD_SVE_imms),				\
+      "a 13-bit logical move immediate")				\
     Y(IMMEDIATE, imm, "SVE_PATTERN", 0, F(FLD_SVE_pattern),		\
       "an enumeration value such as POW2")				\
     Y(IMMEDIATE, sve_scale, "SVE_PATTERN_SCALED", 0,			\
@@ -2947,6 +2962,30 @@ struct aarch64_opcode aarch64_opcode_table[] =
       "an SVE predicate register")					\
     Y(PRED_REG, regno, "SVE_Pt", 0, F(FLD_SVE_Pt),			\
       "an SVE predicate register")					\
+    Y(IMMEDIATE, sve_shlimm, "SVE_SHLIMM_PRED", 0,			\
+      F(FLD_SVE_tszh,FLD_SVE_imm5), "a shift-left immediate operand")	\
+    Y(IMMEDIATE, sve_shlimm, "SVE_SHLIMM_UNPRED", 0,			\
+      F(FLD_SVE_tszh,FLD_imm5), "a shift-left immediate operand")	\
+    Y(IMMEDIATE, sve_shrimm, "SVE_SHRIMM_PRED", 0,			\
+      F(FLD_SVE_tszh,FLD_SVE_imm5), "a shift-right immediate operand")	\
+    Y(IMMEDIATE, sve_shrimm, "SVE_SHRIMM_UNPRED", 0,			\
+      F(FLD_SVE_tszh,FLD_imm5), "a shift-right immediate operand")	\
+    Y(IMMEDIATE, imm, "SVE_SIMM5", OPD_F_SEXT, F(FLD_SVE_imm5),		\
+      "a 5-bit signed immediate")					\
+    Y(IMMEDIATE, imm, "SVE_SIMM5B", OPD_F_SEXT, F(FLD_SVE_imm5b),	\
+      "a 5-bit signed immediate")					\
+    Y(IMMEDIATE, imm, "SVE_SIMM6", OPD_F_SEXT, F(FLD_SVE_imms),		\
+      "a 6-bit signed immediate")					\
+    Y(IMMEDIATE, imm, "SVE_SIMM8", OPD_F_SEXT, F(FLD_SVE_imm8),		\
+      "an 8-bit signed immediate")					\
+    Y(IMMEDIATE, imm, "SVE_UIMM3", 0, F(FLD_SVE_imm3),			\
+      "a 3-bit unsigned immediate")					\
+    Y(IMMEDIATE, imm, "SVE_UIMM7", 0, F(FLD_SVE_imm7),			\
+      "a 7-bit unsigned immediate")					\
+    Y(IMMEDIATE, imm, "SVE_UIMM8", 0, F(FLD_SVE_imm8),			\
+      "an 8-bit unsigned immediate")					\
+    Y(IMMEDIATE, imm, "SVE_UIMM8_53", 0, F(FLD_imm5,FLD_imm3),		\
+      "an 8-bit unsigned immediate")					\
     Y(SVE_REG, regno, "SVE_Za_5", 0, F(FLD_SVE_Za_5),			\
       "an SVE vector register")						\
     Y(SVE_REG, regno, "SVE_Za_16", 0, F(FLD_SVE_Za_16),			\

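A minimal standalone sketch (illustration only, not part of the patch) of
the arithmetic-immediate convention that aarch64_ins_sve_aimm and
decode_sve_aimm above implement: the 9-bit field carries an 8-bit payload,
bit 8 selects an optional LSL #8 shift, and a shifted zero payload is kept
as "#0, LSL #8":

	#include <stdint.h>

	/* Hypothetical helpers; the real routines work on aarch64_opnd_info
	   and instruction fields rather than bare integers.  */
	static unsigned int
	encode_sve_arith_imm (int64_t value, unsigned int lsl)
	{
	  if (lsl == 8)
	    return (value & 0xff) | 0x100;	    /* e.g. #3, LSL #8 */
	  if (value != 0 && (value & 0xff) == 0)
	    return ((value / 256) & 0xff) | 0x100;  /* e.g. #768 == #3, LSL #8 */
	  return value & 0xff;			    /* plain 8-bit immediate */
	}

	static int64_t
	decode_sve_arith_imm (unsigned int field, unsigned int *lsl)
	{
	  int64_t value = field & 0xff;
	  *lsl = ((field & 0x100) && value == 0) ? 8 : 0;
	  return ((field & 0x100) && value != 0) ? value * 256 : value;
	}
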
^ permalink raw reply	[flat|nested] 76+ messages in thread

* [AArch64][SVE 28/32] Add SVE FP immediate operands
  2016-08-23  9:05 [AArch64][SVE 00/32] Add support for the ARMv8-A Scalable Vector Extension Richard Sandiford
                   ` (27 preceding siblings ...)
  2016-08-23  9:25 ` [AArch64][SVE 29/32] Add new SVE core & FP register operands Richard Sandiford
@ 2016-08-23  9:25 ` Richard Sandiford
  2016-08-25 14:59   ` Richard Earnshaw (lists)
  2016-08-23  9:26 ` [AArch64][SVE 30/32] Add SVE instruction classes Richard Sandiford
                   ` (3 subsequent siblings)
  32 siblings, 1 reply; 76+ messages in thread
From: Richard Sandiford @ 2016-08-23  9:25 UTC (permalink / raw)
  To: binutils

This patch adds support for the new SVE floating-point immediate
operands.  One operand uses the same 8-bit encoding as base AArch64,
but in a different position.  The others use a single bit to select
between two values.

One of the single-bit operands is a choice between 0.0 and 1.0, where 0.0
is not a valid 8-bit encoding.  I think the cleanest way of handling
these single-bit immediates is therefore to use the IEEE float encoding
itself as the immediate value and select between the two possible values
when encoding and decoding.
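
For the 0.5/1.0 choice, for example, the operand value is simply the IEEE
single-precision encoding of the selected constant (0x3f000000 for 0.5,
0x3f800000 for 1.0), and the single instruction bit picks between them.
A rough standalone sketch of the idea (the patch does this in
aarch64_ins_sve_float_half_one and aarch64_ext_sve_float_half_one):

	#include <stdint.h>

	/* Illustration only: map between the instruction bit and the IEEE
	   encodings that serve as the immediate values.  */
	static unsigned int
	sve_float_half_one_to_bit (uint32_t ieee)
	{
	  return ieee == 0x3f000000 ? 0 : 1;	/* 0.5 -> 0, 1.0 -> 1 */
	}

	static uint32_t
	sve_float_half_one_from_bit (unsigned int bit)
	{
	  return bit ? 0x3f800000 : 0x3f000000;	/* 1 -> 1.0, 0 -> 0.5 */
	}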

As described in the covering note for the patch that added F_STRICT,
we get better error messages by accepting unsuffixed vector registers
and leaving the qualifier matching code to report an error.  This means
that we carry on parsing the other operands, and so can try to parse FP
immediates for invalid instructions like:

	fcpy	z0, #2.5

In this case there is no suffix to tell us whether the immediate should
be treated as single or double precision.  Again, we get better error
messages by picking one (arbitrary) immediate size and reporting an error
for the missing suffix later.
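
The choice therefore boils down to a single predicate on the first
operand's qualifier: treat the immediate as double precision when the
register was explicitly 64-bit or had no suffix at all.  A simplified
standalone sketch with hypothetical stand-in types (the patch adds the
real check as double_precision_operand_p in tc-aarch64.c):

	#include <stdbool.h>

	/* Simplified stand-ins for the opcodes qualifiers.  */
	enum qualifier { QLF_NIL, QLF_S_S, QLF_S_D };

	static bool
	treat_as_double_precision (enum qualifier q)
	{
	  /* No suffix: arbitrarily pick double precision and let the later
	     qualifier check report the missing suffix; a 64-bit element
	     suffix: genuinely double precision.  */
	  return q == QLF_NIL || q == QLF_S_D;
	}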

OK to install?

Thanks,
Richard


include/opcode/
	* aarch64.h (AARCH64_OPND_SVE_FPIMM8): New aarch64_opnd.
	(AARCH64_OPND_SVE_I1_HALF_ONE, AARCH64_OPND_SVE_I1_HALF_TWO)
	(AARCH64_OPND_SVE_I1_ZERO_ONE): Likewise.

opcodes/
	* aarch64-tbl.h (AARCH64_OPERANDS): Add entries for the new SVE FP
	immediate operands.
	* aarch64-opc.h (FLD_SVE_i1): New aarch64_field_kind.
	* aarch64-opc.c (fields): Add corresponding entry.
	(operand_general_constraint_met_p): Handle the new SVE FP immediate
	operands.
	(aarch64_print_operand): Likewise.
	* aarch64-opc-2.c: Regenerate.
	* aarch64-asm.h (ins_sve_float_half_one, ins_sve_float_half_two)
	(ins_sve_float_zero_one): New inserters.
	* aarch64-asm.c (aarch64_ins_sve_float_half_one): New function.
	(aarch64_ins_sve_float_half_two): Likewise.
	(aarch64_ins_sve_float_zero_one): Likewise.
	* aarch64-asm-2.c: Regenerate.
	* aarch64-dis.h (ext_sve_float_half_one, ext_sve_float_half_two)
	(ext_sve_float_zero_one): New extractors.
	* aarch64-dis.c (aarch64_ext_sve_float_half_one): New function.
	(aarch64_ext_sve_float_half_two): Likewise.
	(aarch64_ext_sve_float_zero_one): Likewise.
	* aarch64-dis-2.c: Regenerate.

gas/
	* config/tc-aarch64.c (double_precision_operand_p): New function.
	(parse_operands): Use it to calculate the dp_p input to
	parse_aarch64_imm_float.  Handle the new SVE FP immediate operands.

diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
index cb39cf8..eddc6f8 100644
--- a/gas/config/tc-aarch64.c
+++ b/gas/config/tc-aarch64.c
@@ -2252,6 +2252,20 @@ can_convert_double_to_float (uint64_t imm, uint32_t *fpword)
   return TRUE;
 }
 
+/* Return true if we should treat OPERAND as a double-precision
+   floating-point operand rather than a single-precision one.  */
+static bfd_boolean
+double_precision_operand_p (const aarch64_opnd_info *operand)
+{
+  /* Check for unsuffixed SVE registers, which are allowed
+     for LDR and STR but not in instructions that require an
+     immediate.  We get better error messages if we arbitrarily
+     pick one size, parse the immediate normally, and then
+     report the match failure in the normal way.  */
+  return (operand->qualifier == AARCH64_OPND_QLF_NIL
+	  || aarch64_get_qualifier_esize (operand->qualifier) == 8);
+}
+
 /* Parse a floating-point immediate.  Return TRUE on success and return the
    value in *IMMED in the format of IEEE754 single-precision encoding.
    *CCP points to the start of the string; DP_P is TRUE when the immediate
@@ -5707,11 +5721,12 @@ parse_operands (char *str, const aarch64_opcode *opcode)
 
 	case AARCH64_OPND_FPIMM:
 	case AARCH64_OPND_SIMD_FPIMM:
+	case AARCH64_OPND_SVE_FPIMM8:
 	  {
 	    int qfloat;
-	    bfd_boolean dp_p
-	      = (aarch64_get_qualifier_esize (inst.base.operands[0].qualifier)
-		 == 8);
+	    bfd_boolean dp_p;
+
+	    dp_p = double_precision_operand_p (&inst.base.operands[0]);
 	    if (!parse_aarch64_imm_float (&str, &qfloat, dp_p, imm_reg_type)
 		|| !aarch64_imm_float_p (qfloat))
 	      {
@@ -5725,6 +5740,26 @@ parse_operands (char *str, const aarch64_opcode *opcode)
 	  }
 	  break;
 
+	case AARCH64_OPND_SVE_I1_HALF_ONE:
+	case AARCH64_OPND_SVE_I1_HALF_TWO:
+	case AARCH64_OPND_SVE_I1_ZERO_ONE:
+	  {
+	    int qfloat;
+	    bfd_boolean dp_p;
+
+	    dp_p = double_precision_operand_p (&inst.base.operands[0]);
+	    if (!parse_aarch64_imm_float (&str, &qfloat, dp_p, imm_reg_type))
+	      {
+		if (!error_p ())
+		  set_fatal_syntax_error (_("invalid floating-point"
+					    " constant"));
+		goto failure;
+	      }
+	    inst.base.operands[i].imm.value = qfloat;
+	    inst.base.operands[i].imm.is_fp = 1;
+	  }
+	  break;
+
 	case AARCH64_OPND_LIMM:
 	  po_misc_or_fail (parse_shifter_operand (&str, info,
 						  SHIFTED_LOGIC_IMM));
diff --git a/include/opcode/aarch64.h b/include/opcode/aarch64.h
index 36e95b4..9e7f5b5 100644
--- a/include/opcode/aarch64.h
+++ b/include/opcode/aarch64.h
@@ -292,6 +292,10 @@ enum aarch64_opnd
   AARCH64_OPND_SVE_ADDR_ZZ_UXTW,    /* SVE [Zn.<T>, Zm,<T>, UXTW #<msz>].  */
   AARCH64_OPND_SVE_AIMM,	/* SVE unsigned arithmetic immediate.  */
   AARCH64_OPND_SVE_ASIMM,	/* SVE signed arithmetic immediate.  */
+  AARCH64_OPND_SVE_FPIMM8,	/* SVE 8-bit floating-point immediate.  */
+  AARCH64_OPND_SVE_I1_HALF_ONE,	/* SVE choice between 0.5 and 1.0.  */
+  AARCH64_OPND_SVE_I1_HALF_TWO,	/* SVE choice between 0.5 and 2.0.  */
+  AARCH64_OPND_SVE_I1_ZERO_ONE,	/* SVE choice between 0.0 and 1.0.  */
   AARCH64_OPND_SVE_INV_LIMM,	/* SVE inverted logical immediate.  */
   AARCH64_OPND_SVE_LIMM,	/* SVE logical immediate.  */
   AARCH64_OPND_SVE_LIMM_MOV,	/* SVE logical immediate for MOV.  */
diff --git a/opcodes/aarch64-asm-2.c b/opcodes/aarch64-asm-2.c
index 491ea53..d9d1981 100644
--- a/opcodes/aarch64-asm-2.c
+++ b/opcodes/aarch64-asm-2.c
@@ -480,21 +480,21 @@ aarch64_insert_operand (const aarch64_operand *self,
     case 27:
     case 35:
     case 36:
-    case 135:
-    case 136:
-    case 137:
-    case 138:
     case 139:
     case 140:
     case 141:
     case 142:
-    case 155:
-    case 156:
-    case 157:
-    case 158:
+    case 143:
+    case 144:
+    case 145:
+    case 146:
     case 159:
     case 160:
+    case 161:
+    case 162:
     case 163:
+    case 164:
+    case 167:
       return aarch64_ins_regno (self, info, code, inst);
     case 12:
       return aarch64_ins_reg_extended (self, info, code, inst);
@@ -532,16 +532,16 @@ aarch64_insert_operand (const aarch64_operand *self,
     case 69:
     case 70:
     case 71:
-    case 132:
-    case 134:
-    case 147:
-    case 148:
-    case 149:
-    case 150:
+    case 136:
+    case 138:
     case 151:
     case 152:
     case 153:
     case 154:
+    case 155:
+    case 156:
+    case 157:
+    case 158:
       return aarch64_ins_imm (self, info, code, inst);
     case 38:
     case 39:
@@ -551,9 +551,10 @@ aarch64_insert_operand (const aarch64_operand *self,
     case 42:
       return aarch64_ins_advsimd_imm_modified (self, info, code, inst);
     case 46:
+    case 129:
       return aarch64_ins_fpimm (self, info, code, inst);
     case 60:
-    case 130:
+    case 134:
       return aarch64_ins_limm (self, info, code, inst);
     case 61:
       return aarch64_ins_aimm (self, info, code, inst);
@@ -644,22 +645,28 @@ aarch64_insert_operand (const aarch64_operand *self,
       return aarch64_ins_sve_aimm (self, info, code, inst);
     case 128:
       return aarch64_ins_sve_asimm (self, info, code, inst);
-    case 129:
-      return aarch64_ins_inv_limm (self, info, code, inst);
+    case 130:
+      return aarch64_ins_sve_float_half_one (self, info, code, inst);
     case 131:
-      return aarch64_ins_sve_limm_mov (self, info, code, inst);
+      return aarch64_ins_sve_float_half_two (self, info, code, inst);
+    case 132:
+      return aarch64_ins_sve_float_zero_one (self, info, code, inst);
     case 133:
+      return aarch64_ins_inv_limm (self, info, code, inst);
+    case 135:
+      return aarch64_ins_sve_limm_mov (self, info, code, inst);
+    case 137:
       return aarch64_ins_sve_scale (self, info, code, inst);
-    case 143:
-    case 144:
+    case 147:
+    case 148:
       return aarch64_ins_sve_shlimm (self, info, code, inst);
-    case 145:
-    case 146:
+    case 149:
+    case 150:
       return aarch64_ins_sve_shrimm (self, info, code, inst);
-    case 161:
+    case 165:
       return aarch64_ins_sve_index (self, info, code, inst);
-    case 162:
-    case 164:
+    case 166:
+    case 168:
       return aarch64_ins_sve_reglist (self, info, code, inst);
     default: assert (0); abort ();
     }
diff --git a/opcodes/aarch64-asm.c b/opcodes/aarch64-asm.c
index 61d0d95..fd356f4 100644
--- a/opcodes/aarch64-asm.c
+++ b/opcodes/aarch64-asm.c
@@ -1028,6 +1028,51 @@ aarch64_ins_sve_shrimm (const aarch64_operand *self,
   return NULL;
 }
 
+/* Encode a single-bit immediate that selects between #0.5 and #1.0.
+   The fields array specifies which field to use.  */
+const char *
+aarch64_ins_sve_float_half_one (const aarch64_operand *self,
+				const aarch64_opnd_info *info,
+				aarch64_insn *code,
+				const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  if (info->imm.value == 0x3f000000)
+    insert_field (self->fields[0], code, 0, 0);
+  else
+    insert_field (self->fields[0], code, 1, 0);
+  return NULL;
+}
+
+/* Encode a single-bit immediate that selects between #0.5 and #2.0.
+   The fields array specifies which field to use.  */
+const char *
+aarch64_ins_sve_float_half_two (const aarch64_operand *self,
+				const aarch64_opnd_info *info,
+				aarch64_insn *code,
+				const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  if (info->imm.value == 0x3f000000)
+    insert_field (self->fields[0], code, 0, 0);
+  else
+    insert_field (self->fields[0], code, 1, 0);
+  return NULL;
+}
+
+/* Encode a single-bit immediate that selects between #0.0 and #1.0.
+   The fields array specifies which field to use.  */
+const char *
+aarch64_ins_sve_float_zero_one (const aarch64_operand *self,
+				const aarch64_opnd_info *info,
+				aarch64_insn *code,
+				const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  if (info->imm.value == 0)
+    insert_field (self->fields[0], code, 0, 0);
+  else
+    insert_field (self->fields[0], code, 1, 0);
+  return NULL;
+}
+
 /* Miscellaneous encoding functions.  */
 
 /* Encode size[0], i.e. bit 22, for
diff --git a/opcodes/aarch64-asm.h b/opcodes/aarch64-asm.h
index bbd320e..0cce71c 100644
--- a/opcodes/aarch64-asm.h
+++ b/opcodes/aarch64-asm.h
@@ -82,6 +82,9 @@ AARCH64_DECL_OPD_INSERTER (ins_sve_addr_zz_sxtw);
 AARCH64_DECL_OPD_INSERTER (ins_sve_addr_zz_uxtw);
 AARCH64_DECL_OPD_INSERTER (ins_sve_aimm);
 AARCH64_DECL_OPD_INSERTER (ins_sve_asimm);
+AARCH64_DECL_OPD_INSERTER (ins_sve_float_half_one);
+AARCH64_DECL_OPD_INSERTER (ins_sve_float_half_two);
+AARCH64_DECL_OPD_INSERTER (ins_sve_float_zero_one);
 AARCH64_DECL_OPD_INSERTER (ins_sve_index);
 AARCH64_DECL_OPD_INSERTER (ins_sve_limm_mov);
 AARCH64_DECL_OPD_INSERTER (ins_sve_reglist);
diff --git a/opcodes/aarch64-dis-2.c b/opcodes/aarch64-dis-2.c
index 4527456..110cf2e 100644
--- a/opcodes/aarch64-dis-2.c
+++ b/opcodes/aarch64-dis-2.c
@@ -10426,21 +10426,21 @@ aarch64_extract_operand (const aarch64_operand *self,
     case 27:
     case 35:
     case 36:
-    case 135:
-    case 136:
-    case 137:
-    case 138:
     case 139:
     case 140:
     case 141:
     case 142:
-    case 155:
-    case 156:
-    case 157:
-    case 158:
+    case 143:
+    case 144:
+    case 145:
+    case 146:
     case 159:
     case 160:
+    case 161:
+    case 162:
     case 163:
+    case 164:
+    case 167:
       return aarch64_ext_regno (self, info, code, inst);
     case 8:
       return aarch64_ext_regrt_sysins (self, info, code, inst);
@@ -10483,16 +10483,16 @@ aarch64_extract_operand (const aarch64_operand *self,
     case 69:
     case 70:
     case 71:
-    case 132:
-    case 134:
-    case 147:
-    case 148:
-    case 149:
-    case 150:
+    case 136:
+    case 138:
     case 151:
     case 152:
     case 153:
     case 154:
+    case 155:
+    case 156:
+    case 157:
+    case 158:
       return aarch64_ext_imm (self, info, code, inst);
     case 38:
     case 39:
@@ -10504,9 +10504,10 @@ aarch64_extract_operand (const aarch64_operand *self,
     case 43:
       return aarch64_ext_shll_imm (self, info, code, inst);
     case 46:
+    case 129:
       return aarch64_ext_fpimm (self, info, code, inst);
     case 60:
-    case 130:
+    case 134:
       return aarch64_ext_limm (self, info, code, inst);
     case 61:
       return aarch64_ext_aimm (self, info, code, inst);
@@ -10597,22 +10598,28 @@ aarch64_extract_operand (const aarch64_operand *self,
       return aarch64_ext_sve_aimm (self, info, code, inst);
     case 128:
       return aarch64_ext_sve_asimm (self, info, code, inst);
-    case 129:
-      return aarch64_ext_inv_limm (self, info, code, inst);
+    case 130:
+      return aarch64_ext_sve_float_half_one (self, info, code, inst);
     case 131:
-      return aarch64_ext_sve_limm_mov (self, info, code, inst);
+      return aarch64_ext_sve_float_half_two (self, info, code, inst);
+    case 132:
+      return aarch64_ext_sve_float_zero_one (self, info, code, inst);
     case 133:
+      return aarch64_ext_inv_limm (self, info, code, inst);
+    case 135:
+      return aarch64_ext_sve_limm_mov (self, info, code, inst);
+    case 137:
       return aarch64_ext_sve_scale (self, info, code, inst);
-    case 143:
-    case 144:
+    case 147:
+    case 148:
       return aarch64_ext_sve_shlimm (self, info, code, inst);
-    case 145:
-    case 146:
+    case 149:
+    case 150:
       return aarch64_ext_sve_shrimm (self, info, code, inst);
-    case 161:
+    case 165:
       return aarch64_ext_sve_index (self, info, code, inst);
-    case 162:
-    case 164:
+    case 166:
+    case 168:
       return aarch64_ext_sve_reglist (self, info, code, inst);
     default: assert (0); abort ();
     }
diff --git a/opcodes/aarch64-dis.c b/opcodes/aarch64-dis.c
index ed050cd..385286c 100644
--- a/opcodes/aarch64-dis.c
+++ b/opcodes/aarch64-dis.c
@@ -1465,6 +1465,51 @@ aarch64_ext_sve_asimm (const aarch64_operand *self,
 	  && decode_sve_aimm (info, (int8_t) info->imm.value));
 }
 
+/* Decode a single-bit immediate that selects between #0.5 and #1.0.
+   The fields array specifies which field to use.  */
+int
+aarch64_ext_sve_float_half_one (const aarch64_operand *self,
+				aarch64_opnd_info *info, aarch64_insn code,
+				const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  if (extract_field (self->fields[0], code, 0))
+    info->imm.value = 0x3f800000;
+  else
+    info->imm.value = 0x3f000000;
+  info->imm.is_fp = TRUE;
+  return 1;
+}
+
+/* Decode a single-bit immediate that selects between #0.5 and #2.0.
+   The fields array specifies which field to use.  */
+int
+aarch64_ext_sve_float_half_two (const aarch64_operand *self,
+				aarch64_opnd_info *info, aarch64_insn code,
+				const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  if (extract_field (self->fields[0], code, 0))
+    info->imm.value = 0x40000000;
+  else
+    info->imm.value = 0x3f000000;
+  info->imm.is_fp = TRUE;
+  return 1;
+}
+
+/* Decode a single-bit immediate that selects between #0.0 and #1.0.
+   The fields array specifies which field to use.  */
+int
+aarch64_ext_sve_float_zero_one (const aarch64_operand *self,
+				aarch64_opnd_info *info, aarch64_insn code,
+				const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  if (extract_field (self->fields[0], code, 0))
+    info->imm.value = 0x3f800000;
+  else
+    info->imm.value = 0x0;
+  info->imm.is_fp = TRUE;
+  return 1;
+}
+
 /* Decode Zn[MM], where MM has a 7-bit triangular encoding.  The fields
    array specifies which field to use for Zn.  MM is encoded in the
    concatenation of imm5 and SVE_tszh, with imm5 being the less
diff --git a/opcodes/aarch64-dis.h b/opcodes/aarch64-dis.h
index 10983d1..474bc45 100644
--- a/opcodes/aarch64-dis.h
+++ b/opcodes/aarch64-dis.h
@@ -104,6 +104,9 @@ AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_zz_sxtw);
 AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_zz_uxtw);
 AARCH64_DECL_OPD_EXTRACTOR (ext_sve_aimm);
 AARCH64_DECL_OPD_EXTRACTOR (ext_sve_asimm);
+AARCH64_DECL_OPD_EXTRACTOR (ext_sve_float_half_one);
+AARCH64_DECL_OPD_EXTRACTOR (ext_sve_float_half_two);
+AARCH64_DECL_OPD_EXTRACTOR (ext_sve_float_zero_one);
 AARCH64_DECL_OPD_EXTRACTOR (ext_sve_index);
 AARCH64_DECL_OPD_EXTRACTOR (ext_sve_limm_mov);
 AARCH64_DECL_OPD_EXTRACTOR (ext_sve_reglist);
diff --git a/opcodes/aarch64-opc-2.c b/opcodes/aarch64-opc-2.c
index d86e7dc..58c3aed 100644
--- a/opcodes/aarch64-opc-2.c
+++ b/opcodes/aarch64-opc-2.c
@@ -153,6 +153,10 @@ const struct aarch64_operand aarch64_operands[] =
   {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_ZZ_UXTW", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zn,FLD_SVE_Zm_16}, "an address with a vector register offset"},
   {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_AIMM", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_imm9}, "a 9-bit unsigned arithmetic operand"},
   {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_ASIMM", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_imm9}, "a 9-bit signed arithmetic operand"},
+  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_FPIMM8", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_imm8}, "an 8-bit floating-point immediate"},
+  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_I1_HALF_ONE", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_i1}, "either 0.5 or 1.0"},
+  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_I1_HALF_TWO", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_i1}, "either 0.5 or 2.0"},
+  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_I1_ZERO_ONE", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_i1}, "either 0.0 or 1.0"},
   {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_INV_LIMM", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_N,FLD_SVE_immr,FLD_SVE_imms}, "an inverted 13-bit logical immediate"},
   {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_LIMM", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_N,FLD_SVE_immr,FLD_SVE_imms}, "a 13-bit logical immediate"},
   {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_LIMM_MOV", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_N,FLD_SVE_immr,FLD_SVE_imms}, "a 13-bit logical move immediate"},
diff --git a/opcodes/aarch64-opc.c b/opcodes/aarch64-opc.c
index dec7e06..3b0279c 100644
--- a/opcodes/aarch64-opc.c
+++ b/opcodes/aarch64-opc.c
@@ -280,6 +280,7 @@ const aarch64_field fields[] =
     { 16,  5 }, /* SVE_Zm_16: SVE vector register, bits [20,16]. */
     {  5,  5 }, /* SVE_Zn: SVE vector register, bits [9,5].  */
     {  0,  5 }, /* SVE_Zt: SVE vector register, bits [4,0].  */
+    {  5,  1 }, /* SVE_i1: single-bit immediate.  */
     { 16,  3 }, /* SVE_imm3: 3-bit immediate field.  */
     { 16,  4 }, /* SVE_imm4: 4-bit immediate field.  */
     {  5,  5 }, /* SVE_imm5: 5-bit immediate field.  */
@@ -2178,6 +2179,7 @@ operand_general_constraint_met_p (const aarch64_opnd_info *opnds, int idx,
 
 	case AARCH64_OPND_FPIMM:
 	case AARCH64_OPND_SIMD_FPIMM:
+	case AARCH64_OPND_SVE_FPIMM8:
 	  if (opnd->imm.is_fp == 0)
 	    {
 	      set_other_error (mismatch_detail, idx,
@@ -2254,6 +2256,36 @@ operand_general_constraint_met_p (const aarch64_opnd_info *opnds, int idx,
 	  min_value = -128;
 	  goto sve_aimm;
 
+	case AARCH64_OPND_SVE_I1_HALF_ONE:
+	  assert (opnd->imm.is_fp);
+	  if (opnd->imm.value != 0x3f000000 && opnd->imm.value != 0x3f800000)
+	    {
+	      set_other_error (mismatch_detail, idx,
+			       _("floating-point value must be 0.5 or 1.0"));
+	      return 0;
+	    }
+	  break;
+
+	case AARCH64_OPND_SVE_I1_HALF_TWO:
+	  assert (opnd->imm.is_fp);
+	  if (opnd->imm.value != 0x3f000000 && opnd->imm.value != 0x40000000)
+	    {
+	      set_other_error (mismatch_detail, idx,
+			       _("floating-point value must be 0.5 or 2.0"));
+	      return 0;
+	    }
+	  break;
+
+	case AARCH64_OPND_SVE_I1_ZERO_ONE:
+	  assert (opnd->imm.is_fp);
+	  if (opnd->imm.value != 0 && opnd->imm.value != 0x3f800000)
+	    {
+	      set_other_error (mismatch_detail, idx,
+			       _("floating-point value must be 0.0 or 1.0"));
+	      return 0;
+	    }
+	  break;
+
 	case AARCH64_OPND_SVE_INV_LIMM:
 	  {
 	    int esize = aarch64_get_qualifier_esize (opnds[0].qualifier);
@@ -3105,6 +3137,16 @@ aarch64_print_operand (char *buf, size_t size, bfd_vma pc,
       snprintf (buf, size, "#%" PRIi64, opnd->imm.value);
       break;
 
+    case AARCH64_OPND_SVE_I1_HALF_ONE:
+    case AARCH64_OPND_SVE_I1_HALF_TWO:
+    case AARCH64_OPND_SVE_I1_ZERO_ONE:
+      {
+	single_conv_t c;
+	c.i = opnd->imm.value;
+	snprintf (buf, size, "#%.1f", c.f);
+	break;
+      }
+
     case AARCH64_OPND_SVE_PATTERN:
       if (optional_operand_p (opcode, idx)
 	  && opnd->imm.value == get_optional_operand_default_value (opcode))
@@ -3202,6 +3244,7 @@ aarch64_print_operand (char *buf, size_t size, bfd_vma pc,
 
     case AARCH64_OPND_FPIMM:
     case AARCH64_OPND_SIMD_FPIMM:
+    case AARCH64_OPND_SVE_FPIMM8:
       switch (aarch64_get_qualifier_esize (opnds[0].qualifier))
 	{
 	case 2:	/* e.g. FMOV <Hd>, #<imm>.  */
diff --git a/opcodes/aarch64-opc.h b/opcodes/aarch64-opc.h
index 087376e..6c67786 100644
--- a/opcodes/aarch64-opc.h
+++ b/opcodes/aarch64-opc.h
@@ -107,6 +107,7 @@ enum aarch64_field_kind
   FLD_SVE_Zm_16,
   FLD_SVE_Zn,
   FLD_SVE_Zt,
+  FLD_SVE_i1,
   FLD_SVE_imm3,
   FLD_SVE_imm4,
   FLD_SVE_imm5,
diff --git a/opcodes/aarch64-tbl.h b/opcodes/aarch64-tbl.h
index d743e3b..562eea7 100644
--- a/opcodes/aarch64-tbl.h
+++ b/opcodes/aarch64-tbl.h
@@ -2931,6 +2931,14 @@ struct aarch64_opcode aarch64_opcode_table[] =
       "a 9-bit unsigned arithmetic operand")				\
     Y(IMMEDIATE, sve_asimm, "SVE_ASIMM", 0, F(FLD_SVE_imm9),		\
       "a 9-bit signed arithmetic operand")				\
+    Y(IMMEDIATE, fpimm, "SVE_FPIMM8", 0, F(FLD_SVE_imm8),		\
+      "an 8-bit floating-point immediate")				\
+    Y(IMMEDIATE, sve_float_half_one, "SVE_I1_HALF_ONE", 0,		\
+      F(FLD_SVE_i1), "either 0.5 or 1.0")				\
+    Y(IMMEDIATE, sve_float_half_two, "SVE_I1_HALF_TWO", 0,		\
+      F(FLD_SVE_i1), "either 0.5 or 2.0")				\
+    Y(IMMEDIATE, sve_float_zero_one, "SVE_I1_ZERO_ONE", 0,		\
+      F(FLD_SVE_i1), "either 0.0 or 1.0")				\
     Y(IMMEDIATE, inv_limm, "SVE_INV_LIMM", 0,				\
       F(FLD_SVE_N,FLD_SVE_immr,FLD_SVE_imms),				\
       "an inverted 13-bit logical immediate")				\

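For reference, the hex constants checked above are just the raw IEEE-754
single-precision bit patterns of the permitted values (0x3f000000 is 0.5,
0x3f800000 is 1.0, 0x40000000 is 2.0), matching the single_conv_t
reinterpretation used when printing.  A minimal stand-alone sketch, not
part of the patch:

    #include <stdint.h>
    #include <stdio.h>

    /* Reinterpret the stored immediate bits as an IEEE single, much as
       the printing code above does via single_conv_t.  */
    union single_bits { uint32_t i; float f; };

    int
    main (void)
    {
      union single_bits half = { 0x3f000000 };
      union single_bits one  = { 0x3f800000 };
      union single_bits two  = { 0x40000000 };
      printf ("%.1f %.1f %.1f\n", half.f, one.f, two.f);  /* 0.5 1.0 2.0 */
      return 0;
    }
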
^ permalink raw reply	[flat|nested] 76+ messages in thread

* [AArch64][SVE 29/32] Add new SVE core & FP register operands
  2016-08-23  9:05 [AArch64][SVE 00/32] Add support for the ARMv8-A Scalable Vector Extension Richard Sandiford
                   ` (26 preceding siblings ...)
  2016-08-23  9:24 ` [AArch64][SVE 27/32] Add SVE integer immediate operands Richard Sandiford
@ 2016-08-23  9:25 ` Richard Sandiford
  2016-08-25 15:01   ` Richard Earnshaw (lists)
  2016-08-23  9:25 ` [AArch64][SVE 28/32] Add SVE FP immediate operands Richard Sandiford
                   ` (4 subsequent siblings)
  32 siblings, 1 reply; 76+ messages in thread
From: Richard Sandiford @ 2016-08-23  9:25 UTC (permalink / raw)
  To: binutils

SVE uses some new fields to store W, X and scalar FP registers.
This patch adds corresponding operands.
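
The point of the new operands is simply that these registers live at
non-standard bit positions in the SVE encodings.  As a rough sketch
(illustrative only; the real code goes through the generic field
machinery and aarch64_ins/ext_regno below), reading the SVE-position
Rn amounts to:

    #include <stdint.h>

    /* FLD_SVE_Rn is defined as { 16, 5 } below, i.e. bits [20,16],
       whereas the usual Rn field occupies bits [9,5].  */
    unsigned int
    sketch_extract_sve_rn (uint32_t insn)
    {
      return (insn >> 16) & 0x1f;
    }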

OK to install?

Thanks,
Richard


include/opcode/
	* aarch64.h (AARCH64_OPND_SVE_Rm): New aarch64_opnd.
	(AARCH64_OPND_SVE_Rn_SP, AARCH64_OPND_SVE_VZn, AARCH64_OPND_SVE_Vd)
	(AARCH64_OPND_SVE_Vm, AARCH64_OPND_SVE_Vn): Likewise.

opcodes/
	* aarch64-tbl.h (AARCH64_OPERANDS): Add entries for the new SVE core
	and FP register operands.
	* aarch64-opc.h (FLD_SVE_Rm, FLD_SVE_Rn, FLD_SVE_Vd, FLD_SVE_Vm)
	(FLD_SVE_Vn): New aarch64_field_kinds.
	* aarch64-opc.c (fields): Add corresponding entries.
	(aarch64_print_operand): Handle the new SVE core and FP register
	operands.
	* aarch64-opc-2.c: Regenerate.
	* aarch64-asm-2.c: Likewise.
	* aarch64-dis-2.c: Likewise.

gas/
	* config/tc-aarch64.c (parse_operands): Handle the new SVE core
	and FP register operands.

diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
index eddc6f8..15652fa 100644
--- a/gas/config/tc-aarch64.c
+++ b/gas/config/tc-aarch64.c
@@ -5344,11 +5344,13 @@ parse_operands (char *str, const aarch64_opcode *opcode)
 	case AARCH64_OPND_Ra:
 	case AARCH64_OPND_Rt_SYS:
 	case AARCH64_OPND_PAIRREG:
+	case AARCH64_OPND_SVE_Rm:
 	  po_int_reg_or_fail (FALSE, TRUE);
 	  break;
 
 	case AARCH64_OPND_Rd_SP:
 	case AARCH64_OPND_Rn_SP:
+	case AARCH64_OPND_SVE_Rn_SP:
 	  po_int_reg_or_fail (TRUE, FALSE);
 	  break;
 
@@ -5380,6 +5382,10 @@ parse_operands (char *str, const aarch64_opcode *opcode)
 	case AARCH64_OPND_Sd:
 	case AARCH64_OPND_Sn:
 	case AARCH64_OPND_Sm:
+	case AARCH64_OPND_SVE_VZn:
+	case AARCH64_OPND_SVE_Vd:
+	case AARCH64_OPND_SVE_Vm:
+	case AARCH64_OPND_SVE_Vn:
 	  val = aarch64_reg_parse (&str, REG_TYPE_BHSDQ, &rtype, NULL);
 	  if (val == PARSE_FAIL)
 	    {
diff --git a/include/opcode/aarch64.h b/include/opcode/aarch64.h
index 9e7f5b5..8d3fb21 100644
--- a/include/opcode/aarch64.h
+++ b/include/opcode/aarch64.h
@@ -310,6 +310,8 @@ enum aarch64_opnd
   AARCH64_OPND_SVE_Pm,		/* SVE p0-p15 in Pm.  */
   AARCH64_OPND_SVE_Pn,		/* SVE p0-p15 in Pn.  */
   AARCH64_OPND_SVE_Pt,		/* SVE p0-p15 in Pt.  */
+  AARCH64_OPND_SVE_Rm,		/* Integer Rm or ZR, alt. SVE position.  */
+  AARCH64_OPND_SVE_Rn_SP,	/* Integer Rn or SP, alt. SVE position.  */
   AARCH64_OPND_SVE_SHLIMM_PRED,	  /* SVE shift left amount (predicated).  */
   AARCH64_OPND_SVE_SHLIMM_UNPRED, /* SVE shift left amount (unpredicated).  */
   AARCH64_OPND_SVE_SHRIMM_PRED,	  /* SVE shift right amount (predicated).  */
@@ -322,6 +324,10 @@ enum aarch64_opnd
   AARCH64_OPND_SVE_UIMM7,	/* SVE unsigned 7-bit immediate.  */
   AARCH64_OPND_SVE_UIMM8,	/* SVE unsigned 8-bit immediate.  */
   AARCH64_OPND_SVE_UIMM8_53,	/* SVE split unsigned 8-bit immediate.  */
+  AARCH64_OPND_SVE_VZn,		/* Scalar SIMD&FP register in Zn field.  */
+  AARCH64_OPND_SVE_Vd,		/* Scalar SIMD&FP register in Vd.  */
+  AARCH64_OPND_SVE_Vm,		/* Scalar SIMD&FP register in Vm.  */
+  AARCH64_OPND_SVE_Vn,		/* Scalar SIMD&FP register in Vn.  */
   AARCH64_OPND_SVE_Za_5,	/* SVE vector register in Za, bits [9,5].  */
   AARCH64_OPND_SVE_Za_16,	/* SVE vector register in Za, bits [20,16].  */
   AARCH64_OPND_SVE_Zd,		/* SVE vector register in Zd.  */
diff --git a/opcodes/aarch64-asm-2.c b/opcodes/aarch64-asm-2.c
index d9d1981..5dd6a81 100644
--- a/opcodes/aarch64-asm-2.c
+++ b/opcodes/aarch64-asm-2.c
@@ -488,13 +488,19 @@ aarch64_insert_operand (const aarch64_operand *self,
     case 144:
     case 145:
     case 146:
-    case 159:
-    case 160:
+    case 147:
+    case 148:
     case 161:
     case 162:
     case 163:
     case 164:
+    case 165:
+    case 166:
     case 167:
+    case 168:
+    case 169:
+    case 170:
+    case 173:
       return aarch64_ins_regno (self, info, code, inst);
     case 12:
       return aarch64_ins_reg_extended (self, info, code, inst);
@@ -534,14 +540,14 @@ aarch64_insert_operand (const aarch64_operand *self,
     case 71:
     case 136:
     case 138:
-    case 151:
-    case 152:
     case 153:
     case 154:
     case 155:
     case 156:
     case 157:
     case 158:
+    case 159:
+    case 160:
       return aarch64_ins_imm (self, info, code, inst);
     case 38:
     case 39:
@@ -657,16 +663,16 @@ aarch64_insert_operand (const aarch64_operand *self,
       return aarch64_ins_sve_limm_mov (self, info, code, inst);
     case 137:
       return aarch64_ins_sve_scale (self, info, code, inst);
-    case 147:
-    case 148:
-      return aarch64_ins_sve_shlimm (self, info, code, inst);
     case 149:
     case 150:
+      return aarch64_ins_sve_shlimm (self, info, code, inst);
+    case 151:
+    case 152:
       return aarch64_ins_sve_shrimm (self, info, code, inst);
-    case 165:
+    case 171:
       return aarch64_ins_sve_index (self, info, code, inst);
-    case 166:
-    case 168:
+    case 172:
+    case 174:
       return aarch64_ins_sve_reglist (self, info, code, inst);
     default: assert (0); abort ();
     }
diff --git a/opcodes/aarch64-dis-2.c b/opcodes/aarch64-dis-2.c
index 110cf2e..c3bcfdb 100644
--- a/opcodes/aarch64-dis-2.c
+++ b/opcodes/aarch64-dis-2.c
@@ -10434,13 +10434,19 @@ aarch64_extract_operand (const aarch64_operand *self,
     case 144:
     case 145:
     case 146:
-    case 159:
-    case 160:
+    case 147:
+    case 148:
     case 161:
     case 162:
     case 163:
     case 164:
+    case 165:
+    case 166:
     case 167:
+    case 168:
+    case 169:
+    case 170:
+    case 173:
       return aarch64_ext_regno (self, info, code, inst);
     case 8:
       return aarch64_ext_regrt_sysins (self, info, code, inst);
@@ -10485,14 +10491,14 @@ aarch64_extract_operand (const aarch64_operand *self,
     case 71:
     case 136:
     case 138:
-    case 151:
-    case 152:
     case 153:
     case 154:
     case 155:
     case 156:
     case 157:
     case 158:
+    case 159:
+    case 160:
       return aarch64_ext_imm (self, info, code, inst);
     case 38:
     case 39:
@@ -10610,16 +10616,16 @@ aarch64_extract_operand (const aarch64_operand *self,
       return aarch64_ext_sve_limm_mov (self, info, code, inst);
     case 137:
       return aarch64_ext_sve_scale (self, info, code, inst);
-    case 147:
-    case 148:
-      return aarch64_ext_sve_shlimm (self, info, code, inst);
     case 149:
     case 150:
+      return aarch64_ext_sve_shlimm (self, info, code, inst);
+    case 151:
+    case 152:
       return aarch64_ext_sve_shrimm (self, info, code, inst);
-    case 165:
+    case 171:
       return aarch64_ext_sve_index (self, info, code, inst);
-    case 166:
-    case 168:
+    case 172:
+    case 174:
       return aarch64_ext_sve_reglist (self, info, code, inst);
     default: assert (0); abort ();
     }
diff --git a/opcodes/aarch64-opc-2.c b/opcodes/aarch64-opc-2.c
index 58c3aed..6028be4 100644
--- a/opcodes/aarch64-opc-2.c
+++ b/opcodes/aarch64-opc-2.c
@@ -171,6 +171,8 @@ const struct aarch64_operand aarch64_operands[] =
   {AARCH64_OPND_CLASS_PRED_REG, "SVE_Pm", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Pm}, "an SVE predicate register"},
   {AARCH64_OPND_CLASS_PRED_REG, "SVE_Pn", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Pn}, "an SVE predicate register"},
   {AARCH64_OPND_CLASS_PRED_REG, "SVE_Pt", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Pt}, "an SVE predicate register"},
+  {AARCH64_OPND_CLASS_INT_REG, "SVE_Rm", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Rm}, "an integer register or zero"},
+  {AARCH64_OPND_CLASS_INT_REG, "SVE_Rn_SP", OPD_F_MAYBE_SP | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Rn}, "an integer register or SP"},
   {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_SHLIMM_PRED", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_tszh,FLD_SVE_imm5}, "a shift-left immediate operand"},
   {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_SHLIMM_UNPRED", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_tszh,FLD_imm5}, "a shift-left immediate operand"},
   {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_SHRIMM_PRED", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_tszh,FLD_SVE_imm5}, "a shift-right immediate operand"},
@@ -183,6 +185,10 @@ const struct aarch64_operand aarch64_operands[] =
   {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_UIMM7", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_imm7}, "a 7-bit unsigned immediate"},
   {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_UIMM8", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_imm8}, "an 8-bit unsigned immediate"},
   {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_UIMM8_53", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_imm5,FLD_imm3}, "an 8-bit unsigned immediate"},
+  {AARCH64_OPND_CLASS_SIMD_REG, "SVE_VZn", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zn}, "a SIMD register"},
+  {AARCH64_OPND_CLASS_SIMD_REG, "SVE_Vd", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Vd}, "a SIMD register"},
+  {AARCH64_OPND_CLASS_SIMD_REG, "SVE_Vm", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Vm}, "a SIMD register"},
+  {AARCH64_OPND_CLASS_SIMD_REG, "SVE_Vn", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Vn}, "a SIMD register"},
   {AARCH64_OPND_CLASS_SVE_REG, "SVE_Za_5", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Za_5}, "an SVE vector register"},
   {AARCH64_OPND_CLASS_SVE_REG, "SVE_Za_16", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Za_16}, "an SVE vector register"},
   {AARCH64_OPND_CLASS_SVE_REG, "SVE_Zd", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zd}, "an SVE vector register"},
diff --git a/opcodes/aarch64-opc.c b/opcodes/aarch64-opc.c
index 3b0279c..1ad4ccf 100644
--- a/opcodes/aarch64-opc.c
+++ b/opcodes/aarch64-opc.c
@@ -273,6 +273,11 @@ const aarch64_field fields[] =
     { 16,  4 }, /* SVE_Pm: p0-p15, bits [19,16].  */
     {  5,  4 }, /* SVE_Pn: p0-p15, bits [8,5].  */
     {  0,  4 }, /* SVE_Pt: p0-p15, bits [3,0].  */
+    {  5,  5 }, /* SVE_Rm: SVE alternative position for Rm.  */
+    { 16,  5 }, /* SVE_Rn: SVE alternative position for Rn.  */
+    {  0,  5 }, /* SVE_Vd: Scalar SIMD&FP register, bits [4,0].  */
+    {  5,  5 }, /* SVE_Vm: Scalar SIMD&FP register, bits [9,5].  */
+    {  5,  5 }, /* SVE_Vn: Scalar SIMD&FP register, bits [9,5].  */
     {  5,  5 }, /* SVE_Za_5: SVE vector register, bits [9,5].  */
     { 16,  5 }, /* SVE_Za_16: SVE vector register, bits [20,16].  */
     {  0,  5 }, /* SVE_Zd: SVE vector register. bits [4,0].  */
@@ -2949,6 +2954,7 @@ aarch64_print_operand (char *buf, size_t size, bfd_vma pc,
     case AARCH64_OPND_Ra:
     case AARCH64_OPND_Rt_SYS:
     case AARCH64_OPND_PAIRREG:
+    case AARCH64_OPND_SVE_Rm:
       /* The optional-ness of <Xt> in e.g. IC <ic_op>{, <Xt>} is determined by
 	 the <ic_op>, therefore we we use opnd->present to override the
 	 generic optional-ness information.  */
@@ -2966,6 +2972,7 @@ aarch64_print_operand (char *buf, size_t size, bfd_vma pc,
 
     case AARCH64_OPND_Rd_SP:
     case AARCH64_OPND_Rn_SP:
+    case AARCH64_OPND_SVE_Rn_SP:
       assert (opnd->qualifier == AARCH64_OPND_QLF_W
 	      || opnd->qualifier == AARCH64_OPND_QLF_WSP
 	      || opnd->qualifier == AARCH64_OPND_QLF_X
@@ -3028,6 +3035,10 @@ aarch64_print_operand (char *buf, size_t size, bfd_vma pc,
     case AARCH64_OPND_Sd:
     case AARCH64_OPND_Sn:
     case AARCH64_OPND_Sm:
+    case AARCH64_OPND_SVE_VZn:
+    case AARCH64_OPND_SVE_Vd:
+    case AARCH64_OPND_SVE_Vm:
+    case AARCH64_OPND_SVE_Vn:
       snprintf (buf, size, "%s%d", aarch64_get_qualifier_name (opnd->qualifier),
 		opnd->reg.regno);
       break;
diff --git a/opcodes/aarch64-opc.h b/opcodes/aarch64-opc.h
index 6c67786..a7654d0 100644
--- a/opcodes/aarch64-opc.h
+++ b/opcodes/aarch64-opc.h
@@ -100,6 +100,11 @@ enum aarch64_field_kind
   FLD_SVE_Pm,
   FLD_SVE_Pn,
   FLD_SVE_Pt,
+  FLD_SVE_Rm,
+  FLD_SVE_Rn,
+  FLD_SVE_Vd,
+  FLD_SVE_Vm,
+  FLD_SVE_Vn,
   FLD_SVE_Za_5,
   FLD_SVE_Za_16,
   FLD_SVE_Zd,
diff --git a/opcodes/aarch64-tbl.h b/opcodes/aarch64-tbl.h
index 562eea7..988c239 100644
--- a/opcodes/aarch64-tbl.h
+++ b/opcodes/aarch64-tbl.h
@@ -2970,6 +2970,10 @@ struct aarch64_opcode aarch64_opcode_table[] =
       "an SVE predicate register")					\
     Y(PRED_REG, regno, "SVE_Pt", 0, F(FLD_SVE_Pt),			\
       "an SVE predicate register")					\
+    Y(INT_REG, regno, "SVE_Rm", 0, F(FLD_SVE_Rm),			\
+      "an integer register or zero")					\
+    Y(INT_REG, regno, "SVE_Rn_SP", OPD_F_MAYBE_SP, F(FLD_SVE_Rn),	\
+      "an integer register or SP")					\
     Y(IMMEDIATE, sve_shlimm, "SVE_SHLIMM_PRED", 0,			\
       F(FLD_SVE_tszh,FLD_SVE_imm5), "a shift-left immediate operand")	\
     Y(IMMEDIATE, sve_shlimm, "SVE_SHLIMM_UNPRED", 0,			\
@@ -2994,6 +2998,10 @@ struct aarch64_opcode aarch64_opcode_table[] =
       "an 8-bit unsigned immediate")					\
     Y(IMMEDIATE, imm, "SVE_UIMM8_53", 0, F(FLD_imm5,FLD_imm3),		\
       "an 8-bit unsigned immediate")					\
+    Y(SIMD_REG, regno, "SVE_VZn", 0, F(FLD_SVE_Zn), "a SIMD register")	\
+    Y(SIMD_REG, regno, "SVE_Vd", 0, F(FLD_SVE_Vd), "a SIMD register")	\
+    Y(SIMD_REG, regno, "SVE_Vm", 0, F(FLD_SVE_Vm), "a SIMD register")	\
+    Y(SIMD_REG, regno, "SVE_Vn", 0, F(FLD_SVE_Vn), "a SIMD register")	\
     Y(SVE_REG, regno, "SVE_Za_5", 0, F(FLD_SVE_Za_5),			\
       "an SVE vector register")						\
     Y(SVE_REG, regno, "SVE_Za_16", 0, F(FLD_SVE_Za_16),			\

^ permalink raw reply	[flat|nested] 76+ messages in thread

* [AArch64][SVE 30/32] Add SVE instruction classes
  2016-08-23  9:05 [AArch64][SVE 00/32] Add support for the ARMv8-A Scalable Vector Extension Richard Sandiford
                   ` (28 preceding siblings ...)
  2016-08-23  9:25 ` [AArch64][SVE 28/32] Add SVE FP immediate operands Richard Sandiford
@ 2016-08-23  9:26 ` Richard Sandiford
  2016-08-25 15:07   ` Richard Earnshaw (lists)
  2016-08-23  9:29 ` [AArch64][SVE 31/32] Add SVE instructions Richard Sandiford
                   ` (2 subsequent siblings)
  32 siblings, 1 reply; 76+ messages in thread
From: Richard Sandiford @ 2016-08-23  9:26 UTC (permalink / raw)
  To: binutils

The main purpose of the SVE aarch64_insn_classes is to describe how
an index into an aarch64_opnd_qualifier_seq_t is represented in the
instruction encoding.  Other instructions usually use flags for this
information, but (a) we're running out of those and (b) the iclass
would otherwise be unused for SVE.
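
As a concrete example (a simplified sketch, not the patch code itself):
for the sve_size_bhsd class the index into qualifiers_list is just the
value of the two-bit "size" field, so recovering the variant during
disassembly is a plain field extract:

    #include <stdint.h>

    /* Simplified sketch: sve_size_bhsd stores the qualifiers_list index
       directly in FLD_size, bits [23,22]; presumably 0/1/2/3 select the
       .B/.H/.S/.D forms in that order.  */
    unsigned int
    sketch_sve_size_bhsd_variant (uint32_t insn)
    {
      return (insn >> 22) & 0x3;
    }

The other classes (sve_size_hsd, sve_size_sd, sve_cpy, ...) differ only
in which field holds the index and how it is biased, as the new
aarch64_encode/decode_variant_using_iclass functions below show.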

OK to install?

Thanks,
Richard


include/opcode/
	* aarch64.h (sve_cpy, sve_index, sve_limm, sve_misc, sve_movprfx)
	(sve_pred_zm, sve_shift_pred, sve_shift_unpred, sve_size_bhs)
	(sve_size_bhsd, sve_size_hsd, sve_size_sd): New aarch64_insn_classes.

opcodes/
	* aarch64-opc.h (FLD_SVE_M_4, FLD_SVE_M_14, FLD_SVE_M_16)
	(FLD_SVE_sz, FLD_SVE_tsz, FLD_SVE_tszl_8, FLD_SVE_tszl_19): New
	aarch64_field_kinds.
	* aarch64-opc.c (fields): Add corresponding entries.
	* aarch64-asm.c (aarch64_get_variant): New function.
	(aarch64_encode_variant_using_iclass): Likewise.
	(aarch64_opcode_encode): Call it.
	* aarch64-dis.c (aarch64_decode_variant_using_iclass): New function.
	(aarch64_opcode_decode): Call it.

diff --git a/include/opcode/aarch64.h b/include/opcode/aarch64.h
index 8d3fb21..01e6b2c 100644
--- a/include/opcode/aarch64.h
+++ b/include/opcode/aarch64.h
@@ -485,6 +485,18 @@ enum aarch64_insn_class
   movewide,
   pcreladdr,
   ic_system,
+  sve_cpy,
+  sve_index,
+  sve_limm,
+  sve_misc,
+  sve_movprfx,
+  sve_pred_zm,
+  sve_shift_pred,
+  sve_shift_unpred,
+  sve_size_bhs,
+  sve_size_bhsd,
+  sve_size_hsd,
+  sve_size_sd,
   testbranch,
 };
 
diff --git a/opcodes/aarch64-asm.c b/opcodes/aarch64-asm.c
index fd356f4..78fd272 100644
--- a/opcodes/aarch64-asm.c
+++ b/opcodes/aarch64-asm.c
@@ -1140,6 +1140,27 @@ encode_fcvt (aarch64_inst *inst)
   return;
 }
 
+/* Return the index in qualifiers_list that INST is using.  Should only
+   be called once the qualifiers are known to be valid.  */
+
+static int
+aarch64_get_variant (struct aarch64_inst *inst)
+{
+  int i, nops, variant;
+
+  nops = aarch64_num_of_operands (inst->opcode);
+  for (variant = 0; variant < AARCH64_MAX_QLF_SEQ_NUM; ++variant)
+    {
+      for (i = 0; i < nops; ++i)
+	if (inst->opcode->qualifiers_list[variant][i]
+	    != inst->operands[i].qualifier)
+	  break;
+      if (i == nops)
+	return variant;
+    }
+  abort ();
+}
+
 /* Do miscellaneous encodings that are not common enough to be driven by
    flags.  */
 
@@ -1318,6 +1339,65 @@ do_special_encoding (struct aarch64_inst *inst)
   DEBUG_TRACE ("exit with coding 0x%x", (uint32_t) inst->value);
 }
 
+/* Some instructions (including all SVE ones) use the instruction class
+   to describe how a qualifiers_list index is represented in the instruction
+   encoding.  If INST is such an instruction, encode the chosen qualifier
+   variant.  */
+
+static void
+aarch64_encode_variant_using_iclass (struct aarch64_inst *inst)
+{
+  switch (inst->opcode->iclass)
+    {
+    case sve_cpy:
+      insert_fields (&inst->value, aarch64_get_variant (inst),
+		     0, 2, FLD_SVE_M_14, FLD_size);
+      break;
+
+    case sve_index:
+    case sve_shift_pred:
+    case sve_shift_unpred:
+      /* For indices and shift amounts, the variant is encoded as
+	 part of the immediate.  */
+      break;
+
+    case sve_limm:
+      /* For sve_limm, the .B, .H, and .S forms are just a convenience
+	 and depend on the immediate.  They don't have a separate
+	 encoding.  */
+      break;
+
+    case sve_misc:
+      /* sve_misc instructions have only a single variant.  */
+      break;
+
+    case sve_movprfx:
+      insert_fields (&inst->value, aarch64_get_variant (inst),
+		     0, 2, FLD_SVE_M_16, FLD_size);
+      break;
+
+    case sve_pred_zm:
+      insert_field (FLD_SVE_M_4, &inst->value, aarch64_get_variant (inst), 0);
+      break;
+
+    case sve_size_bhs:
+    case sve_size_bhsd:
+      insert_field (FLD_size, &inst->value, aarch64_get_variant (inst), 0);
+      break;
+
+    case sve_size_hsd:
+      insert_field (FLD_size, &inst->value, aarch64_get_variant (inst) + 1, 0);
+      break;
+
+    case sve_size_sd:
+      insert_field (FLD_SVE_sz, &inst->value, aarch64_get_variant (inst), 0);
+      break;
+
+    default:
+      break;
+    }
+}
+
 /* Converters converting an alias opcode instruction to its real form.  */
 
 /* ROR <Wd>, <Ws>, #<shift>
@@ -1686,6 +1766,10 @@ aarch64_opcode_encode (const aarch64_opcode *opcode,
   if (opcode_has_special_coder (opcode))
     do_special_encoding (inst);
 
+  /* Possibly use the instruction class to encode the chosen qualifier
+     variant.  */
+  aarch64_encode_variant_using_iclass (inst);
+
 encoding_exit:
   DEBUG_TRACE ("exit with %s", opcode->name);
 
diff --git a/opcodes/aarch64-dis.c b/opcodes/aarch64-dis.c
index 385286c..f84f216 100644
--- a/opcodes/aarch64-dis.c
+++ b/opcodes/aarch64-dis.c
@@ -2444,6 +2444,105 @@ determine_disassembling_preference (struct aarch64_inst *inst)
     }
 }
 
+/* Some instructions (including all SVE ones) use the instruction class
+   to describe how a qualifiers_list index is represented in the instruction
+   encoding.  If INST is such an instruction, decode the appropriate fields
+   and fill in the operand qualifiers accordingly.  Return true if no
+   problems are found.  */
+
+static bfd_boolean
+aarch64_decode_variant_using_iclass (aarch64_inst *inst)
+{
+  int i, variant;
+
+  variant = 0;
+  switch (inst->opcode->iclass)
+    {
+    case sve_cpy:
+      variant = extract_fields (inst->value, 0, 2, FLD_size, FLD_SVE_M_14);
+      break;
+
+    case sve_index:
+      i = extract_field (FLD_SVE_tsz, inst->value, 0);
+      if (i == 0)
+	return FALSE;
+      while ((i & 1) == 0)
+	{
+	  i >>= 1;
+	  variant += 1;
+	}
+      break;
+
+    case sve_limm:
+      /* Pick the smallest applicable element size.  */
+      if ((inst->value & 0x20600) == 0x600)
+	variant = 0;
+      else if ((inst->value & 0x20400) == 0x400)
+	variant = 1;
+      else if ((inst->value & 0x20000) == 0)
+	variant = 2;
+      else
+	variant = 3;
+      break;
+
+    case sve_misc:
+      /* sve_misc instructions have only a single variant.  */
+      break;
+
+    case sve_movprfx:
+      variant = extract_fields (inst->value, 0, 2, FLD_size, FLD_SVE_M_16);
+      break;
+
+    case sve_pred_zm:
+      variant = extract_field (FLD_SVE_M_4, inst->value, 0);
+      break;
+
+    case sve_shift_pred:
+      i = extract_fields (inst->value, 0, 2, FLD_SVE_tszh, FLD_SVE_tszl_8);
+    sve_shift:
+      if (i == 0)
+	return FALSE;
+      while (i != 1)
+	{
+	  i >>= 1;
+	  variant += 1;
+	}
+      break;
+
+    case sve_shift_unpred:
+      i = extract_fields (inst->value, 0, 2, FLD_SVE_tszh, FLD_SVE_tszl_19);
+      goto sve_shift;
+
+    case sve_size_bhs:
+      variant = extract_field (FLD_size, inst->value, 0);
+      if (variant >= 3)
+	return FALSE;
+      break;
+
+    case sve_size_bhsd:
+      variant = extract_field (FLD_size, inst->value, 0);
+      break;
+
+    case sve_size_hsd:
+      i = extract_field (FLD_size, inst->value, 0);
+      if (i < 1)
+	return FALSE;
+      variant = i - 1;
+      break;
+
+    case sve_size_sd:
+      variant = extract_field (FLD_SVE_sz, inst->value, 0);
+      break;
+
+    default:
+      /* No mapping between instruction class and qualifiers.  */
+      return TRUE;
+    }
+
+  for (i = 0; i < AARCH64_MAX_OPND_NUM; ++i)
+    inst->operands[i].qualifier = inst->opcode->qualifiers_list[variant][i];
+  return TRUE;
+}
 /* Decode the CODE according to OPCODE; fill INST.  Return 0 if the decoding
    fails, which meanes that CODE is not an instruction of OPCODE; otherwise
    return 1.
@@ -2491,6 +2590,14 @@ aarch64_opcode_decode (const aarch64_opcode *opcode, const aarch64_insn code,
       goto decode_fail;
     }
 
+  /* Possibly use the instruction class to determine the correct
+     qualifier.  */
+  if (!aarch64_decode_variant_using_iclass (inst))
+    {
+      DEBUG_TRACE ("iclass-based decoder FAIL");
+      goto decode_fail;
+    }
+
   /* Call operand decoders.  */
   for (i = 0; i < AARCH64_MAX_OPND_NUM; ++i)
     {
diff --git a/opcodes/aarch64-opc.c b/opcodes/aarch64-opc.c
index 1ad4ccf..2eb2a81 100644
--- a/opcodes/aarch64-opc.c
+++ b/opcodes/aarch64-opc.c
@@ -264,6 +264,9 @@ const aarch64_field fields[] =
     { 31,  1 },	/* b5: in the test bit and branch instructions.  */
     { 19,  5 },	/* b40: in the test bit and branch instructions.  */
     { 10,  6 },	/* scale: in the fixed-point scalar to fp converting inst.  */
+    {  4,  1 }, /* SVE_M_4: Merge/zero select, bit 4.  */
+    { 14,  1 }, /* SVE_M_14: Merge/zero select, bit 14.  */
+    { 16,  1 }, /* SVE_M_16: Merge/zero select, bit 16.  */
     { 17,  1 }, /* SVE_N: SVE equivalent of N.  */
     {  0,  4 }, /* SVE_Pd: p0-p15, bits [3,0].  */
     { 10,  3 }, /* SVE_Pg3: p0-p7, bits [12,10].  */
@@ -299,7 +302,11 @@ const aarch64_field fields[] =
     { 10,  2 }, /* SVE_msz: 2-bit shift amount for ADR.  */
     {  5,  5 }, /* SVE_pattern: vector pattern enumeration.  */
     {  0,  4 }, /* SVE_prfop: prefetch operation for SVE PRF[BHWD].  */
+    { 22,  1 }, /* SVE_sz: 1-bit element size select.  */
+    { 16,  4 }, /* SVE_tsz: triangular size select.  */
     { 22,  2 }, /* SVE_tszh: triangular size select high, bits [23,22].  */
+    {  8,  2 }, /* SVE_tszl_8: triangular size select low, bits [9,8].  */
+    { 19,  2 }, /* SVE_tszl_19: triangular size select low, bits [20,19].  */
     { 14,  1 }, /* SVE_xs_14: UXTW/SXTW select (bit 14).  */
     { 22,  1 }  /* SVE_xs_22: UXTW/SXTW select (bit 22).  */
 };
diff --git a/opcodes/aarch64-opc.h b/opcodes/aarch64-opc.h
index a7654d0..0c3d90e 100644
--- a/opcodes/aarch64-opc.h
+++ b/opcodes/aarch64-opc.h
@@ -91,6 +91,9 @@ enum aarch64_field_kind
   FLD_b5,
   FLD_b40,
   FLD_scale,
+  FLD_SVE_M_4,
+  FLD_SVE_M_14,
+  FLD_SVE_M_16,
   FLD_SVE_N,
   FLD_SVE_Pd,
   FLD_SVE_Pg3,
@@ -126,7 +129,11 @@ enum aarch64_field_kind
   FLD_SVE_msz,
   FLD_SVE_pattern,
   FLD_SVE_prfop,
+  FLD_SVE_sz,
+  FLD_SVE_tsz,
   FLD_SVE_tszh,
+  FLD_SVE_tszl_8,
+  FLD_SVE_tszl_19,
   FLD_SVE_xs_14,
   FLD_SVE_xs_22,
 };

^ permalink raw reply	[flat|nested] 76+ messages in thread

* [AArch64][SVE 31/32] Add SVE instructions
  2016-08-23  9:05 [AArch64][SVE 00/32] Add support for the ARMv8-A Scalable Vector Extension Richard Sandiford
                   ` (29 preceding siblings ...)
  2016-08-23  9:26 ` [AArch64][SVE 30/32] Add SVE instruction classes Richard Sandiford
@ 2016-08-23  9:29 ` Richard Sandiford
  2016-08-25 15:18   ` Richard Earnshaw (lists)
  2016-08-23  9:31 ` [AArch64][SVE 32/32] Add SVE tests Richard Sandiford
  2016-08-30 13:04 ` [AArch64][SVE 00/32] Add support for the ARMv8-A Scalable Vector Extension Nick Clifton
  32 siblings, 1 reply; 76+ messages in thread
From: Richard Sandiford @ 2016-08-23  9:29 UTC (permalink / raw)
  To: binutils

[-- Attachment #1: Type: text/plain, Size: 2560 bytes --]

This patch adds the SVE instruction definitions and associated OP_*
enum values.

OK to install?

Thanks,
Richard


include/opcode/
	* aarch64.h (AARCH64_FEATURE_SVE): New macro.
	(OP_MOV_P_P, OP_MOV_Z_P_Z, OP_MOV_Z_V, OP_MOV_Z_Z, OP_MOV_Z_Zi)
	(OP_MOVM_P_P_P, OP_MOVS_P_P, OP_MOVZS_P_P_P, OP_MOVZ_P_P_P)
	(OP_NOTS_P_P_P_Z, OP_NOT_P_P_P_Z): New aarch64_ops.

opcodes/
	* aarch64-tbl.h (OP_SVE_B, OP_SVE_BB, OP_SVE_BBBU, OP_SVE_BMB)
	(OP_SVE_BPB, OP_SVE_BUB, OP_SVE_BUBB, OP_SVE_BUU, OP_SVE_BZ)
	(OP_SVE_BZB, OP_SVE_BZBB, OP_SVE_BZU, OP_SVE_DD, OP_SVE_DDD)
	(OP_SVE_DMD, OP_SVE_DMH, OP_SVE_DMS, OP_SVE_DU, OP_SVE_DUD, OP_SVE_DUU)
	(OP_SVE_DUV_BHS, OP_SVE_DUV_BHSD, OP_SVE_DZD, OP_SVE_DZU, OP_SVE_HB)
	(OP_SVE_HMD, OP_SVE_HMS, OP_SVE_HU, OP_SVE_HUU, OP_SVE_HZU, OP_SVE_RR)
	(OP_SVE_RURV_BHSD, OP_SVE_RUV_BHSD, OP_SVE_SMD, OP_SVE_SMH, OP_SVE_SMS)
	(OP_SVE_SU, OP_SVE_SUS, OP_SVE_SUU, OP_SVE_SZS, OP_SVE_SZU, OP_SVE_UB)
	(OP_SVE_UUD, OP_SVE_UUS, OP_SVE_VMR_BHSD, OP_SVE_VMU_SD)
	(OP_SVE_VMVD_BHS, OP_SVE_VMVU_BHSD, OP_SVE_VMVU_SD, OP_SVE_VMVV_BHSD)
	(OP_SVE_VMVV_SD, OP_SVE_VMV_BHSD, OP_SVE_VMV_HSD, OP_SVE_VMV_SD)
	(OP_SVE_VM_SD, OP_SVE_VPU_BHSD, OP_SVE_VPV_BHSD, OP_SVE_VRR_BHSD)
	(OP_SVE_VRU_BHSD, OP_SVE_VR_BHSD, OP_SVE_VUR_BHSD, OP_SVE_VUU_BHSD)
	(OP_SVE_VUVV_BHSD, OP_SVE_VUVV_SD, OP_SVE_VUV_BHSD, OP_SVE_VUV_SD)
	(OP_SVE_VU_BHSD, OP_SVE_VU_HSD, OP_SVE_VU_SD, OP_SVE_VVD_BHS)
	(OP_SVE_VVU_BHSD, OP_SVE_VVVU_SD, OP_SVE_VVV_BHSD, OP_SVE_VVV_SD)
	(OP_SVE_VV_BHSD, OP_SVE_VV_HSD_BHS, OP_SVE_VV_SD, OP_SVE_VWW_BHSD)
	(OP_SVE_VXX_BHSD, OP_SVE_VZVD_BHS, OP_SVE_VZVU_BHSD, OP_SVE_VZVV_BHSD)
	(OP_SVE_VZVV_SD, OP_SVE_VZV_SD, OP_SVE_V_SD, OP_SVE_WU, OP_SVE_WV_BHSD)
	(OP_SVE_XU, OP_SVE_XUV_BHSD, OP_SVE_XVW_BHSD, OP_SVE_XV_BHSD)
	(OP_SVE_XWU, OP_SVE_XXU): New macros.
	(aarch64_feature_sve): New variable.
	(SVE): New macro.
	(_SVE_INSN): Likewise.
	(aarch64_opcode_table): Add SVE instructions.
	* aarch64-opc.h (extract_fields): Declare.
	* aarch64-opc-2.c: Regenerate.
	* aarch64-asm.c (do_misc_encoding): Handle the new SVE aarch64_ops.
	* aarch64-asm-2.c: Regenerate.
	* aarch64-dis.c (extract_fields): Make global.
	(do_misc_decoding): Handle the new SVE aarch64_ops.
	* aarch64-dis-2.c: Regenerate.

gas/
	* doc/c-aarch64.texi: Document the "sve" feature.
	* config/tc-aarch64.c (REG_TYPE_R_Z_BHSDQ_VZP): New register type.
	(get_reg_expected_msg): Handle it.
	(aarch64_check_reg_type): Likewise.
	(parse_operands): When parsing operands of an SVE instruction,
	disallow immediates that match REG_TYPE_R_Z_BHSDQ_VZP.
	(aarch64_features): Add an entry for SVE.


[-- Attachment #2: sve-31.diff.gz --]
[-- Type: application/octet-stream, Size: 31842 bytes --]

^ permalink raw reply	[flat|nested] 76+ messages in thread

* [AArch64][SVE 32/32] Add SVE tests
  2016-08-23  9:05 [AArch64][SVE 00/32] Add support for the ARMv8-A Scalable Vector Extension Richard Sandiford
                   ` (30 preceding siblings ...)
  2016-08-23  9:29 ` [AArch64][SVE 31/32] Add SVE instructions Richard Sandiford
@ 2016-08-23  9:31 ` Richard Sandiford
  2016-08-25 15:23   ` Richard Earnshaw (lists)
  2016-08-30 13:04 ` [AArch64][SVE 00/32] Add support for the ARMv8-A Scalable Vector Extension Nick Clifton
  32 siblings, 1 reply; 76+ messages in thread
From: Richard Sandiford @ 2016-08-23  9:31 UTC (permalink / raw)
  To: binutils

[-- Attachment #1: Type: text/plain, Size: 670 bytes --]

This patch adds new tests for SVE.  It also extends diagnostic.[sl] with
checks for some inappropriate uses of MUL and MUL VL in base AArch64
instructions.

OK to install?

Thanks,
Richard


gas/testsuite/
	* gas/aarch64/diagnostic.s, gas/aarch64/diagnostic.l: Add tests for
	invalid uses of MUL VL and MUL in base AArch64 instructions.
	* gas/aarch64/sve-add.s, gas/aarch64/sve-add.d, gas/aarch64/sve-dup.s,
	gas/aarch64/sve-dup.d, gas/aarch64/sve-invalid.s,
	gas/aarch64/sve-invalid.d, gas/aarch64/sve-invalid.l,
	gas/aarch64/sve-reg-diagnostic.s, gas/aarch64/sve-reg-diagnostic.d,
	gas/aarch64/sve-reg-diagnostic.l, gas/aarch64/sve.s,
	gas/aarch64/sve.d: New tests.


[-- Attachment #2: sve-32.diff.gz --]
[-- Type: application/octet-stream, Size: 246917 bytes --]

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [AArch64][SVE 01/32] Remove parse_neon_operand_type
  2016-08-23  9:06 ` [AArch64][SVE 01/32] Remove parse_neon_operand_type Richard Sandiford
@ 2016-08-23 14:28   ` Richard Earnshaw (lists)
  0 siblings, 0 replies; 76+ messages in thread
From: Richard Earnshaw (lists) @ 2016-08-23 14:28 UTC (permalink / raw)
  To: binutils, richard.sandiford

On 23/08/16 10:06, Richard Sandiford wrote:
> A false return from parse_neon_operand_type had an overloaded
> meaning: either the parsing failed, or there was nothing to parse
> (which isn't necessarily an error).  The only caller, parse_typed_reg,
> would therefore not consume the suffix if it was invalid but instead
> (successfully) parse the register without a suffix.  It would still
> leave inst.parsing_error with an error about the invalid suffix.
> 
> It seems wrong for a successful parse to leave an error message,
> so this patch makes parse_typed_reg return PARSE_FAIL instead.
> 
> The patch doesn't seem to make much difference in practice.
> Most possible follow-on errors use set_first_error and so the
> error about the suffix tended to win despite the successful parse.
> 
> OK to install?
> 
> Thanks,
> Richard
> 
> 
> gas/
> 	* config/tc-aarch64.c (parse_neon_operand_type): Delete.
> 	(parse_typed_reg): Call parse_neon_type_for_operand directly.
> 

OK.

R.

> diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
> index 34fdc53..ce8e713 100644
> --- a/gas/config/tc-aarch64.c
> +++ b/gas/config/tc-aarch64.c
> @@ -821,31 +821,6 @@ elt_size:
>    return TRUE;
>  }
>  
> -/* Parse a single type, e.g. ".8b", leading period included.
> -   Only applicable to Vn registers.
> -
> -   Return TRUE on success; otherwise return FALSE.  */
> -static bfd_boolean
> -parse_neon_operand_type (struct neon_type_el *vectype, char **ccp)
> -{
> -  char *str = *ccp;
> -
> -  if (*str == '.')
> -    {
> -      if (! parse_neon_type_for_operand (vectype, &str))
> -	{
> -	  first_error (_("vector type expected"));
> -	  return FALSE;
> -	}
> -    }
> -  else
> -    return FALSE;
> -
> -  *ccp = str;
> -
> -  return TRUE;
> -}
> -
>  /* Parse a register of the type TYPE.
>  
>     Return PARSE_FAIL if the string pointed by *CCP is not a valid register
> @@ -889,9 +864,11 @@ parse_typed_reg (char **ccp, aarch64_reg_type type, aarch64_reg_type *rtype,
>      }
>    type = reg->type;
>  
> -  if (type == REG_TYPE_VN
> -      && parse_neon_operand_type (&parsetype, &str))
> +  if (type == REG_TYPE_VN && *str == '.')
>      {
> +      if (!parse_neon_type_for_operand (&parsetype, &str))
> +	return PARSE_FAIL;
> +
>        /* Register if of the form Vn.[bhsdq].  */
>        is_typed_vecreg = TRUE;
>  
> 

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [AArch64][SVE 02/32] Avoid hard-coded limit in indented_print
  2016-08-23  9:06 ` [AArch64][SVE 02/32] Avoid hard-coded limit in indented_print Richard Sandiford
@ 2016-08-23 14:35   ` Richard Earnshaw (lists)
  0 siblings, 0 replies; 76+ messages in thread
From: Richard Earnshaw (lists) @ 2016-08-23 14:35 UTC (permalink / raw)
  To: binutils, richard.sandiford

On 23/08/16 10:06, Richard Sandiford wrote:
> The maximum indentation needed by aarch64-gen.c grows as more
> instructions are added to aarch64-tbl.h.  Rather than having to
> increase the indentation limit to a higher value, it seemed better
> to replace it with "%*s".
> 
> OK to install?
> 
> Thanks,
> Richard
> 
> 
> opcodes/
> 	* aarch64-gen.c (indented_print): Avoid hard-coded indentation limit.
> 
> diff --git a/opcodes/aarch64-gen.c b/opcodes/aarch64-gen.c
> index ed0834a..b87dea4 100644
> --- a/opcodes/aarch64-gen.c
> +++ b/opcodes/aarch64-gen.c
> @@ -378,13 +378,9 @@ initialize_decoder_tree (void)
>  static void __attribute__ ((format (printf, 2, 3)))
>  indented_print (unsigned int indent, const char *format, ...)
>  {
> -  /* 80 number of spaces pluc a NULL terminator.  */
> -  static const char spaces[81] =
> -    "                                                                                ";
>    va_list ap;
>    va_start (ap, format);
> -  assert (indent <= 80);
> -  printf ("%s", &spaces[80 - indent]);
> +  printf ("%*s", indent, "");
>    vprintf (format, ap);
>    va_end (ap);
>  }
> 

According to printf(3), the field-width argument (INDENT here) must be of
type 'int' (with special treatment of negative values).  So for portability
and to ensure we don't get any warnings, I think we need a cast here.
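
In other words, presumably something like:

    printf ("%*s", (int) indent, "");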

Otherwise OK.

R.

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [AArch64][SVE 03/32] Rename neon_el_type to vector_el_type
  2016-08-23  9:07 ` [AArch64][SVE 03/32] Rename neon_el_type to vector_el_type Richard Sandiford
@ 2016-08-23 14:36   ` Richard Earnshaw (lists)
  0 siblings, 0 replies; 76+ messages in thread
From: Richard Earnshaw (lists) @ 2016-08-23 14:36 UTC (permalink / raw)
  To: binutils, richard.sandiford

On 23/08/16 10:07, Richard Sandiford wrote:
> Later patches will add SVEisms to neon_el_type, so this patch renames
> it to something more generic.
> 
> OK to install?
> 

OK

R.

> Thanks,
> Richard
> 
> 
> gas/
> 	* config/tc-aarch64.c (neon_el_type: Rename to...
> 	(vector_el_type): ...this.
> 	(neon_type_el): Update accordingly.
> 	(parse_neon_type_for_operand): Likewise.
> 	(vectype_to_qualifier): Likewise.
> 
> diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
> index ce8e713..de1a74d 100644
> --- a/gas/config/tc-aarch64.c
> +++ b/gas/config/tc-aarch64.c
> @@ -76,7 +76,7 @@ static enum aarch64_abi_type aarch64_abi = AARCH64_ABI_LP64;
>  #define ilp32_p		(aarch64_abi == AARCH64_ABI_ILP32)
>  #endif
>  
> -enum neon_el_type
> +enum vector_el_type
>  {
>    NT_invtype = -1,
>    NT_b,
> @@ -92,7 +92,7 @@ enum neon_el_type
>  
>  struct neon_type_el
>  {
> -  enum neon_el_type type;
> +  enum vector_el_type type;
>    unsigned char defined;
>    unsigned width;
>    int64_t index;
> @@ -752,7 +752,7 @@ parse_neon_type_for_operand (struct neon_type_el *parsed_type, char **str)
>    char *ptr = *str;
>    unsigned width;
>    unsigned element_size;
> -  enum neon_el_type type;
> +  enum vector_el_type type;
>  
>    /* skip '.' */
>    ptr++;
> @@ -4637,7 +4637,7 @@ opcode_lookup (char **str)
>  static inline aarch64_opnd_qualifier_t
>  vectype_to_qualifier (const struct neon_type_el *vectype)
>  {
> -  /* Element size in bytes indexed by neon_el_type.  */
> +  /* Element size in bytes indexed by vector_el_type.  */
>    const unsigned char ele_size[5]
>      = {1, 2, 4, 8, 16};
>    const unsigned int ele_base [5] =
> 

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [AArch64][SVE 05/32] Rename parse_neon_type_for_operand
  2016-08-23  9:08 ` [AArch64][SVE 05/32] Rename parse_neon_type_for_operand Richard Sandiford
@ 2016-08-23 14:37   ` Richard Earnshaw (lists)
  0 siblings, 0 replies; 76+ messages in thread
From: Richard Earnshaw (lists) @ 2016-08-23 14:37 UTC (permalink / raw)
  To: binutils, richard.sandiford

On 23/08/16 10:08, Richard Sandiford wrote:
> Generalise the name of parse_neon_type_for_operand to
> parse_vector_type_for_operand.  Later patches will add SVEisms to it.
> 
> OK to install?
> 
> Thanks,
> Richard
> 
> 
> gas/
> 	* config/tc-aarch64.c (parse_neon_type_for_operand): Rename to...
> 	(parse_vector_type_for_operand): ...this.
> 	(parse_typed_reg): Update accordingly.

OK

R.

> 
> diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
> index db30ab4..c425418 100644
> --- a/gas/config/tc-aarch64.c
> +++ b/gas/config/tc-aarch64.c
> @@ -747,7 +747,7 @@ aarch64_reg_parse_32_64 (char **ccp, int reject_sp, int reject_rz,
>     8b 16b 2h 4h 8h 2s 4s 1d 2d
>     b h s d q  */
>  static bfd_boolean
> -parse_neon_type_for_operand (struct vector_type_el *parsed_type, char **str)
> +parse_vector_type_for_operand (struct vector_type_el *parsed_type, char **str)
>  {
>    char *ptr = *str;
>    unsigned width;
> @@ -866,7 +866,7 @@ parse_typed_reg (char **ccp, aarch64_reg_type type, aarch64_reg_type *rtype,
>  
>    if (type == REG_TYPE_VN && *str == '.')
>      {
> -      if (!parse_neon_type_for_operand (&parsetype, &str))
> +      if (!parse_vector_type_for_operand (&parsetype, &str))
>  	return PARSE_FAIL;
>  
>        /* Register if of the form Vn.[bhsdq].  */
> 

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [AArch64][SVE 04/32] Rename neon_type_el to vector_type_el
  2016-08-23  9:07 ` [AArch64][SVE 04/32] Rename neon_type_el to vector_type_el Richard Sandiford
@ 2016-08-23 14:37   ` Richard Earnshaw (lists)
  0 siblings, 0 replies; 76+ messages in thread
From: Richard Earnshaw (lists) @ 2016-08-23 14:37 UTC (permalink / raw)
  To: binutils, richard.sandiford

On 23/08/16 10:07, Richard Sandiford wrote:
> Similar to the previous patch, but this time for the neon_type_el
> structure.
> 
> OK to install?
> 
> Thanks,
> Richard
> 
> 
> gas/
> 	* config/tc-aarch64.c (neon_type_el): Rename to...
> 	(vector_type_el): ...this.
> 	(parse_neon_type_for_operand): Update accordingly.
> 	(parse_typed_reg): Likewise.
> 	(aarch64_reg_parse): Likewise.
> 	(vectype_to_qualifier): Likewise.
> 	(parse_operands): Likewise.
> 	(eq_neon_type_el): Likewise.  Rename to...
> 	(eq_vector_type_el): ...this.
> 	(parse_neon_reg_list): Update accordingly.
> 

OK.

R.

> diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
> index de1a74d..db30ab4 100644
> --- a/gas/config/tc-aarch64.c
> +++ b/gas/config/tc-aarch64.c
> @@ -86,11 +86,11 @@ enum vector_el_type
>    NT_q
>  };
>  
> -/* Bits for DEFINED field in neon_type_el.  */
> +/* Bits for DEFINED field in vector_type_el.  */
>  #define NTA_HASTYPE  1
>  #define NTA_HASINDEX 2
>  
> -struct neon_type_el
> +struct vector_type_el
>  {
>    enum vector_el_type type;
>    unsigned char defined;
> @@ -747,7 +747,7 @@ aarch64_reg_parse_32_64 (char **ccp, int reject_sp, int reject_rz,
>     8b 16b 2h 4h 8h 2s 4s 1d 2d
>     b h s d q  */
>  static bfd_boolean
> -parse_neon_type_for_operand (struct neon_type_el *parsed_type, char **str)
> +parse_neon_type_for_operand (struct vector_type_el *parsed_type, char **str)
>  {
>    char *ptr = *str;
>    unsigned width;
> @@ -835,12 +835,12 @@ elt_size:
>  
>  static int
>  parse_typed_reg (char **ccp, aarch64_reg_type type, aarch64_reg_type *rtype,
> -		 struct neon_type_el *typeinfo, bfd_boolean in_reg_list)
> +		 struct vector_type_el *typeinfo, bfd_boolean in_reg_list)
>  {
>    char *str = *ccp;
>    const reg_entry *reg = parse_reg (&str);
> -  struct neon_type_el atype;
> -  struct neon_type_el parsetype;
> +  struct vector_type_el atype;
> +  struct vector_type_el parsetype;
>    bfd_boolean is_typed_vecreg = FALSE;
>  
>    atype.defined = 0;
> @@ -955,9 +955,9 @@ parse_typed_reg (char **ccp, aarch64_reg_type type, aarch64_reg_type *rtype,
>  
>  static int
>  aarch64_reg_parse (char **ccp, aarch64_reg_type type,
> -		   aarch64_reg_type *rtype, struct neon_type_el *vectype)
> +		   aarch64_reg_type *rtype, struct vector_type_el *vectype)
>  {
> -  struct neon_type_el atype;
> +  struct vector_type_el atype;
>    char *str = *ccp;
>    int reg = parse_typed_reg (&str, type, rtype, &atype,
>  			     /*in_reg_list= */ FALSE);
> @@ -974,7 +974,7 @@ aarch64_reg_parse (char **ccp, aarch64_reg_type type,
>  }
>  
>  static inline bfd_boolean
> -eq_neon_type_el (struct neon_type_el e1, struct neon_type_el e2)
> +eq_vector_type_el (struct vector_type_el e1, struct vector_type_el e2)
>  {
>    return
>      e1.type == e2.type
> @@ -1003,11 +1003,11 @@ eq_neon_type_el (struct neon_type_el e1, struct neon_type_el e2)
>     (by reg_list_valid_p).  */
>  
>  static int
> -parse_neon_reg_list (char **ccp, struct neon_type_el *vectype)
> +parse_neon_reg_list (char **ccp, struct vector_type_el *vectype)
>  {
>    char *str = *ccp;
>    int nb_regs;
> -  struct neon_type_el typeinfo, typeinfo_first;
> +  struct vector_type_el typeinfo, typeinfo_first;
>    int val, val_range;
>    int in_range;
>    int ret_val;
> @@ -1072,7 +1072,7 @@ parse_neon_reg_list (char **ccp, struct neon_type_el *vectype)
>  	  val_range = val;
>  	  if (nb_regs == 0)
>  	    typeinfo_first = typeinfo;
> -	  else if (! eq_neon_type_el (typeinfo_first, typeinfo))
> +	  else if (! eq_vector_type_el (typeinfo_first, typeinfo))
>  	    {
>  	      set_first_syntax_error
>  		(_("type mismatch in vector register list"));
> @@ -4631,11 +4631,11 @@ opcode_lookup (char **str)
>    return NULL;
>  }
>  
> -/* Internal helper routine converting a vector neon_type_el structure
> -   *VECTYPE to a corresponding operand qualifier.  */
> +/* Internal helper routine converting a vector_type_el structure *VECTYPE
> +   to a corresponding operand qualifier.  */
>  
>  static inline aarch64_opnd_qualifier_t
> -vectype_to_qualifier (const struct neon_type_el *vectype)
> +vectype_to_qualifier (const struct vector_type_el *vectype)
>  {
>    /* Element size in bytes indexed by vector_el_type.  */
>    const unsigned char ele_size[5]
> @@ -4988,7 +4988,7 @@ parse_operands (char *str, const aarch64_opcode *opcode)
>        int isreg32, isregzero;
>        int comma_skipped_p = 0;
>        aarch64_reg_type rtype;
> -      struct neon_type_el vectype;
> +      struct vector_type_el vectype;
>        aarch64_opnd_info *info = &inst.base.operands[i];
>  
>        DEBUG_TRACE ("parse operand %d", i);
> 

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [AArch64][SVE 06/32] Generalise parse_neon_reg_list
  2016-08-23  9:08 ` [AArch64][SVE 06/32] Generalise parse_neon_reg_list Richard Sandiford
@ 2016-08-23 14:39   ` Richard Earnshaw (lists)
  0 siblings, 0 replies; 76+ messages in thread
From: Richard Earnshaw (lists) @ 2016-08-23 14:39 UTC (permalink / raw)
  To: binutils, richard.sandiford

On 23/08/16 10:08, Richard Sandiford wrote:
> Rename parse_neon_reg_list to parse_vector_reg_list and take
> in the required register type as an argument.  Later patches
> will reuse the function for SVE registers.
> 
> OK to install?
> 

OK.

R.

> Thanks,
> Richard
> 
> 
> gas/
> 	* config/tc-aarch64.c (parse_neon_reg_list): Rename to...
> 	(parse_vector_reg_list): ...this and take a register type
> 	as input.
> 	(parse_operands): Update accordingly.
> 
> diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
> index c425418..e65cc7a 100644
> --- a/gas/config/tc-aarch64.c
> +++ b/gas/config/tc-aarch64.c
> @@ -982,8 +982,9 @@ eq_vector_type_el (struct vector_type_el e1, struct vector_type_el e2)
>      && e1.width == e2.width && e1.index == e2.index;
>  }
>  
> -/* This function parses the NEON register list.  On success, it returns
> -   the parsed register list information in the following encoded format:
> +/* This function parses a list of vector registers of type TYPE.
> +   On success, it returns the parsed register list information in the
> +   following encoded format:
>  
>     bit   18-22   |   13-17   |   7-11    |    2-6    |   0-1
>         4th regno | 3rd regno | 2nd regno | 1st regno | num_of_reg
> @@ -1003,7 +1004,8 @@ eq_vector_type_el (struct vector_type_el e1, struct vector_type_el e2)
>     (by reg_list_valid_p).  */
>  
>  static int
> -parse_neon_reg_list (char **ccp, struct vector_type_el *vectype)
> +parse_vector_reg_list (char **ccp, aarch64_reg_type type,
> +		       struct vector_type_el *vectype)
>  {
>    char *str = *ccp;
>    int nb_regs;
> @@ -1038,7 +1040,7 @@ parse_neon_reg_list (char **ccp, struct vector_type_el *vectype)
>  	  str++;		/* skip over '-' */
>  	  val_range = val;
>  	}
> -      val = parse_typed_reg (&str, REG_TYPE_VN, NULL, &typeinfo,
> +      val = parse_typed_reg (&str, type, NULL, &typeinfo,
>  			     /*in_reg_list= */ TRUE);
>        if (val == PARSE_FAIL)
>  	{
> @@ -5135,7 +5137,8 @@ parse_operands (char *str, const aarch64_opcode *opcode)
>  	case AARCH64_OPND_LVt:
>  	case AARCH64_OPND_LVt_AL:
>  	case AARCH64_OPND_LEt:
> -	  if ((val = parse_neon_reg_list (&str, &vectype)) == PARSE_FAIL)
> +	  if ((val = parse_vector_reg_list (&str, REG_TYPE_VN,
> +					    &vectype)) == PARSE_FAIL)
>  	    goto failure;
>  	  if (! reg_list_valid_p (val, /* accept_alternate */ 0))
>  	    {
> 

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [AArch64][SVE 07/32] Replace hard-coded uses of REG_TYPE_R_Z_BHSDQ_V
  2016-08-23  9:09 ` [AArch64][SVE 07/32] Replace hard-coded uses of REG_TYPE_R_Z_BHSDQ_V Richard Sandiford
@ 2016-08-25 10:36   ` Richard Earnshaw (lists)
  0 siblings, 0 replies; 76+ messages in thread
From: Richard Earnshaw (lists) @ 2016-08-25 10:36 UTC (permalink / raw)
  To: binutils, richard.sandiford

On 23/08/16 10:09, Richard Sandiford wrote:
> To remove parsing ambiguities and to avoid register names being
> accidentally added to the symbol table, the immediate parsing
> routines reject things like:
> 
> 	.equ	x0, 0
> 	add	v0.4s, v0.4s, x0
> 
> An explicit '#' must be used instead:
> 
> 	.equ	x0, 0
> 	add	v0.4s, v0.4s, #x0
> 
> Of course, it wasn't possible to predict what other register
> names might be added in future, so this behaviour was restricted
> to the register names that were defined at the time.  For backwards
> compatibility, we should continue to allow things like:
> 
> 	.equ	p0, 0
> 	add	v0.4s, v0.4s, p0
> 
> even though p0 is now an SVE register.
> 
> However, it seems reasonable to extend the x0 behaviour above to
> SVE registers when parsing SVE instructions, especially since none
> of the SVE immediate formats are relocatable.  Doing so removes the
> same parsing ambiguity for SVE instructions as the x0 behaviour removes
> for base AArch64 instructions.
> 
> As a prerequisite, we then need to be able to tell the parsing routines
> which registers to reject.  This patch changes the interface to make
> that possible, although the set of rejected registers doesn't change
> at this stage.
> 
> OK to install?

With the exception of places in the syntax that expect a label (such as
branch instructions) I think it would probably be reasonable to require
a leading '#' in front of any other symbolic constant or such
substitution (we could also accept an expression in parentheses).  But
that's possibly a change we shouldn't make without some further
consideration.

In the mean time, this patch is OK.

R.

> 
> Thanks,
> Richard
> 
> 
> gas/
> 	* config/tc-aarch64.c (parse_immediate_expression): Add a
> 	reg_type parameter.
> 	(parse_constant_immediate): Likewise, and update calls.
> 	(parse_aarch64_imm_float): Likewise.
> 	(parse_big_immediate): Likewise.
> 	(po_imm_nc_or_fail): Update accordingly, passing down a new
> 	imm_reg_type variable.
> 	(po_imm_of_fail): Likewise.
> 	(parse_operands): Likewise.
> 
> diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
> index e65cc7a..eec08c7 100644
> --- a/gas/config/tc-aarch64.c
> +++ b/gas/config/tc-aarch64.c
> @@ -2004,14 +2004,14 @@ reg_name_p (char *str, aarch64_reg_type reg_type)
>  
>     To prevent the expression parser from pushing a register name
>     into the symbol table as an undefined symbol, firstly a check is
> -   done to find out whether STR is a valid register name followed
> -   by a comma or the end of line.  Return FALSE if STR is such a
> -   string.  */
> +   done to find out whether STR is a register of type REG_TYPE followed
> +   by a comma or the end of line.  Return FALSE if STR is such a string.  */
>  
>  static bfd_boolean
> -parse_immediate_expression (char **str, expressionS *exp)
> +parse_immediate_expression (char **str, expressionS *exp,
> +			    aarch64_reg_type reg_type)
>  {
> -  if (reg_name_p (*str, REG_TYPE_R_Z_BHSDQ_V))
> +  if (reg_name_p (*str, reg_type))
>      {
>        set_recoverable_error (_("immediate operand required"));
>        return FALSE;
> @@ -2030,16 +2030,17 @@ parse_immediate_expression (char **str, expressionS *exp)
>  
>  /* Constant immediate-value read function for use in insn parsing.
>     STR points to the beginning of the immediate (with the optional
> -   leading #); *VAL receives the value.
> +   leading #); *VAL receives the value.  REG_TYPE says which register
> +   names should be treated as registers rather than as symbolic immediates.
>  
>     Return TRUE on success; otherwise return FALSE.  */
>  
>  static bfd_boolean
> -parse_constant_immediate (char **str, int64_t * val)
> +parse_constant_immediate (char **str, int64_t *val, aarch64_reg_type reg_type)
>  {
>    expressionS exp;
>  
> -  if (! parse_immediate_expression (str, &exp))
> +  if (! parse_immediate_expression (str, &exp, reg_type))
>      return FALSE;
>  
>    if (exp.X_op != O_constant)
> @@ -2148,12 +2149,14 @@ aarch64_double_precision_fmovable (uint64_t imm, uint32_t *fpword)
>     value in *IMMED in the format of IEEE754 single-precision encoding.
>     *CCP points to the start of the string; DP_P is TRUE when the immediate
>     is expected to be in double-precision (N.B. this only matters when
> -   hexadecimal representation is involved).
> +   hexadecimal representation is involved).  REG_TYPE says which register
> +   names should be treated as registers rather than as symbolic immediates.
>  
>     N.B. 0.0 is accepted by this function.  */
>  
>  static bfd_boolean
> -parse_aarch64_imm_float (char **ccp, int *immed, bfd_boolean dp_p)
> +parse_aarch64_imm_float (char **ccp, int *immed, bfd_boolean dp_p,
> +			 aarch64_reg_type reg_type)
>  {
>    char *str = *ccp;
>    char *fpnum;
> @@ -2173,7 +2176,7 @@ parse_aarch64_imm_float (char **ccp, int *immed, bfd_boolean dp_p)
>        /* Support the hexadecimal representation of the IEEE754 encoding.
>  	 Double-precision is expected when DP_P is TRUE, otherwise the
>  	 representation should be in single-precision.  */
> -      if (! parse_constant_immediate (&str, &val))
> +      if (! parse_constant_immediate (&str, &val, reg_type))
>  	goto invalid_fp;
>  
>        if (dp_p)
> @@ -2237,15 +2240,15 @@ invalid_fp:
>  
>     To prevent the expression parser from pushing a register name into the
>     symbol table as an undefined symbol, a check is firstly done to find
> -   out whether STR is a valid register name followed by a comma or the end
> -   of line.  Return FALSE if STR is such a register.  */
> +   out whether STR is a register of type REG_TYPE followed by a comma or
> +   the end of line.  Return FALSE if STR is such a register.  */
>  
>  static bfd_boolean
> -parse_big_immediate (char **str, int64_t *imm)
> +parse_big_immediate (char **str, int64_t *imm, aarch64_reg_type reg_type)
>  {
>    char *ptr = *str;
>  
> -  if (reg_name_p (ptr, REG_TYPE_R_Z_BHSDQ_V))
> +  if (reg_name_p (ptr, reg_type))
>      {
>        set_syntax_error (_("immediate operand required"));
>        return FALSE;
> @@ -3736,12 +3739,12 @@ parse_sys_ins_reg (char **str, struct hash_control *sys_ins_regs)
>    } while (0)
>  
>  #define po_imm_nc_or_fail() do {				\
> -    if (! parse_constant_immediate (&str, &val))		\
> +    if (! parse_constant_immediate (&str, &val, imm_reg_type))	\
>        goto failure;						\
>    } while (0)
>  
>  #define po_imm_or_fail(min, max) do {				\
> -    if (! parse_constant_immediate (&str, &val))		\
> +    if (! parse_constant_immediate (&str, &val, imm_reg_type))	\
>        goto failure;						\
>      if (val < min || val > max)					\
>        {								\
> @@ -4980,10 +4983,13 @@ parse_operands (char *str, const aarch64_opcode *opcode)
>    int i;
>    char *backtrack_pos = 0;
>    const enum aarch64_opnd *operands = opcode->operands;
> +  aarch64_reg_type imm_reg_type;
>  
>    clear_error ();
>    skip_whitespace (str);
>  
> +  imm_reg_type = REG_TYPE_R_Z_BHSDQ_V;
> +
>    for (i = 0; operands[i] != AARCH64_OPND_NIL; i++)
>      {
>        int64_t val;
> @@ -5219,8 +5225,10 @@ parse_operands (char *str, const aarch64_opcode *opcode)
>  	    bfd_boolean res1 = FALSE, res2 = FALSE;
>  	    /* N.B. -0.0 will be rejected; although -0.0 shouldn't be rejected,
>  	       it is probably not worth the effort to support it.  */
> -	    if (!(res1 = parse_aarch64_imm_float (&str, &qfloat, FALSE))
> -		&& !(res2 = parse_constant_immediate (&str, &val)))
> +	    if (!(res1 = parse_aarch64_imm_float (&str, &qfloat, FALSE,
> +						  imm_reg_type))
> +		&& !(res2 = parse_constant_immediate (&str, &val,
> +						      imm_reg_type)))
>  	      goto failure;
>  	    if ((res1 && qfloat == 0) || (res2 && val == 0))
>  	      {
> @@ -5253,7 +5261,7 @@ parse_operands (char *str, const aarch64_opcode *opcode)
>  
>  	case AARCH64_OPND_SIMD_IMM:
>  	case AARCH64_OPND_SIMD_IMM_SFT:
> -	  if (! parse_big_immediate (&str, &val))
> +	  if (! parse_big_immediate (&str, &val, imm_reg_type))
>  	    goto failure;
>  	  assign_imm_if_const_or_fixup_later (&inst.reloc, info,
>  					      /* addr_off_p */ 0,
> @@ -5284,7 +5292,7 @@ parse_operands (char *str, const aarch64_opcode *opcode)
>  	    bfd_boolean dp_p
>  	      = (aarch64_get_qualifier_esize (inst.base.operands[0].qualifier)
>  		 == 8);
> -	    if (! parse_aarch64_imm_float (&str, &qfloat, dp_p))
> +	    if (! parse_aarch64_imm_float (&str, &qfloat, dp_p, imm_reg_type))
>  	      goto failure;
>  	    if (qfloat == 0)
>  	      {
> @@ -5372,7 +5380,8 @@ parse_operands (char *str, const aarch64_opcode *opcode)
>  	  break;
>  
>  	case AARCH64_OPND_EXCEPTION:
> -	  po_misc_or_fail (parse_immediate_expression (&str, &inst.reloc.exp));
> +	  po_misc_or_fail (parse_immediate_expression (&str, &inst.reloc.exp,
> +						       imm_reg_type));
>  	  assign_imm_if_const_or_fixup_later (&inst.reloc, info,
>  					      /* addr_off_p */ 0,
>  					      /* need_libopcodes_p */ 0,
> 

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [AArch64][SVE 08/32] Generalise aarch64_double_precision_fmovable
  2016-08-23  9:10 ` [AArch64][SVE 08/32] Generalise aarch64_double_precision_fmovable Richard Sandiford
@ 2016-08-25 13:17   ` Richard Earnshaw (lists)
  0 siblings, 0 replies; 76+ messages in thread
From: Richard Earnshaw (lists) @ 2016-08-25 13:17 UTC (permalink / raw)
  To: binutils, richard.sandiford

On 23/08/16 10:10, Richard Sandiford wrote:
> SVE has single-bit floating-point constants that don't really
> have any relation to the AArch64 8-bit floating-point encoding.
> (E.g. one of the constants selects between 0 and 1.)  The easiest
> way of representing them in the aarch64_opnd_info seemed to be
> to use the IEEE float representation directly, rather than invent
> some new scheme.
> 
> This patch paves the way for that by making the code that converts IEEE
> doubles to IEEE floats accept any value in the range of an IEEE float,
> not just zero and 8-bit floats.  It leaves the range checking to the
> caller (which already handles it).
> 
> OK to install?
> 
> Thanks,
> Richard
> 
> 
> gas/
> 	* config/tc-aarch64.c (aarch64_double_precision_fmovable): Rename
> 	to...
> 	(can_convert_double_to_float): ...this.  Accept any double-precision
> 	value that converts to single precision without loss of precision.
> 	(parse_aarch64_imm_float): Update accordingly.

OK.
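
As a sanity check on the new bit manipulation, here is a minimal
standalone sketch (not part of the patch, with the value 2.5 chosen
purely for illustration) that repacks a double's bit pattern the same
way and compares the result with the host compiler's own
double-to-float conversion:

  #include <assert.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>

  int
  main (void)
  {
    double d = 2.5;
    float f = (float) d;
    uint64_t imm;
    uint32_t high32, low32, fpword, expect;

    memcpy (&imm, &d, sizeof imm);	    /* 0x4004000000000000 on IEEE hosts.  */
    memcpy (&expect, &f, sizeof expect);    /* 0x40200000.  */

    high32 = imm >> 32;
    low32 = imm;
    /* Same repacking as can_convert_double_to_float: keep the sign bit
       and top exponent bit, move the 7 low exponent bits and 20 high
       fraction bits up by 3, and take the last 3 fraction bits from the
       top of LOW32.  */
    fpword = ((high32 & 0xc0000000)
	      | ((high32 << 3) & 0x3ffffff8)
	      | (low32 >> 29));
    assert (fpword == expect);
    printf ("0x%08x\n", (unsigned int) fpword);
    return 0;
  }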

R.

> 
> diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
> index eec08c7..40f6253 100644
> --- a/gas/config/tc-aarch64.c
> +++ b/gas/config/tc-aarch64.c
> @@ -2093,56 +2093,52 @@ aarch64_imm_float_p (uint32_t imm)
>      && ((imm & 0x7e000000) == pattern);	/* bits 25 - 29 == ~ bit 30.  */
>  }
>  
> -/* Like aarch64_imm_float_p but for a double-precision floating-point value.
> -
> -   Return TRUE if the value encoded in IMM can be expressed in the AArch64
> -   8-bit signed floating-point format with 3-bit exponent and normalized 4
> -   bits of precision (i.e. can be used in an FMOV instruction); return the
> -   equivalent single-precision encoding in *FPWORD.
> -
> -   Otherwise return FALSE.  */
> +/* Return TRUE if the IEEE double value encoded in IMM can be expressed
> +   as an IEEE float without any loss of precision.  Store the value in
> +   *FPWORD if so.  */
>  
>  static bfd_boolean
> -aarch64_double_precision_fmovable (uint64_t imm, uint32_t *fpword)
> +can_convert_double_to_float (uint64_t imm, uint32_t *fpword)
>  {
>    /* If a double-precision floating-point value has the following bit
> -     pattern, it can be expressed in the AArch64 8-bit floating-point
> -     format:
> +     pattern, it can be expressed in a float:
>  
> -     6 66655555555 554444444...21111111111
> -     3 21098765432 109876543...098765432109876543210
> -     n Eeeeeeeeexx xxxx00000...000000000000000000000
> +     6 66655555555 5544 44444444 33333333 33222222 22221111 111111
> +     3 21098765432 1098 76543210 98765432 10987654 32109876 54321098 76543210
> +     n E~~~eeeeeee ssss ssssssss ssssssss SSS00000 00000000 00000000 00000000
>  
> -     where n, e and each x are either 0 or 1 independently, with
> -     E == ~ e.  */
> +       ----------------------------->     nEeeeeee esssssss ssssssss sssssSSS
> +	 if Eeee_eeee != 1111_1111
> +
> +     where n, e, s and S are either 0 or 1 independently and where ~ is the
> +     inverse of E.  */
>  
>    uint32_t pattern;
>    uint32_t high32 = imm >> 32;
> +  uint32_t low32 = imm;
>  
> -  /* Lower 32 bits need to be 0s.  */
> -  if ((imm & 0xffffffff) != 0)
> +  /* Lower 29 bits need to be 0s.  */
> +  if ((imm & 0x1fffffff) != 0)
>      return FALSE;
>  
>    /* Prepare the pattern for 'Eeeeeeeee'.  */
>    if (((high32 >> 30) & 0x1) == 0)
> -    pattern = 0x3fc00000;
> +    pattern = 0x38000000;
>    else
>      pattern = 0x40000000;
>  
> -  if ((high32 & 0xffff) == 0			/* bits 32 - 47 are 0.  */
> -      && (high32 & 0x7fc00000) == pattern)	/* bits 54 - 61 == ~ bit 62.  */
> -    {
> -      /* Convert to the single-precision encoding.
> -         i.e. convert
> -	   n Eeeeeeeeexx xxxx00000...000000000000000000000
> -	 to
> -	   n Eeeeeexx xxxx0000000000000000000.  */
> -      *fpword = ((high32 & 0xfe000000)			/* nEeeeee.  */
> -		 | (((high32 >> 16) & 0x3f) << 19));	/* xxxxxx.  */
> -      return TRUE;
> -    }
> -  else
> +  /* Check E~~~.  */
> +  if ((high32 & 0x78000000) != pattern)
>      return FALSE;
> +
> +  /* Check Eeee_eeee != 1111_1111.  */
> +  if ((high32 & 0x7ff00000) == 0x47f00000)
> +    return FALSE;
> +
> +  *fpword = ((high32 & 0xc0000000)		/* 1 n bit and 1 E bit.  */
> +	     | ((high32 << 3) & 0x3ffffff8)	/* 7 e and 20 s bits.  */
> +	     | (low32 >> 29));			/* 3 S bits.  */
> +  return TRUE;
>  }
>  
>  /* Parse a floating-point immediate.  Return TRUE on success and return the
> @@ -2181,7 +2177,7 @@ parse_aarch64_imm_float (char **ccp, int *immed, bfd_boolean dp_p,
>  
>        if (dp_p)
>  	{
> -	  if (! aarch64_double_precision_fmovable (val, &fpword))
> +	  if (!can_convert_double_to_float (val, &fpword))
>  	    goto invalid_fp;
>  	}
>        else if ((uint64_t) val > 0xffffffff)
> 

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [AArch64][SVE 09/32] Improve error messages for invalid floats
  2016-08-23  9:11 ` [AArch64][SVE 09/32] Improve error messages for invalid floats Richard Sandiford
@ 2016-08-25 13:19   ` Richard Earnshaw (lists)
  0 siblings, 0 replies; 76+ messages in thread
From: Richard Earnshaw (lists) @ 2016-08-25 13:19 UTC (permalink / raw)
  To: binutils, richard.sandiford

On 23/08/16 10:10, Richard Sandiford wrote:
> Previously:
> 
>     fmov d0, #2
> 
> would give an error:
> 
>     Operand 2 should be an integer register
> 
> whereas the user probably just forgot to add the ".0" to make:
> 
>     fmov d0, #2.0
> 
> This patch reports an invalid floating point constant unless the
> operand is obviously a register.
> 
> The FPIMM8 handling is only relevant for SVE.  Without it:
> 
>     fmov z0, z1
> 
> would try to parse z1 as an integer immediate zero (the res2 path),
> whereas it's more likely that the user forgot the predicate.  This is
> tested by the final patch.
> 
> OK to install?
> 
> Thanks,
> Richard
> 
> 
> gas/
> 	* config/tc-aarch64.c (parse_aarch64_imm_float): Report a specific
> 	low-severity error for registers.
> 	(parse_operands): Report an invalid floating point constant for
> 	if parsing an FPIMM8 fails, and if no better error has been
> 	recorded.
> 	* testsuite/gas/aarch64/diagnostic.s,
> 	testsuite/gas/aarch64/diagnostic.l: Add tests for integer operands
> 	to FMOV.
> 

OK.

R.

> diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
> index 40f6253..388c4bf 100644
> --- a/gas/config/tc-aarch64.c
> +++ b/gas/config/tc-aarch64.c
> @@ -2189,6 +2189,12 @@ parse_aarch64_imm_float (char **ccp, int *immed, bfd_boolean dp_p,
>      }
>    else
>      {
> +      if (reg_name_p (str, reg_type))
> +	{
> +	  set_recoverable_error (_("immediate operand required"));
> +	  return FALSE;
> +	}
> +
>        /* We must not accidentally parse an integer as a floating-point number.
>  	 Make sure that the value we parse is not an integer by checking for
>  	 special characters '.' or 'e'.  */
> @@ -5223,8 +5229,9 @@ parse_operands (char *str, const aarch64_opcode *opcode)
>  	       it is probably not worth the effort to support it.  */
>  	    if (!(res1 = parse_aarch64_imm_float (&str, &qfloat, FALSE,
>  						  imm_reg_type))
> -		&& !(res2 = parse_constant_immediate (&str, &val,
> -						      imm_reg_type)))
> +		&& (error_p ()
> +		    || !(res2 = parse_constant_immediate (&str, &val,
> +							  imm_reg_type))))
>  	      goto failure;
>  	    if ((res1 && qfloat == 0) || (res2 && val == 0))
>  	      {
> @@ -5288,11 +5295,12 @@ parse_operands (char *str, const aarch64_opcode *opcode)
>  	    bfd_boolean dp_p
>  	      = (aarch64_get_qualifier_esize (inst.base.operands[0].qualifier)
>  		 == 8);
> -	    if (! parse_aarch64_imm_float (&str, &qfloat, dp_p, imm_reg_type))
> -	      goto failure;
> -	    if (qfloat == 0)
> +	    if (!parse_aarch64_imm_float (&str, &qfloat, dp_p, imm_reg_type)
> +		|| qfloat == 0)
>  	      {
> -		set_fatal_syntax_error (_("invalid floating-point constant"));
> +		if (!error_p ())
> +		  set_fatal_syntax_error (_("invalid floating-point"
> +					    " constant"));
>  		goto failure;
>  	      }
>  	    inst.base.operands[i].imm.value = encode_imm_float_bits (qfloat);
> diff --git a/gas/testsuite/gas/aarch64/diagnostic.l b/gas/testsuite/gas/aarch64/diagnostic.l
> index c278887..67ef484 100644
> --- a/gas/testsuite/gas/aarch64/diagnostic.l
> +++ b/gas/testsuite/gas/aarch64/diagnostic.l
> @@ -144,3 +144,7 @@
>  [^:]*:255: Error: register element index out of range 0 to 15 at operand 1 -- `ld2 {v0\.b,v1\.b}\[-1\],\[x0\]'
>  [^:]*:258: Error: register element index out of range 0 to 15 at operand 1 -- `ld2 {v0\.b,v1\.b}\[16\],\[x0\]'
>  [^:]*:259: Error: register element index out of range 0 to 15 at operand 1 -- `ld2 {v0\.b,v1\.b}\[67\],\[x0\]'
> +[^:]*:261: Error: invalid floating-point constant at operand 2 -- `fmov d0,#2'
> +[^:]*:262: Error: invalid floating-point constant at operand 2 -- `fmov d0,#-2'
> +[^:]*:263: Error: invalid floating-point constant at operand 2 -- `fmov s0,2'
> +[^:]*:264: Error: invalid floating-point constant at operand 2 -- `fmov s0,-2'
> diff --git a/gas/testsuite/gas/aarch64/diagnostic.s b/gas/testsuite/gas/aarch64/diagnostic.s
> index ac2eb5c..3092b9b 100644
> --- a/gas/testsuite/gas/aarch64/diagnostic.s
> +++ b/gas/testsuite/gas/aarch64/diagnostic.s
> @@ -257,3 +257,8 @@
>  	ld2	{v0.b, v1.b}[15], [x0]
>  	ld2	{v0.b, v1.b}[16], [x0]
>  	ld2	{v0.b, v1.b}[67], [x0]
> +
> +	fmov	d0, #2
> +	fmov	d0, #-2
> +	fmov	s0, 2
> +	fmov	s0, -2
> 

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [AArch64][SVE 10/32] Move range check out of parse_aarch64_imm_float
  2016-08-23  9:11 ` [AArch64][SVE 10/32] Move range check out of parse_aarch64_imm_float Richard Sandiford
@ 2016-08-25 13:20   ` Richard Earnshaw (lists)
  0 siblings, 0 replies; 76+ messages in thread
From: Richard Earnshaw (lists) @ 2016-08-25 13:20 UTC (permalink / raw)
  To: binutils, richard.sandiford

On 23/08/16 10:11, Richard Sandiford wrote:
> Since some SVE constants are no longer explicitly tied to the 8-bit
> FP immediate format, it seems better to move the range checks out of
> parse_aarch64_imm_float and into the callers.
> 
> OK to install?
> 
> Thanks,
> Richard
> 
> 
> gas/
> 	* config/tc-aarch64.c (parse_aarch64_imm_float): Remove range check.
> 	(parse_operands): Check the range of 8-bit FP immediates here instead.

OK.

R.


> 
> diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
> index 388c4bf..2489d5b 100644
> --- a/gas/config/tc-aarch64.c
> +++ b/gas/config/tc-aarch64.c
> @@ -2148,7 +2148,8 @@ can_convert_double_to_float (uint64_t imm, uint32_t *fpword)
>     hexadecimal representation is involved).  REG_TYPE says which register
>     names should be treated as registers rather than as symbolic immediates.
>  
> -   N.B. 0.0 is accepted by this function.  */
> +   This routine accepts any IEEE float; it is up to the callers to reject
> +   invalid ones.  */
>  
>  static bfd_boolean
>  parse_aarch64_imm_float (char **ccp, int *immed, bfd_boolean dp_p,
> @@ -2224,12 +2225,9 @@ parse_aarch64_imm_float (char **ccp, int *immed, bfd_boolean dp_p,
>  	}
>      }
>  
> -  if (aarch64_imm_float_p (fpword) || fpword == 0)
> -    {
> -      *immed = fpword;
> -      *ccp = str;
> -      return TRUE;
> -    }
> +  *immed = fpword;
> +  *ccp = str;
> +  return TRUE;
>  
>  invalid_fp:
>    set_fatal_syntax_error (_("invalid floating-point constant"));
> @@ -5296,7 +5294,7 @@ parse_operands (char *str, const aarch64_opcode *opcode)
>  	      = (aarch64_get_qualifier_esize (inst.base.operands[0].qualifier)
>  		 == 8);
>  	    if (!parse_aarch64_imm_float (&str, &qfloat, dp_p, imm_reg_type)
> -		|| qfloat == 0)
> +		|| !aarch64_imm_float_p (qfloat))
>  	      {
>  		if (!error_p ())
>  		  set_fatal_syntax_error (_("invalid floating-point"
> 

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [AArch64][SVE 11/32] Tweak aarch64_reg_parse_32_64 interface
  2016-08-23  9:12 ` [AArch64][SVE 11/32] Tweak aarch64_reg_parse_32_64 interface Richard Sandiford
@ 2016-08-25 13:27   ` Richard Earnshaw (lists)
  2016-09-16 11:51     ` Richard Sandiford
  0 siblings, 1 reply; 76+ messages in thread
From: Richard Earnshaw (lists) @ 2016-08-25 13:27 UTC (permalink / raw)
  To: binutils, richard.sandiford

On 23/08/16 10:12, Richard Sandiford wrote:
> aarch64_reg_parse_32_64 is currently used to parse address registers,
> among other things.  It returns two bits of information about the
> register: whether it's W rather than X, and whether it's a zero register.
> 
> SVE adds addressing modes in which the base or offset can be a vector
> register instead of a scalar, so a choice between W and X is no longer
> enough.  It's more convenient to pass the type of register around as
> a qualifier instead.
> 
> As it happens, two callers of aarch64_reg_parse_32_64 already wanted
> the information in the form of a qualifier, so the change feels pretty
> natural even without SVE.
> 
> Also, the function took two parameters to control whether {W}SP
> and (W|X)ZR should be accepted.  These parameters were negative
> "reject" parameters, but the closely-related parse_address_main
> had a positive "accept" parameter (for post-indexed addressing).
> One of the SVE patches adds a parameter to parse_address_main
> that needs to be passed down alongside the aarch64_reg_parse_32_64
> parameters, which as things stood led to an awkward mix of positive
> and negative bools.  The patch therefore changes the
> aarch64_reg_parse_32_64 parameters to "accept_sp" and "accept_rz"
> instead.
> 
> Finally, the two input parameters and isregzero return value were
> all ints but logically bools.  The patch changes the types to
> bfd_boolean.
> 
> OK to install?
> 
> Thanks,
> Richard
> 
> 
> gas/
> 	* config/tc-aarch64.c (aarch64_reg_parse_32_64): Return the register
> 	type as a qualifier rather than an "isreg32" boolean.  Turn the
> 	SP/ZR control parameters from negative "reject" to positive
> 	"accept".  Make them and *ISREGZERO bfd_booleans rather than ints.
> 	(parse_shifter_operand): Update accordingly.
> 	(parse_address_main): Likewise.
> 	(po_int_reg_or_fail): Likewise.  Make the same reject->accept
> 	change to the macro parameters.
> 	(parse_operands): Update after the above changes, replacing
> 	the "isreg32" local variable with one called "qualifier".

I'm not a big fan of parameters that simply take 'true' or 'false',
especially when there is more than one such parameter: it's too easy to
get the order mixed up.

Furthermore, I'm not sure these two parameters are really independent.
Are there any cases where both can be true?

Given the above concerns I wonder whether a single enum with the
permitted states might be better.  It certainly makes the code clearer
at the caller as to which register types are acceptable.
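
Something along these lines, say (names purely illustrative, not taken
from the patch):

  /* Which spellings of register 31 a caller will accept.  */
  enum reg31_kind
  {
    REG31_NEITHER,	/* reject both {W}SP and {W|X}ZR */
    REG31_SP,		/* accept {W}SP, reject {W|X}ZR */
    REG31_ZR		/* accept {W|X}ZR, reject {W}SP */
  };

(plus a fourth value if some caller really does want both), so that a
call site reads

  reg = aarch64_reg_parse_32_64 (&p, REG31_SP, &base_qualifier,
				 &isregzero);

rather than a pair of bare TRUE/FALSE arguments.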

R.

> 
> diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
> index 2489d5b..2e0e4f8 100644
> --- a/gas/config/tc-aarch64.c
> +++ b/gas/config/tc-aarch64.c
> @@ -690,15 +690,21 @@ aarch64_check_reg_type (const reg_entry *reg, aarch64_reg_type type)
>      }
>  }
>  
> -/* Parse a register and return PARSE_FAIL if the register is not of type R_Z_SP.
> -   Return the register number otherwise.  *ISREG32 is set to one if the
> -   register is 32-bit wide; *ISREGZERO is set to one if the register is
> -   of type Z_32 or Z_64.
> +/* Try to parse a base or offset register.  ACCEPT_SP says whether {W}SP
> +   should be considered valid and ACCEPT_RZ says whether zero registers
> +   should be considered valid.
> +
> +   Return the register number on success, setting *QUALIFIER to the
> +   register qualifier and *ISREGZERO to whether the register is a zero
> +   register.  Return PARSE_FAIL otherwise.
> +
>     Note that this function does not issue any diagnostics.  */
>  
>  static int
> -aarch64_reg_parse_32_64 (char **ccp, int reject_sp, int reject_rz,
> -			 int *isreg32, int *isregzero)
> +aarch64_reg_parse_32_64 (char **ccp, bfd_boolean accept_sp,
> +			 bfd_boolean accept_rz,
> +			 aarch64_opnd_qualifier_t *qualifier,
> +			 bfd_boolean *isregzero)
>  {
>    char *str = *ccp;
>    const reg_entry *reg = parse_reg (&str);
> @@ -713,22 +719,28 @@ aarch64_reg_parse_32_64 (char **ccp, int reject_sp, int reject_rz,
>      {
>      case REG_TYPE_SP_32:
>      case REG_TYPE_SP_64:
> -      if (reject_sp)
> +      if (!accept_sp)
>  	return PARSE_FAIL;
> -      *isreg32 = reg->type == REG_TYPE_SP_32;
> -      *isregzero = 0;
> +      *qualifier = (reg->type == REG_TYPE_SP_32
> +		    ? AARCH64_OPND_QLF_W
> +		    : AARCH64_OPND_QLF_X);
> +      *isregzero = FALSE;
>        break;
>      case REG_TYPE_R_32:
>      case REG_TYPE_R_64:
> -      *isreg32 = reg->type == REG_TYPE_R_32;
> -      *isregzero = 0;
> +      *qualifier = (reg->type == REG_TYPE_R_32
> +		    ? AARCH64_OPND_QLF_W
> +		    : AARCH64_OPND_QLF_X);
> +      *isregzero = FALSE;
>        break;
>      case REG_TYPE_Z_32:
>      case REG_TYPE_Z_64:
> -      if (reject_rz)
> +      if (!accept_rz)
>  	return PARSE_FAIL;
> -      *isreg32 = reg->type == REG_TYPE_Z_32;
> -      *isregzero = 1;
> +      *qualifier = (reg->type == REG_TYPE_Z_32
> +		    ? AARCH64_OPND_QLF_W
> +		    : AARCH64_OPND_QLF_X);
> +      *isregzero = TRUE;
>        break;
>      default:
>        return PARSE_FAIL;
> @@ -3033,12 +3045,13 @@ parse_shifter_operand (char **str, aarch64_opnd_info *operand,
>  		       enum parse_shift_mode mode)
>  {
>    int reg;
> -  int isreg32, isregzero;
> +  aarch64_opnd_qualifier_t qualifier;
> +  bfd_boolean isregzero;
>    enum aarch64_operand_class opd_class
>      = aarch64_get_operand_class (operand->type);
>  
> -  if ((reg =
> -       aarch64_reg_parse_32_64 (str, 0, 0, &isreg32, &isregzero)) != PARSE_FAIL)
> +  if ((reg = aarch64_reg_parse_32_64 (str, TRUE, TRUE, &qualifier,
> +				      &isregzero)) != PARSE_FAIL)
>      {
>        if (opd_class == AARCH64_OPND_CLASS_IMMEDIATE)
>  	{
> @@ -3053,7 +3066,7 @@ parse_shifter_operand (char **str, aarch64_opnd_info *operand,
>  	}
>  
>        operand->reg.regno = reg;
> -      operand->qualifier = isreg32 ? AARCH64_OPND_QLF_W : AARCH64_OPND_QLF_X;
> +      operand->qualifier = qualifier;
>  
>        /* Accept optional shift operation on register.  */
>        if (! skip_past_comma (str))
> @@ -3193,7 +3206,9 @@ parse_address_main (char **str, aarch64_opnd_info *operand, int reloc,
>  {
>    char *p = *str;
>    int reg;
> -  int isreg32, isregzero;
> +  aarch64_opnd_qualifier_t base_qualifier;
> +  aarch64_opnd_qualifier_t offset_qualifier;
> +  bfd_boolean isregzero;
>    expressionS *exp = &inst.reloc.exp;
>  
>    if (! skip_past_char (&p, '['))
> @@ -3271,8 +3286,8 @@ parse_address_main (char **str, aarch64_opnd_info *operand, int reloc,
>    /* [ */
>  
>    /* Accept SP and reject ZR */
> -  reg = aarch64_reg_parse_32_64 (&p, 0, 1, &isreg32, &isregzero);
> -  if (reg == PARSE_FAIL || isreg32)
> +  reg = aarch64_reg_parse_32_64 (&p, TRUE, FALSE, &base_qualifier, &isregzero);
> +  if (reg == PARSE_FAIL || base_qualifier == AARCH64_OPND_QLF_W)
>      {
>        set_syntax_error (_(get_reg_expected_msg (REG_TYPE_R_64)));
>        return FALSE;
> @@ -3286,7 +3301,8 @@ parse_address_main (char **str, aarch64_opnd_info *operand, int reloc,
>        operand->addr.preind = 1;
>  
>        /* Reject SP and accept ZR */
> -      reg = aarch64_reg_parse_32_64 (&p, 1, 0, &isreg32, &isregzero);
> +      reg = aarch64_reg_parse_32_64 (&p, FALSE, TRUE, &offset_qualifier,
> +				     &isregzero);
>        if (reg != PARSE_FAIL)
>  	{
>  	  /* [Xn,Rm  */
> @@ -3309,13 +3325,13 @@ parse_address_main (char **str, aarch64_opnd_info *operand, int reloc,
>  	      || operand->shifter.kind == AARCH64_MOD_LSL
>  	      || operand->shifter.kind == AARCH64_MOD_SXTX)
>  	    {
> -	      if (isreg32)
> +	      if (offset_qualifier == AARCH64_OPND_QLF_W)
>  		{
>  		  set_syntax_error (_("invalid use of 32-bit register offset"));
>  		  return FALSE;
>  		}
>  	    }
> -	  else if (!isreg32)
> +	  else if (offset_qualifier == AARCH64_OPND_QLF_X)
>  	    {
>  	      set_syntax_error (_("invalid use of 64-bit register offset"));
>  	      return FALSE;
> @@ -3399,11 +3415,12 @@ parse_address_main (char **str, aarch64_opnd_info *operand, int reloc,
>  	}
>  
>        if (accept_reg_post_index
> -	  && (reg = aarch64_reg_parse_32_64 (&p, 1, 1, &isreg32,
> +	  && (reg = aarch64_reg_parse_32_64 (&p, FALSE, FALSE,
> +					     &offset_qualifier,
>  					     &isregzero)) != PARSE_FAIL)
>  	{
>  	  /* [Xn],Xm */
> -	  if (isreg32)
> +	  if (offset_qualifier == AARCH64_OPND_QLF_W)
>  	    {
>  	      set_syntax_error (_("invalid 32-bit register offset"));
>  	      return FALSE;
> @@ -3723,19 +3740,16 @@ parse_sys_ins_reg (char **str, struct hash_control *sys_ins_regs)
>        }								\
>    } while (0)
>  
> -#define po_int_reg_or_fail(reject_sp, reject_rz) do {		\
> -    val = aarch64_reg_parse_32_64 (&str, reject_sp, reject_rz,	\
> -                                   &isreg32, &isregzero);	\
> +#define po_int_reg_or_fail(accept_sp, accept_rz) do {		\
> +    val = aarch64_reg_parse_32_64 (&str, accept_sp, accept_rz,	\
> +                                   &qualifier, &isregzero);	\
>      if (val == PARSE_FAIL)					\
>        {								\
>  	set_default_error ();					\
>  	goto failure;						\
>        }								\
>      info->reg.regno = val;					\
> -    if (isreg32)						\
> -      info->qualifier = AARCH64_OPND_QLF_W;			\
> -    else							\
> -      info->qualifier = AARCH64_OPND_QLF_X;			\
> +    info->qualifier = qualifier;				\
>    } while (0)
>  
>  #define po_imm_nc_or_fail() do {				\
> @@ -4993,10 +5007,11 @@ parse_operands (char *str, const aarch64_opcode *opcode)
>    for (i = 0; operands[i] != AARCH64_OPND_NIL; i++)
>      {
>        int64_t val;
> -      int isreg32, isregzero;
> +      bfd_boolean isregzero;
>        int comma_skipped_p = 0;
>        aarch64_reg_type rtype;
>        struct vector_type_el vectype;
> +      aarch64_opnd_qualifier_t qualifier;
>        aarch64_opnd_info *info = &inst.base.operands[i];
>  
>        DEBUG_TRACE ("parse operand %d", i);
> @@ -5032,12 +5047,12 @@ parse_operands (char *str, const aarch64_opcode *opcode)
>  	case AARCH64_OPND_Ra:
>  	case AARCH64_OPND_Rt_SYS:
>  	case AARCH64_OPND_PAIRREG:
> -	  po_int_reg_or_fail (1, 0);
> +	  po_int_reg_or_fail (FALSE, TRUE);
>  	  break;
>  
>  	case AARCH64_OPND_Rd_SP:
>  	case AARCH64_OPND_Rn_SP:
> -	  po_int_reg_or_fail (0, 1);
> +	  po_int_reg_or_fail (TRUE, FALSE);
>  	  break;
>  
>  	case AARCH64_OPND_Rm_EXT:
> 

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [AArch64][SVE 12/32] Make more use of bfd_boolean
  2016-08-23  9:13 ` [AArch64][SVE 12/32] Make more use of bfd_boolean Richard Sandiford
@ 2016-08-25 13:39   ` Richard Earnshaw (lists)
  2016-09-16 11:56     ` Richard Sandiford
  0 siblings, 1 reply; 76+ messages in thread
From: Richard Earnshaw (lists) @ 2016-08-25 13:39 UTC (permalink / raw)
  To: binutils, richard.sandiford

On 23/08/16 10:13, Richard Sandiford wrote:
> Following on from the previous patch, which converted the
> aarch64_reg_parse_32_64 parameters to bfd_booleans, this one
> does the same for parse_address_main and parse_address.
> It also documents the parameters.
> 
> This isn't an attempt to convert the whole file to use bfd_booleans
> more often.  It's simply trying to avoid inconsistencies with new
> SVE parameters.
> 
> OK to install?
> 
> Thanks,
> Richard
> 
> 
> gas/
> 	* config/tc-aarch64.c (parse_address_main): Turn reloc and
> 	accept_reg_post_index into bfd_booleans.  Add commentary.
> 	(parse_address_reloc): Update accordingly.  Add commentary.
> 	(parse_address): Likewise.  Also change accept_reg_post_index
> 	into a bfd_boolean here.
> 	(parse_operands): Update calls accordingly.

My comment on the previous patch applies somewhat here too, although the
two bools are not as closely related here.  In particular statements
such as

  return parse_address_main (str, operand, TRUE, FALSE);

are not intuitively obvious to the reader of the code.

I accept that this patch is not really changing what the code previously
did for the worse, so I'll approve it on that basis.

OK.

R.

> 
> diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
> index 2e0e4f8..165ab9a 100644
> --- a/gas/config/tc-aarch64.c
> +++ b/gas/config/tc-aarch64.c
> @@ -3197,12 +3197,17 @@ parse_shifter_operand_reloc (char **str, aarch64_opnd_info *operand,
>  
>     The shift/extension information, if any, will be stored in .shifter.
>  
> -   It is the caller's responsibility to check for addressing modes not
> -   supported by the instruction, and to set inst.reloc.type.  */
> +   RELOC says whether relocation operators should be accepted
> +   and ACCEPT_REG_POST_INDEX says whether post-indexed register
> +   addressing should be accepted.
> +
> +   In all other cases, it is the caller's responsibility to check whether
> +   the addressing mode is supported by the instruction.  It is also the
> +   caller's responsibility to set inst.reloc.type.  */
>  
>  static bfd_boolean
> -parse_address_main (char **str, aarch64_opnd_info *operand, int reloc,
> -		    int accept_reg_post_index)
> +parse_address_main (char **str, aarch64_opnd_info *operand, bfd_boolean reloc,
> +		    bfd_boolean accept_reg_post_index)
>  {
>    char *p = *str;
>    int reg;
> @@ -3455,19 +3460,26 @@ parse_address_main (char **str, aarch64_opnd_info *operand, int reloc,
>    return TRUE;
>  }
>  
> -/* Return TRUE on success; otherwise return FALSE.  */
> +/* Parse an address that cannot contain relocation operators.
> +   Look for and parse "[Xn], (Xm|#m)" as post-indexed addressing
> +   if ACCEPT_REG_POST_INDEX is true.
> +
> +   Return TRUE on success.  */
>  static bfd_boolean
>  parse_address (char **str, aarch64_opnd_info *operand,
> -	       int accept_reg_post_index)
> +	       bfd_boolean accept_reg_post_index)
>  {
> -  return parse_address_main (str, operand, 0, accept_reg_post_index);
> +  return parse_address_main (str, operand, FALSE, accept_reg_post_index);
>  }
>  
> -/* Return TRUE on success; otherwise return FALSE.  */
> +/* Parse an address that can contain relocation operators.  Do not
> +   accept post-indexed addressing.
> +
> +   Return TRUE on success.  */
>  static bfd_boolean
>  parse_address_reloc (char **str, aarch64_opnd_info *operand)
>  {
> -  return parse_address_main (str, operand, 1, 0);
> +  return parse_address_main (str, operand, TRUE, FALSE);
>  }
>  
>  /* Parse an operand for a MOVZ, MOVN or MOVK instruction.
> @@ -5534,7 +5546,7 @@ parse_operands (char *str, const aarch64_opcode *opcode)
>  
>  	case AARCH64_OPND_ADDR_REGOFF:
>  	  /* [<Xn|SP>, <R><m>{, <extend> {<amount>}}]  */
> -	  po_misc_or_fail (parse_address (&str, info, 0));
> +	  po_misc_or_fail (parse_address (&str, info, FALSE));
>  	  if (info->addr.pcrel || !info->addr.offset.is_reg
>  	      || !info->addr.preind || info->addr.postind
>  	      || info->addr.writeback)
> @@ -5553,7 +5565,7 @@ parse_operands (char *str, const aarch64_opcode *opcode)
>  	  break;
>  
>  	case AARCH64_OPND_ADDR_SIMM7:
> -	  po_misc_or_fail (parse_address (&str, info, 0));
> +	  po_misc_or_fail (parse_address (&str, info, FALSE));
>  	  if (info->addr.pcrel || info->addr.offset.is_reg
>  	      || (!info->addr.preind && !info->addr.postind))
>  	    {
> @@ -5609,7 +5621,7 @@ parse_operands (char *str, const aarch64_opcode *opcode)
>  
>  	case AARCH64_OPND_SIMD_ADDR_POST:
>  	  /* [<Xn|SP>], <Xm|#<amount>>  */
> -	  po_misc_or_fail (parse_address (&str, info, 1));
> +	  po_misc_or_fail (parse_address (&str, info, TRUE));
>  	  if (!info->addr.postind || !info->addr.writeback)
>  	    {
>  	      set_syntax_error (_("invalid addressing mode"));
> 

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [AArch64][SVE 13/32] Add an F_STRICT flag
  2016-08-23  9:14 ` [AArch64][SVE 13/32] Add an F_STRICT flag Richard Sandiford
@ 2016-08-25 13:45   ` Richard Earnshaw (lists)
  0 siblings, 0 replies; 76+ messages in thread
From: Richard Earnshaw (lists) @ 2016-08-25 13:45 UTC (permalink / raw)
  To: binutils, richard.sandiford

On 23/08/16 10:14, Richard Sandiford wrote:
> SVE predicate operands can appear in three forms:
> 
> 1. unsuffixed: "Pn"
> 2. with a predication type: "Pn/[ZM]"
> 3. with a size suffix: "Pn.[BHSD]"
> 
> No variation is allowed: unsuffixed operands cannot have a (redundant)
> suffix, and the suffixes can never be dropped.  Unsuffixed Pn are used
> in LDR and STR, but they are also used for Pg operands in cases where
> the result is scalar and where there is therefore no choice to be made
> between "merging" and "zeroing".  This means that some Pg operands have
> suffixes and others don't.
> 
> It would be possible to use context-sensitive parsing to handle
> this difference.  The tc-aarch64.c code would then raise an error
> if the wrong kind of suffix is used for a particular instruction.
> 
> However, we get much more user-friendly error messages if we parse
> all three forms for all SVE instructions and record the suffix as a
> qualifier.  The normal qualifier matching code can then report cases
> where the wrong kind of suffix is used.  This is a slight extension
> of existing usage, which really only checks for the wrong choice of
> suffix within a particular kind of suffix.
> 
> The only catch is that a "NIL" entry in the qualifier list
> specifically means "no suffix should be present" (case 1 above).
> NIL isn't a wildcard here.  It also means that an instruction that
> requires all-NIL qualifiers can fail to match (because a suffix was
> supplied when it shouldn't have been); this requires a slight change
> to find_best_match.
> 
> This patch adds an F_STRICT flag to select this behaviour.
> The flag will be set for all SVE instructions.  The behaviour
> for other instructions doesn't change.
> 
> OK to install?
> 
> Thanks,
> Richard
> 
> 
> include/opcode/
> 	* aarch64.h (F_STRICT): New flag.
> 
> opcodes/
> 	* aarch64-opc.c (match_operands_qualifier): Handle F_STRICT.
> 
> gas/
> 	* config/tc-aarch64.c (find_best_match): Simplify, allowing an
> 	instruction with all-NIL qualifiers to fail to match.

OK.

R.

> 
> diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
> index 165ab9a..9591704 100644
> --- a/gas/config/tc-aarch64.c
> +++ b/gas/config/tc-aarch64.c
> @@ -4182,7 +4182,7 @@ find_best_match (const aarch64_inst *instr,
>      }
>  
>    max_num_matched = 0;
> -  idx = -1;
> +  idx = 0;
>  
>    /* For each pattern.  */
>    for (i = 0; i < AARCH64_MAX_QLF_SEQ_NUM; ++i, ++qualifiers_list)
> @@ -4194,9 +4194,6 @@ find_best_match (const aarch64_inst *instr,
>        if (empty_qualifier_sequence_p (qualifiers) == TRUE)
>  	{
>  	  DEBUG_TRACE_IF (i == 0, "empty list of qualifier sequence");
> -	  if (i != 0 && idx == -1)
> -	    /* If nothing has been matched, return the 1st sequence.  */
> -	    idx = 0;
>  	  break;
>  	}
>  
> diff --git a/include/opcode/aarch64.h b/include/opcode/aarch64.h
> index 1e38749..24a2ddb 100644
> --- a/include/opcode/aarch64.h
> +++ b/include/opcode/aarch64.h
> @@ -598,7 +598,9 @@ extern aarch64_opcode aarch64_opcode_table[];
>  #define F_OD(X) (((X) & 0x7) << 24)
>  /* Instruction has the field of 'sz'.  */
>  #define F_LSE_SZ (1 << 27)
> -/* Next bit is 28.  */
> +/* Require an exact qualifier match, even for NIL qualifiers.  */
> +#define F_STRICT (1ULL << 28)
> +/* Next bit is 29.  */
>  
>  static inline bfd_boolean
>  alias_opcode_p (const aarch64_opcode *opcode)
> diff --git a/opcodes/aarch64-opc.c b/opcodes/aarch64-opc.c
> index 322b991..d870fd6 100644
> --- a/opcodes/aarch64-opc.c
> +++ b/opcodes/aarch64-opc.c
> @@ -854,7 +854,7 @@ aarch64_find_best_match (const aarch64_inst *inst,
>  static int
>  match_operands_qualifier (aarch64_inst *inst, bfd_boolean update_p)
>  {
> -  int i;
> +  int i, nops;
>    aarch64_opnd_qualifier_seq_t qualifiers;
>  
>    if (!aarch64_find_best_match (inst, inst->opcode->qualifiers_list, -1,
> @@ -864,6 +864,15 @@ match_operands_qualifier (aarch64_inst *inst, bfd_boolean update_p)
>        return 0;
>      }
>  
> +  if (inst->opcode->flags & F_STRICT)
> +    {
> +      /* Require an exact qualifier match, even for NIL qualifiers.  */
> +      nops = aarch64_num_of_operands (inst->opcode);
> +      for (i = 0; i < nops; ++i)
> +	if (inst->operands[i].qualifier != qualifiers[i])
> +	  return FALSE;
> +    }
> +
>    /* Update the qualifiers.  */
>    if (update_p == TRUE)
>      for (i = 0; i < AARCH64_MAX_OPND_NUM; ++i)
> 

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [AArch64][SVE 14/32] Make aarch64_logical_immediate_p take an element size
  2016-08-23  9:15 ` [AArch64][SVE 14/32] Make aarch64_logical_immediate_p take an element size Richard Sandiford
@ 2016-08-25 13:48   ` Richard Earnshaw (lists)
  0 siblings, 0 replies; 76+ messages in thread
From: Richard Earnshaw (lists) @ 2016-08-25 13:48 UTC (permalink / raw)
  To: binutils, richard.sandiford

On 23/08/16 10:15, Richard Sandiford wrote:
> SVE supports logical immediate operations on 8-bit, 16-bit and 32-bit
> elements, treating them as aliases of operations on 64-bit elements in
> which the immediate is replicated.  This patch therefore replaces the
> "32-bit/64-bit" input to aarch64_logical_immediate_p with a more
> general "number of bytes" input.
> 
> OK to install?
> 
> Thanks,
> Richard
> 
> 
> opcodes/
> 	* aarch64-opc.c (aarch64_logical_immediate_p): Replace is32
> 	with an esize parameter.
> 	(operand_general_constraint_met_p): Update accordingly.
> 	Fix misindented code.
> 	* aarch64-asm.c (aarch64_ins_limm): Update call to
> 	aarch64_logical_immediate_p.

OK
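
For example (just a usage sketch, with the value picked for
illustration), a caller checking an 8-bit element immediate now passes
the unreplicated value together with its element size:

  aarch64_insn encoding;

  /* 0x55 with esize == 1 stands for the replicated 64-bit value
     0x5555555555555555, which is a valid logical immediate.  */
  if (aarch64_logical_immediate_p (0x55, 1, &encoding))
    ...

where previously the choice was limited to 32-bit or 64-bit.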

R.

> 
> diff --git a/opcodes/aarch64-asm.c b/opcodes/aarch64-asm.c
> index 2430be5..8fbd66f 100644
> --- a/opcodes/aarch64-asm.c
> +++ b/opcodes/aarch64-asm.c
> @@ -436,11 +436,11 @@ aarch64_ins_limm (const aarch64_operand *self, const aarch64_opnd_info *info,
>  {
>    aarch64_insn value;
>    uint64_t imm = info->imm.value;
> -  int is32 = aarch64_get_qualifier_esize (inst->operands[0].qualifier) == 4;
> +  int esize = aarch64_get_qualifier_esize (inst->operands[0].qualifier);
>  
>    if (inst->opcode->op == OP_BIC)
>      imm = ~imm;
> -  if (aarch64_logical_immediate_p (imm, is32, &value) == FALSE)
> +  if (aarch64_logical_immediate_p (imm, esize, &value) == FALSE)
>      /* The constraint check should have guaranteed this wouldn't happen.  */
>      assert (0);
>  
> diff --git a/opcodes/aarch64-opc.c b/opcodes/aarch64-opc.c
> index d870fd6..84da821 100644
> --- a/opcodes/aarch64-opc.c
> +++ b/opcodes/aarch64-opc.c
> @@ -1062,16 +1062,18 @@ build_immediate_table (void)
>     be accepted by logical (immediate) instructions
>     e.g. ORR <Xd|SP>, <Xn>, #<imm>.
>  
> -   IS32 indicates whether or not VALUE is a 32-bit immediate.
> +   ESIZE is the number of bytes in the decoded immediate value.
>     If ENCODING is not NULL, on the return of TRUE, the standard encoding for
>     VALUE will be returned in *ENCODING.  */
>  
>  bfd_boolean
> -aarch64_logical_immediate_p (uint64_t value, int is32, aarch64_insn *encoding)
> +aarch64_logical_immediate_p (uint64_t value, int esize, aarch64_insn *encoding)
>  {
>    simd_imm_encoding imm_enc;
>    const simd_imm_encoding *imm_encoding;
>    static bfd_boolean initialized = FALSE;
> +  uint64_t upper;
> +  int i;
>  
>    DEBUG_TRACE ("enter with 0x%" PRIx64 "(%" PRIi64 "), is32: %d", value,
>  	       value, is32);
> @@ -1082,17 +1084,16 @@ aarch64_logical_immediate_p (uint64_t value, int is32, aarch64_insn *encoding)
>        initialized = TRUE;
>      }
>  
> -  if (is32)
> -    {
> -      /* Allow all zeros or all ones in top 32-bits, so that
> -	 constant expressions like ~1 are permitted.  */
> -      if (value >> 32 != 0 && value >> 32 != 0xffffffff)
> -	return FALSE;
> +  /* Allow all zeros or all ones in top bits, so that
> +     constant expressions like ~1 are permitted.  */
> +  upper = (uint64_t) -1 << (esize * 4) << (esize * 4);
> +  if ((value & ~upper) != value && (value | upper) != value)
> +    return FALSE;
>  
> -      /* Replicate the 32 lower bits to the 32 upper bits.  */
> -      value &= 0xffffffff;
> -      value |= value << 32;
> -    }
> +  /* Replicate to a full 64-bit value.  */
> +  value &= ~upper;
> +  for (i = esize * 8; i < 64; i *= 2)
> +    value |= (value << i);
>  
>    imm_enc.imm = value;
>    imm_encoding = (const simd_imm_encoding *)
> @@ -1645,7 +1646,7 @@ operand_general_constraint_met_p (const aarch64_opnd_info *opnds, int idx,
>  
>  	case AARCH64_OPND_IMM_MOV:
>  	    {
> -	      int is32 = aarch64_get_qualifier_esize (opnds[0].qualifier) == 4;
> +	      int esize = aarch64_get_qualifier_esize (opnds[0].qualifier);
>  	      imm = opnd->imm.value;
>  	      assert (idx == 1);
>  	      switch (opcode->op)
> @@ -1654,7 +1655,7 @@ operand_general_constraint_met_p (const aarch64_opnd_info *opnds, int idx,
>  		  imm = ~imm;
>  		  /* Fall through...  */
>  		case OP_MOV_IMM_WIDE:
> -		  if (!aarch64_wide_constant_p (imm, is32, NULL))
> +		  if (!aarch64_wide_constant_p (imm, esize == 4, NULL))
>  		    {
>  		      set_other_error (mismatch_detail, idx,
>  				       _("immediate out of range"));
> @@ -1662,7 +1663,7 @@ operand_general_constraint_met_p (const aarch64_opnd_info *opnds, int idx,
>  		    }
>  		  break;
>  		case OP_MOV_IMM_LOG:
> -		  if (!aarch64_logical_immediate_p (imm, is32, NULL))
> +		  if (!aarch64_logical_immediate_p (imm, esize, NULL))
>  		    {
>  		      set_other_error (mismatch_detail, idx,
>  				       _("immediate out of range"));
> @@ -1707,18 +1708,18 @@ operand_general_constraint_met_p (const aarch64_opnd_info *opnds, int idx,
>  	  break;
>  
>  	case AARCH64_OPND_LIMM:
> -	    {
> -	      int is32 = opnds[0].qualifier == AARCH64_OPND_QLF_W;
> -	      uint64_t uimm = opnd->imm.value;
> -	      if (opcode->op == OP_BIC)
> -		uimm = ~uimm;
> -	      if (aarch64_logical_immediate_p (uimm, is32, NULL) == FALSE)
> -		{
> -		  set_other_error (mismatch_detail, idx,
> -				   _("immediate out of range"));
> -		  return 0;
> -		}
> -	    }
> +	  {
> +	    int esize = aarch64_get_qualifier_esize (opnds[0].qualifier);
> +	    uint64_t uimm = opnd->imm.value;
> +	    if (opcode->op == OP_BIC)
> +	      uimm = ~uimm;
> +	    if (aarch64_logical_immediate_p (uimm, esize, NULL) == FALSE)
> +	      {
> +		set_other_error (mismatch_detail, idx,
> +				 _("immediate out of range"));
> +		return 0;
> +	      }
> +	  }
>  	  break;
>  
>  	case AARCH64_OPND_IMM0:
> 

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [AArch64][SVE 15/32] Add {insert,extract}_all_fields helpers
  2016-08-23  9:15 ` [AArch64][SVE 15/32] Add {insert,extract}_all_fields helpers Richard Sandiford
@ 2016-08-25 13:50   ` Richard Earnshaw (lists)
  0 siblings, 0 replies; 76+ messages in thread
From: Richard Earnshaw (lists) @ 2016-08-25 13:50 UTC (permalink / raw)
  To: binutils, richard.sandiford

On 23/08/16 10:15, Richard Sandiford wrote:
> Several of the SVE operands use the aarch64_operand fields array
> to store the fields that make up the operand, rather than hard-coding
> the names in the C code.  This patch adds helpers for inserting and
> extracting those fields.
> 
> OK to install?
> 
> Thanks,
> Richard
> 
> 
> opcodes/
> 	* aarch64-asm.c: Include libiberty.h.
> 	(insert_fields): New function.
> 	(aarch64_ins_imm): Use it.
> 	* aarch64-dis.c (extract_fields): New function.
> 	(aarch64_ext_imm): Use it.
> 

OK.
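
To make the bit ordering concrete (a worked example based on the code
below, using the existing "TBZ b5:b40" operand): with a fields array of
{ FLD_b5, FLD_b40 }, insert_all_fields fills b40 with the low 5 bits of
the value first and b5 with the remaining bit, so a TBZ bit number of
45 (0b101101) puts 0b01101 in b40 and 1 in b5; the least significant
bits always land in the final field, matching the old two-field special
case in aarch64_ins_imm.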

R.

> diff --git a/opcodes/aarch64-asm.c b/opcodes/aarch64-asm.c
> index 8fbd66f..3b0a383 100644
> --- a/opcodes/aarch64-asm.c
> +++ b/opcodes/aarch64-asm.c
> @@ -20,6 +20,7 @@
>  
>  #include "sysdep.h"
>  #include <stdarg.h>
> +#include "libiberty.h"
>  #include "aarch64-asm.h"
>  
>  /* Utilities.  */
> @@ -55,6 +56,25 @@ insert_fields (aarch64_insn *code, aarch64_insn value, aarch64_insn mask, ...)
>    va_end (va);
>  }
>  
> +/* Insert a raw field value VALUE into all fields in SELF->fields.
> +   The least significant bit goes in the final field.  */
> +
> +static void
> +insert_all_fields (const aarch64_operand *self, aarch64_insn *code,
> +		   aarch64_insn value)
> +{
> +  unsigned int i;
> +  enum aarch64_field_kind kind;
> +
> +  for (i = ARRAY_SIZE (self->fields); i-- > 0; )
> +    if (self->fields[i] != FLD_NIL)
> +      {
> +	kind = self->fields[i];
> +	insert_field (kind, code, value, 0);
> +	value >>= fields[kind].width;
> +      }
> +}
> +
>  /* Operand inserters.  */
>  
>  /* Insert register number.  */
> @@ -318,17 +338,11 @@ aarch64_ins_imm (const aarch64_operand *self, const aarch64_opnd_info *info,
>  		 const aarch64_inst *inst ATTRIBUTE_UNUSED)
>  {
>    int64_t imm;
> -  /* Maximum of two fields to insert.  */
> -  assert (self->fields[2] == FLD_NIL);
>  
>    imm = info->imm.value;
>    if (operand_need_shift_by_two (self))
>      imm >>= 2;
> -  if (self->fields[1] == FLD_NIL)
> -    insert_field (self->fields[0], code, imm, 0);
> -  else
> -    /* e.g. TBZ b5:b40.  */
> -    insert_fields (code, imm, 0, 2, self->fields[1], self->fields[0]);
> +  insert_all_fields (self, code, imm);
>    return NULL;
>  }
>  
> diff --git a/opcodes/aarch64-dis.c b/opcodes/aarch64-dis.c
> index 9ffc713..67daa66 100644
> --- a/opcodes/aarch64-dis.c
> +++ b/opcodes/aarch64-dis.c
> @@ -145,6 +145,26 @@ extract_fields (aarch64_insn code, aarch64_insn mask, ...)
>    return value;
>  }
>  
> +/* Extract the value of all fields in SELF->fields from instruction CODE.
> +   The least significant bit comes from the final field.  */
> +
> +static aarch64_insn
> +extract_all_fields (const aarch64_operand *self, aarch64_insn code)
> +{
> +  aarch64_insn value;
> +  unsigned int i;
> +  enum aarch64_field_kind kind;
> +
> +  value = 0;
> +  for (i = 0; i < ARRAY_SIZE (self->fields) && self->fields[i] != FLD_NIL; ++i)
> +    {
> +      kind = self->fields[i];
> +      value <<= fields[kind].width;
> +      value |= extract_field (kind, code, 0);
> +    }
> +  return value;
> +}
> +
>  /* Sign-extend bit I of VALUE.  */
>  static inline int32_t
>  sign_extend (aarch64_insn value, unsigned i)
> @@ -575,14 +595,8 @@ aarch64_ext_imm (const aarch64_operand *self, aarch64_opnd_info *info,
>  		 const aarch64_inst *inst ATTRIBUTE_UNUSED)
>  {
> 	(parse_operands): Report an invalid floating point constant
> -  /* Maximum of two fields to extract.  */
> -  assert (self->fields[2] == FLD_NIL);
>  
> -  if (self->fields[1] == FLD_NIL)
> -    imm = extract_field (self->fields[0], code, 0);
> -  else
> -    /* e.g. TBZ b5:b40.  */
> -    imm = extract_fields (code, 0, 2, self->fields[0], self->fields[1]);
> +  imm = extract_all_fields (self, code);
>  
>    if (info->type == AARCH64_OPND_FPIMM)
>      info->imm.is_fp = 1;
> 

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [AArch64][SVE 16/32] Use specific insert/extract methods for fpimm
  2016-08-23  9:16 ` [AArch64][SVE 16/32] Use specific insert/extract methods for fpimm Richard Sandiford
@ 2016-08-25 13:52   ` Richard Earnshaw (lists)
  0 siblings, 0 replies; 76+ messages in thread
From: Richard Earnshaw (lists) @ 2016-08-25 13:52 UTC (permalink / raw)
  To: binutils, richard.sandiford

On 23/08/16 10:16, Richard Sandiford wrote:
> FPIMM used the normal "imm" insert/extract methods, with a specific
> test for FPIMM in the extract method.  SVE needs to use the same
> extractors, so rather than add extra checks for specific operand types,
> it seemed cleaner to use a separate insert/extract method.
> 
> OK to install?
> 
> Thanks,
> Richard
> 
> 
> opcodes/
> 	* aarch64-tbl.h (AARCH64_OPERANDS): Use fpimm rather than imm
> 	for FPIMM.
> 	* aarch64-asm.h (ins_fpimm): New inserter.
> 	* aarch64-asm.c (aarch64_ins_fpimm): New function.
> 	* aarch64-asm-2.c: Regenerate.
> 	* aarch64-dis.h (ext_fpimm): New extractor.
> 	* aarch64-dis.c (aarch64_ext_imm): Remove fpimm test.
> 	(aarch64_ext_fpimm): New function.
> 	* aarch64-dis-2.c: Regenerate.
> 

OK.

R.

> diff --git a/opcodes/aarch64-asm-2.c b/opcodes/aarch64-asm-2.c
> index 605bf08..439dd3d 100644
> --- a/opcodes/aarch64-asm-2.c
> +++ b/opcodes/aarch64-asm-2.c
> @@ -500,7 +500,6 @@ aarch64_insert_operand (const aarch64_operand *self,
>      case 34:
>        return aarch64_ins_ldst_elemlist (self, info, code, inst);
>      case 37:
> -    case 46:
>      case 47:
>      case 48:
>      case 49:
> @@ -525,6 +524,8 @@ aarch64_insert_operand (const aarch64_operand *self,
>      case 41:
>      case 42:
>        return aarch64_ins_advsimd_imm_modified (self, info, code, inst);
> +    case 46:
> +      return aarch64_ins_fpimm (self, info, code, inst);
>      case 59:
>        return aarch64_ins_limm (self, info, code, inst);
>      case 60:
> diff --git a/opcodes/aarch64-asm.c b/opcodes/aarch64-asm.c
> index 3b0a383..f291495 100644
> --- a/opcodes/aarch64-asm.c
> +++ b/opcodes/aarch64-asm.c
> @@ -417,6 +417,16 @@ aarch64_ins_advsimd_imm_modified (const aarch64_operand *self ATTRIBUTE_UNUSED,
>    return NULL;
>  }
>  
> +/* Insert fields for an 8-bit floating-point immediate.  */
> +const char *
> +aarch64_ins_fpimm (const aarch64_operand *self, const aarch64_opnd_info *info,
> +		   aarch64_insn *code,
> +		   const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  insert_all_fields (self, code, info->imm.value);
> +  return NULL;
> +}
> +
>  /* Insert #<fbits> for the immediate operand in fp fix-point instructions,
>     e.g.  SCVTF <Dd>, <Wn>, #<fbits>.  */
>  const char *
> diff --git a/opcodes/aarch64-asm.h b/opcodes/aarch64-asm.h
> index ad9183d..3211aff 100644
> --- a/opcodes/aarch64-asm.h
> +++ b/opcodes/aarch64-asm.h
> @@ -50,6 +50,7 @@ AARCH64_DECL_OPD_INSERTER (ins_advsimd_imm_shift);
>  AARCH64_DECL_OPD_INSERTER (ins_imm);
>  AARCH64_DECL_OPD_INSERTER (ins_imm_half);
>  AARCH64_DECL_OPD_INSERTER (ins_advsimd_imm_modified);
> +AARCH64_DECL_OPD_INSERTER (ins_fpimm);
>  AARCH64_DECL_OPD_INSERTER (ins_fbits);
>  AARCH64_DECL_OPD_INSERTER (ins_aimm);
>  AARCH64_DECL_OPD_INSERTER (ins_limm);
> diff --git a/opcodes/aarch64-dis-2.c b/opcodes/aarch64-dis-2.c
> index 8e85dbf..a86a84d 100644
> --- a/opcodes/aarch64-dis-2.c
> +++ b/opcodes/aarch64-dis-2.c
> @@ -10450,7 +10450,6 @@ aarch64_extract_operand (const aarch64_operand *self,
>      case 34:
>        return aarch64_ext_ldst_elemlist (self, info, code, inst);
>      case 37:
> -    case 46:
>      case 47:
>      case 48:
>      case 49:
> @@ -10478,6 +10477,8 @@ aarch64_extract_operand (const aarch64_operand *self,
>        return aarch64_ext_advsimd_imm_modified (self, info, code, inst);
>      case 43:
>        return aarch64_ext_shll_imm (self, info, code, inst);
> +    case 46:
> +      return aarch64_ext_fpimm (self, info, code, inst);
>      case 59:
>        return aarch64_ext_limm (self, info, code, inst);
>      case 60:
> diff --git a/opcodes/aarch64-dis.c b/opcodes/aarch64-dis.c
> index 67daa66..4c3b521 100644
> --- a/opcodes/aarch64-dis.c
> +++ b/opcodes/aarch64-dis.c
> @@ -598,9 +598,6 @@ aarch64_ext_imm (const aarch64_operand *self, aarch64_opnd_info *info,
>  
>    imm = extract_all_fields (self, code);
>  
> -  if (info->type == AARCH64_OPND_FPIMM)
> -    info->imm.is_fp = 1;
> -
>    if (operand_need_sign_extension (self))
>      imm = sign_extend (imm, get_operand_fields_width (self) - 1);
>  
> @@ -695,6 +692,17 @@ aarch64_ext_advsimd_imm_modified (const aarch64_operand *self ATTRIBUTE_UNUSED,
>    return 1;
>  }
>  
> +/* Decode an 8-bit floating-point immediate.  */
> +int
> +aarch64_ext_fpimm (const aarch64_operand *self, aarch64_opnd_info *info,
> +		   const aarch64_insn code,
> +		   const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  info->imm.value = extract_all_fields (self, code);
> +  info->imm.is_fp = 1;
> +  return 1;
> +}
> +
>  /* Decode scale for e.g. SCVTF <Dd>, <Wn>, #<fbits>.  */
>  int
>  aarch64_ext_fbits (const aarch64_operand *self ATTRIBUTE_UNUSED,
> diff --git a/opcodes/aarch64-dis.h b/opcodes/aarch64-dis.h
> index 9be5d7f..1f10157 100644
> --- a/opcodes/aarch64-dis.h
> +++ b/opcodes/aarch64-dis.h
> @@ -72,6 +72,7 @@ AARCH64_DECL_OPD_EXTRACTOR (ext_shll_imm);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_imm);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_imm_half);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_advsimd_imm_modified);
> +AARCH64_DECL_OPD_EXTRACTOR (ext_fpimm);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_fbits);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_aimm);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_limm);
> diff --git a/opcodes/aarch64-tbl.h b/opcodes/aarch64-tbl.h
> index b82678f..9a831e4 100644
> --- a/opcodes/aarch64-tbl.h
> +++ b/opcodes/aarch64-tbl.h
> @@ -2738,7 +2738,7 @@ struct aarch64_opcode aarch64_opcode_table[] =
>        "an immediate shift amount of 8, 16 or 32")			\
>      X(IMMEDIATE, 0, 0, "IMM0", 0, F(), "0")				\
>      X(IMMEDIATE, 0, 0, "FPIMM0", 0, F(), "0.0")				\
> -    Y(IMMEDIATE, imm, "FPIMM", 0, F(FLD_imm8),				\
> +    Y(IMMEDIATE, fpimm, "FPIMM", 0, F(FLD_imm8),			\
>        "an 8-bit floating-point constant")				\
>      Y(IMMEDIATE, imm, "IMMR", 0, F(FLD_immr),				\
>        "the right rotate amount")					\
> 

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [AArch64][SVE 17/32] Add a prefix parameter to print_register_list
  2016-08-23  9:16 ` [AArch64][SVE 17/32] Add a prefix parameter to print_register_list Richard Sandiford
@ 2016-08-25 13:53   ` Richard Earnshaw (lists)
  0 siblings, 0 replies; 76+ messages in thread
From: Richard Earnshaw (lists) @ 2016-08-25 13:53 UTC (permalink / raw)
  To: binutils, richard.sandiford

On 23/08/16 10:16, Richard Sandiford wrote:
> This patch generalises the interface to print_register_list so
> that it can print register lists involving SVE z registers as
> well as AdvSIMD v ones.
> 
> OK to install?
> 
> Thanks,
> Richard
> 
> 
> opcodes/
> 	* aarch64-opc.c (print_register_list): Add a prefix parameter.
> 	(aarch64_print_operand): Update accordingly.
> 

OK.

R.

> diff --git a/opcodes/aarch64-opc.c b/opcodes/aarch64-opc.c
> index 84da821..6eac70a 100644
> --- a/opcodes/aarch64-opc.c
> +++ b/opcodes/aarch64-opc.c
> @@ -2261,9 +2261,11 @@ expand_fp_imm (int size, uint32_t imm8)
>  }
>  
>  /* Produce the string representation of the register list operand *OPND
> -   in the buffer pointed by BUF of size SIZE.  */
> +   in the buffer pointed by BUF of size SIZE.  PREFIX is the part of
> +   the register name that comes before the register number, such as "v".  */
>  static void
> -print_register_list (char *buf, size_t size, const aarch64_opnd_info *opnd)
> +print_register_list (char *buf, size_t size, const aarch64_opnd_info *opnd,
> +		     const char *prefix)
>  {
>    const int num_regs = opnd->reglist.num_regs;
>    const int first_reg = opnd->reglist.first_regno;
> @@ -2284,8 +2286,8 @@ print_register_list (char *buf, size_t size, const aarch64_opnd_info *opnd)
>       more than two registers in the list, and the register numbers
>       are monotonically increasing in increments of one.  */
>    if (num_regs > 2 && last_reg > first_reg)
> -    snprintf (buf, size, "{v%d.%s-v%d.%s}%s", first_reg, qlf_name,
> -	      last_reg, qlf_name, tb);
> +    snprintf (buf, size, "{%s%d.%s-%s%d.%s}%s", prefix, first_reg, qlf_name,
> +	      prefix, last_reg, qlf_name, tb);
>    else
>      {
>        const int reg0 = first_reg;
> @@ -2296,20 +2298,21 @@ print_register_list (char *buf, size_t size, const aarch64_opnd_info *opnd)
>        switch (num_regs)
>  	{
>  	case 1:
> -	  snprintf (buf, size, "{v%d.%s}%s", reg0, qlf_name, tb);
> +	  snprintf (buf, size, "{%s%d.%s}%s", prefix, reg0, qlf_name, tb);
>  	  break;
>  	case 2:
> -	  snprintf (buf, size, "{v%d.%s, v%d.%s}%s", reg0, qlf_name,
> -		    reg1, qlf_name, tb);
> +	  snprintf (buf, size, "{%s%d.%s, %s%d.%s}%s", prefix, reg0, qlf_name,
> +		    prefix, reg1, qlf_name, tb);
>  	  break;
>  	case 3:
> -	  snprintf (buf, size, "{v%d.%s, v%d.%s, v%d.%s}%s", reg0, qlf_name,
> -		    reg1, qlf_name, reg2, qlf_name, tb);
> +	  snprintf (buf, size, "{%s%d.%s, %s%d.%s, %s%d.%s}%s",
> +		    prefix, reg0, qlf_name, prefix, reg1, qlf_name,
> +		    prefix, reg2, qlf_name, tb);
>  	  break;
>  	case 4:
> -	  snprintf (buf, size, "{v%d.%s, v%d.%s, v%d.%s, v%d.%s}%s",
> -		    reg0, qlf_name, reg1, qlf_name, reg2, qlf_name,
> -		    reg3, qlf_name, tb);
> +	  snprintf (buf, size, "{%s%d.%s, %s%d.%s, %s%d.%s, %s%d.%s}%s",
> +		    prefix, reg0, qlf_name, prefix, reg1, qlf_name,
> +		    prefix, reg2, qlf_name, prefix, reg3, qlf_name, tb);
>  	  break;
>  	}
>      }
> @@ -2513,7 +2516,7 @@ aarch64_print_operand (char *buf, size_t size, bfd_vma pc,
>      case AARCH64_OPND_LVt:
>      case AARCH64_OPND_LVt_AL:
>      case AARCH64_OPND_LEt:
> -      print_register_list (buf, size, opnd);
> +      print_register_list (buf, size, opnd, "v");
>        break;
>  
>      case AARCH64_OPND_Cn:
> 

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [AArch64][SVE 18/32] Tidy definition of aarch64-opc.c:int_reg
  2016-08-23  9:16 ` [AArch64][SVE 18/32] Tidy definition of aarch64-opc.c:int_reg Richard Sandiford
@ 2016-08-25 13:55   ` Richard Earnshaw (lists)
  0 siblings, 0 replies; 76+ messages in thread
From: Richard Earnshaw (lists) @ 2016-08-25 13:55 UTC (permalink / raw)
  To: binutils, richard.sandiford

On 23/08/16 10:16, Richard Sandiford wrote:
> Use a macro to define 31 regular registers followed by a supplied
> value for 0b11111.  The SVE code will also use this for vector base
> and offset registers.
> 
> OK to install?
> 
> Thanks,
> Richard
> 
> 
> opcodes/
> 	* aarch64-opc.c (BANK): New macro.
> 	(R32, R64): Take a register number as argument
> 	(int_reg): Use BANK.
> 

OK.

R.

> diff --git a/opcodes/aarch64-opc.c b/opcodes/aarch64-opc.c
> index 6eac70a..3f9be62 100644
> --- a/opcodes/aarch64-opc.c
> +++ b/opcodes/aarch64-opc.c
> @@ -2149,32 +2149,25 @@ aarch64_operand_index (const enum aarch64_opnd *operands, enum aarch64_opnd oper
>    return -1;
>  }
>  \f
> +/* R0...R30, followed by FOR31.  */
> +#define BANK(R, FOR31) \
> +  { R  (0), R  (1), R  (2), R  (3), R  (4), R  (5), R  (6), R  (7), \
> +    R  (8), R  (9), R (10), R (11), R (12), R (13), R (14), R (15), \
> +    R (16), R (17), R (18), R (19), R (20), R (21), R (22), R (23), \
> +    R (24), R (25), R (26), R (27), R (28), R (29), R (30),  FOR31 }
>  /* [0][0]  32-bit integer regs with sp   Wn
>     [0][1]  64-bit integer regs with sp   Xn  sf=1
>     [1][0]  32-bit integer regs with #0   Wn
>     [1][1]  64-bit integer regs with #0   Xn  sf=1 */
>  static const char *int_reg[2][2][32] = {
> -#define R32 "w"
> -#define R64 "x"
> -  { { R32  "0", R32  "1", R32  "2", R32  "3", R32  "4", R32  "5", R32  "6", R32  "7",
> -      R32  "8", R32  "9", R32 "10", R32 "11", R32 "12", R32 "13", R32 "14", R32 "15",
> -      R32 "16", R32 "17", R32 "18", R32 "19", R32 "20", R32 "21", R32 "22", R32 "23",
> -      R32 "24", R32 "25", R32 "26", R32 "27", R32 "28", R32 "29", R32 "30",    "wsp" },
> -    { R64  "0", R64  "1", R64  "2", R64  "3", R64  "4", R64  "5", R64  "6", R64  "7",
> -      R64  "8", R64  "9", R64 "10", R64 "11", R64 "12", R64 "13", R64 "14", R64 "15",
> -      R64 "16", R64 "17", R64 "18", R64 "19", R64 "20", R64 "21", R64 "22", R64 "23",
> -      R64 "24", R64 "25", R64 "26", R64 "27", R64 "28", R64 "29", R64 "30",     "sp" } },
> -  { { R32  "0", R32  "1", R32  "2", R32  "3", R32  "4", R32  "5", R32  "6", R32  "7",
> -      R32  "8", R32  "9", R32 "10", R32 "11", R32 "12", R32 "13", R32 "14", R32 "15",
> -      R32 "16", R32 "17", R32 "18", R32 "19", R32 "20", R32 "21", R32 "22", R32 "23",
> -      R32 "24", R32 "25", R32 "26", R32 "27", R32 "28", R32 "29", R32 "30", R32 "zr" },
> -    { R64  "0", R64  "1", R64  "2", R64  "3", R64  "4", R64  "5", R64  "6", R64  "7",
> -      R64  "8", R64  "9", R64 "10", R64 "11", R64 "12", R64 "13", R64 "14", R64 "15",
> -      R64 "16", R64 "17", R64 "18", R64 "19", R64 "20", R64 "21", R64 "22", R64 "23",
> -      R64 "24", R64 "25", R64 "26", R64 "27", R64 "28", R64 "29", R64 "30", R64 "zr" } }
> +#define R32(X) "w" #X
> +#define R64(X) "x" #X
> +  { BANK (R32, "wsp"), BANK (R64, "sp") },
> +  { BANK (R32, "wzr"), BANK (R64, "xzr") }
>  #undef R64
>  #undef R32
>  };
> +#undef BANK
>  
>  /* Return the integer register name.
>     if SP_REG_P is not 0, R31 is an SP reg, other R31 is the zero reg.  */
> 

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [AArch64][SVE 19/32] Refactor address-printing code
  2016-08-23  9:17 ` [AArch64][SVE 19/32] Refactor address-printing code Richard Sandiford
@ 2016-08-25 13:57   ` Richard Earnshaw (lists)
  0 siblings, 0 replies; 76+ messages in thread
From: Richard Earnshaw (lists) @ 2016-08-25 13:57 UTC (permalink / raw)
  To: binutils, richard.sandiford

On 23/08/16 10:17, Richard Sandiford wrote:
> SVE adds addresses in which the base or offset are vector registers.
> The addresses otherwise have the same kind of form as normal AArch64
> addresses, including things like SXTW with or without a shift, UXTW
> with or without a shift, and LSL.
> 
> This patch therefore refactors the address-printing code so that it
> can cope with both scalar and vector registers.
> 
> OK to install?
> 
> Thanks,
> Richard
> 
> 
> opcodes/
> 	* aarch64-opc.c (get_offset_int_reg_name): New function.
> 	(print_immediate_offset_address): Likewise.
> 	(print_register_offset_address): Take the base and offset
> 	registers as parameters.
> 	(aarch64_print_operand): Update caller accordingly.  Use
> 	print_immediate_offset_address.
> 

OK.
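
For reference, the four shapes the new print_immediate_offset_address
helper produces (base register "x0" and offset 16 chosen purely for
illustration) are:

  [x0,#16]!	pre-indexed with writeback
  [x0],#16	post-indexed with writeback
  [x0,#16]	plain immediate offset
  [x0]		zero offset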

R.

> diff --git a/opcodes/aarch64-opc.c b/opcodes/aarch64-opc.c
> index 3f9be62..7a73c7e 100644
> --- a/opcodes/aarch64-opc.c
> +++ b/opcodes/aarch64-opc.c
> @@ -2189,6 +2189,27 @@ get_64bit_int_reg_name (int regno, int sp_reg_p)
>    return int_reg[has_zr][1][regno];
>  }
>  
> +/* Get the name of the integer offset register in OPND, using the shift type
> +   to decide whether it's a word or doubleword.  */
> +
> +static inline const char *
> +get_offset_int_reg_name (const aarch64_opnd_info *opnd)
> +{
> +  switch (opnd->shifter.kind)
> +    {
> +    case AARCH64_MOD_UXTW:
> +    case AARCH64_MOD_SXTW:
> +      return get_int_reg_name (opnd->addr.offset.regno, AARCH64_OPND_QLF_W, 0);
> +
> +    case AARCH64_MOD_LSL:
> +    case AARCH64_MOD_SXTX:
> +      return get_int_reg_name (opnd->addr.offset.regno, AARCH64_OPND_QLF_X, 0);
> +
> +    default:
> +      abort ();
> +    }
> +}
> +
>  /* Types for expanding an encoded 8-bit value to a floating-point value.  */
>  
>  typedef union
> @@ -2311,28 +2332,43 @@ print_register_list (char *buf, size_t size, const aarch64_opnd_info *opnd,
>      }
>  }
>  
> +/* Print the register+immediate address in OPND to BUF, which has SIZE
> +   characters.  BASE is the name of the base register.  */
> +
> +static void
> +print_immediate_offset_address (char *buf, size_t size,
> +				const aarch64_opnd_info *opnd,
> +				const char *base)
> +{
> +  if (opnd->addr.writeback)
> +    {
> +      if (opnd->addr.preind)
> +	snprintf (buf, size, "[%s,#%d]!", base, opnd->addr.offset.imm);
> +      else
> +	snprintf (buf, size, "[%s],#%d", base, opnd->addr.offset.imm);
> +    }
> +  else
> +    {
> +      if (opnd->addr.offset.imm)
> +	snprintf (buf, size, "[%s,#%d]", base, opnd->addr.offset.imm);
> +      else
> +	snprintf (buf, size, "[%s]", base);
> +    }
> +}
> +
>  /* Produce the string representation of the register offset address operand
> -   *OPND in the buffer pointed by BUF of size SIZE.  */
> +   *OPND in the buffer pointed by BUF of size SIZE.  BASE and OFFSET are
> +   the names of the base and offset registers.  */
>  static void
>  print_register_offset_address (char *buf, size_t size,
> -			       const aarch64_opnd_info *opnd)
> +			       const aarch64_opnd_info *opnd,
> +			       const char *base, const char *offset)
>  {
>    char tb[16];			/* Temporary buffer.  */
> -  bfd_boolean lsl_p = FALSE;	/* Is LSL shift operator?  */
> -  bfd_boolean wm_p = FALSE;	/* Should Rm be Wm?  */
>    bfd_boolean print_extend_p = TRUE;
>    bfd_boolean print_amount_p = TRUE;
>    const char *shift_name = aarch64_operand_modifiers[opnd->shifter.kind].name;
>  
> -  switch (opnd->shifter.kind)
> -    {
> -    case AARCH64_MOD_UXTW: wm_p = TRUE; break;
> -    case AARCH64_MOD_LSL : lsl_p = TRUE; break;
> -    case AARCH64_MOD_SXTW: wm_p = TRUE; break;
> -    case AARCH64_MOD_SXTX: break;
> -    default: assert (0);
> -    }
> -
>    if (!opnd->shifter.amount && (opnd->qualifier != AARCH64_OPND_QLF_S_B
>  				|| !opnd->shifter.amount_present))
>      {
> @@ -2341,7 +2377,7 @@ print_register_offset_address (char *buf, size_t size,
>        print_amount_p = FALSE;
>        /* Likewise, no need to print the shift operator LSL in such a
>  	 situation.  */
> -      if (lsl_p)
> +      if (opnd->shifter.kind == AARCH64_MOD_LSL)
>  	print_extend_p = FALSE;
>      }
>  
> @@ -2356,12 +2392,7 @@ print_register_offset_address (char *buf, size_t size,
>    else
>      tb[0] = '\0';
>  
> -  snprintf (buf, size, "[%s,%s%s]",
> -	    get_64bit_int_reg_name (opnd->addr.base_regno, 1),
> -	    get_int_reg_name (opnd->addr.offset.regno,
> -			      wm_p ? AARCH64_OPND_QLF_W : AARCH64_OPND_QLF_X,
> -			      0 /* sp_reg_p */),
> -	    tb);
> +  snprintf (buf, size, "[%s,%s%s]", base, offset, tb);
>  }
>  
>  /* Generate the string representation of the operand OPNDS[IDX] for OPCODE
> @@ -2668,27 +2699,16 @@ aarch64_print_operand (char *buf, size_t size, bfd_vma pc,
>        break;
>  
>      case AARCH64_OPND_ADDR_REGOFF:
> -      print_register_offset_address (buf, size, opnd);
> +      print_register_offset_address
> +	(buf, size, opnd, get_64bit_int_reg_name (opnd->addr.base_regno, 1),
> +	 get_offset_int_reg_name (opnd));
>        break;
>  
>      case AARCH64_OPND_ADDR_SIMM7:
>      case AARCH64_OPND_ADDR_SIMM9:
>      case AARCH64_OPND_ADDR_SIMM9_2:
> -      name = get_64bit_int_reg_name (opnd->addr.base_regno, 1);
> -      if (opnd->addr.writeback)
> -	{
> -	  if (opnd->addr.preind)
> -	    snprintf (buf, size, "[%s,#%d]!", name, opnd->addr.offset.imm);
> -	  else
> -	    snprintf (buf, size, "[%s],#%d", name, opnd->addr.offset.imm);
> -	}
> -      else
> -	{
> -	  if (opnd->addr.offset.imm)
> -	    snprintf (buf, size, "[%s,#%d]", name, opnd->addr.offset.imm);
> -	  else
> -	    snprintf (buf, size, "[%s]", name);
> -	}
> +      print_immediate_offset_address
> +	(buf, size, opnd, get_64bit_int_reg_name (opnd->addr.base_regno, 1));
>        break;
>  
>      case AARCH64_OPND_ADDR_UIMM12:
> 

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [AArch64][SVE 20/32] Add support for tied operands
  2016-08-23  9:18 ` [AArch64][SVE 20/32] Add support for tied operands Richard Sandiford
@ 2016-08-25 13:59   ` Richard Earnshaw (lists)
  0 siblings, 0 replies; 76+ messages in thread
From: Richard Earnshaw (lists) @ 2016-08-25 13:59 UTC (permalink / raw)
  To: binutils, richard.sandiford

On 23/08/16 10:18, Richard Sandiford wrote:
> SVE has some instructions in which the same register appears twice
> in the assembly string, once as an input and once as an output.
> This patch adds a general mechanism for that.
> 
> The patch needs to add new information to the instruction entries.
> One option would have been to extend the flags field of the opcode
> to 64 bits (since we already rely on 64-bit integers being available
> on the host).  However, the *_INSN macros mean that it's easy to add
> new information as top-level fields without affecting the existing
> table entries too much.  Going for that option seemed to give slightly
> neater code.
> 
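The semantics of the new field are compact enough to show in a few lines.
A minimal standalone sketch (not binutils code): a nonzero tied_operand
value N means operand N must name the same register as operand 0, which is
what lets the assembler report "operand N must be the same register as
operand 1" for the SVE forms added later in the series:

#include <stdio.h>

struct sketch_opcode  { unsigned char tied_operand; };
struct sketch_operand { int regno; };

/* Mirror of the new check in aarch64_match_operands_constraint.  */
static int
tied_operands_ok (const struct sketch_opcode *opc,
                  const struct sketch_operand *opnds)
{
  int i = opc->tied_operand;
  return i == 0 || opnds[0].regno == opnds[i].regno;
}

int
main (void)
{
  struct sketch_opcode opc = { 2 };     /* operand 2 tied to operand 0 */
  struct sketch_operand ok[]  = { { 5 }, { 1 }, { 5 } };
  struct sketch_operand bad[] = { { 5 }, { 1 }, { 7 } };
  printf ("%d %d\n", tied_operands_ok (&opc, ok),
          tied_operands_ok (&opc, bad));                /* prints "1 0" */
  return 0;
}
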
> OK to install?
> 
> Thanks,
> Richard
> 
> 
> include/opcode/
> 	* aarch64.h (aarch64_opcode): Add a tied_operand field.
> 	(AARCH64_OPDE_UNTIED_OPERAND): New aarch64_operand_error_kind.
> 
> opcodes/
> 	* aarch64-tbl.h (CORE_INSN, __FP_INSN, SIMD_INSN, CRYP_INSN)
> 	(_CRC_INSN, _LSE_INSN, _LOR_INSN, RDMA_INSN, FP16_INSN, SF16_INSN)
> 	(V8_2_INSN, aarch64_opcode_table): Initialize tied_operand field.
> 	* aarch64-opc.c (aarch64_match_operands_constraint): Check for
> 	tied operands.
> 
> gas/
> 	* config/tc-aarch64.c (output_operand_error_record): Handle
> 	AARCH64_OPDE_UNTIED_OPERAND.

OK.

R.

> 
> diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
> index 9591704..37f7d26 100644
> --- a/gas/config/tc-aarch64.c
> +++ b/gas/config/tc-aarch64.c
> @@ -4419,6 +4419,11 @@ output_operand_error_record (const operand_error_record *record, char *str)
>  	}
>        break;
>  
> +    case AARCH64_OPDE_UNTIED_OPERAND:
> +      as_bad (_("operand %d must be the same register as operand 1 -- `%s'"),
> +	      detail->index + 1, str);
> +      break;
> +
>      case AARCH64_OPDE_OUT_OF_RANGE:
>        if (detail->data[0] != detail->data[1])
>  	as_bad (_("%s out of range %d to %d at operand %d -- `%s'"),
> diff --git a/include/opcode/aarch64.h b/include/opcode/aarch64.h
> index 24a2ddb..d39f10d 100644
> --- a/include/opcode/aarch64.h
> +++ b/include/opcode/aarch64.h
> @@ -539,6 +539,10 @@ struct aarch64_opcode
>    /* Flags providing information about this instruction */
>    uint32_t flags;
>  
> +  /* If nonzero, this operand and operand 0 are both registers and
> +     are required to have the same register number.  */
> +  unsigned char tied_operand;
> +
>    /* If non-NULL, a function to verify that a given instruction is valid.  */
>    bfd_boolean (* verifier) (const struct aarch64_opcode *, const aarch64_insn);
>  };
> @@ -872,6 +876,10 @@ typedef struct aarch64_inst aarch64_inst;
>       No syntax error, but the operands are not a valid combination, e.g.
>       FMOV D0,S0
>  
> +   AARCH64_OPDE_UNTIED_OPERAND
> +     The asm failed to use the same register for a destination operand
> +     and a tied source operand.
> +
>     AARCH64_OPDE_OUT_OF_RANGE
>       Error about some immediate value out of a valid range.
>  
> @@ -908,6 +916,7 @@ enum aarch64_operand_error_kind
>    AARCH64_OPDE_SYNTAX_ERROR,
>    AARCH64_OPDE_FATAL_SYNTAX_ERROR,
>    AARCH64_OPDE_INVALID_VARIANT,
> +  AARCH64_OPDE_UNTIED_OPERAND,
>    AARCH64_OPDE_OUT_OF_RANGE,
>    AARCH64_OPDE_UNALIGNED,
>    AARCH64_OPDE_REG_LIST,
> diff --git a/opcodes/aarch64-opc.c b/opcodes/aarch64-opc.c
> index 7a73c7e..30501fc 100644
> --- a/opcodes/aarch64-opc.c
> +++ b/opcodes/aarch64-opc.c
> @@ -2058,6 +2058,23 @@ aarch64_match_operands_constraint (aarch64_inst *inst,
>  
>    DEBUG_TRACE ("enter");
>  
> +  /* Check for cases where a source register needs to be the same as the
> +     destination register.  Do this before matching qualifiers since if
> +     an instruction has both invalid tying and invalid qualifiers,
> +     the error about qualifiers would suggest several alternative
> +     instructions that also have invalid tying.  */
> +  i = inst->opcode->tied_operand;
> +  if (i > 0 && (inst->operands[0].reg.regno != inst->operands[i].reg.regno))
> +    {
> +      if (mismatch_detail)
> +	{
> +	  mismatch_detail->kind = AARCH64_OPDE_UNTIED_OPERAND;
> +	  mismatch_detail->index = i;
> +	  mismatch_detail->error = NULL;
> +	}
> +      return 0;
> +    }
> +
>    /* Match operands' qualifier.
>       *INST has already had qualifier establish for some, if not all, of
>       its operands; we need to find out whether these established
> diff --git a/opcodes/aarch64-tbl.h b/opcodes/aarch64-tbl.h
> index 9a831e4..8f1c9b2 100644
> --- a/opcodes/aarch64-tbl.h
> +++ b/opcodes/aarch64-tbl.h
> @@ -1393,27 +1393,27 @@ static const aarch64_feature_set aarch64_feature_stat_profile =
>  #define ARMV8_2		&aarch64_feature_v8_2
>  
>  #define CORE_INSN(NAME,OPCODE,MASK,CLASS,OP,OPS,QUALS,FLAGS) \
> -  { NAME, OPCODE, MASK, CLASS, OP, CORE, OPS, QUALS, FLAGS, NULL }
> +  { NAME, OPCODE, MASK, CLASS, OP, CORE, OPS, QUALS, FLAGS, 0, NULL }
>  #define __FP_INSN(NAME,OPCODE,MASK,CLASS,OP,OPS,QUALS,FLAGS) \
> -  { NAME, OPCODE, MASK, CLASS, OP, FP, OPS, QUALS, FLAGS, NULL }
> +  { NAME, OPCODE, MASK, CLASS, OP, FP, OPS, QUALS, FLAGS, 0, NULL }
>  #define SIMD_INSN(NAME,OPCODE,MASK,CLASS,OP,OPS,QUALS,FLAGS) \
> -  { NAME, OPCODE, MASK, CLASS, OP, SIMD, OPS, QUALS, FLAGS, NULL }
> +  { NAME, OPCODE, MASK, CLASS, OP, SIMD, OPS, QUALS, FLAGS, 0, NULL }
>  #define CRYP_INSN(NAME,OPCODE,MASK,CLASS,OPS,QUALS,FLAGS) \
> -  { NAME, OPCODE, MASK, CLASS, 0, CRYPTO, OPS, QUALS, FLAGS, NULL }
> +  { NAME, OPCODE, MASK, CLASS, 0, CRYPTO, OPS, QUALS, FLAGS, 0, NULL }
>  #define _CRC_INSN(NAME,OPCODE,MASK,CLASS,OPS,QUALS,FLAGS) \
> -  { NAME, OPCODE, MASK, CLASS, 0, CRC, OPS, QUALS, FLAGS, NULL }
> +  { NAME, OPCODE, MASK, CLASS, 0, CRC, OPS, QUALS, FLAGS, 0, NULL }
>  #define _LSE_INSN(NAME,OPCODE,MASK,CLASS,OPS,QUALS,FLAGS) \
> -  { NAME, OPCODE, MASK, CLASS, 0, LSE, OPS, QUALS, FLAGS, NULL }
> +  { NAME, OPCODE, MASK, CLASS, 0, LSE, OPS, QUALS, FLAGS, 0, NULL }
>  #define _LOR_INSN(NAME,OPCODE,MASK,CLASS,OPS,QUALS,FLAGS) \
> -  { NAME, OPCODE, MASK, CLASS, 0, LOR, OPS, QUALS, FLAGS, NULL }
> +  { NAME, OPCODE, MASK, CLASS, 0, LOR, OPS, QUALS, FLAGS, 0, NULL }
>  #define RDMA_INSN(NAME,OPCODE,MASK,CLASS,OPS,QUALS,FLAGS) \
> -  { NAME, OPCODE, MASK, CLASS, 0, RDMA, OPS, QUALS, FLAGS, NULL }
> +  { NAME, OPCODE, MASK, CLASS, 0, RDMA, OPS, QUALS, FLAGS, 0, NULL }
>  #define FF16_INSN(NAME,OPCODE,MASK,CLASS,OPS,QUALS,FLAGS) \
> -  { NAME, OPCODE, MASK, CLASS, 0, FP_F16, OPS, QUALS, FLAGS, NULL }
> +  { NAME, OPCODE, MASK, CLASS, 0, FP_F16, OPS, QUALS, FLAGS, 0, NULL }
>  #define SF16_INSN(NAME,OPCODE,MASK,CLASS,OPS,QUALS,FLAGS)		\
> -  { NAME, OPCODE, MASK, CLASS, 0, SIMD_F16, OPS, QUALS, FLAGS, NULL }
> +  { NAME, OPCODE, MASK, CLASS, 0, SIMD_F16, OPS, QUALS, FLAGS, 0, NULL }
>  #define V8_2_INSN(NAME,OPCODE,MASK,CLASS,OP,OPS,QUALS,FLAGS) \
> -  { NAME, OPCODE, MASK, CLASS, OP, ARMV8_2, OPS, QUALS, FLAGS, NULL }
> +  { NAME, OPCODE, MASK, CLASS, OP, ARMV8_2, OPS, QUALS, FLAGS, 0, NULL }
>  
>  struct aarch64_opcode aarch64_opcode_table[] =
>  {
> @@ -2389,13 +2389,13 @@ struct aarch64_opcode aarch64_opcode_table[] =
>    CORE_INSN ("ldp", 0x29400000, 0x7ec00000, ldstpair_off, 0, OP3 (Rt, Rt2, ADDR_SIMM7), QL_LDST_PAIR_R, F_SF),
>    CORE_INSN ("stp", 0x2d000000, 0x3fc00000, ldstpair_off, 0, OP3 (Ft, Ft2, ADDR_SIMM7), QL_LDST_PAIR_FP, 0),
>    CORE_INSN ("ldp", 0x2d400000, 0x3fc00000, ldstpair_off, 0, OP3 (Ft, Ft2, ADDR_SIMM7), QL_LDST_PAIR_FP, 0),
> -  {"ldpsw", 0x69400000, 0xffc00000, ldstpair_off, 0, CORE, OP3 (Rt, Rt2, ADDR_SIMM7), QL_LDST_PAIR_X32, 0, VERIFIER (ldpsw)},
> +  {"ldpsw", 0x69400000, 0xffc00000, ldstpair_off, 0, CORE, OP3 (Rt, Rt2, ADDR_SIMM7), QL_LDST_PAIR_X32, 0, 0, VERIFIER (ldpsw)},
>    /* Load/store register pair (indexed).  */
>    CORE_INSN ("stp", 0x28800000, 0x7ec00000, ldstpair_indexed, 0, OP3 (Rt, Rt2, ADDR_SIMM7), QL_LDST_PAIR_R, F_SF),
>    CORE_INSN ("ldp", 0x28c00000, 0x7ec00000, ldstpair_indexed, 0, OP3 (Rt, Rt2, ADDR_SIMM7), QL_LDST_PAIR_R, F_SF),
>    CORE_INSN ("stp", 0x2c800000, 0x3ec00000, ldstpair_indexed, 0, OP3 (Ft, Ft2, ADDR_SIMM7), QL_LDST_PAIR_FP, 0),
>    CORE_INSN ("ldp", 0x2cc00000, 0x3ec00000, ldstpair_indexed, 0, OP3 (Ft, Ft2, ADDR_SIMM7), QL_LDST_PAIR_FP, 0),
> -  {"ldpsw", 0x68c00000, 0xfec00000, ldstpair_indexed, 0, CORE, OP3 (Rt, Rt2, ADDR_SIMM7), QL_LDST_PAIR_X32, 0, VERIFIER (ldpsw)},
> +  {"ldpsw", 0x68c00000, 0xfec00000, ldstpair_indexed, 0, CORE, OP3 (Rt, Rt2, ADDR_SIMM7), QL_LDST_PAIR_X32, 0, 0, VERIFIER (ldpsw)},
>    /* Load register (literal).  */
>    CORE_INSN ("ldr",   0x18000000, 0xbf000000, loadlit, OP_LDR_LIT,   OP2 (Rt, ADDR_PCREL19),    QL_R_PCREL, F_GPRSIZE_IN_Q),
>    CORE_INSN ("ldr",   0x1c000000, 0x3f000000, loadlit, OP_LDRV_LIT,  OP2 (Ft, ADDR_PCREL19),    QL_FP_PCREL, 0),
> @@ -2613,8 +2613,8 @@ struct aarch64_opcode aarch64_opcode_table[] =
>    CORE_INSN ("wfi", 0xd503207f, 0xffffffff, ic_system, 0, OP0 (), {}, F_ALIAS),
>    CORE_INSN ("sev", 0xd503209f, 0xffffffff, ic_system, 0, OP0 (), {}, F_ALIAS),
>    CORE_INSN ("sevl",0xd50320bf, 0xffffffff, ic_system, 0, OP0 (), {}, F_ALIAS),
> -  {"esb", 0xd503221f, 0xffffffff, ic_system, 0, RAS, OP0 (), {}, F_ALIAS, NULL},
> -  {"psb", 0xd503223f, 0xffffffff, ic_system, 0, STAT_PROFILE, OP1 (BARRIER_PSB), {}, F_ALIAS, NULL},
> +  {"esb", 0xd503221f, 0xffffffff, ic_system, 0, RAS, OP0 (), {}, F_ALIAS, 0, NULL},
> +  {"psb", 0xd503223f, 0xffffffff, ic_system, 0, STAT_PROFILE, OP1 (BARRIER_PSB), {}, F_ALIAS, 0, NULL},
>    CORE_INSN ("clrex", 0xd503305f, 0xfffff0ff, ic_system, 0, OP1 (UIMM4), {}, F_OPD0_OPT | F_DEFAULT (0xF)),
>    CORE_INSN ("dsb", 0xd503309f, 0xfffff0ff, ic_system, 0, OP1 (BARRIER), {}, 0),
>    CORE_INSN ("dmb", 0xd50330bf, 0xfffff0ff, ic_system, 0, OP1 (BARRIER), {}, 0),
> @@ -2648,7 +2648,7 @@ struct aarch64_opcode aarch64_opcode_table[] =
>    CORE_INSN ("bgt", 0x5400000c, 0xff00001f, condbranch, 0, OP1 (ADDR_PCREL19), QL_PCREL_NIL, F_ALIAS | F_PSEUDO),
>    CORE_INSN ("ble", 0x5400000d, 0xff00001f, condbranch, 0, OP1 (ADDR_PCREL19), QL_PCREL_NIL, F_ALIAS | F_PSEUDO),
>  
> -  {0, 0, 0, 0, 0, 0, {}, {}, 0, NULL},
> +  {0, 0, 0, 0, 0, 0, {}, {}, 0, 0, NULL},
>  };
>  
>  #ifdef AARCH64_OPERANDS
> 

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [AArch64][SVE 21/32] Add Zn and Pn registers
  2016-08-23  9:18 ` [AArch64][SVE 21/32] Add Zn and Pn registers Richard Sandiford
@ 2016-08-25 14:07   ` Richard Earnshaw (lists)
  0 siblings, 0 replies; 76+ messages in thread
From: Richard Earnshaw (lists) @ 2016-08-25 14:07 UTC (permalink / raw)
  To: binutils, richard.sandiford

On 23/08/16 10:18, Richard Sandiford wrote:
> This patch adds the Zn and Pn registers, and associated fields and
> operands.
> 
> OK to install?
> 
> Thanks,
> Richard
> 
> 
> include/opcode/
> 	* aarch64.h (AARCH64_OPND_CLASS_SVE_REG): New aarch64_operand_class.
> 	(AARCH64_OPND_CLASS_PRED_REG): Likewise.
> 	(AARCH64_OPND_SVE_Pd, AARCH64_OPND_SVE_Pg3, AARCH64_OPND_SVE_Pg4_5)
> 	(AARCH64_OPND_SVE_Pg4_10, AARCH64_OPND_SVE_Pg4_16)
> 	(AARCH64_OPND_SVE_Pm, AARCH64_OPND_SVE_Pn, AARCH64_OPND_SVE_Pt)
> 	(AARCH64_OPND_SVE_Za_5, AARCH64_OPND_SVE_Za_16, AARCH64_OPND_SVE_Zd)
> 	(AARCH64_OPND_SVE_Zm_5, AARCH64_OPND_SVE_Zm_16, AARCH64_OPND_SVE_Zn)
> 	(AARCH64_OPND_SVE_Zn_INDEX, AARCH64_OPND_SVE_ZnxN)
> 	(AARCH64_OPND_SVE_Zt, AARCH64_OPND_SVE_ZtxN): New aarch64_opnds.
> 
> opcodes/
> 	* aarch64-tbl.h (AARCH64_OPERANDS): Add entries for new SVE operands.
> 	* aarch64-opc.h (FLD_SVE_Pd, FLD_SVE_Pg3, FLD_SVE_Pg4_5)
> 	(FLD_SVE_Pg4_10, FLD_SVE_Pg4_16, FLD_SVE_Pm, FLD_SVE_Pn, FLD_SVE_Pt)
> 	(FLD_SVE_Za_5, FLD_SVE_Za_16, FLD_SVE_Zd, FLD_SVE_Zm_5, FLD_SVE_Zm_16)
> 	(FLD_SVE_Zn, FLD_SVE_Zt, FLD_SVE_tzsh): New aarch64_field_kinds.
> 	* aarch64-opc.c (fields): Add corresponding entries here.
> 	(operand_general_constraint_met_p): Check that SVE register lists
> 	have the correct length.  Check the ranges of SVE index registers.
> 	Check for cases where p8-p15 are used in 3-bit predicate fields.
> 	(aarch64_print_operand): Handle the new SVE operands.
> 	* aarch64-opc-2.c: Regenerate.
> 	* aarch64-asm.h (ins_sve_index, ins_sve_reglist): New inserters.
> 	* aarch64-asm.c (aarch64_ins_sve_index): New function.
> 	(aarch64_ins_sve_reglist): Likewise.
> 	* aarch64-asm-2.c: Regenerate.
> 	* aarch64-dis.h (ext_sve_index, ext_sve_reglist): New extractors.
> 	* aarch64-dis.c (aarch64_ext_sve_index): New function.
> 	(aarch64_ext_sve_reglist): Likewise.
> 	* aarch64-dis-2.c: Regenerate.
> 
> gas/
> 	* config/tc-aarch64.c (NTA_HASVARWIDTH): New macro.
> 	(AARCH64_REG_TYPES): Add ZN and PN.
> 	(get_reg_expected_msg): Handle them.
> 	(aarch64_check_reg_type): Likewise.  Update comment for
> 	REG_TYPE_R_Z_BHSDQ_V.
> 	(parse_vector_type_for_operand): Add a reg_type parameter.
> 	Skip the width for Zn and Pn registers.
> 	(parse_typed_reg): Extend vector handling to Zn and Pn.  Update the
> 	call to parse_vector_type_for_operand.  Set HASVARTYPE for Zn and Pn,
> 	expecting the width to be 0.
> 	(parse_vector_reg_list): Restrict error about [BHSD]nn operands to
> 	REG_TYPE_VN.
> 	(vectype_to_qualifier): Use S_[BHSD] qualifiers for NTA_HASVARWIDTH.
> 	(parse_operands): Handle the new Zn and Pn operands.
> 	(REGSET16): New macro, split out from...
> 	(REGSET31): ...here.
> 	(reg_names): Add Zn and Pn entries.
> 

OK.

R.


> diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
> index 37f7d26..53e602f 100644
> --- a/gas/config/tc-aarch64.c
> +++ b/gas/config/tc-aarch64.c
> @@ -87,8 +87,9 @@ enum vector_el_type
>  };
>  
>  /* Bits for DEFINED field in vector_type_el.  */
> -#define NTA_HASTYPE  1
> -#define NTA_HASINDEX 2
> +#define NTA_HASTYPE     1
> +#define NTA_HASINDEX    2
> +#define NTA_HASVARWIDTH 4
>  
>  struct vector_type_el
>  {
> @@ -265,6 +266,8 @@ struct reloc_entry
>    BASIC_REG_TYPE(FP_Q)	/* q[0-31] */	\
>    BASIC_REG_TYPE(CN)	/* c[0-7]  */	\
>    BASIC_REG_TYPE(VN)	/* v[0-31] */	\
> +  BASIC_REG_TYPE(ZN)	/* z[0-31] */	\
> +  BASIC_REG_TYPE(PN)	/* p[0-15] */	\
>    /* Typecheck: any 64-bit int reg         (inc SP exc XZR) */		\
>    MULTI_REG_TYPE(R64_SP, REG_TYPE(R_64) | REG_TYPE(SP_64))		\
>    /* Typecheck: any int                    (inc {W}SP inc [WX]ZR) */	\
> @@ -378,6 +381,12 @@ get_reg_expected_msg (aarch64_reg_type reg_type)
>      case REG_TYPE_VN:		/* any V reg  */
>        msg = N_("vector register expected");
>        break;
> +    case REG_TYPE_ZN:
> +      msg = N_("SVE vector register expected");
> +      break;
> +    case REG_TYPE_PN:
> +      msg = N_("SVE predicate register expected");
> +      break;
>      default:
>        as_fatal (_("invalid register type %d"), reg_type);
>      }
> @@ -678,12 +687,15 @@ aarch64_check_reg_type (const reg_entry *reg, aarch64_reg_type type)
>      {
>      case REG_TYPE_R64_SP:	/* 64-bit integer reg (inc SP exc XZR).  */
>      case REG_TYPE_R_Z_SP:	/* Integer reg (inc {X}SP inc [WX]ZR).  */
> -    case REG_TYPE_R_Z_BHSDQ_V:	/* Any register apart from Cn.  */
> +    case REG_TYPE_R_Z_BHSDQ_V:	/* Any register apart from Zn, Pn or Cn.  */
>      case REG_TYPE_BHSDQ:	/* Any [BHSDQ]P FP or SIMD scalar register.  */
>      case REG_TYPE_VN:		/* Vector register.  */
>        gas_assert (reg->type < REG_TYPE_MAX && type < REG_TYPE_MAX);
>        return ((reg_type_masks[reg->type] & reg_type_masks[type])
>  	      == reg_type_masks[reg->type]);
> +    case REG_TYPE_ZN:
> +    case REG_TYPE_PN:
> +      return reg->type == type;
>      default:
>        as_fatal ("unhandled type %d", type);
>        abort ();
> @@ -751,15 +763,16 @@ aarch64_reg_parse_32_64 (char **ccp, bfd_boolean accept_sp,
>    return reg->number;
>  }
>  
> -/* Parse the qualifier of a SIMD vector register or a SIMD vector element.
> -   Fill in *PARSED_TYPE and return TRUE if the parsing succeeds;
> -   otherwise return FALSE.
> +/* Parse the qualifier of a vector register or vector element of type
> +   REG_TYPE.  Fill in *PARSED_TYPE and return TRUE if the parsing
> +   succeeds; otherwise return FALSE.
>  
>     Accept only one occurrence of:
>     8b 16b 2h 4h 8h 2s 4s 1d 2d
>     b h s d q  */
>  static bfd_boolean
> -parse_vector_type_for_operand (struct vector_type_el *parsed_type, char **str)
> +parse_vector_type_for_operand (aarch64_reg_type reg_type,
> +			       struct vector_type_el *parsed_type, char **str)
>  {
>    char *ptr = *str;
>    unsigned width;
> @@ -769,7 +782,7 @@ parse_vector_type_for_operand (struct vector_type_el *parsed_type, char **str)
>    /* skip '.' */
>    ptr++;
>  
> -  if (!ISDIGIT (*ptr))
> +  if (reg_type == REG_TYPE_ZN || reg_type == REG_TYPE_PN || !ISDIGIT (*ptr))
>      {
>        width = 0;
>        goto elt_size;
> @@ -876,15 +889,23 @@ parse_typed_reg (char **ccp, aarch64_reg_type type, aarch64_reg_type *rtype,
>      }
>    type = reg->type;
>  
> -  if (type == REG_TYPE_VN && *str == '.')
> +  if ((type == REG_TYPE_VN || type == REG_TYPE_ZN || type == REG_TYPE_PN)
> +      && *str == '.')
>      {
> -      if (!parse_vector_type_for_operand (&parsetype, &str))
> +      if (!parse_vector_type_for_operand (type, &parsetype, &str))
>  	return PARSE_FAIL;
>  
>        /* Register is of the form Vn.[bhsdq].  */
>        is_typed_vecreg = TRUE;
>  
> -      if (parsetype.width == 0)
> +      if (type == REG_TYPE_ZN || type == REG_TYPE_PN)
> +	{
> +	  /* The width is always variable; we don't allow an integer width
> +	     to be specified.  */
> +	  gas_assert (parsetype.width == 0);
> +	  atype.defined |= NTA_HASVARWIDTH | NTA_HASTYPE;
> +	}
> +      else if (parsetype.width == 0)
>  	/* Expect index. In the new scheme we cannot have
>  	   Vn.[bhsdq] represent a scalar. Therefore any
>  	   Vn.[bhsdq] should have an index following it.
> @@ -1061,7 +1082,7 @@ parse_vector_reg_list (char **ccp, aarch64_reg_type type,
>  	  continue;
>  	}
>        /* reject [bhsd]n */
> -      if (typeinfo.defined == 0)
> +      if (type == REG_TYPE_VN && typeinfo.defined == 0)
>  	{
>  	  set_first_syntax_error (_("invalid scalar register in list"));
>  	  error = TRUE;
> @@ -4687,7 +4708,7 @@ vectype_to_qualifier (const struct vector_type_el *vectype)
>  
>    gas_assert (vectype->type >= NT_b && vectype->type <= NT_q);
>  
> -  if (vectype->defined & NTA_HASINDEX)
> +  if (vectype->defined & (NTA_HASINDEX | NTA_HASVARWIDTH))
>      /* Vector element register.  */
>      return AARCH64_OPND_QLF_S_B + vectype->type;
>    else
> @@ -5027,6 +5048,7 @@ parse_operands (char *str, const aarch64_opcode *opcode)
>        struct vector_type_el vectype;
>        aarch64_opnd_qualifier_t qualifier;
>        aarch64_opnd_info *info = &inst.base.operands[i];
> +      aarch64_reg_type reg_type;
>  
>        DEBUG_TRACE ("parse operand %d", i);
>  
> @@ -5109,22 +5131,54 @@ parse_operands (char *str, const aarch64_opcode *opcode)
>  	  info->qualifier = AARCH64_OPND_QLF_S_B + (rtype - REG_TYPE_FP_B);
>  	  break;
>  
> +	case AARCH64_OPND_SVE_Pd:
> +	case AARCH64_OPND_SVE_Pg3:
> +	case AARCH64_OPND_SVE_Pg4_5:
> +	case AARCH64_OPND_SVE_Pg4_10:
> +	case AARCH64_OPND_SVE_Pg4_16:
> +	case AARCH64_OPND_SVE_Pm:
> +	case AARCH64_OPND_SVE_Pn:
> +	case AARCH64_OPND_SVE_Pt:
> +	  reg_type = REG_TYPE_PN;
> +	  goto vector_reg;
> +
> +	case AARCH64_OPND_SVE_Za_5:
> +	case AARCH64_OPND_SVE_Za_16:
> +	case AARCH64_OPND_SVE_Zd:
> +	case AARCH64_OPND_SVE_Zm_5:
> +	case AARCH64_OPND_SVE_Zm_16:
> +	case AARCH64_OPND_SVE_Zn:
> +	case AARCH64_OPND_SVE_Zt:
> +	  reg_type = REG_TYPE_ZN;
> +	  goto vector_reg;
> +
>  	case AARCH64_OPND_Vd:
>  	case AARCH64_OPND_Vn:
>  	case AARCH64_OPND_Vm:
> -	  val = aarch64_reg_parse (&str, REG_TYPE_VN, NULL, &vectype);
> +	  reg_type = REG_TYPE_VN;
> +	vector_reg:
> +	  val = aarch64_reg_parse (&str, reg_type, NULL, &vectype);
>  	  if (val == PARSE_FAIL)
>  	    {
> -	      first_error (_(get_reg_expected_msg (REG_TYPE_VN)));
> +	      first_error (_(get_reg_expected_msg (reg_type)));
>  	      goto failure;
>  	    }
>  	  if (vectype.defined & NTA_HASINDEX)
>  	    goto failure;
>  
>  	  info->reg.regno = val;
> -	  info->qualifier = vectype_to_qualifier (&vectype);
> -	  if (info->qualifier == AARCH64_OPND_QLF_NIL)
> -	    goto failure;
> +	  if ((reg_type == REG_TYPE_PN || reg_type == REG_TYPE_ZN)
> +	      && vectype.type == NT_invtype)
> +	    /* Unqualified Pn and Zn registers are allowed in certain
> +	       contexts.  Rely on F_STRICT qualifier checking to catch
> +	       invalid uses.  */
> +	    info->qualifier = AARCH64_OPND_QLF_NIL;
> +	  else
> +	    {
> +	      info->qualifier = vectype_to_qualifier (&vectype);
> +	      if (info->qualifier == AARCH64_OPND_QLF_NIL)
> +		goto failure;
> +	    }
>  	  break;
>  
>  	case AARCH64_OPND_VdD1:
> @@ -5149,13 +5203,19 @@ parse_operands (char *str, const aarch64_opcode *opcode)
>  	  info->qualifier = AARCH64_OPND_QLF_S_D;
>  	  break;
>  
> +	case AARCH64_OPND_SVE_Zn_INDEX:
> +	  reg_type = REG_TYPE_ZN;
> +	  goto vector_reg_index;
> +
>  	case AARCH64_OPND_Ed:
>  	case AARCH64_OPND_En:
>  	case AARCH64_OPND_Em:
> -	  val = aarch64_reg_parse (&str, REG_TYPE_VN, NULL, &vectype);
> +	  reg_type = REG_TYPE_VN;
> +	vector_reg_index:
> +	  val = aarch64_reg_parse (&str, reg_type, NULL, &vectype);
>  	  if (val == PARSE_FAIL)
>  	    {
> -	      first_error (_(get_reg_expected_msg (REG_TYPE_VN)));
> +	      first_error (_(get_reg_expected_msg (reg_type)));
>  	      goto failure;
>  	    }
>  	  if (vectype.type == NT_invtype || !(vectype.defined & NTA_HASINDEX))
> @@ -5168,20 +5228,43 @@ parse_operands (char *str, const aarch64_opcode *opcode)
>  	    goto failure;
>  	  break;
>  
> +	case AARCH64_OPND_SVE_ZnxN:
> +	case AARCH64_OPND_SVE_ZtxN:
> +	  reg_type = REG_TYPE_ZN;
> +	  goto vector_reg_list;
> +
>  	case AARCH64_OPND_LVn:
>  	case AARCH64_OPND_LVt:
>  	case AARCH64_OPND_LVt_AL:
>  	case AARCH64_OPND_LEt:
> -	  if ((val = parse_vector_reg_list (&str, REG_TYPE_VN,
> -					    &vectype)) == PARSE_FAIL)
> -	    goto failure;
> -	  if (! reg_list_valid_p (val, /* accept_alternate */ 0))
> +	  reg_type = REG_TYPE_VN;
> +	vector_reg_list:
> +	  if (reg_type == REG_TYPE_ZN
> +	      && get_opcode_dependent_value (opcode) == 1
> +	      && *str != '{')
>  	    {
> -	      set_fatal_syntax_error (_("invalid register list"));
> -	      goto failure;
> +	      val = aarch64_reg_parse (&str, reg_type, NULL, &vectype);
> +	      if (val == PARSE_FAIL)
> +		{
> +		  first_error (_(get_reg_expected_msg (reg_type)));
> +		  goto failure;
> +		}
> +	      info->reglist.first_regno = val;
> +	      info->reglist.num_regs = 1;
> +	    }
> +	  else
> +	    {
> +	      val = parse_vector_reg_list (&str, reg_type, &vectype);
> +	      if (val == PARSE_FAIL)
> +		goto failure;
> +	      if (! reg_list_valid_p (val, /* accept_alternate */ 0))
> +		{
> +		  set_fatal_syntax_error (_("invalid register list"));
> +		  goto failure;
> +		}
> +	      info->reglist.first_regno = (val >> 2) & 0x1f;
> +	      info->reglist.num_regs = (val & 0x3) + 1;
>  	    }
> -	  info->reglist.first_regno = (val >> 2) & 0x1f;
> -	  info->reglist.num_regs = (val & 0x3) + 1;
>  	  if (operands[i] == AARCH64_OPND_LEt)
>  	    {
>  	      if (!(vectype.defined & NTA_HASINDEX))
> @@ -5189,8 +5272,17 @@ parse_operands (char *str, const aarch64_opcode *opcode)
>  	      info->reglist.has_index = 1;
>  	      info->reglist.index = vectype.index;
>  	    }
> -	  else if (!(vectype.defined & NTA_HASTYPE))
> -	    goto failure;
> +	  else
> +	    {
> +	      if (vectype.defined & NTA_HASINDEX)
> +		goto failure;
> +	      if (!(vectype.defined & NTA_HASTYPE))
> +		{
> +		  if (reg_type == REG_TYPE_ZN)
> +		    set_fatal_syntax_error (_("missing type suffix"));
> +		  goto failure;
> +		}
> +	    }
>  	  info->qualifier = vectype_to_qualifier (&vectype);
>  	  if (info->qualifier == AARCH64_OPND_QLF_NIL)
>  	    goto failure;
> @@ -6185,11 +6277,13 @@ aarch64_canonicalize_symbol_name (char *name)
>  
>  #define REGDEF(s,n,t) { #s, n, REG_TYPE_##t, TRUE }
>  #define REGNUM(p,n,t) REGDEF(p##n, n, t)
> -#define REGSET31(p,t) \
> +#define REGSET16(p,t) \
>    REGNUM(p, 0,t), REGNUM(p, 1,t), REGNUM(p, 2,t), REGNUM(p, 3,t), \
>    REGNUM(p, 4,t), REGNUM(p, 5,t), REGNUM(p, 6,t), REGNUM(p, 7,t), \
>    REGNUM(p, 8,t), REGNUM(p, 9,t), REGNUM(p,10,t), REGNUM(p,11,t), \
> -  REGNUM(p,12,t), REGNUM(p,13,t), REGNUM(p,14,t), REGNUM(p,15,t), \
> +  REGNUM(p,12,t), REGNUM(p,13,t), REGNUM(p,14,t), REGNUM(p,15,t)
> +#define REGSET31(p,t) \
> +  REGSET16(p, t), \
>    REGNUM(p,16,t), REGNUM(p,17,t), REGNUM(p,18,t), REGNUM(p,19,t), \
>    REGNUM(p,20,t), REGNUM(p,21,t), REGNUM(p,22,t), REGNUM(p,23,t), \
>    REGNUM(p,24,t), REGNUM(p,25,t), REGNUM(p,26,t), REGNUM(p,27,t), \
> @@ -6229,10 +6323,18 @@ static const reg_entry reg_names[] = {
>  
>    /* FP/SIMD registers.  */
>    REGSET (v, VN), REGSET (V, VN),
> +
> +  /* SVE vector registers.  */
> +  REGSET (z, ZN), REGSET (Z, ZN),
> +
> +  /* SVE predicate registers.  */
> +  REGSET16 (p, PN), REGSET16 (P, PN)
>  };
>  
>  #undef REGDEF
>  #undef REGNUM
> +#undef REGSET16
> +#undef REGSET31
>  #undef REGSET
>  
>  #define N 1
> diff --git a/include/opcode/aarch64.h b/include/opcode/aarch64.h
> index d39f10d..b0eb617 100644
> --- a/include/opcode/aarch64.h
> +++ b/include/opcode/aarch64.h
> @@ -120,6 +120,8 @@ enum aarch64_operand_class
>    AARCH64_OPND_CLASS_SISD_REG,
>    AARCH64_OPND_CLASS_SIMD_REGLIST,
>    AARCH64_OPND_CLASS_CP_REG,
> +  AARCH64_OPND_CLASS_SVE_REG,
> +  AARCH64_OPND_CLASS_PRED_REG,
>    AARCH64_OPND_CLASS_ADDRESS,
>    AARCH64_OPND_CLASS_IMMEDIATE,
>    AARCH64_OPND_CLASS_SYSTEM,
> @@ -241,6 +243,25 @@ enum aarch64_opnd
>    AARCH64_OPND_BARRIER_ISB,	/* Barrier operand for ISB.  */
>    AARCH64_OPND_PRFOP,		/* Prefetch operation.  */
>    AARCH64_OPND_BARRIER_PSB,	/* Barrier operand for PSB.  */
> +
> +  AARCH64_OPND_SVE_Pd,		/* SVE p0-p15 in Pd.  */
> +  AARCH64_OPND_SVE_Pg3,		/* SVE p0-p7 in Pg.  */
> +  AARCH64_OPND_SVE_Pg4_5,	/* SVE p0-p15 in Pg, bits [8,5].  */
> +  AARCH64_OPND_SVE_Pg4_10,	/* SVE p0-p15 in Pg, bits [13,10].  */
> +  AARCH64_OPND_SVE_Pg4_16,	/* SVE p0-p15 in Pg, bits [19,16].  */
> +  AARCH64_OPND_SVE_Pm,		/* SVE p0-p15 in Pm.  */
> +  AARCH64_OPND_SVE_Pn,		/* SVE p0-p15 in Pn.  */
> +  AARCH64_OPND_SVE_Pt,		/* SVE p0-p15 in Pt.  */
> +  AARCH64_OPND_SVE_Za_5,	/* SVE vector register in Za, bits [9,5].  */
> +  AARCH64_OPND_SVE_Za_16,	/* SVE vector register in Za, bits [20,16].  */
> +  AARCH64_OPND_SVE_Zd,		/* SVE vector register in Zd.  */
> +  AARCH64_OPND_SVE_Zm_5,	/* SVE vector register in Zm, bits [9,5].  */
> +  AARCH64_OPND_SVE_Zm_16,	/* SVE vector register in Zm, bits [20,16].  */
> +  AARCH64_OPND_SVE_Zn,		/* SVE vector register in Zn.  */
> +  AARCH64_OPND_SVE_Zn_INDEX,	/* Indexed SVE vector register, for DUP.  */
> +  AARCH64_OPND_SVE_ZnxN,	/* SVE vector register list in Zn.  */
> +  AARCH64_OPND_SVE_Zt,		/* SVE vector register in Zt.  */
> +  AARCH64_OPND_SVE_ZtxN,	/* SVE vector register list in Zt.  */
>  };
>  
>  /* Qualifier constrains an operand.  It either specifies a variant of an
> diff --git a/opcodes/aarch64-asm-2.c b/opcodes/aarch64-asm-2.c
> index 439dd3d..9c797b2 100644
> --- a/opcodes/aarch64-asm-2.c
> +++ b/opcodes/aarch64-asm-2.c
> @@ -480,6 +480,21 @@ aarch64_insert_operand (const aarch64_operand *self,
>      case 27:
>      case 35:
>      case 36:
> +    case 89:
> +    case 90:
> +    case 91:
> +    case 92:
> +    case 93:
> +    case 94:
> +    case 95:
> +    case 96:
> +    case 97:
> +    case 98:
> +    case 99:
> +    case 100:
> +    case 101:
> +    case 102:
> +    case 105:
>        return aarch64_ins_regno (self, info, code, inst);
>      case 12:
>        return aarch64_ins_reg_extended (self, info, code, inst);
> @@ -566,6 +581,11 @@ aarch64_insert_operand (const aarch64_operand *self,
>        return aarch64_ins_prfop (self, info, code, inst);
>      case 88:
>        return aarch64_ins_hint (self, info, code, inst);
> +    case 103:
> +      return aarch64_ins_sve_index (self, info, code, inst);
> +    case 104:
> +    case 106:
> +      return aarch64_ins_sve_reglist (self, info, code, inst);
>      default: assert (0); abort ();
>      }
>  }
> diff --git a/opcodes/aarch64-asm.c b/opcodes/aarch64-asm.c
> index f291495..c045f9e 100644
> --- a/opcodes/aarch64-asm.c
> +++ b/opcodes/aarch64-asm.c
> @@ -745,6 +745,33 @@ aarch64_ins_reg_shifted (const aarch64_operand *self ATTRIBUTE_UNUSED,
>    return NULL;
>  }
>  
> +/* Encode Zn[MM], where MM has a 7-bit triangular encoding.  The fields
> +   array specifies which field to use for Zn.  MM is encoded in the
> +   concatenation of imm5 and SVE_tszh, with imm5 being the less
> +   significant part.  */
> +const char *
> +aarch64_ins_sve_index (const aarch64_operand *self,
> +		       const aarch64_opnd_info *info, aarch64_insn *code,
> +		       const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  unsigned int esize = aarch64_get_qualifier_esize (info->qualifier);
> +  insert_field (self->fields[0], code, info->reglane.regno, 0);
> +  insert_fields (code, (info->reglane.index * 2 + 1) * esize, 0,
> +		 2, FLD_imm5, FLD_SVE_tszh);
> +  return NULL;
> +}
> +
> +/* Encode {Zn.<T> - Zm.<T>}.  The fields array specifies which field
> +   to use for Zn.  */
> +const char *
> +aarch64_ins_sve_reglist (const aarch64_operand *self,
> +			 const aarch64_opnd_info *info, aarch64_insn *code,
> +			 const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  insert_field (self->fields[0], code, info->reglist.first_regno, 0);
> +  return NULL;
> +}
> +
>  /* Miscellaneous encoding functions.  */
>  
>  /* Encode size[0], i.e. bit 22, for
> diff --git a/opcodes/aarch64-asm.h b/opcodes/aarch64-asm.h
> index 3211aff..ede366c 100644
> --- a/opcodes/aarch64-asm.h
> +++ b/opcodes/aarch64-asm.h
> @@ -69,6 +69,8 @@ AARCH64_DECL_OPD_INSERTER (ins_hint);
>  AARCH64_DECL_OPD_INSERTER (ins_prfop);
>  AARCH64_DECL_OPD_INSERTER (ins_reg_extended);
>  AARCH64_DECL_OPD_INSERTER (ins_reg_shifted);
> +AARCH64_DECL_OPD_INSERTER (ins_sve_index);
> +AARCH64_DECL_OPD_INSERTER (ins_sve_reglist);
>  
>  #undef AARCH64_DECL_OPD_INSERTER
>  
> diff --git a/opcodes/aarch64-dis-2.c b/opcodes/aarch64-dis-2.c
> index a86a84d..6ea010b 100644
> --- a/opcodes/aarch64-dis-2.c
> +++ b/opcodes/aarch64-dis-2.c
> @@ -10426,6 +10426,21 @@ aarch64_extract_operand (const aarch64_operand *self,
>      case 27:
>      case 35:
>      case 36:
> +    case 89:
> +    case 90:
> +    case 91:
> +    case 92:
> +    case 93:
> +    case 94:
> +    case 95:
> +    case 96:
> +    case 97:
> +    case 98:
> +    case 99:
> +    case 100:
> +    case 101:
> +    case 102:
> +    case 105:
>        return aarch64_ext_regno (self, info, code, inst);
>      case 8:
>        return aarch64_ext_regrt_sysins (self, info, code, inst);
> @@ -10519,6 +10534,11 @@ aarch64_extract_operand (const aarch64_operand *self,
>        return aarch64_ext_prfop (self, info, code, inst);
>      case 88:
>        return aarch64_ext_hint (self, info, code, inst);
> +    case 103:
> +      return aarch64_ext_sve_index (self, info, code, inst);
> +    case 104:
> +    case 106:
> +      return aarch64_ext_sve_reglist (self, info, code, inst);
>      default: assert (0); abort ();
>      }
>  }
> diff --git a/opcodes/aarch64-dis.c b/opcodes/aarch64-dis.c
> index 4c3b521..ab93234 100644
> --- a/opcodes/aarch64-dis.c
> +++ b/opcodes/aarch64-dis.c
> @@ -1185,6 +1185,40 @@ aarch64_ext_reg_shifted (const aarch64_operand *self ATTRIBUTE_UNUSED,
>  
>    return 1;
>  }
> +
> +/* Decode Zn[MM], where MM has a 7-bit triangular encoding.  The fields
> +   array specifies which field to use for Zn.  MM is encoded in the
> +   concatenation of imm5 and SVE_tszh, with imm5 being the less
> +   significant part.  */
> +int
> +aarch64_ext_sve_index (const aarch64_operand *self,
> +		       aarch64_opnd_info *info, aarch64_insn code,
> +		       const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  int val;
> +
> +  info->reglane.regno = extract_field (self->fields[0], code, 0);
> +  val = extract_fields (code, 0, 2, FLD_SVE_tszh, FLD_imm5);
> +  if ((val & 15) == 0)
> +    return 0;
> +  while ((val & 1) == 0)
> +    val /= 2;
> +  info->reglane.index = val / 2;
> +  return 1;
> +}
> +
> +/* Decode {Zn.<T> - Zm.<T>}.  The fields array specifies which field
> +   to use for Zn.  The opcode-dependent value specifies the number
> +   of registers in the list.  */
> +int
> +aarch64_ext_sve_reglist (const aarch64_operand *self,
> +			 aarch64_opnd_info *info, aarch64_insn code,
> +			 const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  info->reglist.first_regno = extract_field (self->fields[0], code, 0);
> +  info->reglist.num_regs = get_opcode_dependent_value (inst->opcode);
> +  return 1;
> +}
>  \f
>  /* Bitfields that are commonly used to encode certain operands' information
>     may be partially used as part of the base opcode in some instructions.
> diff --git a/opcodes/aarch64-dis.h b/opcodes/aarch64-dis.h
> index 1f10157..5efb904 100644
> --- a/opcodes/aarch64-dis.h
> +++ b/opcodes/aarch64-dis.h
> @@ -91,6 +91,8 @@ AARCH64_DECL_OPD_EXTRACTOR (ext_hint);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_prfop);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_reg_extended);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_reg_shifted);
> +AARCH64_DECL_OPD_EXTRACTOR (ext_sve_index);
> +AARCH64_DECL_OPD_EXTRACTOR (ext_sve_reglist);
>  
>  #undef AARCH64_DECL_OPD_EXTRACTOR
>  
> diff --git a/opcodes/aarch64-opc-2.c b/opcodes/aarch64-opc-2.c
> index b53bb5c..f8a7079 100644
> --- a/opcodes/aarch64-opc-2.c
> +++ b/opcodes/aarch64-opc-2.c
> @@ -113,6 +113,24 @@ const struct aarch64_operand aarch64_operands[] =
>    {AARCH64_OPND_CLASS_SYSTEM, "BARRIER_ISB", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {}, "the ISB option name SY or an optional 4-bit unsigned immediate"},
>    {AARCH64_OPND_CLASS_SYSTEM, "PRFOP", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {}, "a prefetch operation specifier"},
>    {AARCH64_OPND_CLASS_SYSTEM, "BARRIER_PSB", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {}, "the PSB option name CSYNC"},
> +  {AARCH64_OPND_CLASS_PRED_REG, "SVE_Pd", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Pd}, "an SVE predicate register"},
> +  {AARCH64_OPND_CLASS_PRED_REG, "SVE_Pg3", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Pg3}, "an SVE predicate register"},
> +  {AARCH64_OPND_CLASS_PRED_REG, "SVE_Pg4_5", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Pg4_5}, "an SVE predicate register"},
> +  {AARCH64_OPND_CLASS_PRED_REG, "SVE_Pg4_10", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Pg4_10}, "an SVE predicate register"},
> +  {AARCH64_OPND_CLASS_PRED_REG, "SVE_Pg4_16", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Pg4_16}, "an SVE predicate register"},
> +  {AARCH64_OPND_CLASS_PRED_REG, "SVE_Pm", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Pm}, "an SVE predicate register"},
> +  {AARCH64_OPND_CLASS_PRED_REG, "SVE_Pn", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Pn}, "an SVE predicate register"},
> +  {AARCH64_OPND_CLASS_PRED_REG, "SVE_Pt", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Pt}, "an SVE predicate register"},
> +  {AARCH64_OPND_CLASS_SVE_REG, "SVE_Za_5", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Za_5}, "an SVE vector register"},
> +  {AARCH64_OPND_CLASS_SVE_REG, "SVE_Za_16", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Za_16}, "an SVE vector register"},
> +  {AARCH64_OPND_CLASS_SVE_REG, "SVE_Zd", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zd}, "an SVE vector register"},
> +  {AARCH64_OPND_CLASS_SVE_REG, "SVE_Zm_5", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zm_5}, "an SVE vector register"},
> +  {AARCH64_OPND_CLASS_SVE_REG, "SVE_Zm_16", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zm_16}, "an SVE vector register"},
> +  {AARCH64_OPND_CLASS_SVE_REG, "SVE_Zn", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zn}, "an SVE vector register"},
> +  {AARCH64_OPND_CLASS_SVE_REG, "SVE_Zn_INDEX", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zn}, "an indexed SVE vector register"},
> +  {AARCH64_OPND_CLASS_SVE_REG, "SVE_ZnxN", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zn}, "a list of SVE vector registers"},
> +  {AARCH64_OPND_CLASS_SVE_REG, "SVE_Zt", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zt}, "an SVE vector register"},
> +  {AARCH64_OPND_CLASS_SVE_REG, "SVE_ZtxN", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zt}, "a list of SVE vector registers"},
>    {AARCH64_OPND_CLASS_NIL, "", 0, {0}, "DUMMY"},
>  };
>  
> diff --git a/opcodes/aarch64-opc.c b/opcodes/aarch64-opc.c
> index 30501fc..56a0169 100644
> --- a/opcodes/aarch64-opc.c
> +++ b/opcodes/aarch64-opc.c
> @@ -199,6 +199,22 @@ const aarch64_field fields[] =
>      { 31,  1 },	/* b5: in the test bit and branch instructions.  */
>      { 19,  5 },	/* b40: in the test bit and branch instructions.  */
>      { 10,  6 },	/* scale: in the fixed-point scalar to fp converting inst.  */
> +    {  0,  4 }, /* SVE_Pd: p0-p15, bits [3,0].  */
> +    { 10,  3 }, /* SVE_Pg3: p0-p7, bits [12,10].  */
> +    {  5,  4 }, /* SVE_Pg4_5: p0-p15, bits [8,5].  */
> +    { 10,  4 }, /* SVE_Pg4_10: p0-p15, bits [13,10].  */
> +    { 16,  4 }, /* SVE_Pg4_16: p0-p15, bits [19,16].  */
> +    { 16,  4 }, /* SVE_Pm: p0-p15, bits [19,16].  */
> +    {  5,  4 }, /* SVE_Pn: p0-p15, bits [8,5].  */
> +    {  0,  4 }, /* SVE_Pt: p0-p15, bits [3,0].  */
> +    {  5,  5 }, /* SVE_Za_5: SVE vector register, bits [9,5].  */
> +    { 16,  5 }, /* SVE_Za_16: SVE vector register, bits [20,16].  */
> +    {  0,  5 }, /* SVE_Zd: SVE vector register. bits [4,0].  */
> +    {  5,  5 }, /* SVE_Zm_5: SVE vector register, bits [9,5].  */
> +    { 16,  5 }, /* SVE_Zm_16: SVE vector register, bits [20,16]. */
> +    {  5,  5 }, /* SVE_Zn: SVE vector register, bits [9,5].  */
> +    {  0,  5 }, /* SVE_Zt: SVE vector register, bits [4,0].  */
> +    { 22,  2 }, /* SVE_tszh: triangular size select high, bits [23,22].  */
>  };
>  
>  enum aarch64_operand_class
> @@ -1332,6 +1348,43 @@ operand_general_constraint_met_p (const aarch64_opnd_info *opnds, int idx,
>  	}
>        break;
>  
> +    case AARCH64_OPND_CLASS_SVE_REG:
> +      switch (type)
> +	{
> +	case AARCH64_OPND_SVE_Zn_INDEX:
> +	  size = aarch64_get_qualifier_esize (opnd->qualifier);
> +	  if (!value_in_range_p (opnd->reglane.index, 0, 64 / size - 1))
> +	    {
> +	      set_elem_idx_out_of_range_error (mismatch_detail, idx,
> +					       0, 64 / size - 1);
> +	      return 0;
> +	    }
> +	  break;
> +
> +	case AARCH64_OPND_SVE_ZnxN:
> +	case AARCH64_OPND_SVE_ZtxN:
> +	  if (opnd->reglist.num_regs != get_opcode_dependent_value (opcode))
> +	    {
> +	      set_other_error (mismatch_detail, idx,
> +			       _("invalid register list"));
> +	      return 0;
> +	    }
> +	  break;
> +
> +	default:
> +	  break;
> +	}
> +      break;
> +
> +    case AARCH64_OPND_CLASS_PRED_REG:
> +      if (opnd->reg.regno >= 8
> +	  && get_operand_fields_width (get_operand_from_code (type)) == 3)
> +	{
> +	  set_other_error (mismatch_detail, idx, _("p0-p7 expected"));
> +	  return 0;
> +	}
> +      break;
> +
>      case AARCH64_OPND_CLASS_COND:
>        if (type == AARCH64_OPND_COND1
>  	  && (opnds[idx].cond->value & 0xe) == 0xe)
> @@ -2560,6 +2613,46 @@ aarch64_print_operand (char *buf, size_t size, bfd_vma pc,
>        print_register_list (buf, size, opnd, "v");
>        break;
>  
> +    case AARCH64_OPND_SVE_Pd:
> +    case AARCH64_OPND_SVE_Pg3:
> +    case AARCH64_OPND_SVE_Pg4_5:
> +    case AARCH64_OPND_SVE_Pg4_10:
> +    case AARCH64_OPND_SVE_Pg4_16:
> +    case AARCH64_OPND_SVE_Pm:
> +    case AARCH64_OPND_SVE_Pn:
> +    case AARCH64_OPND_SVE_Pt:
> +      if (opnd->qualifier == AARCH64_OPND_QLF_NIL)
> +	snprintf (buf, size, "p%d", opnd->reg.regno);
> +      else
> +	snprintf (buf, size, "p%d.%s", opnd->reg.regno,
> +		  aarch64_get_qualifier_name (opnd->qualifier));
> +      break;
> +
> +    case AARCH64_OPND_SVE_Za_5:
> +    case AARCH64_OPND_SVE_Za_16:
> +    case AARCH64_OPND_SVE_Zd:
> +    case AARCH64_OPND_SVE_Zm_5:
> +    case AARCH64_OPND_SVE_Zm_16:
> +    case AARCH64_OPND_SVE_Zn:
> +    case AARCH64_OPND_SVE_Zt:
> +      if (opnd->qualifier == AARCH64_OPND_QLF_NIL)
> +	snprintf (buf, size, "z%d", opnd->reg.regno);
> +      else
> +	snprintf (buf, size, "z%d.%s", opnd->reg.regno,
> +		  aarch64_get_qualifier_name (opnd->qualifier));
> +      break;
> +
> +    case AARCH64_OPND_SVE_ZnxN:
> +    case AARCH64_OPND_SVE_ZtxN:
> +      print_register_list (buf, size, opnd, "z");
> +      break;
> +
> +    case AARCH64_OPND_SVE_Zn_INDEX:
> +      snprintf (buf, size, "z%d.%s[%" PRIi64 "]", opnd->reglane.regno,
> +		aarch64_get_qualifier_name (opnd->qualifier),
> +		opnd->reglane.index);
> +      break;
> +
>      case AARCH64_OPND_Cn:
>      case AARCH64_OPND_Cm:
>        snprintf (buf, size, "C%d", opnd->reg.regno);
> diff --git a/opcodes/aarch64-opc.h b/opcodes/aarch64-opc.h
> index 08494c6..cc3dbef 100644
> --- a/opcodes/aarch64-opc.h
> +++ b/opcodes/aarch64-opc.h
> @@ -91,6 +91,22 @@ enum aarch64_field_kind
>    FLD_b5,
>    FLD_b40,
>    FLD_scale,
> +  FLD_SVE_Pd,
> +  FLD_SVE_Pg3,
> +  FLD_SVE_Pg4_5,
> +  FLD_SVE_Pg4_10,
> +  FLD_SVE_Pg4_16,
> +  FLD_SVE_Pm,
> +  FLD_SVE_Pn,
> +  FLD_SVE_Pt,
> +  FLD_SVE_Za_5,
> +  FLD_SVE_Za_16,
> +  FLD_SVE_Zd,
> +  FLD_SVE_Zm_5,
> +  FLD_SVE_Zm_16,
> +  FLD_SVE_Zn,
> +  FLD_SVE_Zt,
> +  FLD_SVE_tszh,
>  };
>  
>  /* Field description.  */
> diff --git a/opcodes/aarch64-tbl.h b/opcodes/aarch64-tbl.h
> index 8f1c9b2..9dbe0c0 100644
> --- a/opcodes/aarch64-tbl.h
> +++ b/opcodes/aarch64-tbl.h
> @@ -2819,4 +2819,40 @@ struct aarch64_opcode aarch64_opcode_table[] =
>      Y(SYSTEM, prfop, "PRFOP", 0, F(),					\
>        "a prefetch operation specifier")					\
>      Y (SYSTEM, hint, "BARRIER_PSB", 0, F (),				\
> -      "the PSB option name CSYNC")
> +      "the PSB option name CSYNC")					\
> +    Y(PRED_REG, regno, "SVE_Pd", 0, F(FLD_SVE_Pd),			\
> +      "an SVE predicate register")					\
> +    Y(PRED_REG, regno, "SVE_Pg3", 0, F(FLD_SVE_Pg3),			\
> +      "an SVE predicate register")					\
> +    Y(PRED_REG, regno, "SVE_Pg4_5", 0, F(FLD_SVE_Pg4_5),		\
> +      "an SVE predicate register")					\
> +    Y(PRED_REG, regno, "SVE_Pg4_10", 0, F(FLD_SVE_Pg4_10),		\
> +      "an SVE predicate register")					\
> +    Y(PRED_REG, regno, "SVE_Pg4_16", 0, F(FLD_SVE_Pg4_16),		\
> +      "an SVE predicate register")					\
> +    Y(PRED_REG, regno, "SVE_Pm", 0, F(FLD_SVE_Pm),			\
> +      "an SVE predicate register")					\
> +    Y(PRED_REG, regno, "SVE_Pn", 0, F(FLD_SVE_Pn),			\
> +      "an SVE predicate register")					\
> +    Y(PRED_REG, regno, "SVE_Pt", 0, F(FLD_SVE_Pt),			\
> +      "an SVE predicate register")					\
> +    Y(SVE_REG, regno, "SVE_Za_5", 0, F(FLD_SVE_Za_5),			\
> +      "an SVE vector register")						\
> +    Y(SVE_REG, regno, "SVE_Za_16", 0, F(FLD_SVE_Za_16),			\
> +      "an SVE vector register")						\
> +    Y(SVE_REG, regno, "SVE_Zd", 0, F(FLD_SVE_Zd),			\
> +      "an SVE vector register")						\
> +    Y(SVE_REG, regno, "SVE_Zm_5", 0, F(FLD_SVE_Zm_5),			\
> +      "an SVE vector register")						\
> +    Y(SVE_REG, regno, "SVE_Zm_16", 0, F(FLD_SVE_Zm_16),			\
> +      "an SVE vector register")						\
> +    Y(SVE_REG, regno, "SVE_Zn", 0, F(FLD_SVE_Zn),			\
> +      "an SVE vector register")						\
> +    Y(SVE_REG, sve_index, "SVE_Zn_INDEX", 0, F(FLD_SVE_Zn),		\
> +      "an indexed SVE vector register")					\
> +    Y(SVE_REG, sve_reglist, "SVE_ZnxN", 0, F(FLD_SVE_Zn),		\
> +      "a list of SVE vector registers")					\
> +    Y(SVE_REG, regno, "SVE_Zt", 0, F(FLD_SVE_Zt),			\
> +      "an SVE vector register")						\
> +    Y(SVE_REG, sve_reglist, "SVE_ZtxN", 0, F(FLD_SVE_Zt),		\
> +      "a list of SVE vector registers")
> 
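A note on the 7-bit "triangular" index encoding handled by
aarch64_ins_sve_index and aarch64_ext_sve_index above: the element index
and the element size share the SVE_tszh:imm5 field as (2 * index + 1) *
esize, so the position of the lowest set bit identifies the element size
and the remaining bits give the index.  A standalone sketch (not binutils
code) with a worked example:

#include <stdio.h>

/* Mirror of aarch64_ins_sve_index: combine index and element size
   (in bytes) into the 7-bit field value.  */
static unsigned int
sketch_encode (unsigned int index, unsigned int esize)
{
  return (2 * index + 1) * esize;
}

/* Mirror of aarch64_ext_sve_index: recover the index, treating a value
   with all of the low four bits clear as invalid, as the patch does.  */
static int
sketch_decode (unsigned int val, unsigned int *index)
{
  if ((val & 15) == 0)
    return 0;
  while ((val & 1) == 0)
    val /= 2;
  *index = val / 2;
  return 1;
}

int
main (void)
{
  /* E.g. a 4-byte (.s) element with index 3: (2*3+1)*4 = 28 = 0b0011100,
     i.e. SVE_tszh = 0b00 and imm5 = 0b11100.  */
  unsigned int val = sketch_encode (3, 4);
  unsigned int index;
  printf ("encoded field value: %u\n", val);
  if (sketch_decode (val, &index))
    printf ("decoded index: %u\n", index);              /* 3 */
  return 0;
}
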

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [AArch64][SVE 22/32] Add qualifiers for merging and zeroing predication
  2016-08-23  9:19 ` [AArch64][SVE 22/32] Add qualifiers for merging and zeroing predication Richard Sandiford
@ 2016-08-25 14:08   ` Richard Earnshaw (lists)
  0 siblings, 0 replies; 76+ messages in thread
From: Richard Earnshaw (lists) @ 2016-08-25 14:08 UTC (permalink / raw)
  To: binutils, richard.sandiford

On 23/08/16 10:19, Richard Sandiford wrote:
> This patch adds qualifiers to represent /z and /m suffixes on
> predicate registers.
> 
> OK to install?
> 
> Thanks,
> Richard
> 
> 
> include/opcode/
> 	* aarch64.h (AARCH64_OPND_QLF_P_Z): New aarch64_opnd_qualifier.
> 	(AARCH64_OPND_QLF_P_M): Likewise.
> 
> opcodes/
> 	* aarch64-opc.c (aarch64_opnd_qualifiers): Add entries for
> 	AARCH64_OPND_QLF_P_[ZM].
> 	(aarch64_print_operand): Print /z and /m where appropriate.
> 
> gas/
> 	* config/tc-aarch64.c (vector_el_type): Add NT_zero and NT_merge.
> 	(parse_vector_type_for_operand): Assert that the skipped character
> 	is a '.'.
> 	(parse_predication_for_operand): New function.
> 	(parse_typed_reg): Parse /z and /m suffixes for predicate registers.
> 	(vectype_to_qualifier): Handle NT_zero and NT_merge.
> 

OK.

R.


> diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
> index 53e602f..ed4933b 100644
> --- a/gas/config/tc-aarch64.c
> +++ b/gas/config/tc-aarch64.c
> @@ -83,7 +83,9 @@ enum vector_el_type
>    NT_h,
>    NT_s,
>    NT_d,
> -  NT_q
> +  NT_q,
> +  NT_zero,
> +  NT_merge
>  };
>  
>  /* Bits for DEFINED field in vector_type_el.  */
> @@ -780,6 +782,7 @@ parse_vector_type_for_operand (aarch64_reg_type reg_type,
>    enum vector_el_type type;
>  
>    /* skip '.' */
> +  gas_assert (*ptr == '.');
>    ptr++;
>  
>    if (reg_type == REG_TYPE_ZN || reg_type == REG_TYPE_PN || !ISDIGIT (*ptr))
> @@ -846,6 +849,38 @@ elt_size:
>    return TRUE;
>  }
>  
> +/* *STR contains an SVE zero/merge predication suffix.  Parse it into
> +   *PARSED_TYPE and point *STR at the end of the suffix.  */
> +
> +static bfd_boolean
> +parse_predication_for_operand (struct vector_type_el *parsed_type, char **str)
> +{
> +  char *ptr = *str;
> +
> +  /* Skip '/'.  */
> +  gas_assert (*ptr == '/');
> +  ptr++;
> +  switch (TOLOWER (*ptr))
> +    {
> +    case 'z':
> +      parsed_type->type = NT_zero;
> +      break;
> +    case 'm':
> +      parsed_type->type = NT_merge;
> +      break;
> +    default:
> +      if (*ptr != '\0' && *ptr != ',')
> +	first_error_fmt (_("unexpected character `%c' in predication type"),
> +			 *ptr);
> +      else
> +	first_error (_("missing predication type"));
> +      return FALSE;
> +    }
> +  parsed_type->width = 0;
> +  *str = ptr + 1;
> +  return TRUE;
> +}
> +
>  /* Parse a register of the type TYPE.
>  
>     Return PARSE_FAIL if the string pointed by *CCP is not a valid register
> @@ -890,10 +925,18 @@ parse_typed_reg (char **ccp, aarch64_reg_type type, aarch64_reg_type *rtype,
>    type = reg->type;
>  
>    if ((type == REG_TYPE_VN || type == REG_TYPE_ZN || type == REG_TYPE_PN)
> -      && *str == '.')
> +      && (*str == '.' || (type == REG_TYPE_PN && *str == '/')))
>      {
> -      if (!parse_vector_type_for_operand (type, &parsetype, &str))
> -	return PARSE_FAIL;
> +      if (*str == '.')
> +	{
> +	  if (!parse_vector_type_for_operand (type, &parsetype, &str))
> +	    return PARSE_FAIL;
> +	}
> +      else
> +	{
> +	  if (!parse_predication_for_operand (&parsetype, &str))
> +	    return PARSE_FAIL;
> +	}
>  
>        /* Register is of the form Vn.[bhsdq].  */
>        is_typed_vecreg = TRUE;
> @@ -4706,6 +4749,11 @@ vectype_to_qualifier (const struct vector_type_el *vectype)
>    if (!vectype->defined || vectype->type == NT_invtype)
>      goto vectype_conversion_fail;
>  
> +  if (vectype->type == NT_zero)
> +    return AARCH64_OPND_QLF_P_Z;
> +  if (vectype->type == NT_merge)
> +    return AARCH64_OPND_QLF_P_M;
> +
>    gas_assert (vectype->type >= NT_b && vectype->type <= NT_q);
>  
>    if (vectype->defined & (NTA_HASINDEX | NTA_HASVARWIDTH))
> diff --git a/include/opcode/aarch64.h b/include/opcode/aarch64.h
> index b0eb617..8eae0b9 100644
> --- a/include/opcode/aarch64.h
> +++ b/include/opcode/aarch64.h
> @@ -315,6 +315,9 @@ enum aarch64_opnd_qualifier
>    AARCH64_OPND_QLF_V_2D,
>    AARCH64_OPND_QLF_V_1Q,
>  
> +  AARCH64_OPND_QLF_P_Z,
> +  AARCH64_OPND_QLF_P_M,
> +
>    /* Constraint on value.  */
>    AARCH64_OPND_QLF_imm_0_7,
>    AARCH64_OPND_QLF_imm_0_15,
> diff --git a/opcodes/aarch64-opc.c b/opcodes/aarch64-opc.c
> index 56a0169..41c058f 100644
> --- a/opcodes/aarch64-opc.c
> +++ b/opcodes/aarch64-opc.c
> @@ -603,6 +603,9 @@ struct operand_qualifier_data aarch64_opnd_qualifiers[] =
>    {8, 2, 0x7, "2d", OQK_OPD_VARIANT},
>    {16, 1, 0x8, "1q", OQK_OPD_VARIANT},
>  
> +  {0, 0, 0, "z", OQK_OPD_VARIANT},
> +  {0, 0, 0, "m", OQK_OPD_VARIANT},
> +
>    /* Qualifiers constraining the value range.
>       First 3 fields:
>       Lower bound, higher bound, unused.  */
> @@ -2623,6 +2626,10 @@ aarch64_print_operand (char *buf, size_t size, bfd_vma pc,
>      case AARCH64_OPND_SVE_Pt:
>        if (opnd->qualifier == AARCH64_OPND_QLF_NIL)
>  	snprintf (buf, size, "p%d", opnd->reg.regno);
> +      else if (opnd->qualifier == AARCH64_OPND_QLF_P_Z
> +	       || opnd->qualifier == AARCH64_OPND_QLF_P_M)
> +	snprintf (buf, size, "p%d/%s", opnd->reg.regno,
> +		  aarch64_get_qualifier_name (opnd->qualifier));
>        else
>  	snprintf (buf, size, "p%d.%s", opnd->reg.regno,
>  		  aarch64_get_qualifier_name (opnd->qualifier));
> 

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [AArch64][SVE 23/32] Add SVE pattern and prfop operands
  2016-08-23  9:20 ` [AArch64][SVE 23/32] Add SVE pattern and prfop operands Richard Sandiford
@ 2016-08-25 14:12   ` Richard Earnshaw (lists)
  0 siblings, 0 replies; 76+ messages in thread
From: Richard Earnshaw (lists) @ 2016-08-25 14:12 UTC (permalink / raw)
  To: binutils, richard.sandiford

On 23/08/16 10:19, Richard Sandiford wrote:
> The SVE instructions have two enumerated operands: one to select a
> vector pattern and another to select a prefetch operation.  The latter
> is a cut-down version of the base AArch64 prefetch operation.
> 
> Both types of operand can also be specified as raw enum values such as #31.
> Reserved values can only be specified this way.
> 
> If it hadn't been for the pattern operand, I would have been tempted
> to use the existing parsing for prefetch operations and add extra
> checks for SVE.  However, since the patterns needed new enum parsing
> code anyway, it seemed cleaner to reuse it for the prefetches too.
> 
> Because of the small number of enum values, I don't think we'd gain
> anything by using hash tables.
> 
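As an illustration of that trade-off, a standalone sketch (not binutils
code) of the lookup strategy parse_enum_string uses: a case-insensitive
linear scan over the named values, with a raw-number fallback for unnamed
(reserved) values.  The names below are placeholders rather than the real
aarch64_sve_pattern_array contents, and the patch itself routes the
fallback through the normal immediate-expression parser rather than
checking for a literal '#':

#include <stdio.h>
#include <stdlib.h>
#include <strings.h>

static const char *const sketch_names[32] = { "pow2", "vl1", "vl2" };

/* Return nonzero and set *VAL if STR names or numbers a valid value.  */
static int
sketch_parse (const char *str, long *val)
{
  size_t i;
  for (i = 0; i < 32; i++)
    if (sketch_names[i] && strcasecmp (sketch_names[i], str) == 0)
      {
        *val = i;
        return 1;
      }
  if (str[0] == '#')
    {
      char *end;
      long n = strtol (str + 1, &end, 0);
      if (*end == '\0' && n >= 0 && n < 32)
        {
          *val = n;
          return 1;
        }
    }
  return 0;
}

int
main (void)
{
  long val;
  if (sketch_parse ("POW2", &val))
    printf ("%ld\n", val);                      /* 0: matched by name */
  if (sketch_parse ("#31", &val))
    printf ("%ld\n", val);                      /* 31: reserved, raw only */
  return 0;
}
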
> OK to install?
> 
> Thanks,
> Richard
> 
> 
> include/opcode/
> 	* aarch64.h (AARCH64_OPND_SVE_PATTERN): New aarch64_opnd.
> 	(AARCH64_OPND_SVE_PRFOP): Likewise.
> 	(aarch64_sve_pattern_array): Declare.
> 	(aarch64_sve_prfop_array): Likewise.
> 
> opcodes/
> 	* aarch64-tbl.h (AARCH64_OPERANDS): Add entries for
> 	AARCH64_OPND_SVE_PATTERN and AARCH64_OPND_SVE_PRFOP.
> 	* aarch64-opc.h (FLD_SVE_pattern): New aarch64_field_kind.
> 	(FLD_SVE_prfop): Likewise.
> 	* aarch64-opc.c: Include libiberty.h.
> 	(aarch64_sve_pattern_array): New variable.
> 	(aarch64_sve_prfop_array): Likewise.
> 	(fields): Add entries for FLD_SVE_pattern and FLD_SVE_prfop.
> 	(aarch64_print_operand): Handle AARCH64_OPND_SVE_PATTERN and
> 	AARCH64_OPND_SVE_PRFOP.
> 	* aarch64-asm-2.c: Regenerate.
> 	* aarch64-dis-2.c: Likewise.
> 	* aarch64-opc-2.c: Likewise.
> 
> gas/
> 	* config/tc-aarch64.c (parse_enum_string): New function.
> 	(po_enum_or_fail): New macro.
> 	(parse_operands): Handle AARCH64_OPND_SVE_PATTERN and
> 	AARCH64_OPND_SVE_PRFOP.
> 

OK.

R.

> diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
> index ed4933b..9d1e3ec 100644
> --- a/gas/config/tc-aarch64.c
> +++ b/gas/config/tc-aarch64.c
> @@ -3634,6 +3634,52 @@ parse_adrp (char **str)
>  
>  /* Miscellaneous. */
>  
> +/* Parse a symbolic operand such as "pow2" at *STR.  ARRAY is an array
> +   of SIZE tokens in which index I gives the token for field value I,
> +   or is null if field value I is invalid.  REG_TYPE says which register
> +   names should be treated as registers rather than as symbolic immediates.
> +
> +   Return true on success, moving *STR past the operand and storing the
> +   field value in *VAL.  */
> +
> +static int
> +parse_enum_string (char **str, int64_t *val, const char *const *array,
> +		   size_t size, aarch64_reg_type reg_type)
> +{
> +  expressionS exp;
> +  char *p, *q;
> +  size_t i;
> +
> +  /* Match C-like tokens.  */
> +  p = q = *str;
> +  while (ISALNUM (*q))
> +    q++;
> +
> +  for (i = 0; i < size; ++i)
> +    if (array[i]
> +	&& strncasecmp (array[i], p, q - p) == 0
> +	&& array[i][q - p] == 0)
> +      {
> +	*val = i;
> +	*str = q;
> +	return TRUE;
> +      }
> +
> +  if (!parse_immediate_expression (&p, &exp, reg_type))
> +    return FALSE;
> +
> +  if (exp.X_op == O_constant
> +      && (uint64_t) exp.X_add_number < size)
> +    {
> +      *val = exp.X_add_number;
> +      *str = p;
> +      return TRUE;
> +    }
> +
> +  /* Use the default error for this operand.  */
> +  return FALSE;
> +}
> +
>  /* Parse an option for a preload instruction.  Returns the encoding for the
>     option, or PARSE_FAIL.  */
>  
> @@ -3844,6 +3890,12 @@ parse_sys_ins_reg (char **str, struct hash_control *sys_ins_regs)
>        }								\
>    } while (0)
>  
> +#define po_enum_or_fail(array) do {				\
> +    if (!parse_enum_string (&str, &val, array,			\
> +			    ARRAY_SIZE (array), imm_reg_type))	\
> +      goto failure;						\
> +  } while (0)
> +
>  #define po_misc_or_fail(expr) do {				\
>      if (!expr)							\
>        goto failure;						\
> @@ -4857,6 +4909,8 @@ process_omitted_operand (enum aarch64_opnd type, const aarch64_opcode *opcode,
>      case AARCH64_OPND_WIDTH:
>      case AARCH64_OPND_UIMM7:
>      case AARCH64_OPND_NZCV:
> +    case AARCH64_OPND_SVE_PATTERN:
> +    case AARCH64_OPND_SVE_PRFOP:
>        operand->imm.value = default_value;
>        break;
>  
> @@ -5365,6 +5419,16 @@ parse_operands (char *str, const aarch64_opcode *opcode)
>  	  info->imm.value = val;
>  	  break;
>  
> +	case AARCH64_OPND_SVE_PATTERN:
> +	  po_enum_or_fail (aarch64_sve_pattern_array);
> +	  info->imm.value = val;
> +	  break;
> +
> +	case AARCH64_OPND_SVE_PRFOP:
> +	  po_enum_or_fail (aarch64_sve_prfop_array);
> +	  info->imm.value = val;
> +	  break;
> +
>  	case AARCH64_OPND_UIMM7:
>  	  po_imm_or_fail (0, 127);
>  	  info->imm.value = val;
> diff --git a/include/opcode/aarch64.h b/include/opcode/aarch64.h
> index 8eae0b9..dd191cf 100644
> --- a/include/opcode/aarch64.h
> +++ b/include/opcode/aarch64.h
> @@ -244,6 +244,8 @@ enum aarch64_opnd
>    AARCH64_OPND_PRFOP,		/* Prefetch operation.  */
>    AARCH64_OPND_BARRIER_PSB,	/* Barrier operand for PSB.  */
>  
> +  AARCH64_OPND_SVE_PATTERN,	/* SVE vector pattern enumeration.  */
> +  AARCH64_OPND_SVE_PRFOP,	/* SVE prefetch operation.  */
>    AARCH64_OPND_SVE_Pd,		/* SVE p0-p15 in Pd.  */
>    AARCH64_OPND_SVE_Pg3,		/* SVE p0-p7 in Pg.  */
>    AARCH64_OPND_SVE_Pg4_5,	/* SVE p0-p15 in Pg, bits [8,5].  */
> @@ -1037,6 +1039,9 @@ aarch64_verbose (const char *, ...) __attribute__ ((format (printf, 1, 2)));
>  #define DEBUG_TRACE_IF(C, M, ...) ;
>  #endif /* DEBUG_AARCH64 */
>  
> +extern const char *const aarch64_sve_pattern_array[32];
> +extern const char *const aarch64_sve_prfop_array[16];
> +
>  #ifdef __cplusplus
>  }
>  #endif
> diff --git a/opcodes/aarch64-asm-2.c b/opcodes/aarch64-asm-2.c
> index 9c797b2..0a6e476 100644
> --- a/opcodes/aarch64-asm-2.c
> +++ b/opcodes/aarch64-asm-2.c
> @@ -480,8 +480,6 @@ aarch64_insert_operand (const aarch64_operand *self,
>      case 27:
>      case 35:
>      case 36:
> -    case 89:
> -    case 90:
>      case 91:
>      case 92:
>      case 93:
> @@ -494,7 +492,9 @@ aarch64_insert_operand (const aarch64_operand *self,
>      case 100:
>      case 101:
>      case 102:
> -    case 105:
> +    case 103:
> +    case 104:
> +    case 107:
>        return aarch64_ins_regno (self, info, code, inst);
>      case 12:
>        return aarch64_ins_reg_extended (self, info, code, inst);
> @@ -531,6 +531,8 @@ aarch64_insert_operand (const aarch64_operand *self,
>      case 68:
>      case 69:
>      case 70:
> +    case 89:
> +    case 90:
>        return aarch64_ins_imm (self, info, code, inst);
>      case 38:
>      case 39:
> @@ -581,10 +583,10 @@ aarch64_insert_operand (const aarch64_operand *self,
>        return aarch64_ins_prfop (self, info, code, inst);
>      case 88:
>        return aarch64_ins_hint (self, info, code, inst);
> -    case 103:
> +    case 105:
>        return aarch64_ins_sve_index (self, info, code, inst);
> -    case 104:
>      case 106:
> +    case 108:
>        return aarch64_ins_sve_reglist (self, info, code, inst);
>      default: assert (0); abort ();
>      }
> diff --git a/opcodes/aarch64-dis-2.c b/opcodes/aarch64-dis-2.c
> index 6ea010b..9f936f0 100644
> --- a/opcodes/aarch64-dis-2.c
> +++ b/opcodes/aarch64-dis-2.c
> @@ -10426,8 +10426,6 @@ aarch64_extract_operand (const aarch64_operand *self,
>      case 27:
>      case 35:
>      case 36:
> -    case 89:
> -    case 90:
>      case 91:
>      case 92:
>      case 93:
> @@ -10440,7 +10438,9 @@ aarch64_extract_operand (const aarch64_operand *self,
>      case 100:
>      case 101:
>      case 102:
> -    case 105:
> +    case 103:
> +    case 104:
> +    case 107:
>        return aarch64_ext_regno (self, info, code, inst);
>      case 8:
>        return aarch64_ext_regrt_sysins (self, info, code, inst);
> @@ -10482,6 +10482,8 @@ aarch64_extract_operand (const aarch64_operand *self,
>      case 68:
>      case 69:
>      case 70:
> +    case 89:
> +    case 90:
>        return aarch64_ext_imm (self, info, code, inst);
>      case 38:
>      case 39:
> @@ -10534,10 +10536,10 @@ aarch64_extract_operand (const aarch64_operand *self,
>        return aarch64_ext_prfop (self, info, code, inst);
>      case 88:
>        return aarch64_ext_hint (self, info, code, inst);
> -    case 103:
> +    case 105:
>        return aarch64_ext_sve_index (self, info, code, inst);
> -    case 104:
>      case 106:
> +    case 108:
>        return aarch64_ext_sve_reglist (self, info, code, inst);
>      default: assert (0); abort ();
>      }
> diff --git a/opcodes/aarch64-opc-2.c b/opcodes/aarch64-opc-2.c
> index f8a7079..3905053 100644
> --- a/opcodes/aarch64-opc-2.c
> +++ b/opcodes/aarch64-opc-2.c
> @@ -113,6 +113,8 @@ const struct aarch64_operand aarch64_operands[] =
>    {AARCH64_OPND_CLASS_SYSTEM, "BARRIER_ISB", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {}, "the ISB option name SY or an optional 4-bit unsigned immediate"},
>    {AARCH64_OPND_CLASS_SYSTEM, "PRFOP", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {}, "a prefetch operation specifier"},
>    {AARCH64_OPND_CLASS_SYSTEM, "BARRIER_PSB", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {}, "the PSB option name CSYNC"},
> +  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_PATTERN", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_pattern}, "an enumeration value such as POW2"},
> +  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_PRFOP", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_prfop}, "an enumeration value such as PLDL1KEEP"},
>    {AARCH64_OPND_CLASS_PRED_REG, "SVE_Pd", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Pd}, "an SVE predicate register"},
>    {AARCH64_OPND_CLASS_PRED_REG, "SVE_Pg3", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Pg3}, "an SVE predicate register"},
>    {AARCH64_OPND_CLASS_PRED_REG, "SVE_Pg4_5", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Pg4_5}, "an SVE predicate register"},
> diff --git a/opcodes/aarch64-opc.c b/opcodes/aarch64-opc.c
> index 41c058f..934c14d 100644
> --- a/opcodes/aarch64-opc.c
> +++ b/opcodes/aarch64-opc.c
> @@ -27,6 +27,7 @@
>  #include <inttypes.h>
>  
>  #include "opintl.h"
> +#include "libiberty.h"
>  
>  #include "aarch64-opc.h"
>  
> @@ -34,6 +35,70 @@
>  int debug_dump = FALSE;
>  #endif /* DEBUG_AARCH64 */
>  
> +/* The enumeration strings associated with each value of a 5-bit SVE
> +   pattern operand.  A null entry indicates a reserved meaning.  */
> +const char *const aarch64_sve_pattern_array[32] = {
> +  /* 0-7.  */
> +  "pow2",
> +  "vl1",
> +  "vl2",
> +  "vl3",
> +  "vl4",
> +  "vl5",
> +  "vl6",
> +  "vl7",
> +  /* 8-15.  */
> +  "vl8",
> +  "vl16",
> +  "vl32",
> +  "vl64",
> +  "vl128",
> +  "vl256",
> +  0,
> +  0,
> +  /* 16-23.  */
> +  0,
> +  0,
> +  0,
> +  0,
> +  0,
> +  0,
> +  0,
> +  0,
> +  /* 24-31.  */
> +  0,
> +  0,
> +  0,
> +  0,
> +  0,
> +  "mul4",
> +  "mul3",
> +  "all"
> +};
> +
> +/* The enumeration strings associated with each value of a 4-bit SVE
> +   prefetch operand.  A null entry indicates a reserved meaning.  */
> +const char *const aarch64_sve_prfop_array[16] = {
> +  /* 0-7.  */
> +  "pldl1keep",
> +  "pldl1strm",
> +  "pldl2keep",
> +  "pldl2strm",
> +  "pldl3keep",
> +  "pldl3strm",
> +  0,
> +  0,
> +  /* 8-15.  */
> +  "pstl1keep",
> +  "pstl1strm",
> +  "pstl2keep",
> +  "pstl2strm",
> +  "pstl3keep",
> +  "pstl3strm",
> +  0,
> +  0
> +};
> +
>  /* Helper functions to determine which operand to be used to encode/decode
>     the size:Q fields for AdvSIMD instructions.  */
>  
> @@ -214,6 +279,8 @@ const aarch64_field fields[] =
>      { 16,  5 }, /* SVE_Zm_16: SVE vector register, bits [20,16]. */
>      {  5,  5 }, /* SVE_Zn: SVE vector register, bits [9,5].  */
>      {  0,  5 }, /* SVE_Zt: SVE vector register, bits [4,0].  */
> +    {  5,  5 }, /* SVE_pattern: vector pattern enumeration.  */
> +    {  0,  4 }, /* SVE_prfop: prefetch operation for SVE PRF[BHWD].  */
>      { 22,  2 }, /* SVE_tszh: triangular size select high, bits [23,22].  */
>  };
>  
> @@ -2489,7 +2556,7 @@ aarch64_print_operand (char *buf, size_t size, bfd_vma pc,
>    const char *name = NULL;
>    const aarch64_opnd_info *opnd = opnds + idx;
>    enum aarch64_modifier_kind kind;
> -  uint64_t addr;
> +  uint64_t addr, enum_value;
>  
>    buf[0] = '\0';
>    if (pcrel_p)
> @@ -2681,6 +2748,27 @@ aarch64_print_operand (char *buf, size_t size, bfd_vma pc,
>        snprintf (buf, size, "#%" PRIi64, opnd->imm.value);
>        break;
>  
> +    case AARCH64_OPND_SVE_PATTERN:
> +      if (optional_operand_p (opcode, idx)
> +	  && opnd->imm.value == get_optional_operand_default_value (opcode))
> +	break;
> +      enum_value = opnd->imm.value;
> +      assert (enum_value < ARRAY_SIZE (aarch64_sve_pattern_array));
> +      if (aarch64_sve_pattern_array[enum_value])
> +	snprintf (buf, size, "%s", aarch64_sve_pattern_array[enum_value]);
> +      else
> +	snprintf (buf, size, "#%" PRIi64, opnd->imm.value);
> +      break;
> +
> +    case AARCH64_OPND_SVE_PRFOP:
> +      enum_value = opnd->imm.value;
> +      assert (enum_value < ARRAY_SIZE (aarch64_sve_prfop_array));
> +      if (aarch64_sve_prfop_array[enum_value])
> +	snprintf (buf, size, "%s", aarch64_sve_prfop_array[enum_value]);
> +      else
> +	snprintf (buf, size, "#%" PRIi64, opnd->imm.value);
> +      break;
> +
>      case AARCH64_OPND_IMM_MOV:
>        switch (aarch64_get_qualifier_esize (opnds[0].qualifier))
>  	{
> diff --git a/opcodes/aarch64-opc.h b/opcodes/aarch64-opc.h
> index cc3dbef..b54f35e 100644
> --- a/opcodes/aarch64-opc.h
> +++ b/opcodes/aarch64-opc.h
> @@ -106,6 +106,8 @@ enum aarch64_field_kind
>    FLD_SVE_Zm_16,
>    FLD_SVE_Zn,
>    FLD_SVE_Zt,
> +  FLD_SVE_pattern,
> +  FLD_SVE_prfop,
>    FLD_SVE_tszh,
>  };
>  
> diff --git a/opcodes/aarch64-tbl.h b/opcodes/aarch64-tbl.h
> index 9dbe0c0..73415f7 100644
> --- a/opcodes/aarch64-tbl.h
> +++ b/opcodes/aarch64-tbl.h
> @@ -2820,6 +2820,10 @@ struct aarch64_opcode aarch64_opcode_table[] =
>        "a prefetch operation specifier")					\
>      Y (SYSTEM, hint, "BARRIER_PSB", 0, F (),				\
>        "the PSB option name CSYNC")					\
> +    Y(IMMEDIATE, imm, "SVE_PATTERN", 0, F(FLD_SVE_pattern),		\
> +      "an enumeration value such as POW2")				\
> +    Y(IMMEDIATE, imm, "SVE_PRFOP", 0, F(FLD_SVE_prfop),			\
> +      "an enumeration value such as PLDL1KEEP")				\
>      Y(PRED_REG, regno, "SVE_Pd", 0, F(FLD_SVE_Pd),			\
>        "an SVE predicate register")					\
>      Y(PRED_REG, regno, "SVE_Pg3", 0, F(FLD_SVE_Pg3),			\
> 

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [AArch64][SVE 24/32] Add AARCH64_OPND_SVE_PATTERN_SCALED
  2016-08-23  9:21 ` [AArch64][SVE 24/32] Add AARCH64_OPND_SVE_PATTERN_SCALED Richard Sandiford
@ 2016-08-25 14:28   ` Richard Earnshaw (lists)
  0 siblings, 0 replies; 76+ messages in thread
From: Richard Earnshaw (lists) @ 2016-08-25 14:28 UTC (permalink / raw)
  To: binutils, richard.sandiford

On 23/08/16 10:20, Richard Sandiford wrote:
> Some SVE instructions count the number of elements in a given vector
> pattern and allow a scale factor of [1, 16] to be applied to the result.
> This scale factor is written ", MUL #n", where "MUL" is a new operator.
> E.g.:
> 
> 	UQINCD	X0, POW2, MUL #2
> 
> This patch adds support for this kind of operand.
> 
> All existing operators were shifts of some kind, so there was a natural
> range of [0, 63] regardless of context.  This was then narrowered further
> by later checks (e.g. to [0, 31] when used for 32-bit values).
> 
> In contrast, MUL doesn't really have a natural context-independent range.
> Rather than pick one arbitrarily, it seemed better to make the "shift"
> amount a full 64-bit value and leave the range test to the usual
> operand-checking code.  I've rearranged the fields of aarch64_opnd_info
> so that this doesn't increase the size of the structure (although I don't
> think its size is critical anyway).
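> 
> As a concrete illustration of the encoding (a sketch only, not the
> generated inserter/extractor code; the helper names are made up):
> 
>   #include <stdint.h>
> 
>   /* MUL #<amount>, with amount in [1, 16], is stored as amount - 1 in
>      a 4-bit field, so "UQINCD X0, POW2, MUL #2" stores the value 1.  */
>   static unsigned int
>   encode_mul_amount (int64_t amount)
>   {
>     return (unsigned int) (amount - 1) & 0xf;
>   }
> 
>   static int64_t
>   decode_mul_amount (unsigned int field)
>   {
>     return (int64_t) field + 1;
>   }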
> 
> OK to install?
> 
> Thanks,
> Richard
> 
> 
> include/opcode/
> 	* aarch64.h (AARCH64_OPND_SVE_PATTERN_SCALED): New aarch64_opnd.
> 	(AARCH64_MOD_MUL): New aarch64_modifier_kind.
> 	(aarch64_opnd_info): Make shifter.amount an int64_t and
> 	rearrange the fields.
> 
> opcodes/
> 	* aarch64-tbl.h (AARCH64_OPERANDS): Add an entry for
> 	AARCH64_OPND_SVE_PATTERN_SCALED.
> 	* aarch64-opc.h (FLD_SVE_imm4): New aarch64_field_kind.
> 	* aarch64-opc.c (fields): Add a corresponding entry.
> 	(set_multiplier_out_of_range_error): New function.
> 	(aarch64_operand_modifiers): Add entry for AARCH64_MOD_MUL.
> 	(operand_general_constraint_met_p): Handle
> 	AARCH64_OPND_SVE_PATTERN_SCALED.
> 	(print_register_offset_address): Use PRIi64 to print the
> 	shift amount.
> 	(aarch64_print_operand): Likewise.  Handle
> 	AARCH64_OPND_SVE_PATTERN_SCALED.
> 	* aarch64-opc-2.c: Regenerate.
> 	* aarch64-asm.h (ins_sve_scale): New inserter.
> 	* aarch64-asm.c (aarch64_ins_sve_scale): New function.
> 	* aarch64-asm-2.c: Regenerate.
> 	* aarch64-dis.h (ext_sve_scale): New inserter.
> 	* aarch64-dis.c (aarch64_ext_sve_scale): New function.
> 	* aarch64-dis-2.c: Regenerate.
> 
> gas/
> 	* config/tc-aarch64.c (SHIFTED_MUL): New parse_shift_mode.
> 	(parse_shift): Handle it.  Reject AARCH64_MOD_MUL for all other
> 	shift modes.  Skip range tests for AARCH64_MOD_MUL.
> 	(process_omitted_operand): Handle AARCH64_OPND_SVE_PATTERN_SCALED.
> 	(parse_operands): Likewise.

OK.

R.

> 
> diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
> index 9d1e3ec..079f1c9 100644
> --- a/gas/config/tc-aarch64.c
> +++ b/gas/config/tc-aarch64.c
> @@ -2912,6 +2912,7 @@ enum parse_shift_mode
>    SHIFTED_LOGIC_IMM,		/* "rn{,lsl|lsr|asl|asr|ror #n}" or
>  				   "#imm"  */
>    SHIFTED_LSL,			/* bare "lsl #n"  */
> +  SHIFTED_MUL,			/* bare "mul #n"  */
>    SHIFTED_LSL_MSL,		/* "lsl|msl #n"  */
>    SHIFTED_REG_OFFSET		/* [su]xtw|sxtx {#n} or lsl #n  */
>  };
> @@ -2953,6 +2954,13 @@ parse_shift (char **str, aarch64_opnd_info *operand, enum parse_shift_mode mode)
>        return FALSE;
>      }
>  
> +  if (kind == AARCH64_MOD_MUL
> +      && mode != SHIFTED_MUL)
> +    {
> +      set_syntax_error (_("invalid use of 'MUL'"));
> +      return FALSE;
> +    }
> +
>    switch (mode)
>      {
>      case SHIFTED_LOGIC_IMM:
> @@ -2979,6 +2987,14 @@ parse_shift (char **str, aarch64_opnd_info *operand, enum parse_shift_mode mode)
>  	}
>        break;
>  
> +    case SHIFTED_MUL:
> +      if (kind != AARCH64_MOD_MUL)
> +	{
> +	  set_syntax_error (_("only 'MUL' is permitted"));
> +	  return FALSE;
> +	}
> +      break;
> +
>      case SHIFTED_REG_OFFSET:
>        if (kind != AARCH64_MOD_UXTW && kind != AARCH64_MOD_LSL
>  	  && kind != AARCH64_MOD_SXTW && kind != AARCH64_MOD_SXTX)
> @@ -3031,7 +3047,11 @@ parse_shift (char **str, aarch64_opnd_info *operand, enum parse_shift_mode mode)
>        set_syntax_error (_("constant shift amount required"));
>        return FALSE;
>      }
> -  else if (exp.X_add_number < 0 || exp.X_add_number > 63)
> +  /* For parsing purposes, MUL #n has no inherent range.  The range
> +     depends on the operand and will be checked by operand-specific
> +     routines.  */
> +  else if (kind != AARCH64_MOD_MUL
> +	   && (exp.X_add_number < 0 || exp.X_add_number > 63))
>      {
>        set_fatal_syntax_error (_("shift amount out of range 0 to 63"));
>        return FALSE;
> @@ -4914,6 +4934,12 @@ process_omitted_operand (enum aarch64_opnd type, const aarch64_opcode *opcode,
>        operand->imm.value = default_value;
>        break;
>  
> +    case AARCH64_OPND_SVE_PATTERN_SCALED:
> +      operand->imm.value = default_value;
> +      operand->shifter.kind = AARCH64_MOD_MUL;
> +      operand->shifter.amount = 1;
> +      break;
> +
>      case AARCH64_OPND_EXCEPTION:
>        inst.reloc.type = BFD_RELOC_UNUSED;
>        break;
> @@ -5424,6 +5450,20 @@ parse_operands (char *str, const aarch64_opcode *opcode)
>  	  info->imm.value = val;
>  	  break;
>  
> +	case AARCH64_OPND_SVE_PATTERN_SCALED:
> +	  po_enum_or_fail (aarch64_sve_pattern_array);
> +	  info->imm.value = val;
> +	  if (skip_past_comma (&str)
> +	      && !parse_shift (&str, info, SHIFTED_MUL))
> +	    goto failure;
> +	  if (!info->shifter.operator_present)
> +	    {
> +	      gas_assert (info->shifter.kind == AARCH64_MOD_NONE);
> +	      info->shifter.kind = AARCH64_MOD_MUL;
> +	      info->shifter.amount = 1;
> +	    }
> +	  break;
> +
>  	case AARCH64_OPND_SVE_PRFOP:
>  	  po_enum_or_fail (aarch64_sve_prfop_array);
>  	  info->imm.value = val;
> diff --git a/include/opcode/aarch64.h b/include/opcode/aarch64.h
> index dd191cf..49b4413 100644
> --- a/include/opcode/aarch64.h
> +++ b/include/opcode/aarch64.h
> @@ -245,6 +245,7 @@ enum aarch64_opnd
>    AARCH64_OPND_BARRIER_PSB,	/* Barrier operand for PSB.  */
>  
>    AARCH64_OPND_SVE_PATTERN,	/* SVE vector pattern enumeration.  */
> +  AARCH64_OPND_SVE_PATTERN_SCALED, /* Likewise, with additional MUL factor.  */
>    AARCH64_OPND_SVE_PRFOP,	/* SVE prefetch operation.  */
>    AARCH64_OPND_SVE_Pd,		/* SVE p0-p15 in Pd.  */
>    AARCH64_OPND_SVE_Pg3,		/* SVE p0-p7 in Pg.  */
> @@ -745,6 +746,7 @@ enum aarch64_modifier_kind
>    AARCH64_MOD_SXTH,
>    AARCH64_MOD_SXTW,
>    AARCH64_MOD_SXTX,
> +  AARCH64_MOD_MUL,
>  };
>  
>  bfd_boolean
> @@ -836,10 +838,10 @@ struct aarch64_opnd_info
>    struct
>      {
>        enum aarch64_modifier_kind kind;
> -      int amount;
>        unsigned operator_present: 1;	/* Only valid during encoding.  */
>        /* Value of the 'S' field in ld/st reg offset; used only in decoding.  */
>        unsigned amount_present: 1;
> +      int64_t amount;
>      } shifter;
>  
>    unsigned skip:1;	/* Operand is not completed if there is a fixup needed
> diff --git a/opcodes/aarch64-asm-2.c b/opcodes/aarch64-asm-2.c
> index 0a6e476..039b9be 100644
> --- a/opcodes/aarch64-asm-2.c
> +++ b/opcodes/aarch64-asm-2.c
> @@ -480,7 +480,6 @@ aarch64_insert_operand (const aarch64_operand *self,
>      case 27:
>      case 35:
>      case 36:
> -    case 91:
>      case 92:
>      case 93:
>      case 94:
> @@ -494,7 +493,8 @@ aarch64_insert_operand (const aarch64_operand *self,
>      case 102:
>      case 103:
>      case 104:
> -    case 107:
> +    case 105:
> +    case 108:
>        return aarch64_ins_regno (self, info, code, inst);
>      case 12:
>        return aarch64_ins_reg_extended (self, info, code, inst);
> @@ -532,7 +532,7 @@ aarch64_insert_operand (const aarch64_operand *self,
>      case 69:
>      case 70:
>      case 89:
> -    case 90:
> +    case 91:
>        return aarch64_ins_imm (self, info, code, inst);
>      case 38:
>      case 39:
> @@ -583,10 +583,12 @@ aarch64_insert_operand (const aarch64_operand *self,
>        return aarch64_ins_prfop (self, info, code, inst);
>      case 88:
>        return aarch64_ins_hint (self, info, code, inst);
> -    case 105:
> -      return aarch64_ins_sve_index (self, info, code, inst);
> +    case 90:
> +      return aarch64_ins_sve_scale (self, info, code, inst);
>      case 106:
> -    case 108:
> +      return aarch64_ins_sve_index (self, info, code, inst);
> +    case 107:
> +    case 109:
>        return aarch64_ins_sve_reglist (self, info, code, inst);
>      default: assert (0); abort ();
>      }
> diff --git a/opcodes/aarch64-asm.c b/opcodes/aarch64-asm.c
> index c045f9e..117a3c6 100644
> --- a/opcodes/aarch64-asm.c
> +++ b/opcodes/aarch64-asm.c
> @@ -772,6 +772,19 @@ aarch64_ins_sve_reglist (const aarch64_operand *self,
>    return NULL;
>  }
>  
> +/* Encode <pattern>{, MUL #<amount>}.  The fields array specifies which
> +   fields to use for <pattern>.  <amount> - 1 is encoded in the SVE_imm4
> +   field.  */
> +const char *
> +aarch64_ins_sve_scale (const aarch64_operand *self,
> +		       const aarch64_opnd_info *info, aarch64_insn *code,
> +		       const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  insert_all_fields (self, code, info->imm.value);
> +  insert_field (FLD_SVE_imm4, code, info->shifter.amount - 1, 0);
> +  return NULL;
> +}
> +
>  /* Miscellaneous encoding functions.  */
>  
>  /* Encode size[0], i.e. bit 22, for
> diff --git a/opcodes/aarch64-asm.h b/opcodes/aarch64-asm.h
> index ede366c..ac5faeb 100644
> --- a/opcodes/aarch64-asm.h
> +++ b/opcodes/aarch64-asm.h
> @@ -71,6 +71,7 @@ AARCH64_DECL_OPD_INSERTER (ins_reg_extended);
>  AARCH64_DECL_OPD_INSERTER (ins_reg_shifted);
>  AARCH64_DECL_OPD_INSERTER (ins_sve_index);
>  AARCH64_DECL_OPD_INSERTER (ins_sve_reglist);
> +AARCH64_DECL_OPD_INSERTER (ins_sve_scale);
>  
>  #undef AARCH64_DECL_OPD_INSERTER
>  
> diff --git a/opcodes/aarch64-dis-2.c b/opcodes/aarch64-dis-2.c
> index 9f936f0..124385d 100644
> --- a/opcodes/aarch64-dis-2.c
> +++ b/opcodes/aarch64-dis-2.c
> @@ -10426,7 +10426,6 @@ aarch64_extract_operand (const aarch64_operand *self,
>      case 27:
>      case 35:
>      case 36:
> -    case 91:
>      case 92:
>      case 93:
>      case 94:
> @@ -10440,7 +10439,8 @@ aarch64_extract_operand (const aarch64_operand *self,
>      case 102:
>      case 103:
>      case 104:
> -    case 107:
> +    case 105:
> +    case 108:
>        return aarch64_ext_regno (self, info, code, inst);
>      case 8:
>        return aarch64_ext_regrt_sysins (self, info, code, inst);
> @@ -10483,7 +10483,7 @@ aarch64_extract_operand (const aarch64_operand *self,
>      case 69:
>      case 70:
>      case 89:
> -    case 90:
> +    case 91:
>        return aarch64_ext_imm (self, info, code, inst);
>      case 38:
>      case 39:
> @@ -10536,10 +10536,12 @@ aarch64_extract_operand (const aarch64_operand *self,
>        return aarch64_ext_prfop (self, info, code, inst);
>      case 88:
>        return aarch64_ext_hint (self, info, code, inst);
> -    case 105:
> -      return aarch64_ext_sve_index (self, info, code, inst);
> +    case 90:
> +      return aarch64_ext_sve_scale (self, info, code, inst);
>      case 106:
> -    case 108:
> +      return aarch64_ext_sve_index (self, info, code, inst);
> +    case 107:
> +    case 109:
>        return aarch64_ext_sve_reglist (self, info, code, inst);
>      default: assert (0); abort ();
>      }
> diff --git a/opcodes/aarch64-dis.c b/opcodes/aarch64-dis.c
> index ab93234..1d00c0a 100644
> --- a/opcodes/aarch64-dis.c
> +++ b/opcodes/aarch64-dis.c
> @@ -1219,6 +1219,26 @@ aarch64_ext_sve_reglist (const aarch64_operand *self,
>    info->reglist.num_regs = get_opcode_dependent_value (inst->opcode);
>    return 1;
>  }
> +
> +/* Decode <pattern>{, MUL #<amount>}.  The fields array specifies which
> +   fields to use for <pattern>.  <amount> - 1 is encoded in the SVE_imm4
> +   field.  */
> +int
> +aarch64_ext_sve_scale (const aarch64_operand *self,
> +		       aarch64_opnd_info *info, aarch64_insn code,
> +		       const aarch64_inst *inst)
> +{
> +  int val;
> +
> +  if (!aarch64_ext_imm (self, info, code, inst))
> +    return 0;
> +  val = extract_field (FLD_SVE_imm4, code, 0);
> +  info->shifter.kind = AARCH64_MOD_MUL;
> +  info->shifter.amount = val + 1;
> +  info->shifter.operator_present = (val != 0);
> +  info->shifter.amount_present = (val != 0);
> +  return 1;
> +}
>  \f
>  /* Bitfields that are commonly used to encode certain operands' information
>     may be partially used as part of the base opcode in some instructions.
> diff --git a/opcodes/aarch64-dis.h b/opcodes/aarch64-dis.h
> index 5efb904..92f5ad4 100644
> --- a/opcodes/aarch64-dis.h
> +++ b/opcodes/aarch64-dis.h
> @@ -93,6 +93,7 @@ AARCH64_DECL_OPD_EXTRACTOR (ext_reg_extended);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_reg_shifted);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_sve_index);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_sve_reglist);
> +AARCH64_DECL_OPD_EXTRACTOR (ext_sve_scale);
>  
>  #undef AARCH64_DECL_OPD_EXTRACTOR
>  
> diff --git a/opcodes/aarch64-opc-2.c b/opcodes/aarch64-opc-2.c
> index 3905053..8f221b8 100644
> --- a/opcodes/aarch64-opc-2.c
> +++ b/opcodes/aarch64-opc-2.c
> @@ -114,6 +114,7 @@ const struct aarch64_operand aarch64_operands[] =
>    {AARCH64_OPND_CLASS_SYSTEM, "PRFOP", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {}, "a prefetch operation specifier"},
>    {AARCH64_OPND_CLASS_SYSTEM, "BARRIER_PSB", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {}, "the PSB option name CSYNC"},
>    {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_PATTERN", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_pattern}, "an enumeration value such as POW2"},
> +  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_PATTERN_SCALED", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_pattern}, "an enumeration value such as POW2"},
>    {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_PRFOP", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_prfop}, "an enumeration value such as PLDL1KEEP"},
>    {AARCH64_OPND_CLASS_PRED_REG, "SVE_Pd", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Pd}, "an SVE predicate register"},
>    {AARCH64_OPND_CLASS_PRED_REG, "SVE_Pg3", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Pg3}, "an SVE predicate register"},
> diff --git a/opcodes/aarch64-opc.c b/opcodes/aarch64-opc.c
> index 934c14d..326b94e 100644
> --- a/opcodes/aarch64-opc.c
> +++ b/opcodes/aarch64-opc.c
> @@ -279,6 +279,7 @@ const aarch64_field fields[] =
>      { 16,  5 }, /* SVE_Zm_16: SVE vector register, bits [20,16]. */
>      {  5,  5 }, /* SVE_Zn: SVE vector register, bits [9,5].  */
>      {  0,  5 }, /* SVE_Zt: SVE vector register, bits [4,0].  */
> +    { 16,  4 }, /* SVE_imm4: 4-bit immediate field.  */
>      {  5,  5 }, /* SVE_pattern: vector pattern enumeration.  */
>      {  0,  4 }, /* SVE_prfop: prefetch operation for SVE PRF[BHWD].  */
>      { 22,  2 }, /* SVE_tszh: triangular size select high, bits [23,22].  */
> @@ -359,6 +360,7 @@ const struct aarch64_name_value_pair aarch64_operand_modifiers [] =
>      {"sxth", 0x5},
>      {"sxtw", 0x6},
>      {"sxtx", 0x7},
> +    {"mul", 0x0},
>      {NULL, 0},
>  };
>  
> @@ -1303,6 +1305,18 @@ set_sft_amount_out_of_range_error (aarch64_operand_error *mismatch_detail,
>  			  _("shift amount"));
>  }
>  
> +/* Report that the MUL modifier in operand IDX should be in the range
> +   [LOWER_BOUND, UPPER_BOUND].  */
> +static inline void
> +set_multiplier_out_of_range_error (aarch64_operand_error *mismatch_detail,
> +				   int idx, int lower_bound, int upper_bound)
> +{
> +  if (mismatch_detail == NULL)
> +    return;
> +  set_out_of_range_error (mismatch_detail, idx, lower_bound, upper_bound,
> +			  _("multiplier"));
> +}
> +
>  static inline void
>  set_unaligned_error (aarch64_operand_error *mismatch_detail, int idx,
>  		     int alignment)
> @@ -2001,6 +2015,15 @@ operand_general_constraint_met_p (const aarch64_opnd_info *opnds, int idx,
>  	    }
>  	  break;
>  
> +	case AARCH64_OPND_SVE_PATTERN_SCALED:
> +	  assert (opnd->shifter.kind == AARCH64_MOD_MUL);
> +	  if (!value_in_range_p (opnd->shifter.amount, 1, 16))
> +	    {
> +	      set_multiplier_out_of_range_error (mismatch_detail, idx, 1, 16);
> +	      return 0;
> +	    }
> +	  break;
> +
>  	default:
>  	  break;
>  	}
> @@ -2525,7 +2548,8 @@ print_register_offset_address (char *buf, size_t size,
>    if (print_extend_p)
>      {
>        if (print_amount_p)
> -	snprintf (tb, sizeof (tb), ",%s #%d", shift_name, opnd->shifter.amount);
> +	snprintf (tb, sizeof (tb), ",%s #%" PRIi64, shift_name,
> +		  opnd->shifter.amount);
>        else
>  	snprintf (tb, sizeof (tb), ",%s", shift_name);
>      }
> @@ -2620,7 +2644,7 @@ aarch64_print_operand (char *buf, size_t size, bfd_vma pc,
>  	    }
>  	}
>        if (opnd->shifter.amount)
> -	snprintf (buf, size, "%s, %s #%d",
> +	snprintf (buf, size, "%s, %s #%" PRIi64,
>  		  get_int_reg_name (opnd->reg.regno, opnd->qualifier, 0),
>  		  aarch64_operand_modifiers[kind].name,
>  		  opnd->shifter.amount);
> @@ -2637,7 +2661,7 @@ aarch64_print_operand (char *buf, size_t size, bfd_vma pc,
>  	snprintf (buf, size, "%s",
>  		  get_int_reg_name (opnd->reg.regno, opnd->qualifier, 0));
>        else
> -	snprintf (buf, size, "%s, %s #%d",
> +	snprintf (buf, size, "%s, %s #%" PRIi64,
>  		  get_int_reg_name (opnd->reg.regno, opnd->qualifier, 0),
>  		  aarch64_operand_modifiers[opnd->shifter.kind].name,
>  		  opnd->shifter.amount);
> @@ -2760,6 +2784,26 @@ aarch64_print_operand (char *buf, size_t size, bfd_vma pc,
>  	snprintf (buf, size, "#%" PRIi64, opnd->imm.value);
>        break;
>  
> +    case AARCH64_OPND_SVE_PATTERN_SCALED:
> +      if (optional_operand_p (opcode, idx)
> +	  && !opnd->shifter.operator_present
> +	  && opnd->imm.value == get_optional_operand_default_value (opcode))
> +	break;
> +      enum_value = opnd->imm.value;
> +      assert (enum_value < ARRAY_SIZE (aarch64_sve_pattern_array));
> +      if (aarch64_sve_pattern_array[opnd->imm.value])
> +	snprintf (buf, size, "%s", aarch64_sve_pattern_array[opnd->imm.value]);
> +      else
> +	snprintf (buf, size, "#%" PRIi64, opnd->imm.value);
> +      if (opnd->shifter.operator_present)
> +	{
> +	  size_t len = strlen (buf);
> +	  snprintf (buf + len, size - len, ", %s #%" PRIi64,
> +		    aarch64_operand_modifiers[opnd->shifter.kind].name,
> +		    opnd->shifter.amount);
> +	}
> +      break;
> +
>      case AARCH64_OPND_SVE_PRFOP:
>        enum_value = opnd->imm.value;
>        assert (enum_value < ARRAY_SIZE (aarch64_sve_prfop_array));
> @@ -2794,7 +2838,7 @@ aarch64_print_operand (char *buf, size_t size, bfd_vma pc,
>      case AARCH64_OPND_AIMM:
>      case AARCH64_OPND_HALF:
>        if (opnd->shifter.amount)
> -	snprintf (buf, size, "#0x%" PRIx64 ", lsl #%d", opnd->imm.value,
> +	snprintf (buf, size, "#0x%" PRIx64 ", lsl #%" PRIi64, opnd->imm.value,
>  		  opnd->shifter.amount);
>        else
>  	snprintf (buf, size, "#0x%" PRIx64, opnd->imm.value);
> @@ -2806,7 +2850,7 @@ aarch64_print_operand (char *buf, size_t size, bfd_vma pc,
>  	  || opnd->shifter.kind == AARCH64_MOD_NONE)
>  	snprintf (buf, size, "#0x%" PRIx64, opnd->imm.value);
>        else
> -	snprintf (buf, size, "#0x%" PRIx64 ", %s #%d", opnd->imm.value,
> +	snprintf (buf, size, "#0x%" PRIx64 ", %s #%" PRIi64, opnd->imm.value,
>  		  aarch64_operand_modifiers[opnd->shifter.kind].name,
>  		  opnd->shifter.amount);
>        break;
> diff --git a/opcodes/aarch64-opc.h b/opcodes/aarch64-opc.h
> index b54f35e..3406f6e 100644
> --- a/opcodes/aarch64-opc.h
> +++ b/opcodes/aarch64-opc.h
> @@ -106,6 +106,7 @@ enum aarch64_field_kind
>    FLD_SVE_Zm_16,
>    FLD_SVE_Zn,
>    FLD_SVE_Zt,
> +  FLD_SVE_imm4,
>    FLD_SVE_pattern,
>    FLD_SVE_prfop,
>    FLD_SVE_tszh,
> diff --git a/opcodes/aarch64-tbl.h b/opcodes/aarch64-tbl.h
> index 73415f7..491235f 100644
> --- a/opcodes/aarch64-tbl.h
> +++ b/opcodes/aarch64-tbl.h
> @@ -2822,6 +2822,8 @@ struct aarch64_opcode aarch64_opcode_table[] =
>        "the PSB option name CSYNC")					\
>      Y(IMMEDIATE, imm, "SVE_PATTERN", 0, F(FLD_SVE_pattern),		\
>        "an enumeration value such as POW2")				\
> +    Y(IMMEDIATE, sve_scale, "SVE_PATTERN_SCALED", 0,			\
> +      F(FLD_SVE_pattern), "an enumeration value such as POW2")		\
>      Y(IMMEDIATE, imm, "SVE_PRFOP", 0, F(FLD_SVE_prfop),			\
>        "an enumeration value such as PLDL1KEEP")				\
>      Y(PRED_REG, regno, "SVE_Pd", 0, F(FLD_SVE_Pd),			\
> 

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [AArch64][SVE 25/32] Add support for SVE addressing modes
  2016-08-23  9:21 ` [AArch64][SVE 25/32] Add support for SVE addressing modes Richard Sandiford
@ 2016-08-25 14:38   ` Richard Earnshaw (lists)
  2016-09-16 12:06     ` Richard Sandiford
  0 siblings, 1 reply; 76+ messages in thread
From: Richard Earnshaw (lists) @ 2016-08-25 14:38 UTC (permalink / raw)
  To: binutils, richard.sandiford

On 23/08/16 10:21, Richard Sandiford wrote:
> This patch adds most of the new SVE addressing modes and associated
> operands.  A follow-on patch adds MUL VL, since handling it separately
> makes the changes easier to read.
> 
> The patch also introduces a new "operand-dependent data" field to the
> operand flags, based closely on the existing one for opcode flags.
> For SVE this new field needs only 2 bits, but it could be widened
> in future if necessary.
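> 
> A rough sketch of how a small operand-dependent value can be packed
> into the existing flags word (the macro names and bit position below
> are purely illustrative; the real definitions are OPD_F_OD_MASK,
> OPD_F_OD_LSB and get_operand_specific_data, listed in the opcodes/
> ChangeLog below):
> 
>   /* Hypothetical 2-bit operand-dependent data field in the flags.  */
>   #define OD_FIELD_LSB   5u
>   #define OD_FIELD_MASK  (3u << OD_FIELD_LSB)
> 
>   static inline unsigned int
>   extract_od_field (unsigned int flags)
>   {
>     return (flags & OD_FIELD_MASK) >> OD_FIELD_LSB;
>   }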
> 
> OK to install?
> 
> Thanks,
> Richard
> 
> 
> include/opcode/
> 	* aarch64.h (AARCH64_OPND_SVE_ADDR_RI_U6): New aarch64_opnd.
> 	(AARCH64_OPND_SVE_ADDR_RI_U6x2, AARCH64_OPND_SVE_ADDR_RI_U6x4)
> 	(AARCH64_OPND_SVE_ADDR_RI_U6x8, AARCH64_OPND_SVE_ADDR_RR)
> 	(AARCH64_OPND_SVE_ADDR_RR_LSL1, AARCH64_OPND_SVE_ADDR_RR_LSL2)
> 	(AARCH64_OPND_SVE_ADDR_RR_LSL3, AARCH64_OPND_SVE_ADDR_RX)
> 	(AARCH64_OPND_SVE_ADDR_RX_LSL1, AARCH64_OPND_SVE_ADDR_RX_LSL2)
> 	(AARCH64_OPND_SVE_ADDR_RX_LSL3, AARCH64_OPND_SVE_ADDR_RZ)
> 	(AARCH64_OPND_SVE_ADDR_RZ_LSL1, AARCH64_OPND_SVE_ADDR_RZ_LSL2)
> 	(AARCH64_OPND_SVE_ADDR_RZ_LSL3, AARCH64_OPND_SVE_ADDR_RZ_XTW_14)
> 	(AARCH64_OPND_SVE_ADDR_RZ_XTW_22, AARCH64_OPND_SVE_ADDR_RZ_XTW1_14)
> 	(AARCH64_OPND_SVE_ADDR_RZ_XTW1_22, AARCH64_OPND_SVE_ADDR_RZ_XTW2_14)
> 	(AARCH64_OPND_SVE_ADDR_RZ_XTW2_22, AARCH64_OPND_SVE_ADDR_RZ_XTW3_14)
> 	(AARCH64_OPND_SVE_ADDR_RZ_XTW3_22, AARCH64_OPND_SVE_ADDR_ZI_U5)
> 	(AARCH64_OPND_SVE_ADDR_ZI_U5x2, AARCH64_OPND_SVE_ADDR_ZI_U5x4)
> 	(AARCH64_OPND_SVE_ADDR_ZI_U5x8, AARCH64_OPND_SVE_ADDR_ZZ_LSL)
> 	(AARCH64_OPND_SVE_ADDR_ZZ_SXTW, AARCH64_OPND_SVE_ADDR_ZZ_UXTW):
> 	Likewise.
> 
> opcodes/
> 	* aarch64-tbl.h (AARCH64_OPERANDS): Add entries for the new SVE
> 	address operands.
> 	* aarch64-opc.h (FLD_SVE_imm6, FLD_SVE_msz, FLD_SVE_xs_14)
> 	(FLD_SVE_xs_22): New aarch64_field_kinds.
> 	(OPD_F_OD_MASK, OPD_F_OD_LSB, OPD_F_NO_ZR): New flags.
> 	(get_operand_specific_data): New function.
> 	* aarch64-opc.c (fields): Add entries for FLD_SVE_imm6, FLD_SVE_msz,
> 	FLD_SVE_xs_14 and FLD_SVE_xs_22.
> 	(operand_general_constraint_met_p): Handle the new SVE address
> 	operands.
> 	(sve_reg): New array.
> 	(get_addr_sve_reg_name): New function.
> 	(aarch64_print_operand): Handle the new SVE address operands.
> 	* aarch64-opc-2.c: Regenerate.
> 	* aarch64-asm.h (ins_sve_addr_ri_u6, ins_sve_addr_rr_lsl)
> 	(ins_sve_addr_rz_xtw, ins_sve_addr_zi_u5, ins_sve_addr_zz_lsl)
> 	(ins_sve_addr_zz_sxtw, ins_sve_addr_zz_uxtw): New inserters.
> 	* aarch64-asm.c (aarch64_ins_sve_addr_ri_u6): New function.
> 	(aarch64_ins_sve_addr_rr_lsl): Likewise.
> 	(aarch64_ins_sve_addr_rz_xtw): Likewise.
> 	(aarch64_ins_sve_addr_zi_u5): Likewise.
> 	(aarch64_ins_sve_addr_zz): Likewise.
> 	(aarch64_ins_sve_addr_zz_lsl): Likewise.
> 	(aarch64_ins_sve_addr_zz_sxtw): Likewise.
> 	(aarch64_ins_sve_addr_zz_uxtw): Likewise.
> 	* aarch64-asm-2.c: Regenerate.
> 	* aarch64-dis.h (ext_sve_addr_ri_u6, ext_sve_addr_rr_lsl)
> 	(ext_sve_addr_rz_xtw, ext_sve_addr_zi_u5, ext_sve_addr_zz_lsl)
> 	(ext_sve_addr_zz_sxtw, ext_sve_addr_zz_uxtw): New extractors.
> 	* aarch64-dis.c (aarch64_ext_sve_add_reg_imm): New function.
> 	(aarch64_ext_sve_addr_ri_u6): Likewise.
> 	(aarch64_ext_sve_addr_rr_lsl): Likewise.
> 	(aarch64_ext_sve_addr_rz_xtw): Likewise.
> 	(aarch64_ext_sve_addr_zi_u5): Likewise.
> 	(aarch64_ext_sve_addr_zz): Likewise.
> 	(aarch64_ext_sve_addr_zz_lsl): Likewise.
> 	(aarch64_ext_sve_addr_zz_sxtw): Likewise.
> 	(aarch64_ext_sve_addr_zz_uxtw): Likewise.
> 	* aarch64-dis-2.c: Regenerate.
> 
> gas/
> 	* config/tc-aarch64.c (aarch64_addr_reg_parse): New function,
> 	split out from aarch64_reg_parse_32_64.  Handle Z registers too.
> 	(aarch64_reg_parse_32_64): Call it.
> 	(parse_address_main): Add base_qualifier, offset_qualifier
> 	and accept_sve parameters.  Handle SVE base and offset registers.

Ug!  Another bool parameter.

> 	(parse_address): Update call to parse_address_main.
> 	(parse_address_reloc): Likewise.
> 	(parse_sve_address): New function.
> 	(parse_operands): Parse the new SVE address operands.
> 

OK (somewhat reluctantly).

R.

> diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
> index 079f1c9..f9d89ce 100644
> --- a/gas/config/tc-aarch64.c
> +++ b/gas/config/tc-aarch64.c
> @@ -705,7 +705,8 @@ aarch64_check_reg_type (const reg_entry *reg, aarch64_reg_type type)
>  }
>  
>  /* Try to parse a base or offset register.  ACCEPT_SP says whether {W}SP
> -   should be considered valid and ACCEPT_RZ says whether zero registers
> +   should be considered valid, ACCEPT_RZ says whether zero registers
> +   should be considered valid, and ACCEPT_SVE says whether SVE registers
>     should be considered valid.
>  
>     Return the register number on success, setting *QUALIFIER to the
> @@ -715,10 +716,10 @@ aarch64_check_reg_type (const reg_entry *reg, aarch64_reg_type type)
>     Note that this function does not issue any diagnostics.  */
>  
>  static int
> -aarch64_reg_parse_32_64 (char **ccp, bfd_boolean accept_sp,
> -			 bfd_boolean accept_rz,
> -			 aarch64_opnd_qualifier_t *qualifier,
> -			 bfd_boolean *isregzero)
> +aarch64_addr_reg_parse (char **ccp, bfd_boolean accept_sp,
> +			bfd_boolean accept_rz, bfd_boolean accept_sve,
> +			aarch64_opnd_qualifier_t *qualifier,
> +			bfd_boolean *isregzero)
>  {
>    char *str = *ccp;
>    const reg_entry *reg = parse_reg (&str);
> @@ -726,9 +727,6 @@ aarch64_reg_parse_32_64 (char **ccp, bfd_boolean accept_sp,
>    if (reg == NULL)
>      return PARSE_FAIL;
>  
> -  if (! aarch64_check_reg_type (reg, REG_TYPE_R_Z_SP))
> -    return PARSE_FAIL;
> -
>    switch (reg->type)
>      {
>      case REG_TYPE_SP_32:
> @@ -756,6 +754,23 @@ aarch64_reg_parse_32_64 (char **ccp, bfd_boolean accept_sp,
>  		    : AARCH64_OPND_QLF_X);
>        *isregzero = TRUE;
>        break;
> +    case REG_TYPE_ZN:
> +      if (!accept_sve || str[0] != '.')
> +	return PARSE_FAIL;
> +      switch (TOLOWER (str[1]))
> +	{
> +	case 's':
> +	  *qualifier = AARCH64_OPND_QLF_S_S;
> +	  break;
> +	case 'd':
> +	  *qualifier = AARCH64_OPND_QLF_S_D;
> +	  break;
> +	default:
> +	  return PARSE_FAIL;
> +	}
> +      str += 2;
> +      *isregzero = FALSE;
> +      break;
>      default:
>        return PARSE_FAIL;
>      }
> @@ -765,6 +780,26 @@ aarch64_reg_parse_32_64 (char **ccp, bfd_boolean accept_sp,
>    return reg->number;
>  }
>  
> +/* Try to parse a scalar base or offset register.  ACCEPT_SP says whether
> +   {W}SP should be considered valid and ACCEPT_RZ says whether zero
> +   registers should be considered valid.
> +
> +   Return the register number on success, setting *QUALIFIER to the
> +   register qualifier and *ISREGZERO to whether the register is a zero
> +   register.  Return PARSE_FAIL otherwise.
> +
> +   Note that this function does not issue any diagnostics.  */
> +
> +static int
> +aarch64_reg_parse_32_64 (char **ccp, bfd_boolean accept_sp,
> +			 bfd_boolean accept_rz,
> +			 aarch64_opnd_qualifier_t *qualifier,
> +			 bfd_boolean *isregzero)
> +{
> +  return aarch64_addr_reg_parse (ccp, accept_sp, accept_rz, FALSE,
> +				 qualifier, isregzero);
> +}
> +
>  /* Parse the qualifier of a vector register or vector element of type
>     REG_TYPE.  Fill in *PARSED_TYPE and return TRUE if the parsing
>     succeeds; otherwise return FALSE.
> @@ -3240,8 +3275,8 @@ parse_shifter_operand_reloc (char **str, aarch64_opnd_info *operand,
>     The A64 instruction set has the following addressing modes:
>  
>     Offset
> -     [base]			// in SIMD ld/st structure
> -     [base{,#0}]		// in ld/st exclusive
> +     [base]			 // in SIMD ld/st structure
> +     [base{,#0}]		 // in ld/st exclusive
>       [base{,#imm}]
>       [base,Xm{,LSL #imm}]
>       [base,Xm,SXTX {#imm}]
> @@ -3250,10 +3285,18 @@ parse_shifter_operand_reloc (char **str, aarch64_opnd_info *operand,
>       [base,#imm]!
>     Post-indexed
>       [base],#imm
> -     [base],Xm			// in SIMD ld/st structure
> +     [base],Xm			 // in SIMD ld/st structure
>     PC-relative (literal)
>       label
> -     =immediate
> +   SVE:
> +     [base,Zm.D{,LSL #imm}]
> +     [base,Zm.S,(S|U)XTW {#imm}]
> +     [base,Zm.D,(S|U)XTW {#imm}] // ignores top 32 bits of Zm.D elements
> +     [Zn.S,#imm]
> +     [Zn.D,#imm]
> +     [Zn.S,Zm.S{,LSL #imm}]      // }
> +     [Zn.D,Zm.D{,LSL #imm}]      // } in ADR
> +     [Zn.D,Zm.D,(S|U)XTW {#imm}] // }
>  
>     (As a convenience, the notation "=immediate" is permitted in conjunction
>     with the pc-relative literal load instructions to automatically place an
> @@ -3280,26 +3323,37 @@ parse_shifter_operand_reloc (char **str, aarch64_opnd_info *operand,
>       .pcrel=1; .preind=1; .postind=0; .writeback=0
>  
>     The shift/extension information, if any, will be stored in .shifter.
> +   The base and offset qualifiers will be stored in *BASE_QUALIFIER and
> +   *OFFSET_QUALIFIER respectively, with NIL being used if there's no
> +   corresponding register.
>  
>     RELOC says whether relocation operators should be accepted
>     and ACCEPT_REG_POST_INDEX says whether post-indexed register
>     addressing should be accepted.
>  
> +   Likewise ACCEPT_SVE says whether the SVE addressing modes should be
> +   accepted.  We use context-dependent parsing for this case because
> +   (for compatibility) we should accept symbolic constants like z0 and
> +   z0.s in base AArch64 code.
> +
>     In all other cases, it is the caller's responsibility to check whether
>     the addressing mode is supported by the instruction.  It is also the
>     caller's responsibility to set inst.reloc.type.  */
>  
>  static bfd_boolean
> -parse_address_main (char **str, aarch64_opnd_info *operand, bfd_boolean reloc,
> -		    bfd_boolean accept_reg_post_index)
> +parse_address_main (char **str, aarch64_opnd_info *operand,
> +		    aarch64_opnd_qualifier_t *base_qualifier,
> +		    aarch64_opnd_qualifier_t *offset_qualifier,
> +		    bfd_boolean reloc, bfd_boolean accept_reg_post_index,
> +		    bfd_boolean accept_sve)
>  {
>    char *p = *str;
>    int reg;
> -  aarch64_opnd_qualifier_t base_qualifier;
> -  aarch64_opnd_qualifier_t offset_qualifier;
>    bfd_boolean isregzero;
>    expressionS *exp = &inst.reloc.exp;
>  
> +  *base_qualifier = AARCH64_OPND_QLF_NIL;
> +  *offset_qualifier = AARCH64_OPND_QLF_NIL;
>    if (! skip_past_char (&p, '['))
>      {
>        /* =immediate or label.  */
> @@ -3375,8 +3429,14 @@ parse_address_main (char **str, aarch64_opnd_info *operand, bfd_boolean reloc,
>    /* [ */
>  
>    /* Accept SP and reject ZR */
> -  reg = aarch64_reg_parse_32_64 (&p, TRUE, FALSE, &base_qualifier, &isregzero);
> -  if (reg == PARSE_FAIL || base_qualifier == AARCH64_OPND_QLF_W)
> +  reg = aarch64_addr_reg_parse (&p, TRUE, FALSE, accept_sve, base_qualifier,
> +				&isregzero);
> +  if (reg == PARSE_FAIL)
> +    {
> +      set_syntax_error (_("base register expected"));
> +      return FALSE;
> +    }
> +  else if (*base_qualifier == AARCH64_OPND_QLF_W)
>      {
>        set_syntax_error (_(get_reg_expected_msg (REG_TYPE_R_64)));
>        return FALSE;
> @@ -3390,8 +3450,8 @@ parse_address_main (char **str, aarch64_opnd_info *operand, bfd_boolean reloc,
>        operand->addr.preind = 1;
>  
>        /* Reject SP and accept ZR */
> -      reg = aarch64_reg_parse_32_64 (&p, FALSE, TRUE, &offset_qualifier,
> -				     &isregzero);
> +      reg = aarch64_addr_reg_parse (&p, FALSE, TRUE, accept_sve,
> +				    offset_qualifier, &isregzero);
>        if (reg != PARSE_FAIL)
>  	{
>  	  /* [Xn,Rm  */
> @@ -3414,13 +3474,19 @@ parse_address_main (char **str, aarch64_opnd_info *operand, bfd_boolean reloc,
>  	      || operand->shifter.kind == AARCH64_MOD_LSL
>  	      || operand->shifter.kind == AARCH64_MOD_SXTX)
>  	    {
> -	      if (offset_qualifier == AARCH64_OPND_QLF_W)
> +	      if (*offset_qualifier == AARCH64_OPND_QLF_W)
>  		{
>  		  set_syntax_error (_("invalid use of 32-bit register offset"));
>  		  return FALSE;
>  		}
> +	      if (aarch64_get_qualifier_esize (*base_qualifier)
> +		  != aarch64_get_qualifier_esize (*offset_qualifier))
> +		{
> +		  set_syntax_error (_("offset has different size from base"));
> +		  return FALSE;
> +		}
>  	    }
> -	  else if (offset_qualifier == AARCH64_OPND_QLF_X)
> +	  else if (*offset_qualifier == AARCH64_OPND_QLF_X)
>  	    {
>  	      set_syntax_error (_("invalid use of 64-bit register offset"));
>  	      return FALSE;
> @@ -3465,12 +3531,20 @@ parse_address_main (char **str, aarch64_opnd_info *operand, bfd_boolean reloc,
>  	      inst.reloc.type = entry->ldst_type;
>  	      inst.reloc.pc_rel = entry->pc_rel;
>  	    }
> -	  else if (! my_get_expression (exp, &p, GE_OPT_PREFIX, 1))
> +	  else
>  	    {
> -	      set_syntax_error (_("invalid expression in the address"));
> -	      return FALSE;
> +	      if (! my_get_expression (exp, &p, GE_OPT_PREFIX, 1))
> +		{
> +		  set_syntax_error (_("invalid expression in the address"));
> +		  return FALSE;
> +		}
> +	      /* [Xn,<expr>  */
> +	      if (accept_sve && exp->X_op != O_constant)
> +		{
> +		  set_syntax_error (_("constant offset required"));
> +		  return FALSE;
> +		}
>  	    }
> -	  /* [Xn,<expr>  */
>  	}
>      }
>  
> @@ -3505,11 +3579,11 @@ parse_address_main (char **str, aarch64_opnd_info *operand, bfd_boolean reloc,
>  
>        if (accept_reg_post_index
>  	  && (reg = aarch64_reg_parse_32_64 (&p, FALSE, FALSE,
> -					     &offset_qualifier,
> +					     offset_qualifier,
>  					     &isregzero)) != PARSE_FAIL)
>  	{
>  	  /* [Xn],Xm */
> -	  if (offset_qualifier == AARCH64_OPND_QLF_W)
> +	  if (*offset_qualifier == AARCH64_OPND_QLF_W)
>  	    {
>  	      set_syntax_error (_("invalid 32-bit register offset"));
>  	      return FALSE;
> @@ -3544,7 +3618,8 @@ parse_address_main (char **str, aarch64_opnd_info *operand, bfd_boolean reloc,
>    return TRUE;
>  }
>  
> -/* Parse an address that cannot contain relocation operators.
> +/* Parse a base AArch64 address, i.e. one that cannot contain SVE base
> +   registers or SVE offset registers.  Do not allow relocation operators.
>     Look for and parse "[Xn], (Xm|#m)" as post-indexed addressing
>     if ACCEPT_REG_POST_INDEX is true.
>  
> @@ -3553,17 +3628,34 @@ static bfd_boolean
>  parse_address (char **str, aarch64_opnd_info *operand,
>  	       bfd_boolean accept_reg_post_index)
>  {
> -  return parse_address_main (str, operand, FALSE, accept_reg_post_index);
> +  aarch64_opnd_qualifier_t base_qualifier, offset_qualifier;
> +  return parse_address_main (str, operand, &base_qualifier, &offset_qualifier,
> +			     FALSE, accept_reg_post_index, FALSE);
>  }
>  
> -/* Parse an address that can contain relocation operators.  Do not
> -   accept post-indexed addressing.
> +/* Parse a base AArch64 address, i.e. one that cannot contain SVE base
> +   registers or SVE offset registers.  Allow relocation operators but
> +   disallow post-indexed addressing.
>  
>     Return TRUE on success.  */
>  static bfd_boolean
>  parse_address_reloc (char **str, aarch64_opnd_info *operand)
>  {
> -  return parse_address_main (str, operand, TRUE, FALSE);
> +  aarch64_opnd_qualifier_t base_qualifier, offset_qualifier;
> +  return parse_address_main (str, operand, &base_qualifier, &offset_qualifier,
> +			     TRUE, FALSE, FALSE);
> +}
> +
> +/* Parse an address in which SVE vector registers are allowed.
> +   The arguments have the same meaning as for parse_address_main.
> +   Return TRUE on success.  */
> +static bfd_boolean
> +parse_sve_address (char **str, aarch64_opnd_info *operand,
> +		   aarch64_opnd_qualifier_t *base_qualifier,
> +		   aarch64_opnd_qualifier_t *offset_qualifier)
> +{
> +  return parse_address_main (str, operand, base_qualifier, offset_qualifier,
> +			     FALSE, FALSE, TRUE);
>  }
>  
>  /* Parse an operand for a MOVZ, MOVN or MOVK instruction.
> @@ -5174,7 +5266,7 @@ parse_operands (char *str, const aarch64_opcode *opcode)
>        int comma_skipped_p = 0;
>        aarch64_reg_type rtype;
>        struct vector_type_el vectype;
> -      aarch64_opnd_qualifier_t qualifier;
> +      aarch64_opnd_qualifier_t qualifier, base_qualifier, offset_qualifier;
>        aarch64_opnd_info *info = &inst.base.operands[i];
>        aarch64_reg_type reg_type;
>  
> @@ -5793,6 +5885,7 @@ parse_operands (char *str, const aarch64_opcode *opcode)
>  	case AARCH64_OPND_ADDR_REGOFF:
>  	  /* [<Xn|SP>, <R><m>{, <extend> {<amount>}}]  */
>  	  po_misc_or_fail (parse_address (&str, info, FALSE));
> +	regoff_addr:
>  	  if (info->addr.pcrel || !info->addr.offset.is_reg
>  	      || !info->addr.preind || info->addr.postind
>  	      || info->addr.writeback)
> @@ -5887,6 +5980,116 @@ parse_operands (char *str, const aarch64_opcode *opcode)
>  	  /* No qualifier.  */
>  	  break;
>  
> +	case AARCH64_OPND_SVE_ADDR_RI_U6:
> +	case AARCH64_OPND_SVE_ADDR_RI_U6x2:
> +	case AARCH64_OPND_SVE_ADDR_RI_U6x4:
> +	case AARCH64_OPND_SVE_ADDR_RI_U6x8:
> +	  /* [X<n>{, #imm}]
> +	     but recognizing SVE registers.  */
> +	  po_misc_or_fail (parse_sve_address (&str, info, &base_qualifier,
> +					      &offset_qualifier));
> +	  if (base_qualifier != AARCH64_OPND_QLF_X)
> +	    {
> +	      set_syntax_error (_("invalid addressing mode"));
> +	      goto failure;
> +	    }
> +	sve_regimm:
> +	  if (info->addr.pcrel || info->addr.offset.is_reg
> +	      || !info->addr.preind || info->addr.writeback)
> +	    {
> +	      set_syntax_error (_("invalid addressing mode"));
> +	      goto failure;
> +	    }
> +	  gas_assert (inst.reloc.exp.X_op == O_constant);
> +	  info->addr.offset.imm = inst.reloc.exp.X_add_number;
> +	  break;
> +
> +	case AARCH64_OPND_SVE_ADDR_RR:
> +	case AARCH64_OPND_SVE_ADDR_RR_LSL1:
> +	case AARCH64_OPND_SVE_ADDR_RR_LSL2:
> +	case AARCH64_OPND_SVE_ADDR_RR_LSL3:
> +	case AARCH64_OPND_SVE_ADDR_RX:
> +	case AARCH64_OPND_SVE_ADDR_RX_LSL1:
> +	case AARCH64_OPND_SVE_ADDR_RX_LSL2:
> +	case AARCH64_OPND_SVE_ADDR_RX_LSL3:
> +	  /* [<Xn|SP>, <R><m>{, lsl #<amount>}]
> +	     but recognizing SVE registers.  */
> +	  po_misc_or_fail (parse_sve_address (&str, info, &base_qualifier,
> +					      &offset_qualifier));
> +	  if (base_qualifier != AARCH64_OPND_QLF_X
> +	      || offset_qualifier != AARCH64_OPND_QLF_X)
> +	    {
> +	      set_syntax_error (_("invalid addressing mode"));
> +	      goto failure;
> +	    }
> +	  goto regoff_addr;
> +
> +	case AARCH64_OPND_SVE_ADDR_RZ:
> +	case AARCH64_OPND_SVE_ADDR_RZ_LSL1:
> +	case AARCH64_OPND_SVE_ADDR_RZ_LSL2:
> +	case AARCH64_OPND_SVE_ADDR_RZ_LSL3:
> +	case AARCH64_OPND_SVE_ADDR_RZ_XTW_14:
> +	case AARCH64_OPND_SVE_ADDR_RZ_XTW_22:
> +	case AARCH64_OPND_SVE_ADDR_RZ_XTW1_14:
> +	case AARCH64_OPND_SVE_ADDR_RZ_XTW1_22:
> +	case AARCH64_OPND_SVE_ADDR_RZ_XTW2_14:
> +	case AARCH64_OPND_SVE_ADDR_RZ_XTW2_22:
> +	case AARCH64_OPND_SVE_ADDR_RZ_XTW3_14:
> +	case AARCH64_OPND_SVE_ADDR_RZ_XTW3_22:
> +	  /* [<Xn|SP>, Z<m>.D{, LSL #<amount>}]
> +	     [<Xn|SP>, Z<m>.<T>, <extend> {#<amount>}]  */
> +	  po_misc_or_fail (parse_sve_address (&str, info, &base_qualifier,
> +					      &offset_qualifier));
> +	  if (base_qualifier != AARCH64_OPND_QLF_X
> +	      || (offset_qualifier != AARCH64_OPND_QLF_S_S
> +		  && offset_qualifier != AARCH64_OPND_QLF_S_D))
> +	    {
> +	      set_syntax_error (_("invalid addressing mode"));
> +	      goto failure;
> +	    }
> +	  info->qualifier = offset_qualifier;
> +	  goto regoff_addr;
> +
> +	case AARCH64_OPND_SVE_ADDR_ZI_U5:
> +	case AARCH64_OPND_SVE_ADDR_ZI_U5x2:
> +	case AARCH64_OPND_SVE_ADDR_ZI_U5x4:
> +	case AARCH64_OPND_SVE_ADDR_ZI_U5x8:
> +	  /* [Z<n>.<T>{, #imm}]  */
> +	  po_misc_or_fail (parse_sve_address (&str, info, &base_qualifier,
> +					      &offset_qualifier));
> +	  if (base_qualifier != AARCH64_OPND_QLF_S_S
> +	      && base_qualifier != AARCH64_OPND_QLF_S_D)
> +	    {
> +	      set_syntax_error (_("invalid addressing mode"));
> +	      goto failure;
> +	    }
> +	  info->qualifier = base_qualifier;
> +	  goto sve_regimm;
> +
> +	case AARCH64_OPND_SVE_ADDR_ZZ_LSL:
> +	case AARCH64_OPND_SVE_ADDR_ZZ_SXTW:
> +	case AARCH64_OPND_SVE_ADDR_ZZ_UXTW:
> +	  /* [Z<n>.<T>, Z<m>.<T>{, LSL #<amount>}]
> +	     [Z<n>.D, Z<m>.D, <extend> {#<amount>}]
> +
> +	     We don't reject:
> +
> +	     [Z<n>.S, Z<m>.S, <extend> {#<amount>}]
> +
> +	     here since we get better error messages by leaving it to
> +	     the qualifier checking routines.  */
> +	  po_misc_or_fail (parse_sve_address (&str, info, &base_qualifier,
> +					      &offset_qualifier));
> +	  if ((base_qualifier != AARCH64_OPND_QLF_S_S
> +	       && base_qualifier != AARCH64_OPND_QLF_S_D)
> +	      || offset_qualifier != base_qualifier)
> +	    {
> +	      set_syntax_error (_("invalid addressing mode"));
> +	      goto failure;
> +	    }
> +	  info->qualifier = base_qualifier;
> +	  goto regoff_addr;
> +
>  	case AARCH64_OPND_SYSREG:
>  	  if ((val = parse_sys_reg (&str, aarch64_sys_regs_hsh, 1, 0))
>  	      == PARSE_FAIL)
> diff --git a/include/opcode/aarch64.h b/include/opcode/aarch64.h
> index 49b4413..e61ac9c 100644
> --- a/include/opcode/aarch64.h
> +++ b/include/opcode/aarch64.h
> @@ -244,6 +244,45 @@ enum aarch64_opnd
>    AARCH64_OPND_PRFOP,		/* Prefetch operation.  */
>    AARCH64_OPND_BARRIER_PSB,	/* Barrier operand for PSB.  */
>  
> +  AARCH64_OPND_SVE_ADDR_RI_U6,	    /* SVE [<Xn|SP>, #<uimm6>].  */
> +  AARCH64_OPND_SVE_ADDR_RI_U6x2,    /* SVE [<Xn|SP>, #<uimm6>*2].  */
> +  AARCH64_OPND_SVE_ADDR_RI_U6x4,    /* SVE [<Xn|SP>, #<uimm6>*4].  */
> +  AARCH64_OPND_SVE_ADDR_RI_U6x8,    /* SVE [<Xn|SP>, #<uimm6>*8].  */
> +  AARCH64_OPND_SVE_ADDR_RR,	    /* SVE [<Xn|SP>, <Xm|XZR>].  */
> +  AARCH64_OPND_SVE_ADDR_RR_LSL1,    /* SVE [<Xn|SP>, <Xm|XZR>, LSL #1].  */
> +  AARCH64_OPND_SVE_ADDR_RR_LSL2,    /* SVE [<Xn|SP>, <Xm|XZR>, LSL #2].  */
> +  AARCH64_OPND_SVE_ADDR_RR_LSL3,    /* SVE [<Xn|SP>, <Xm|XZR>, LSL #3].  */
> +  AARCH64_OPND_SVE_ADDR_RX,	    /* SVE [<Xn|SP>, <Xm>].  */
> +  AARCH64_OPND_SVE_ADDR_RX_LSL1,    /* SVE [<Xn|SP>, <Xm>, LSL #1].  */
> +  AARCH64_OPND_SVE_ADDR_RX_LSL2,    /* SVE [<Xn|SP>, <Xm>, LSL #2].  */
> +  AARCH64_OPND_SVE_ADDR_RX_LSL3,    /* SVE [<Xn|SP>, <Xm>, LSL #3].  */
> +  AARCH64_OPND_SVE_ADDR_RZ,	    /* SVE [<Xn|SP>, Zm.D].  */
> +  AARCH64_OPND_SVE_ADDR_RZ_LSL1,    /* SVE [<Xn|SP>, Zm.D, LSL #1].  */
> +  AARCH64_OPND_SVE_ADDR_RZ_LSL2,    /* SVE [<Xn|SP>, Zm.D, LSL #2].  */
> +  AARCH64_OPND_SVE_ADDR_RZ_LSL3,    /* SVE [<Xn|SP>, Zm.D, LSL #3].  */
> +  AARCH64_OPND_SVE_ADDR_RZ_XTW_14,  /* SVE [<Xn|SP>, Zm.<T>, (S|U)XTW].
> +				       Bit 14 controls S/U choice.  */
> +  AARCH64_OPND_SVE_ADDR_RZ_XTW_22,  /* SVE [<Xn|SP>, Zm.<T>, (S|U)XTW].
> +				       Bit 22 controls S/U choice.  */
> +  AARCH64_OPND_SVE_ADDR_RZ_XTW1_14, /* SVE [<Xn|SP>, Zm.<T>, (S|U)XTW #1].
> +				       Bit 14 controls S/U choice.  */
> +  AARCH64_OPND_SVE_ADDR_RZ_XTW1_22, /* SVE [<Xn|SP>, Zm.<T>, (S|U)XTW #1].
> +				       Bit 22 controls S/U choice.  */
> +  AARCH64_OPND_SVE_ADDR_RZ_XTW2_14, /* SVE [<Xn|SP>, Zm.<T>, (S|U)XTW #2].
> +				       Bit 14 controls S/U choice.  */
> +  AARCH64_OPND_SVE_ADDR_RZ_XTW2_22, /* SVE [<Xn|SP>, Zm.<T>, (S|U)XTW #2].
> +				       Bit 22 controls S/U choice.  */
> +  AARCH64_OPND_SVE_ADDR_RZ_XTW3_14, /* SVE [<Xn|SP>, Zm.<T>, (S|U)XTW #3].
> +				       Bit 14 controls S/U choice.  */
> +  AARCH64_OPND_SVE_ADDR_RZ_XTW3_22, /* SVE [<Xn|SP>, Zm.<T>, (S|U)XTW #3].
> +				       Bit 22 controls S/U choice.  */
> +  AARCH64_OPND_SVE_ADDR_ZI_U5,	    /* SVE [Zn.<T>, #<uimm5>].  */
> +  AARCH64_OPND_SVE_ADDR_ZI_U5x2,    /* SVE [Zn.<T>, #<uimm5>*2].  */
> +  AARCH64_OPND_SVE_ADDR_ZI_U5x4,    /* SVE [Zn.<T>, #<uimm5>*4].  */
> +  AARCH64_OPND_SVE_ADDR_ZI_U5x8,    /* SVE [Zn.<T>, #<uimm5>*8].  */
> +  AARCH64_OPND_SVE_ADDR_ZZ_LSL,     /* SVE [Zn.<T>, Zm.<T>, LSL #<msz>].  */
> +  AARCH64_OPND_SVE_ADDR_ZZ_SXTW,    /* SVE [Zn.<T>, Zm.<T>, SXTW #<msz>].  */
> +  AARCH64_OPND_SVE_ADDR_ZZ_UXTW,    /* SVE [Zn.<T>, Zm.<T>, UXTW #<msz>].  */
>    AARCH64_OPND_SVE_PATTERN,	/* SVE vector pattern enumeration.  */
>    AARCH64_OPND_SVE_PATTERN_SCALED, /* Likewise, with additional MUL factor.  */
>    AARCH64_OPND_SVE_PRFOP,	/* SVE prefetch operation.  */
> diff --git a/opcodes/aarch64-asm-2.c b/opcodes/aarch64-asm-2.c
> index 039b9be..47a414c 100644
> --- a/opcodes/aarch64-asm-2.c
> +++ b/opcodes/aarch64-asm-2.c
> @@ -480,21 +480,21 @@ aarch64_insert_operand (const aarch64_operand *self,
>      case 27:
>      case 35:
>      case 36:
> -    case 92:
> -    case 93:
> -    case 94:
> -    case 95:
> -    case 96:
> -    case 97:
> -    case 98:
> -    case 99:
> -    case 100:
> -    case 101:
> -    case 102:
> -    case 103:
> -    case 104:
> -    case 105:
> -    case 108:
> +    case 123:
> +    case 124:
> +    case 125:
> +    case 126:
> +    case 127:
> +    case 128:
> +    case 129:
> +    case 130:
> +    case 131:
> +    case 132:
> +    case 133:
> +    case 134:
> +    case 135:
> +    case 136:
> +    case 139:
>        return aarch64_ins_regno (self, info, code, inst);
>      case 12:
>        return aarch64_ins_reg_extended (self, info, code, inst);
> @@ -531,8 +531,8 @@ aarch64_insert_operand (const aarch64_operand *self,
>      case 68:
>      case 69:
>      case 70:
> -    case 89:
> -    case 91:
> +    case 120:
> +    case 122:
>        return aarch64_ins_imm (self, info, code, inst);
>      case 38:
>      case 39:
> @@ -583,12 +583,50 @@ aarch64_insert_operand (const aarch64_operand *self,
>        return aarch64_ins_prfop (self, info, code, inst);
>      case 88:
>        return aarch64_ins_hint (self, info, code, inst);
> +    case 89:
>      case 90:
> -      return aarch64_ins_sve_scale (self, info, code, inst);
> +    case 91:
> +    case 92:
> +      return aarch64_ins_sve_addr_ri_u6 (self, info, code, inst);
> +    case 93:
> +    case 94:
> +    case 95:
> +    case 96:
> +    case 97:
> +    case 98:
> +    case 99:
> +    case 100:
> +    case 101:
> +    case 102:
> +    case 103:
> +    case 104:
> +      return aarch64_ins_sve_addr_rr_lsl (self, info, code, inst);
> +    case 105:
>      case 106:
> -      return aarch64_ins_sve_index (self, info, code, inst);
>      case 107:
> +    case 108:
>      case 109:
> +    case 110:
> +    case 111:
> +    case 112:
> +      return aarch64_ins_sve_addr_rz_xtw (self, info, code, inst);
> +    case 113:
> +    case 114:
> +    case 115:
> +    case 116:
> +      return aarch64_ins_sve_addr_zi_u5 (self, info, code, inst);
> +    case 117:
> +      return aarch64_ins_sve_addr_zz_lsl (self, info, code, inst);
> +    case 118:
> +      return aarch64_ins_sve_addr_zz_sxtw (self, info, code, inst);
> +    case 119:
> +      return aarch64_ins_sve_addr_zz_uxtw (self, info, code, inst);
> +    case 121:
> +      return aarch64_ins_sve_scale (self, info, code, inst);
> +    case 137:
> +      return aarch64_ins_sve_index (self, info, code, inst);
> +    case 138:
> +    case 140:
>        return aarch64_ins_sve_reglist (self, info, code, inst);
>      default: assert (0); abort ();
>      }
> diff --git a/opcodes/aarch64-asm.c b/opcodes/aarch64-asm.c
> index 117a3c6..0d3b2c7 100644
> --- a/opcodes/aarch64-asm.c
> +++ b/opcodes/aarch64-asm.c
> @@ -745,6 +745,114 @@ aarch64_ins_reg_shifted (const aarch64_operand *self ATTRIBUTE_UNUSED,
>    return NULL;
>  }
>  
> +/* Encode an SVE address [X<n>, #<SVE_imm6> << <shift>], where <SVE_imm6>
> +   is a 6-bit unsigned number and where <shift> is SELF's operand-dependent
> +   value.  fields[0] specifies the base register field.  */
> +const char *
> +aarch64_ins_sve_addr_ri_u6 (const aarch64_operand *self,
> +			    const aarch64_opnd_info *info, aarch64_insn *code,
> +			    const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  int factor = 1 << get_operand_specific_data (self);
> +  insert_field (self->fields[0], code, info->addr.base_regno, 0);
> +  insert_field (FLD_SVE_imm6, code, info->addr.offset.imm / factor, 0);
> +  return NULL;
> +}
> +
> +/* Encode an SVE address [X<n>, X<m>{, LSL #<shift>}], where <shift>
> +   is SELF's operand-dependent value.  fields[0] specifies the base
> +   register field and fields[1] specifies the offset register field.  */
> +const char *
> +aarch64_ins_sve_addr_rr_lsl (const aarch64_operand *self,
> +			     const aarch64_opnd_info *info, aarch64_insn *code,
> +			     const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  insert_field (self->fields[0], code, info->addr.base_regno, 0);
> +  insert_field (self->fields[1], code, info->addr.offset.regno, 0);
> +  return NULL;
> +}
> +
> +/* Encode an SVE address [X<n>, Z<m>.<T>, (S|U)XTW {#<shift>}], where
> +   <shift> is SELF's operand-dependent value.  fields[0] specifies the
> +   base register field, fields[1] specifies the offset register field and
> +   fields[2] is a single-bit field that selects SXTW over UXTW.  */
> +const char *
> +aarch64_ins_sve_addr_rz_xtw (const aarch64_operand *self,
> +			     const aarch64_opnd_info *info, aarch64_insn *code,
> +			     const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  insert_field (self->fields[0], code, info->addr.base_regno, 0);
> +  insert_field (self->fields[1], code, info->addr.offset.regno, 0);
> +  if (info->shifter.kind == AARCH64_MOD_UXTW)
> +    insert_field (self->fields[2], code, 0, 0);
> +  else
> +    insert_field (self->fields[2], code, 1, 0);
> +  return NULL;
> +}
> +
> +/* Encode an SVE address [Z<n>.<T>, #<imm5> << <shift>], where <imm5> is a
> +   5-bit unsigned number and where <shift> is SELF's operand-dependent value.
> +   fields[0] specifies the base register field.  */
> +const char *
> +aarch64_ins_sve_addr_zi_u5 (const aarch64_operand *self,
> +			    const aarch64_opnd_info *info, aarch64_insn *code,
> +			    const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  int factor = 1 << get_operand_specific_data (self);
> +  insert_field (self->fields[0], code, info->addr.base_regno, 0);
> +  insert_field (FLD_imm5, code, info->addr.offset.imm / factor, 0);
> +  return NULL;
> +}
> +
> +/* Encode an SVE address [Z<n>.<T>, Z<m>.<T>{, <modifier> {#<msz>}}],
> +   where <modifier> is fixed by the instruction and where <msz> is a
> +   2-bit unsigned number.  fields[0] specifies the base register field
> +   and fields[1] specifies the offset register field.  */
> +static const char *
> +aarch64_ext_sve_addr_zz (const aarch64_operand *self,
> +			 const aarch64_opnd_info *info, aarch64_insn *code)
> +{
> +  insert_field (self->fields[0], code, info->addr.base_regno, 0);
> +  insert_field (self->fields[1], code, info->addr.offset.regno, 0);
> +  insert_field (FLD_SVE_msz, code, info->shifter.amount, 0);
> +  return NULL;
> +}
> +
> +/* Encode an SVE address [Z<n>.<T>, Z<m>.<T>{, LSL #<msz>}], where
> +   <msz> is a 2-bit unsigned number.  fields[0] specifies the base register
> +   field and fields[1] specifies the offset register field.  */
> +const char *
> +aarch64_ins_sve_addr_zz_lsl (const aarch64_operand *self,
> +			     const aarch64_opnd_info *info, aarch64_insn *code,
> +			     const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  return aarch64_ext_sve_addr_zz (self, info, code);
> +}
> +
> +/* Encode an SVE address [Z<n>.<T>, Z<m>.<T>, SXTW {#<msz>}], where
> +   <msz> is a 2-bit unsigned number.  fields[0] specifies the base register
> +   field and fields[1] specifies the offset register field.  */
> +const char *
> +aarch64_ins_sve_addr_zz_sxtw (const aarch64_operand *self,
> +			      const aarch64_opnd_info *info,
> +			      aarch64_insn *code,
> +			      const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  return aarch64_ext_sve_addr_zz (self, info, code);
> +}
> +
> +/* Encode an SVE address [Z<n>.<T>, Z<m>.<T>, UXTW {#<msz>}], where
> +   <msz> is a 2-bit unsigned number.  fields[0] specifies the base register
> +   field and fields[1] specifies the offset register field.  */
> +const char *
> +aarch64_ins_sve_addr_zz_uxtw (const aarch64_operand *self,
> +			      const aarch64_opnd_info *info,
> +			      aarch64_insn *code,
> +			      const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  return aarch64_ext_sve_addr_zz (self, info, code);
> +}
> +
>  /* Encode Zn[MM], where MM has a 7-bit triangular encoding.  The fields
>     array specifies which field to use for Zn.  MM is encoded in the
>     concatenation of imm5 and SVE_tszh, with imm5 being the less
> diff --git a/opcodes/aarch64-asm.h b/opcodes/aarch64-asm.h
> index ac5faeb..b81cfa1 100644
> --- a/opcodes/aarch64-asm.h
> +++ b/opcodes/aarch64-asm.h
> @@ -69,6 +69,13 @@ AARCH64_DECL_OPD_INSERTER (ins_hint);
>  AARCH64_DECL_OPD_INSERTER (ins_prfop);
>  AARCH64_DECL_OPD_INSERTER (ins_reg_extended);
>  AARCH64_DECL_OPD_INSERTER (ins_reg_shifted);
> +AARCH64_DECL_OPD_INSERTER (ins_sve_addr_ri_u6);
> +AARCH64_DECL_OPD_INSERTER (ins_sve_addr_rr_lsl);
> +AARCH64_DECL_OPD_INSERTER (ins_sve_addr_rz_xtw);
> +AARCH64_DECL_OPD_INSERTER (ins_sve_addr_zi_u5);
> +AARCH64_DECL_OPD_INSERTER (ins_sve_addr_zz_lsl);
> +AARCH64_DECL_OPD_INSERTER (ins_sve_addr_zz_sxtw);
> +AARCH64_DECL_OPD_INSERTER (ins_sve_addr_zz_uxtw);
>  AARCH64_DECL_OPD_INSERTER (ins_sve_index);
>  AARCH64_DECL_OPD_INSERTER (ins_sve_reglist);
>  AARCH64_DECL_OPD_INSERTER (ins_sve_scale);
> diff --git a/opcodes/aarch64-dis-2.c b/opcodes/aarch64-dis-2.c
> index 124385d..3dd714f 100644
> --- a/opcodes/aarch64-dis-2.c
> +++ b/opcodes/aarch64-dis-2.c
> @@ -10426,21 +10426,21 @@ aarch64_extract_operand (const aarch64_operand *self,
>      case 27:
>      case 35:
>      case 36:
> -    case 92:
> -    case 93:
> -    case 94:
> -    case 95:
> -    case 96:
> -    case 97:
> -    case 98:
> -    case 99:
> -    case 100:
> -    case 101:
> -    case 102:
> -    case 103:
> -    case 104:
> -    case 105:
> -    case 108:
> +    case 123:
> +    case 124:
> +    case 125:
> +    case 126:
> +    case 127:
> +    case 128:
> +    case 129:
> +    case 130:
> +    case 131:
> +    case 132:
> +    case 133:
> +    case 134:
> +    case 135:
> +    case 136:
> +    case 139:
>        return aarch64_ext_regno (self, info, code, inst);
>      case 8:
>        return aarch64_ext_regrt_sysins (self, info, code, inst);
> @@ -10482,8 +10482,8 @@ aarch64_extract_operand (const aarch64_operand *self,
>      case 68:
>      case 69:
>      case 70:
> -    case 89:
> -    case 91:
> +    case 120:
> +    case 122:
>        return aarch64_ext_imm (self, info, code, inst);
>      case 38:
>      case 39:
> @@ -10536,12 +10536,50 @@ aarch64_extract_operand (const aarch64_operand *self,
>        return aarch64_ext_prfop (self, info, code, inst);
>      case 88:
>        return aarch64_ext_hint (self, info, code, inst);
> +    case 89:
>      case 90:
> -      return aarch64_ext_sve_scale (self, info, code, inst);
> +    case 91:
> +    case 92:
> +      return aarch64_ext_sve_addr_ri_u6 (self, info, code, inst);
> +    case 93:
> +    case 94:
> +    case 95:
> +    case 96:
> +    case 97:
> +    case 98:
> +    case 99:
> +    case 100:
> +    case 101:
> +    case 102:
> +    case 103:
> +    case 104:
> +      return aarch64_ext_sve_addr_rr_lsl (self, info, code, inst);
> +    case 105:
>      case 106:
> -      return aarch64_ext_sve_index (self, info, code, inst);
>      case 107:
> +    case 108:
>      case 109:
> +    case 110:
> +    case 111:
> +    case 112:
> +      return aarch64_ext_sve_addr_rz_xtw (self, info, code, inst);
> +    case 113:
> +    case 114:
> +    case 115:
> +    case 116:
> +      return aarch64_ext_sve_addr_zi_u5 (self, info, code, inst);
> +    case 117:
> +      return aarch64_ext_sve_addr_zz_lsl (self, info, code, inst);
> +    case 118:
> +      return aarch64_ext_sve_addr_zz_sxtw (self, info, code, inst);
> +    case 119:
> +      return aarch64_ext_sve_addr_zz_uxtw (self, info, code, inst);
> +    case 121:
> +      return aarch64_ext_sve_scale (self, info, code, inst);
> +    case 137:
> +      return aarch64_ext_sve_index (self, info, code, inst);
> +    case 138:
> +    case 140:
>        return aarch64_ext_sve_reglist (self, info, code, inst);
>      default: assert (0); abort ();
>      }
> diff --git a/opcodes/aarch64-dis.c b/opcodes/aarch64-dis.c
> index 1d00c0a..ed77b4d 100644
> --- a/opcodes/aarch64-dis.c
> +++ b/opcodes/aarch64-dis.c
> @@ -1186,6 +1186,152 @@ aarch64_ext_reg_shifted (const aarch64_operand *self ATTRIBUTE_UNUSED,
>    return 1;
>  }
>  
> +/* Decode an SVE address [<base>, #<offset> << <shift>], where <offset>
> +   is given by the OFFSET parameter and where <shift> is SELF's operand-
> +   dependent value.  fields[0] specifies the base register field <base>.  */
> +static int
> +aarch64_ext_sve_addr_reg_imm (const aarch64_operand *self,
> +			      aarch64_opnd_info *info, aarch64_insn code,
> +			      int64_t offset)
> +{
> +  info->addr.base_regno = extract_field (self->fields[0], code, 0);
> +  info->addr.offset.imm = offset * (1 << get_operand_specific_data (self));
> +  info->addr.offset.is_reg = FALSE;
> +  info->addr.writeback = FALSE;
> +  info->addr.preind = TRUE;
> +  info->shifter.operator_present = FALSE;
> +  info->shifter.amount_present = FALSE;
> +  return 1;
> +}
> +
> +/* Decode an SVE address [X<n>, #<SVE_imm6> << <shift>], where <SVE_imm6>
> +   is a 6-bit unsigned number and where <shift> is SELF's operand-dependent
> +   value.  fields[0] specifies the base register field.  */
> +int
> +aarch64_ext_sve_addr_ri_u6 (const aarch64_operand *self,
> +			    aarch64_opnd_info *info, aarch64_insn code,
> +			    const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  int offset = extract_field (FLD_SVE_imm6, code, 0);
> +  return aarch64_ext_sve_addr_reg_imm (self, info, code, offset);
> +}
> +
> +/* Decode an SVE address [X<n>, X<m>{, LSL #<shift>}], where <shift>
> +   is SELF's operand-dependent value.  fields[0] specifies the base
> +   register field and fields[1] specifies the offset register field.  */
> +int
> +aarch64_ext_sve_addr_rr_lsl (const aarch64_operand *self,
> +			     aarch64_opnd_info *info, aarch64_insn code,
> +			     const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  int index;
> +
> +  index = extract_field (self->fields[1], code, 0);
> +  if (index == 31 && (self->flags & OPD_F_NO_ZR) != 0)
> +    return 0;
> +
> +  info->addr.base_regno = extract_field (self->fields[0], code, 0);
> +  info->addr.offset.regno = index;
> +  info->addr.offset.is_reg = TRUE;
> +  info->addr.writeback = FALSE;
> +  info->addr.preind = TRUE;
> +  info->shifter.kind = AARCH64_MOD_LSL;
> +  info->shifter.amount = get_operand_specific_data (self);
> +  info->shifter.operator_present = (info->shifter.amount != 0);
> +  info->shifter.amount_present = (info->shifter.amount != 0);
> +  return 1;
> +}
> +
> +/* Decode an SVE address [X<n>, Z<m>.<T>, (S|U)XTW {#<shift>}], where
> +   <shift> is SELF's operand-dependent value.  fields[0] specifies the
> +   base register field, fields[1] specifies the offset register field and
> +   fields[2] is a single-bit field that selects SXTW over UXTW.  */
> +int
> +aarch64_ext_sve_addr_rz_xtw (const aarch64_operand *self,
> +			     aarch64_opnd_info *info, aarch64_insn code,
> +			     const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  info->addr.base_regno = extract_field (self->fields[0], code, 0);
> +  info->addr.offset.regno = extract_field (self->fields[1], code, 0);
> +  info->addr.offset.is_reg = TRUE;
> +  info->addr.writeback = FALSE;
> +  info->addr.preind = TRUE;
> +  if (extract_field (self->fields[2], code, 0))
> +    info->shifter.kind = AARCH64_MOD_SXTW;
> +  else
> +    info->shifter.kind = AARCH64_MOD_UXTW;
> +  info->shifter.amount = get_operand_specific_data (self);
> +  info->shifter.operator_present = TRUE;
> +  info->shifter.amount_present = (info->shifter.amount != 0);
> +  return 1;
> +}
> +
> +/* Decode an SVE address [Z<n>.<T>, #<imm5> << <shift>], where <imm5> is a
> +   5-bit unsigned number and where <shift> is SELF's operand-dependent value.
> +   fields[0] specifies the base register field.  */
> +int
> +aarch64_ext_sve_addr_zi_u5 (const aarch64_operand *self,
> +			    aarch64_opnd_info *info, aarch64_insn code,
> +			    const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  int offset = extract_field (FLD_imm5, code, 0);
> +  return aarch64_ext_sve_addr_reg_imm (self, info, code, offset);
> +}
> +
> +/* Decode an SVE address [Z<n>.<T>, Z<m>.<T>{, <modifier> {#<msz>}}],
> +   where <modifier> is given by KIND and where <msz> is a 2-bit unsigned
> +   number.  fields[0] specifies the base register field and fields[1]
> +   specifies the offset register field.  */
> +static int
> +aarch64_ext_sve_addr_zz (const aarch64_operand *self, aarch64_opnd_info *info,
> +			 aarch64_insn code, enum aarch64_modifier_kind kind)
> +{
> +  info->addr.base_regno = extract_field (self->fields[0], code, 0);
> +  info->addr.offset.regno = extract_field (self->fields[1], code, 0);
> +  info->addr.offset.is_reg = TRUE;
> +  info->addr.writeback = FALSE;
> +  info->addr.preind = TRUE;
> +  info->shifter.kind = kind;
> +  info->shifter.amount = extract_field (FLD_SVE_msz, code, 0);
> +  info->shifter.operator_present = (kind != AARCH64_MOD_LSL
> +				    || info->shifter.amount != 0);
> +  info->shifter.amount_present = (info->shifter.amount != 0);
> +  return 1;
> +}
> +
> +/* Decode an SVE address [Z<n>.<T>, Z<m>.<T>{, LSL #<msz>}], where
> +   <msz> is a 2-bit unsigned number.  fields[0] specifies the base register
> +   field and fields[1] specifies the offset register field.  */
> +int
> +aarch64_ext_sve_addr_zz_lsl (const aarch64_operand *self,
> +			     aarch64_opnd_info *info, aarch64_insn code,
> +			     const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  return aarch64_ext_sve_addr_zz (self, info, code, AARCH64_MOD_LSL);
> +}
> +
> +/* Decode an SVE address [Z<n>.<T>, Z<m>.<T>, SXTW {#<msz>}], where
> +   <msz> is a 2-bit unsigned number.  fields[0] specifies the base register
> +   field and fields[1] specifies the offset register field.  */
> +int
> +aarch64_ext_sve_addr_zz_sxtw (const aarch64_operand *self,
> +			      aarch64_opnd_info *info, aarch64_insn code,
> +			      const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  return aarch64_ext_sve_addr_zz (self, info, code, AARCH64_MOD_SXTW);
> +}
> +
> +/* Decode an SVE address [Z<n>.<T>, Z<m>.<T>, UXTW {#<msz>}], where
> +   <msz> is a 2-bit unsigned number.  fields[0] specifies the base register
> +   field and fields[1] specifies the offset register field.  */
> +int
> +aarch64_ext_sve_addr_zz_uxtw (const aarch64_operand *self,
> +			      aarch64_opnd_info *info, aarch64_insn code,
> +			      const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  return aarch64_ext_sve_addr_zz (self, info, code, AARCH64_MOD_UXTW);
> +}
> +
>  /* Decode Zn[MM], where MM has a 7-bit triangular encoding.  The fields
>     array specifies which field to use for Zn.  MM is encoded in the
>     concatenation of imm5 and SVE_tszh, with imm5 being the less
> diff --git a/opcodes/aarch64-dis.h b/opcodes/aarch64-dis.h
> index 92f5ad4..0ce2d89 100644
> --- a/opcodes/aarch64-dis.h
> +++ b/opcodes/aarch64-dis.h
> @@ -91,6 +91,13 @@ AARCH64_DECL_OPD_EXTRACTOR (ext_hint);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_prfop);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_reg_extended);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_reg_shifted);
> +AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_ri_u6);
> +AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_rr_lsl);
> +AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_rz_xtw);
> +AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_zi_u5);
> +AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_zz_lsl);
> +AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_zz_sxtw);
> +AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_zz_uxtw);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_sve_index);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_sve_reglist);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_sve_scale);
> diff --git a/opcodes/aarch64-opc-2.c b/opcodes/aarch64-opc-2.c
> index 8f221b8..ed2b70b 100644
> --- a/opcodes/aarch64-opc-2.c
> +++ b/opcodes/aarch64-opc-2.c
> @@ -113,6 +113,37 @@ const struct aarch64_operand aarch64_operands[] =
>    {AARCH64_OPND_CLASS_SYSTEM, "BARRIER_ISB", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {}, "the ISB option name SY or an optional 4-bit unsigned immediate"},
>    {AARCH64_OPND_CLASS_SYSTEM, "PRFOP", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {}, "a prefetch operation specifier"},
>    {AARCH64_OPND_CLASS_SYSTEM, "BARRIER_PSB", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {}, "the PSB option name CSYNC"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RI_U6", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn}, "an address with a 6-bit unsigned offset"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RI_U6x2", 1 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn}, "an address with a 6-bit unsigned offset, multiplied by 2"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RI_U6x4", 2 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn}, "an address with a 6-bit unsigned offset, multiplied by 4"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RI_U6x8", 3 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn}, "an address with a 6-bit unsigned offset, multiplied by 8"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RR", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_Rm}, "an address with a scalar register offset"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RR_LSL1", 1 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_Rm}, "an address with a scalar register offset"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RR_LSL2", 2 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_Rm}, "an address with a scalar register offset"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RR_LSL3", 3 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_Rm}, "an address with a scalar register offset"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RX", (0 << OPD_F_OD_LSB) | OPD_F_NO_ZR | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_Rm}, "an address with a scalar register offset"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RX_LSL1", (1 << OPD_F_OD_LSB) | OPD_F_NO_ZR | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_Rm}, "an address with a scalar register offset"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RX_LSL2", (2 << OPD_F_OD_LSB) | OPD_F_NO_ZR | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_Rm}, "an address with a scalar register offset"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RX_LSL3", (3 << OPD_F_OD_LSB) | OPD_F_NO_ZR | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_Rm}, "an address with a scalar register offset"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RZ", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_SVE_Zm_16}, "an address with a vector register offset"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RZ_LSL1", 1 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_SVE_Zm_16}, "an address with a vector register offset"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RZ_LSL2", 2 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_SVE_Zm_16}, "an address with a vector register offset"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RZ_LSL3", 3 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_SVE_Zm_16}, "an address with a vector register offset"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RZ_XTW_14", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_14}, "an address with a vector register offset"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RZ_XTW_22", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_22}, "an address with a vector register offset"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RZ_XTW1_14", 1 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_14}, "an address with a vector register offset"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RZ_XTW1_22", 1 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_22}, "an address with a vector register offset"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RZ_XTW2_14", 2 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_14}, "an address with a vector register offset"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RZ_XTW2_22", 2 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_22}, "an address with a vector register offset"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RZ_XTW3_14", 3 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_14}, "an address with a vector register offset"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RZ_XTW3_22", 3 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_22}, "an address with a vector register offset"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_ZI_U5", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zn}, "an address with a 5-bit unsigned offset"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_ZI_U5x2", 1 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zn}, "an address with a 5-bit unsigned offset, multiplied by 2"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_ZI_U5x4", 2 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zn}, "an address with a 5-bit unsigned offset, multiplied by 4"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_ZI_U5x8", 3 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zn}, "an address with a 5-bit unsigned offset, multiplied by 8"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_ZZ_LSL", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zn,FLD_SVE_Zm_16}, "an address with a vector register offset"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_ZZ_SXTW", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zn,FLD_SVE_Zm_16}, "an address with a vector register offset"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_ZZ_UXTW", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zn,FLD_SVE_Zm_16}, "an address with a vector register offset"},
>    {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_PATTERN", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_pattern}, "an enumeration value such as POW2"},
>    {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_PATTERN_SCALED", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_pattern}, "an enumeration value such as POW2"},
>    {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_PRFOP", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_prfop}, "an enumeration value such as PLDL1KEEP"},
> diff --git a/opcodes/aarch64-opc.c b/opcodes/aarch64-opc.c
> index 326b94e..6617e28 100644
> --- a/opcodes/aarch64-opc.c
> +++ b/opcodes/aarch64-opc.c
> @@ -280,9 +280,13 @@ const aarch64_field fields[] =
>      {  5,  5 }, /* SVE_Zn: SVE vector register, bits [9,5].  */
>      {  0,  5 }, /* SVE_Zt: SVE vector register, bits [4,0].  */
>      { 16,  4 }, /* SVE_imm4: 4-bit immediate field.  */
> +    { 16,  6 }, /* SVE_imm6: 6-bit immediate field.  */
> +    { 10,  2 }, /* SVE_msz: 2-bit shift amount for ADR.  */
>      {  5,  5 }, /* SVE_pattern: vector pattern enumeration.  */
>      {  0,  4 }, /* SVE_prfop: prefetch operation for SVE PRF[BHWD].  */
>      { 22,  2 }, /* SVE_tszh: triangular size select high, bits [23,22].  */
> +    { 14,  1 }, /* SVE_xs_14: UXTW/SXTW select (bit 14).  */
> +    { 22,  1 }  /* SVE_xs_22: UXTW/SXTW select (bit 22).  */
>  };
>  
>  enum aarch64_operand_class
> @@ -1368,9 +1372,9 @@ operand_general_constraint_met_p (const aarch64_opnd_info *opnds, int idx,
>  				  const aarch64_opcode *opcode,
>  				  aarch64_operand_error *mismatch_detail)
>  {
> -  unsigned num;
> +  unsigned num, modifiers;
>    unsigned char size;
> -  int64_t imm;
> +  int64_t imm, min_value, max_value;
>    const aarch64_opnd_info *opnd = opnds + idx;
>    aarch64_opnd_qualifier_t qualifier = opnd->qualifier;
>  
> @@ -1662,6 +1666,113 @@ operand_general_constraint_met_p (const aarch64_opnd_info *opnds, int idx,
>  	    }
>  	  break;
>  
> +	case AARCH64_OPND_SVE_ADDR_RI_U6:
> +	case AARCH64_OPND_SVE_ADDR_RI_U6x2:
> +	case AARCH64_OPND_SVE_ADDR_RI_U6x4:
> +	case AARCH64_OPND_SVE_ADDR_RI_U6x8:
> +	  min_value = 0;
> +	  max_value = 63;
> +	sve_imm_offset:
> +	  assert (!opnd->addr.offset.is_reg);
> +	  assert (opnd->addr.preind);
> +	  num = 1 << get_operand_specific_data (&aarch64_operands[type]);
> +	  min_value *= num;
> +	  max_value *= num;
> +	  if (opnd->shifter.operator_present
> +	      || opnd->shifter.amount_present)
> +	    {
> +	      set_other_error (mismatch_detail, idx,
> +			       _("invalid addressing mode"));
> +	      return 0;
> +	    }
> +	  if (!value_in_range_p (opnd->addr.offset.imm, min_value, max_value))
> +	    {
> +	      set_offset_out_of_range_error (mismatch_detail, idx,
> +					     min_value, max_value);
> +	      return 0;
> +	    }
> +	  if (!value_aligned_p (opnd->addr.offset.imm, num))
> +	    {
> +	      set_unaligned_error (mismatch_detail, idx, num);
> +	      return 0;
> +	    }
> +	  break;
> +
> +	case AARCH64_OPND_SVE_ADDR_RR:
> +	case AARCH64_OPND_SVE_ADDR_RR_LSL1:
> +	case AARCH64_OPND_SVE_ADDR_RR_LSL2:
> +	case AARCH64_OPND_SVE_ADDR_RR_LSL3:
> +	case AARCH64_OPND_SVE_ADDR_RX:
> +	case AARCH64_OPND_SVE_ADDR_RX_LSL1:
> +	case AARCH64_OPND_SVE_ADDR_RX_LSL2:
> +	case AARCH64_OPND_SVE_ADDR_RX_LSL3:
> +	case AARCH64_OPND_SVE_ADDR_RZ:
> +	case AARCH64_OPND_SVE_ADDR_RZ_LSL1:
> +	case AARCH64_OPND_SVE_ADDR_RZ_LSL2:
> +	case AARCH64_OPND_SVE_ADDR_RZ_LSL3:
> +	  modifiers = 1 << AARCH64_MOD_LSL;
> +	sve_rr_operand:
> +	  assert (opnd->addr.offset.is_reg);
> +	  assert (opnd->addr.preind);
> +	  if ((aarch64_operands[type].flags & OPD_F_NO_ZR) != 0
> +	      && opnd->addr.offset.regno == 31)
> +	    {
> +	      set_other_error (mismatch_detail, idx,
> +			       _("index register xzr is not allowed"));
> +	      return 0;
> +	    }
> +	  if (((1 << opnd->shifter.kind) & modifiers) == 0
> +	      || (opnd->shifter.amount
> +		  != get_operand_specific_data (&aarch64_operands[type])))
> +	    {
> +	      set_other_error (mismatch_detail, idx,
> +			       _("invalid addressing mode"));
> +	      return 0;
> +	    }
> +	  break;
> +
> +	case AARCH64_OPND_SVE_ADDR_RZ_XTW_14:
> +	case AARCH64_OPND_SVE_ADDR_RZ_XTW_22:
> +	case AARCH64_OPND_SVE_ADDR_RZ_XTW1_14:
> +	case AARCH64_OPND_SVE_ADDR_RZ_XTW1_22:
> +	case AARCH64_OPND_SVE_ADDR_RZ_XTW2_14:
> +	case AARCH64_OPND_SVE_ADDR_RZ_XTW2_22:
> +	case AARCH64_OPND_SVE_ADDR_RZ_XTW3_14:
> +	case AARCH64_OPND_SVE_ADDR_RZ_XTW3_22:
> +	  modifiers = (1 << AARCH64_MOD_SXTW) | (1 << AARCH64_MOD_UXTW);
> +	  goto sve_rr_operand;
> +
> +	case AARCH64_OPND_SVE_ADDR_ZI_U5:
> +	case AARCH64_OPND_SVE_ADDR_ZI_U5x2:
> +	case AARCH64_OPND_SVE_ADDR_ZI_U5x4:
> +	case AARCH64_OPND_SVE_ADDR_ZI_U5x8:
> +	  min_value = 0;
> +	  max_value = 31;
> +	  goto sve_imm_offset;
> +
> +	case AARCH64_OPND_SVE_ADDR_ZZ_LSL:
> +	  modifiers = 1 << AARCH64_MOD_LSL;
> +	sve_zz_operand:
> +	  assert (opnd->addr.offset.is_reg);
> +	  assert (opnd->addr.preind);
> +	  if (((1 << opnd->shifter.kind) & modifiers) == 0
> +	      || opnd->shifter.amount < 0
> +	      || opnd->shifter.amount > 3)
> +	    {
> +	      set_other_error (mismatch_detail, idx,
> +			       _("invalid addressing mode"));
> +	      return 0;
> +	    }
> +	  break;
> +
> +	case AARCH64_OPND_SVE_ADDR_ZZ_SXTW:
> +	  modifiers = (1 << AARCH64_MOD_SXTW);
> +	  goto sve_zz_operand;
> +
> +	case AARCH64_OPND_SVE_ADDR_ZZ_UXTW:
> +	  modifiers = 1 << AARCH64_MOD_UXTW;
> +	  goto sve_zz_operand;
> +
>  	default:
>  	  break;
>  	}
> @@ -2330,6 +2441,17 @@ static const char *int_reg[2][2][32] = {
>  #undef R64
>  #undef R32
>  };
> +
> +/* Names of the SVE vector registers, first with .S suffixes,
> +   then with .D suffixes.  */
> +
> +static const char *sve_reg[2][32] = {
> +#define ZS(X) "z" #X ".s"
> +#define ZD(X) "z" #X ".d"
> +  BANK (ZS, ZS (31)), BANK (ZD, ZD (31))
> +#undef ZD
> +#undef ZS
> +};
>  #undef BANK
>  
>  /* Return the integer register name.
> @@ -2373,6 +2495,17 @@ get_offset_int_reg_name (const aarch64_opnd_info *opnd)
>      }
>  }
>  
> +/* Get the name of the SVE vector offset register in OPND, using the operand
> +   qualifier to decide whether the suffix should be .S or .D.  */
> +
> +static inline const char *
> +get_addr_sve_reg_name (int regno, aarch64_opnd_qualifier_t qualifier)
> +{
> +  assert (qualifier == AARCH64_OPND_QLF_S_S
> +	  || qualifier == AARCH64_OPND_QLF_S_D);
> +  return sve_reg[qualifier == AARCH64_OPND_QLF_S_D][regno];
> +}
> +
>  /* Types for expanding an encoded 8-bit value to a floating-point value.  */
>  
>  typedef union
> @@ -2948,18 +3081,65 @@ aarch64_print_operand (char *buf, size_t size, bfd_vma pc,
>        break;
>  
>      case AARCH64_OPND_ADDR_REGOFF:
> +    case AARCH64_OPND_SVE_ADDR_RR:
> +    case AARCH64_OPND_SVE_ADDR_RR_LSL1:
> +    case AARCH64_OPND_SVE_ADDR_RR_LSL2:
> +    case AARCH64_OPND_SVE_ADDR_RR_LSL3:
> +    case AARCH64_OPND_SVE_ADDR_RX:
> +    case AARCH64_OPND_SVE_ADDR_RX_LSL1:
> +    case AARCH64_OPND_SVE_ADDR_RX_LSL2:
> +    case AARCH64_OPND_SVE_ADDR_RX_LSL3:
>        print_register_offset_address
>  	(buf, size, opnd, get_64bit_int_reg_name (opnd->addr.base_regno, 1),
>  	 get_offset_int_reg_name (opnd));
>        break;
>  
> +    case AARCH64_OPND_SVE_ADDR_RZ:
> +    case AARCH64_OPND_SVE_ADDR_RZ_LSL1:
> +    case AARCH64_OPND_SVE_ADDR_RZ_LSL2:
> +    case AARCH64_OPND_SVE_ADDR_RZ_LSL3:
> +    case AARCH64_OPND_SVE_ADDR_RZ_XTW_14:
> +    case AARCH64_OPND_SVE_ADDR_RZ_XTW_22:
> +    case AARCH64_OPND_SVE_ADDR_RZ_XTW1_14:
> +    case AARCH64_OPND_SVE_ADDR_RZ_XTW1_22:
> +    case AARCH64_OPND_SVE_ADDR_RZ_XTW2_14:
> +    case AARCH64_OPND_SVE_ADDR_RZ_XTW2_22:
> +    case AARCH64_OPND_SVE_ADDR_RZ_XTW3_14:
> +    case AARCH64_OPND_SVE_ADDR_RZ_XTW3_22:
> +      print_register_offset_address
> +	(buf, size, opnd, get_64bit_int_reg_name (opnd->addr.base_regno, 1),
> +	 get_addr_sve_reg_name (opnd->addr.offset.regno, opnd->qualifier));
> +      break;
> +
>      case AARCH64_OPND_ADDR_SIMM7:
>      case AARCH64_OPND_ADDR_SIMM9:
>      case AARCH64_OPND_ADDR_SIMM9_2:
> +    case AARCH64_OPND_SVE_ADDR_RI_U6:
> +    case AARCH64_OPND_SVE_ADDR_RI_U6x2:
> +    case AARCH64_OPND_SVE_ADDR_RI_U6x4:
> +    case AARCH64_OPND_SVE_ADDR_RI_U6x8:
>        print_immediate_offset_address
>  	(buf, size, opnd, get_64bit_int_reg_name (opnd->addr.base_regno, 1));
>        break;
>  
> +    case AARCH64_OPND_SVE_ADDR_ZI_U5:
> +    case AARCH64_OPND_SVE_ADDR_ZI_U5x2:
> +    case AARCH64_OPND_SVE_ADDR_ZI_U5x4:
> +    case AARCH64_OPND_SVE_ADDR_ZI_U5x8:
> +      print_immediate_offset_address
> +	(buf, size, opnd,
> +	 get_addr_sve_reg_name (opnd->addr.base_regno, opnd->qualifier));
> +      break;
> +
> +    case AARCH64_OPND_SVE_ADDR_ZZ_LSL:
> +    case AARCH64_OPND_SVE_ADDR_ZZ_SXTW:
> +    case AARCH64_OPND_SVE_ADDR_ZZ_UXTW:
> +      print_register_offset_address
> +	(buf, size, opnd,
> +	 get_addr_sve_reg_name (opnd->addr.base_regno, opnd->qualifier),
> +	 get_addr_sve_reg_name (opnd->addr.offset.regno, opnd->qualifier));
> +      break;
> +
>      case AARCH64_OPND_ADDR_UIMM12:
>        name = get_64bit_int_reg_name (opnd->addr.base_regno, 1);
>        if (opnd->addr.offset.imm)
> diff --git a/opcodes/aarch64-opc.h b/opcodes/aarch64-opc.h
> index 3406f6e..e823146 100644
> --- a/opcodes/aarch64-opc.h
> +++ b/opcodes/aarch64-opc.h
> @@ -107,9 +107,13 @@ enum aarch64_field_kind
>    FLD_SVE_Zn,
>    FLD_SVE_Zt,
>    FLD_SVE_imm4,
> +  FLD_SVE_imm6,
> +  FLD_SVE_msz,
>    FLD_SVE_pattern,
>    FLD_SVE_prfop,
>    FLD_SVE_tszh,
> +  FLD_SVE_xs_14,
> +  FLD_SVE_xs_22,
>  };
>  
>  /* Field description.  */
> @@ -156,6 +160,9 @@ extern const aarch64_operand aarch64_operands[];
>  						   value by 2 to get the value
>  						   of an immediate operand.  */
>  #define OPD_F_MAYBE_SP		0x00000010	/* May potentially be SP.  */
> +#define OPD_F_OD_MASK		0x00000060	/* Operand-dependent data.  */
> +#define OPD_F_OD_LSB		5
> +#define OPD_F_NO_ZR		0x00000080	/* ZR index not allowed.  */
>  
>  static inline bfd_boolean
>  operand_has_inserter (const aarch64_operand *operand)
> @@ -187,6 +194,13 @@ operand_maybe_stack_pointer (const aarch64_operand *operand)
>    return (operand->flags & OPD_F_MAYBE_SP) ? TRUE : FALSE;
>  }
>  
> +/* Return the value of the operand-specific data field (OPD_F_OD_MASK).  */
> +static inline unsigned int
> +get_operand_specific_data (const aarch64_operand *operand)
> +{
> +  return (operand->flags & OPD_F_OD_MASK) >> OPD_F_OD_LSB;
> +}
> +
>  /* Return the total width of the operand *OPERAND.  */
>  static inline unsigned
>  get_operand_fields_width (const aarch64_operand *operand)
> diff --git a/opcodes/aarch64-tbl.h b/opcodes/aarch64-tbl.h
> index 491235f..ef32e19 100644
> --- a/opcodes/aarch64-tbl.h
> +++ b/opcodes/aarch64-tbl.h
> @@ -2820,6 +2820,93 @@ struct aarch64_opcode aarch64_opcode_table[] =
>        "a prefetch operation specifier")					\
>      Y (SYSTEM, hint, "BARRIER_PSB", 0, F (),				\
>        "the PSB option name CSYNC")					\
> +    Y(ADDRESS, sve_addr_ri_u6, "SVE_ADDR_RI_U6", 0 << OPD_F_OD_LSB,	\
> +      F(FLD_Rn), "an address with a 6-bit unsigned offset")		\
> +    Y(ADDRESS, sve_addr_ri_u6, "SVE_ADDR_RI_U6x2", 1 << OPD_F_OD_LSB,	\
> +      F(FLD_Rn),							\
> +      "an address with a 6-bit unsigned offset, multiplied by 2")	\
> +    Y(ADDRESS, sve_addr_ri_u6, "SVE_ADDR_RI_U6x4", 2 << OPD_F_OD_LSB,	\
> +      F(FLD_Rn),							\
> +      "an address with a 6-bit unsigned offset, multiplied by 4")	\
> +    Y(ADDRESS, sve_addr_ri_u6, "SVE_ADDR_RI_U6x8", 3 << OPD_F_OD_LSB,	\
> +      F(FLD_Rn),							\
> +      "an address with a 6-bit unsigned offset, multiplied by 8")	\
> +    Y(ADDRESS, sve_addr_rr_lsl, "SVE_ADDR_RR", 0 << OPD_F_OD_LSB,	\
> +      F(FLD_Rn,FLD_Rm), "an address with a scalar register offset")	\
> +    Y(ADDRESS, sve_addr_rr_lsl, "SVE_ADDR_RR_LSL1", 1 << OPD_F_OD_LSB,	\
> +      F(FLD_Rn,FLD_Rm), "an address with a scalar register offset")	\
> +    Y(ADDRESS, sve_addr_rr_lsl, "SVE_ADDR_RR_LSL2", 2 << OPD_F_OD_LSB,	\
> +      F(FLD_Rn,FLD_Rm), "an address with a scalar register offset")	\
> +    Y(ADDRESS, sve_addr_rr_lsl, "SVE_ADDR_RR_LSL3", 3 << OPD_F_OD_LSB,	\
> +      F(FLD_Rn,FLD_Rm), "an address with a scalar register offset")	\
> +    Y(ADDRESS, sve_addr_rr_lsl, "SVE_ADDR_RX",				\
> +      (0 << OPD_F_OD_LSB) | OPD_F_NO_ZR, F(FLD_Rn,FLD_Rm),		\
> +      "an address with a scalar register offset")			\
> +    Y(ADDRESS, sve_addr_rr_lsl, "SVE_ADDR_RX_LSL1",			\
> +      (1 << OPD_F_OD_LSB) | OPD_F_NO_ZR, F(FLD_Rn,FLD_Rm),		\
> +      "an address with a scalar register offset")			\
> +    Y(ADDRESS, sve_addr_rr_lsl, "SVE_ADDR_RX_LSL2",			\
> +      (2 << OPD_F_OD_LSB) | OPD_F_NO_ZR, F(FLD_Rn,FLD_Rm),		\
> +      "an address with a scalar register offset")			\
> +    Y(ADDRESS, sve_addr_rr_lsl, "SVE_ADDR_RX_LSL3",			\
> +      (3 << OPD_F_OD_LSB) | OPD_F_NO_ZR, F(FLD_Rn,FLD_Rm),		\
> +      "an address with a scalar register offset")			\
> +    Y(ADDRESS, sve_addr_rr_lsl, "SVE_ADDR_RZ", 0 << OPD_F_OD_LSB,	\
> +      F(FLD_Rn,FLD_SVE_Zm_16),						\
> +      "an address with a vector register offset")			\
> +    Y(ADDRESS, sve_addr_rr_lsl, "SVE_ADDR_RZ_LSL1", 1 << OPD_F_OD_LSB,	\
> +      F(FLD_Rn,FLD_SVE_Zm_16),						\
> +      "an address with a vector register offset")			\
> +    Y(ADDRESS, sve_addr_rr_lsl, "SVE_ADDR_RZ_LSL2", 2 << OPD_F_OD_LSB,	\
> +      F(FLD_Rn,FLD_SVE_Zm_16),						\
> +      "an address with a vector register offset")			\
> +    Y(ADDRESS, sve_addr_rr_lsl, "SVE_ADDR_RZ_LSL3", 3 << OPD_F_OD_LSB,	\
> +      F(FLD_Rn,FLD_SVE_Zm_16),						\
> +      "an address with a vector register offset")			\
> +    Y(ADDRESS, sve_addr_rz_xtw, "SVE_ADDR_RZ_XTW_14",			\
> +      0 << OPD_F_OD_LSB, F(FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_14),		\
> +      "an address with a vector register offset")			\
> +    Y(ADDRESS, sve_addr_rz_xtw, "SVE_ADDR_RZ_XTW_22",			\
> +      0 << OPD_F_OD_LSB, F(FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_22),		\
> +      "an address with a vector register offset")			\
> +    Y(ADDRESS, sve_addr_rz_xtw, "SVE_ADDR_RZ_XTW1_14",			\
> +      1 << OPD_F_OD_LSB, F(FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_14),		\
> +      "an address with a vector register offset")			\
> +    Y(ADDRESS, sve_addr_rz_xtw, "SVE_ADDR_RZ_XTW1_22",			\
> +      1 << OPD_F_OD_LSB, F(FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_22),		\
> +      "an address with a vector register offset")			\
> +    Y(ADDRESS, sve_addr_rz_xtw, "SVE_ADDR_RZ_XTW2_14",			\
> +      2 << OPD_F_OD_LSB, F(FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_14),		\
> +      "an address with a vector register offset")			\
> +    Y(ADDRESS, sve_addr_rz_xtw, "SVE_ADDR_RZ_XTW2_22",			\
> +      2 << OPD_F_OD_LSB, F(FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_22),		\
> +      "an address with a vector register offset")			\
> +    Y(ADDRESS, sve_addr_rz_xtw, "SVE_ADDR_RZ_XTW3_14",			\
> +      3 << OPD_F_OD_LSB, F(FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_14),		\
> +      "an address with a vector register offset")			\
> +    Y(ADDRESS, sve_addr_rz_xtw, "SVE_ADDR_RZ_XTW3_22",			\
> +      3 << OPD_F_OD_LSB, F(FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_22),		\
> +      "an address with a vector register offset")			\
> +    Y(ADDRESS, sve_addr_zi_u5, "SVE_ADDR_ZI_U5", 0 << OPD_F_OD_LSB,	\
> +      F(FLD_SVE_Zn), "an address with a 5-bit unsigned offset")		\
> +    Y(ADDRESS, sve_addr_zi_u5, "SVE_ADDR_ZI_U5x2", 1 << OPD_F_OD_LSB,	\
> +      F(FLD_SVE_Zn),							\
> +      "an address with a 5-bit unsigned offset, multiplied by 2")	\
> +    Y(ADDRESS, sve_addr_zi_u5, "SVE_ADDR_ZI_U5x4", 2 << OPD_F_OD_LSB,	\
> +      F(FLD_SVE_Zn),							\
> +      "an address with a 5-bit unsigned offset, multiplied by 4")	\
> +    Y(ADDRESS, sve_addr_zi_u5, "SVE_ADDR_ZI_U5x8", 3 << OPD_F_OD_LSB,	\
> +      F(FLD_SVE_Zn),							\
> +      "an address with a 5-bit unsigned offset, multiplied by 8")	\
> +    Y(ADDRESS, sve_addr_zz_lsl, "SVE_ADDR_ZZ_LSL", 0,			\
> +      F(FLD_SVE_Zn,FLD_SVE_Zm_16),					\
> +      "an address with a vector register offset")			\
> +    Y(ADDRESS, sve_addr_zz_sxtw, "SVE_ADDR_ZZ_SXTW", 0,			\
> +      F(FLD_SVE_Zn,FLD_SVE_Zm_16),					\
> +      "an address with a vector register offset")			\
> +    Y(ADDRESS, sve_addr_zz_uxtw, "SVE_ADDR_ZZ_UXTW", 0,			\
> +      F(FLD_SVE_Zn,FLD_SVE_Zm_16),					\
> +      "an address with a vector register offset")			\
>      Y(IMMEDIATE, imm, "SVE_PATTERN", 0, F(FLD_SVE_pattern),		\
>        "an enumeration value such as POW2")				\
>      Y(IMMEDIATE, sve_scale, "SVE_PATTERN_SCALED", 0,			\
> 

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [AArch64][SVE 26/32] Add SVE MUL VL addressing modes
  2016-08-23  9:23 ` [AArch64][SVE 26/32] Add SVE MUL VL addressing modes Richard Sandiford
@ 2016-08-25 14:44   ` Richard Earnshaw (lists)
  2016-09-16 12:10     ` Richard Sandiford
  0 siblings, 1 reply; 76+ messages in thread
From: Richard Earnshaw (lists) @ 2016-08-25 14:44 UTC (permalink / raw)
  To: binutils, richard.sandiford

On 23/08/16 10:23, Richard Sandiford wrote:
> This patch adds support for addresses of the form:
> 
>    [<base>, #<offset>, MUL VL]
> 
> This involves adding a new AARCH64_MOD_MUL_VL modifier, which is
> why I split it out from the other addressing modes.
> 
> For LD2, LD3 and LD4, the offset must be a multiple of the structure
> size, so for LD3 the possible values are 0, 3, 6, ....  The patch
> therefore extends value_aligned_p to handle non-power-of-2 alignments.
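
(A minimal sketch of the kind of multiple-of check this implies -- not the
patch's exact code -- assuming value_aligned_p in opcodes/aarch64-opc.c takes
the value and the required multiple:

    /* Return non-zero if VALUE is a multiple of ALIGN.  Using the
       remainder rather than a bit mask also copes with alignments that
       are not powers of two, such as the 3-element structure size
       used by LD3.  */
    static int
    value_aligned_p (int64_t value, int align)
    {
      return (value % align) == 0;
    }

With align == 3, the LD3 case above, this accepts 0, 3, 6, ... and rejects
everything in between.)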
> 
> OK to install?
> 
> Thanks,
> Richard
> 
> 
> include/opcode/
> 	* aarch64.h (AARCH64_OPND_SVE_ADDR_RI_S4xVL): New aarch64_opnd.
> 	(AARCH64_OPND_SVE_ADDR_RI_S4x2xVL, AARCH64_OPND_SVE_ADDR_RI_S4x3xVL)
> 	(AARCH64_OPND_SVE_ADDR_RI_S4x4xVL, AARCH64_OPND_SVE_ADDR_RI_S6xVL)
> 	(AARCH64_OPND_SVE_ADDR_RI_S9xVL): Likewise.
> 	(AARCH64_MOD_MUL_VL): New aarch64_modifier_kind.
> 
> opcodes/
> 	* aarch64-tbl.h (AARCH64_OPERANDS): Add entries for new MUL VL
> 	operands.
> 	* aarch64-opc.c (aarch64_operand_modifiers): Initialize
> 	the AARCH64_MOD_MUL_VL entry.
> 	(value_aligned_p): Cope with non-power-of-two alignments.
> 	(operand_general_constraint_met_p): Handle the new MUL VL addresses.
> 	(print_immediate_offset_address): Likewise.
> 	(aarch64_print_operand): Likewise.
> 	* aarch64-opc-2.c: Regenerate.
> 	* aarch64-asm.h (ins_sve_addr_ri_s4xvl, ins_sve_addr_ri_s6xvl)
> 	(ins_sve_addr_ri_s9xvl): New inserters.
> 	* aarch64-asm.c (aarch64_ins_sve_addr_ri_s4xvl): New function.
> 	(aarch64_ins_sve_addr_ri_s6xvl): Likewise.
> 	(aarch64_ins_sve_addr_ri_s9xvl): Likewise.
> 	* aarch64-asm-2.c: Regenerate.
> 	* aarch64-dis.h (ext_sve_addr_ri_s4xvl, ext_sve_addr_ri_s6xvl)
> 	(ext_sve_addr_ri_s9xvl): New extractors.
> 	* aarch64-dis.c (aarch64_ext_sve_addr_reg_mul_vl): New function.
> 	(aarch64_ext_sve_addr_ri_s4xvl): Likewise.
> 	(aarch64_ext_sve_addr_ri_s6xvl): Likewise.
> 	(aarch64_ext_sve_addr_ri_s9xvl): Likewise.
> 	* aarch64-dis-2.c: Regenerate.
> 
> gas/
> 	* config/tc-aarch64.c (SHIFTED_MUL_VL): New parse_shift_mode.
> 	(parse_shift): Handle it.
> 	(parse_address_main): Handle the new MUL VL addresses.
> 	(parse_operands): Likewise.
> 

OK.

R.

> diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
> index f9d89ce..37fce5b 100644
> --- a/gas/config/tc-aarch64.c
> +++ b/gas/config/tc-aarch64.c
> @@ -2949,6 +2949,7 @@ enum parse_shift_mode
>    SHIFTED_LSL,			/* bare "lsl #n"  */
>    SHIFTED_MUL,			/* bare "mul #n"  */
>    SHIFTED_LSL_MSL,		/* "lsl|msl #n"  */
> +  SHIFTED_MUL_VL,		/* "mul vl"  */
>    SHIFTED_REG_OFFSET		/* [su]xtw|sxtx {#n} or lsl #n  */
>  };
>  
> @@ -2990,7 +2991,8 @@ parse_shift (char **str, aarch64_opnd_info *operand, enum parse_shift_mode mode)
>      }
>  
>    if (kind == AARCH64_MOD_MUL
> -      && mode != SHIFTED_MUL)
> +      && mode != SHIFTED_MUL
> +      && mode != SHIFTED_MUL_VL)
>      {
>        set_syntax_error (_("invalid use of 'MUL'"));
>        return FALSE;
> @@ -3030,6 +3032,20 @@ parse_shift (char **str, aarch64_opnd_info *operand, enum parse_shift_mode mode)
>  	}
>        break;
>  
> +    case SHIFTED_MUL_VL:
> +      if (kind == AARCH64_MOD_MUL)
> +	{
> +	  skip_whitespace (p);
> +	  if (strncasecmp (p, "vl", 2) == 0 && !ISALPHA (p[2]))
> +	    {
> +	      p += 2;
> +	      kind = AARCH64_MOD_MUL_VL;
> +	      break;
> +	    }
> +	}
> +      set_syntax_error (_("only 'MUL VL' is permitted"));
> +      return FALSE;
> +
>      case SHIFTED_REG_OFFSET:
>        if (kind != AARCH64_MOD_UXTW && kind != AARCH64_MOD_LSL
>  	  && kind != AARCH64_MOD_SXTW && kind != AARCH64_MOD_SXTX)
> @@ -3057,7 +3073,7 @@ parse_shift (char **str, aarch64_opnd_info *operand, enum parse_shift_mode mode)
>  
>    /* Parse shift amount.  */
>    exp_has_prefix = 0;
> -  if (mode == SHIFTED_REG_OFFSET && *p == ']')
> +  if ((mode == SHIFTED_REG_OFFSET && *p == ']') || kind == AARCH64_MOD_MUL_VL)
>      exp.X_op = O_absent;
>    else
>      {
> @@ -3068,7 +3084,11 @@ parse_shift (char **str, aarch64_opnd_info *operand, enum parse_shift_mode mode)
>  	}
>        my_get_expression (&exp, &p, GE_NO_PREFIX, 0);
>      }
> -  if (exp.X_op == O_absent)
> +  if (kind == AARCH64_MOD_MUL_VL)
> +    /* For consistency, give MUL VL the same shift amount as an implicit
> +       MUL #1.  */
> +    operand->shifter.amount = 1;
> +  else if (exp.X_op == O_absent)
>      {
>        if (aarch64_extend_operator_p (kind) == FALSE || exp_has_prefix)
>  	{
> @@ -3289,6 +3309,7 @@ parse_shifter_operand_reloc (char **str, aarch64_opnd_info *operand,
>     PC-relative (literal)
>       label
>     SVE:
> +     [base,#imm,MUL VL]
>       [base,Zm.D{,LSL #imm}]
>       [base,Zm.S,(S|U)XTW {#imm}]
>       [base,Zm.D,(S|U)XTW {#imm}] // ignores top 32 bits of Zm.D elements
> @@ -3334,7 +3355,9 @@ parse_shifter_operand_reloc (char **str, aarch64_opnd_info *operand,
>     Likewise ACCEPT_SVE says whether the SVE addressing modes should be
>     accepted.  We use context-dependent parsing for this case because
>     (for compatibility) we should accept symbolic constants like z0 and
> -   z0.s in base AArch64 code.
> +   z0.s in base AArch64 code.  Also, the error message "only 'MUL VL'
> +   is permitted" is likely to be confusing in non-SVE addresses, where
> +   no immediate modifiers are permitted.
>  
>     In all other cases, it is the caller's responsibility to check whether
>     the addressing mode is supported by the instruction.  It is also the
> @@ -3544,6 +3567,10 @@ parse_address_main (char **str, aarch64_opnd_info *operand,
>  		  set_syntax_error (_("constant offset required"));
>  		  return FALSE;
>  		}
> +	      if (accept_sve && skip_past_comma (&p))
> +		/* [Xn,<expr>,MUL VL]  */
> +		if (! parse_shift (&p, operand, SHIFTED_MUL_VL))
> +		  return FALSE;
>  	    }
>  	}
>      }
> @@ -3619,9 +3646,9 @@ parse_address_main (char **str, aarch64_opnd_info *operand,
>  }
>  
>  /* Parse a base AArch64 address, i.e. one that cannot contain SVE base
> -   registers or SVE offset registers.  Do not allow relocation operators.
> -   Look for and parse "[Xn], (Xm|#m)" as post-indexed addressing
> -   if ACCEPT_REG_POST_INDEX is true.
> +   registers, SVE offset registers, or MUL VL.  Do not allow relocation
> +   operators.  Look for and parse "[Xn], (Xm|#m)" as post-indexed
> +   addressing if ACCEPT_REG_POST_INDEX is true.
>  
>     Return TRUE on success.  */
>  static bfd_boolean
> @@ -3634,8 +3661,8 @@ parse_address (char **str, aarch64_opnd_info *operand,
>  }
>  
>  /* Parse a base AArch64 address, i.e. one that cannot contain SVE base
> -   registers or SVE offset registers.  Allow relocation operators but
> -   disallow post-indexed addressing.
> +   registers, SVE offset registers, or MUL VL.  Allow relocation operators
> +   but disallow post-indexed addressing.
>  
>     Return TRUE on success.  */
>  static bfd_boolean
> @@ -3646,7 +3673,7 @@ parse_address_reloc (char **str, aarch64_opnd_info *operand)
>  			     TRUE, FALSE, FALSE);
>  }
>  
> -/* Parse an address in which SVE vector registers are allowed.
> +/* Parse an address in which SVE vector registers and MUL VL are allowed.
>     The arguments have the same meaning as for parse_address_main.
>     Return TRUE on success.  */
>  static bfd_boolean
> @@ -5980,11 +6007,18 @@ parse_operands (char *str, const aarch64_opcode *opcode)
>  	  /* No qualifier.  */
>  	  break;
>  
> +	case AARCH64_OPND_SVE_ADDR_RI_S4xVL:
> +	case AARCH64_OPND_SVE_ADDR_RI_S4x2xVL:
> +	case AARCH64_OPND_SVE_ADDR_RI_S4x3xVL:
> +	case AARCH64_OPND_SVE_ADDR_RI_S4x4xVL:
> +	case AARCH64_OPND_SVE_ADDR_RI_S6xVL:
> +	case AARCH64_OPND_SVE_ADDR_RI_S9xVL:
>  	case AARCH64_OPND_SVE_ADDR_RI_U6:
>  	case AARCH64_OPND_SVE_ADDR_RI_U6x2:
>  	case AARCH64_OPND_SVE_ADDR_RI_U6x4:
>  	case AARCH64_OPND_SVE_ADDR_RI_U6x8:
> -	  /* [X<n>{, #imm}]
> +	  /* [X<n>{, #imm, MUL VL}]
> +	     [X<n>{, #imm}]
>  	     but recognizing SVE registers.  */
>  	  po_misc_or_fail (parse_sve_address (&str, info, &base_qualifier,
>  					      &offset_qualifier));
> diff --git a/include/opcode/aarch64.h b/include/opcode/aarch64.h
> index e61ac9c..837d6bd 100644
> --- a/include/opcode/aarch64.h
> +++ b/include/opcode/aarch64.h
> @@ -244,6 +244,12 @@ enum aarch64_opnd
>    AARCH64_OPND_PRFOP,		/* Prefetch operation.  */
>    AARCH64_OPND_BARRIER_PSB,	/* Barrier operand for PSB.  */
>  
> +  AARCH64_OPND_SVE_ADDR_RI_S4xVL,   /* SVE [<Xn|SP>, #<simm4>, MUL VL].  */
> +  AARCH64_OPND_SVE_ADDR_RI_S4x2xVL, /* SVE [<Xn|SP>, #<simm4>*2, MUL VL].  */
> +  AARCH64_OPND_SVE_ADDR_RI_S4x3xVL, /* SVE [<Xn|SP>, #<simm4>*3, MUL VL].  */
> +  AARCH64_OPND_SVE_ADDR_RI_S4x4xVL, /* SVE [<Xn|SP>, #<simm4>*4, MUL VL].  */
> +  AARCH64_OPND_SVE_ADDR_RI_S6xVL,   /* SVE [<Xn|SP>, #<simm6>, MUL VL].  */
> +  AARCH64_OPND_SVE_ADDR_RI_S9xVL,   /* SVE [<Xn|SP>, #<simm9>, MUL VL].  */
>    AARCH64_OPND_SVE_ADDR_RI_U6,	    /* SVE [<Xn|SP>, #<uimm6>].  */
>    AARCH64_OPND_SVE_ADDR_RI_U6x2,    /* SVE [<Xn|SP>, #<uimm6>*2].  */
>    AARCH64_OPND_SVE_ADDR_RI_U6x4,    /* SVE [<Xn|SP>, #<uimm6>*4].  */
> @@ -786,6 +792,7 @@ enum aarch64_modifier_kind
>    AARCH64_MOD_SXTW,
>    AARCH64_MOD_SXTX,
>    AARCH64_MOD_MUL,
> +  AARCH64_MOD_MUL_VL,
>  };
>  
>  bfd_boolean
> diff --git a/opcodes/aarch64-asm-2.c b/opcodes/aarch64-asm-2.c
> index 47a414c..da590ca 100644
> --- a/opcodes/aarch64-asm-2.c
> +++ b/opcodes/aarch64-asm-2.c
> @@ -480,12 +480,6 @@ aarch64_insert_operand (const aarch64_operand *self,
>      case 27:
>      case 35:
>      case 36:
> -    case 123:
> -    case 124:
> -    case 125:
> -    case 126:
> -    case 127:
> -    case 128:
>      case 129:
>      case 130:
>      case 131:
> @@ -494,7 +488,13 @@ aarch64_insert_operand (const aarch64_operand *self,
>      case 134:
>      case 135:
>      case 136:
> +    case 137:
> +    case 138:
>      case 139:
> +    case 140:
> +    case 141:
> +    case 142:
> +    case 145:
>        return aarch64_ins_regno (self, info, code, inst);
>      case 12:
>        return aarch64_ins_reg_extended (self, info, code, inst);
> @@ -531,8 +531,8 @@ aarch64_insert_operand (const aarch64_operand *self,
>      case 68:
>      case 69:
>      case 70:
> -    case 120:
> -    case 122:
> +    case 126:
> +    case 128:
>        return aarch64_ins_imm (self, info, code, inst);
>      case 38:
>      case 39:
> @@ -587,46 +587,55 @@ aarch64_insert_operand (const aarch64_operand *self,
>      case 90:
>      case 91:
>      case 92:
> -      return aarch64_ins_sve_addr_ri_u6 (self, info, code, inst);
> +      return aarch64_ins_sve_addr_ri_s4xvl (self, info, code, inst);
>      case 93:
> +      return aarch64_ins_sve_addr_ri_s6xvl (self, info, code, inst);
>      case 94:
> +      return aarch64_ins_sve_addr_ri_s9xvl (self, info, code, inst);
>      case 95:
>      case 96:
>      case 97:
>      case 98:
> +      return aarch64_ins_sve_addr_ri_u6 (self, info, code, inst);
>      case 99:
>      case 100:
>      case 101:
>      case 102:
>      case 103:
>      case 104:
> -      return aarch64_ins_sve_addr_rr_lsl (self, info, code, inst);
>      case 105:
>      case 106:
>      case 107:
>      case 108:
>      case 109:
>      case 110:
> +      return aarch64_ins_sve_addr_rr_lsl (self, info, code, inst);
>      case 111:
>      case 112:
> -      return aarch64_ins_sve_addr_rz_xtw (self, info, code, inst);
>      case 113:
>      case 114:
>      case 115:
>      case 116:
> -      return aarch64_ins_sve_addr_zi_u5 (self, info, code, inst);
>      case 117:
> -      return aarch64_ins_sve_addr_zz_lsl (self, info, code, inst);
>      case 118:
> -      return aarch64_ins_sve_addr_zz_sxtw (self, info, code, inst);
> +      return aarch64_ins_sve_addr_rz_xtw (self, info, code, inst);
>      case 119:
> -      return aarch64_ins_sve_addr_zz_uxtw (self, info, code, inst);
> +    case 120:
>      case 121:
> +    case 122:
> +      return aarch64_ins_sve_addr_zi_u5 (self, info, code, inst);
> +    case 123:
> +      return aarch64_ins_sve_addr_zz_lsl (self, info, code, inst);
> +    case 124:
> +      return aarch64_ins_sve_addr_zz_sxtw (self, info, code, inst);
> +    case 125:
> +      return aarch64_ins_sve_addr_zz_uxtw (self, info, code, inst);
> +    case 127:
>        return aarch64_ins_sve_scale (self, info, code, inst);
> -    case 137:
> +    case 143:
>        return aarch64_ins_sve_index (self, info, code, inst);
> -    case 138:
> -    case 140:
> +    case 144:
> +    case 146:
>        return aarch64_ins_sve_reglist (self, info, code, inst);
>      default: assert (0); abort ();
>      }
> diff --git a/opcodes/aarch64-asm.c b/opcodes/aarch64-asm.c
> index 0d3b2c7..944a9eb 100644
> --- a/opcodes/aarch64-asm.c
> +++ b/opcodes/aarch64-asm.c
> @@ -745,6 +745,56 @@ aarch64_ins_reg_shifted (const aarch64_operand *self ATTRIBUTE_UNUSED,
>    return NULL;
>  }
>  
> +/* Encode an SVE address [<base>, #<simm4>*<factor>, MUL VL],
> +   where <simm4> is a 4-bit signed value and where <factor> is 1 plus
> +   SELF's operand-dependent value.  fields[0] specifies the field that
> +   holds <base>.  <simm4> is encoded in the SVE_imm4 field.  */
> +const char *
> +aarch64_ins_sve_addr_ri_s4xvl (const aarch64_operand *self,
> +			       const aarch64_opnd_info *info,
> +			       aarch64_insn *code,
> +			       const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  int factor = 1 + get_operand_specific_data (self);
> +  insert_field (self->fields[0], code, info->addr.base_regno, 0);
> +  insert_field (FLD_SVE_imm4, code, info->addr.offset.imm / factor, 0);
> +  return NULL;
> +}
> +
> +/* Encode an SVE address [<base>, #<simm6>*<factor>, MUL VL],
> +   where <simm6> is a 6-bit signed value and where <factor> is 1 plus
> +   SELF's operand-dependent value.  fields[0] specifies the field that
> +   holds <base>.  <simm6> is encoded in the SVE_imm6 field.  */
> +const char *
> +aarch64_ins_sve_addr_ri_s6xvl (const aarch64_operand *self,
> +			       const aarch64_opnd_info *info,
> +			       aarch64_insn *code,
> +			       const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  int factor = 1 + get_operand_specific_data (self);
> +  insert_field (self->fields[0], code, info->addr.base_regno, 0);
> +  insert_field (FLD_SVE_imm6, code, info->addr.offset.imm / factor, 0);
> +  return NULL;
> +}
> +
> +/* Encode an SVE address [<base>, #<simm9>*<factor>, MUL VL],
> +   where <simm9> is a 9-bit signed value and where <factor> is 1 plus
> +   SELF's operand-dependent value.  fields[0] specifies the field that
> +   holds <base>.  <simm9> is encoded in the concatenation of the SVE_imm6
> +   and imm3 fields, with imm3 being the less-significant part.  */
> +const char *
> +aarch64_ins_sve_addr_ri_s9xvl (const aarch64_operand *self,
> +			       const aarch64_opnd_info *info,
> +			       aarch64_insn *code,
> +			       const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  int factor = 1 + get_operand_specific_data (self);
> +  insert_field (self->fields[0], code, info->addr.base_regno, 0);
> +  insert_fields (code, info->addr.offset.imm / factor, 0,
> +		 2, FLD_imm3, FLD_SVE_imm6);
> +  return NULL;
> +}
> +
>  /* Encode an SVE address [X<n>, #<SVE_imm6> << <shift>], where <SVE_imm6>
>     is a 6-bit unsigned number and where <shift> is SELF's operand-dependent
>     value.  fields[0] specifies the base register field.  */
> diff --git a/opcodes/aarch64-asm.h b/opcodes/aarch64-asm.h
> index b81cfa1..5e13de0 100644
> --- a/opcodes/aarch64-asm.h
> +++ b/opcodes/aarch64-asm.h
> @@ -69,6 +69,9 @@ AARCH64_DECL_OPD_INSERTER (ins_hint);
>  AARCH64_DECL_OPD_INSERTER (ins_prfop);
>  AARCH64_DECL_OPD_INSERTER (ins_reg_extended);
>  AARCH64_DECL_OPD_INSERTER (ins_reg_shifted);
> +AARCH64_DECL_OPD_INSERTER (ins_sve_addr_ri_s4xvl);
> +AARCH64_DECL_OPD_INSERTER (ins_sve_addr_ri_s6xvl);
> +AARCH64_DECL_OPD_INSERTER (ins_sve_addr_ri_s9xvl);
>  AARCH64_DECL_OPD_INSERTER (ins_sve_addr_ri_u6);
>  AARCH64_DECL_OPD_INSERTER (ins_sve_addr_rr_lsl);
>  AARCH64_DECL_OPD_INSERTER (ins_sve_addr_rz_xtw);
> diff --git a/opcodes/aarch64-dis-2.c b/opcodes/aarch64-dis-2.c
> index 3dd714f..48d6ce7 100644
> --- a/opcodes/aarch64-dis-2.c
> +++ b/opcodes/aarch64-dis-2.c
> @@ -10426,12 +10426,6 @@ aarch64_extract_operand (const aarch64_operand *self,
>      case 27:
>      case 35:
>      case 36:
> -    case 123:
> -    case 124:
> -    case 125:
> -    case 126:
> -    case 127:
> -    case 128:
>      case 129:
>      case 130:
>      case 131:
> @@ -10440,7 +10434,13 @@ aarch64_extract_operand (const aarch64_operand *self,
>      case 134:
>      case 135:
>      case 136:
> +    case 137:
> +    case 138:
>      case 139:
> +    case 140:
> +    case 141:
> +    case 142:
> +    case 145:
>        return aarch64_ext_regno (self, info, code, inst);
>      case 8:
>        return aarch64_ext_regrt_sysins (self, info, code, inst);
> @@ -10482,8 +10482,8 @@ aarch64_extract_operand (const aarch64_operand *self,
>      case 68:
>      case 69:
>      case 70:
> -    case 120:
> -    case 122:
> +    case 126:
> +    case 128:
>        return aarch64_ext_imm (self, info, code, inst);
>      case 38:
>      case 39:
> @@ -10540,46 +10540,55 @@ aarch64_extract_operand (const aarch64_operand *self,
>      case 90:
>      case 91:
>      case 92:
> -      return aarch64_ext_sve_addr_ri_u6 (self, info, code, inst);
> +      return aarch64_ext_sve_addr_ri_s4xvl (self, info, code, inst);
>      case 93:
> +      return aarch64_ext_sve_addr_ri_s6xvl (self, info, code, inst);
>      case 94:
> +      return aarch64_ext_sve_addr_ri_s9xvl (self, info, code, inst);
>      case 95:
>      case 96:
>      case 97:
>      case 98:
> +      return aarch64_ext_sve_addr_ri_u6 (self, info, code, inst);
>      case 99:
>      case 100:
>      case 101:
>      case 102:
>      case 103:
>      case 104:
> -      return aarch64_ext_sve_addr_rr_lsl (self, info, code, inst);
>      case 105:
>      case 106:
>      case 107:
>      case 108:
>      case 109:
>      case 110:
> +      return aarch64_ext_sve_addr_rr_lsl (self, info, code, inst);
>      case 111:
>      case 112:
> -      return aarch64_ext_sve_addr_rz_xtw (self, info, code, inst);
>      case 113:
>      case 114:
>      case 115:
>      case 116:
> -      return aarch64_ext_sve_addr_zi_u5 (self, info, code, inst);
>      case 117:
> -      return aarch64_ext_sve_addr_zz_lsl (self, info, code, inst);
>      case 118:
> -      return aarch64_ext_sve_addr_zz_sxtw (self, info, code, inst);
> +      return aarch64_ext_sve_addr_rz_xtw (self, info, code, inst);
>      case 119:
> -      return aarch64_ext_sve_addr_zz_uxtw (self, info, code, inst);
> +    case 120:
>      case 121:
> +    case 122:
> +      return aarch64_ext_sve_addr_zi_u5 (self, info, code, inst);
> +    case 123:
> +      return aarch64_ext_sve_addr_zz_lsl (self, info, code, inst);
> +    case 124:
> +      return aarch64_ext_sve_addr_zz_sxtw (self, info, code, inst);
> +    case 125:
> +      return aarch64_ext_sve_addr_zz_uxtw (self, info, code, inst);
> +    case 127:
>        return aarch64_ext_sve_scale (self, info, code, inst);
> -    case 137:
> +    case 143:
>        return aarch64_ext_sve_index (self, info, code, inst);
> -    case 138:
> -    case 140:
> +    case 144:
> +    case 146:
>        return aarch64_ext_sve_reglist (self, info, code, inst);
>      default: assert (0); abort ();
>      }
> diff --git a/opcodes/aarch64-dis.c b/opcodes/aarch64-dis.c
> index ed77b4d..ba6befd 100644
> --- a/opcodes/aarch64-dis.c
> +++ b/opcodes/aarch64-dis.c
> @@ -1186,6 +1186,78 @@ aarch64_ext_reg_shifted (const aarch64_operand *self ATTRIBUTE_UNUSED,
>    return 1;
>  }
>  
> +/* Decode an SVE address [<base>, #<offset>*<factor>, MUL VL],
> +   where <offset> is given by the OFFSET parameter and where <factor> is
> +   1 plus SELF's operand-dependent value.  fields[0] specifies the field
> +   that holds <base>.  */
> +static int
> +aarch64_ext_sve_addr_reg_mul_vl (const aarch64_operand *self,
> +				 aarch64_opnd_info *info, aarch64_insn code,
> +				 int64_t offset)
> +{
> +  info->addr.base_regno = extract_field (self->fields[0], code, 0);
> +  info->addr.offset.imm = offset * (1 + get_operand_specific_data (self));
> +  info->addr.offset.is_reg = FALSE;
> +  info->addr.writeback = FALSE;
> +  info->addr.preind = TRUE;
> +  if (offset != 0)
> +    info->shifter.kind = AARCH64_MOD_MUL_VL;
> +  info->shifter.amount = 1;
> +  info->shifter.operator_present = (info->addr.offset.imm != 0);
> +  info->shifter.amount_present = FALSE;
> +  return 1;
> +}
> +
> +/* Decode an SVE address [<base>, #<simm4>*<factor>, MUL VL],
> +   where <simm4> is a 4-bit signed value and where <factor> is 1 plus
> +   SELF's operand-dependent value.  fields[0] specifies the field that
> +   holds <base>.  <simm4> is encoded in the SVE_imm4 field.  */
> +int
> +aarch64_ext_sve_addr_ri_s4xvl (const aarch64_operand *self,
> +			       aarch64_opnd_info *info, aarch64_insn code,
> +			       const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  int offset;
> +
> +  offset = extract_field (FLD_SVE_imm4, code, 0);
> +  offset = ((offset + 8) & 15) - 8;
> +  return aarch64_ext_sve_addr_reg_mul_vl (self, info, code, offset);
> +}
> +
> +/* Decode an SVE address [<base>, #<simm6>*<factor>, MUL VL],
> +   where <simm6> is a 6-bit signed value and where <factor> is 1 plus
> +   SELF's operand-dependent value.  fields[0] specifies the field that
> +   holds <base>.  <simm6> is encoded in the SVE_imm6 field.  */
> +int
> +aarch64_ext_sve_addr_ri_s6xvl (const aarch64_operand *self,
> +			       aarch64_opnd_info *info, aarch64_insn code,
> +			       const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  int offset;
> +
> +  offset = extract_field (FLD_SVE_imm6, code, 0);
> +  offset = (((offset + 32) & 63) - 32);
> +  return aarch64_ext_sve_addr_reg_mul_vl (self, info, code, offset);
> +}
> +
> +/* Decode an SVE address [<base>, #<simm9>*<factor>, MUL VL],
> +   where <simm9> is a 9-bit signed value and where <factor> is 1 plus
> +   SELF's operand-dependent value.  fields[0] specifies the field that
> +   holds <base>.  <simm9> is encoded in the concatenation of the SVE_imm6
> +   and imm3 fields, with imm3 being the less-significant part.  */
> +int
> +aarch64_ext_sve_addr_ri_s9xvl (const aarch64_operand *self,
> +			       aarch64_opnd_info *info,
> +			       aarch64_insn code,
> +			       const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  int offset;
> +
> +  offset = extract_fields (code, 0, 2, FLD_SVE_imm6, FLD_imm3);
> +  offset = (((offset + 256) & 511) - 256);
> +  return aarch64_ext_sve_addr_reg_mul_vl (self, info, code, offset);
> +}
> +
>  /* Decode an SVE address [<base>, #<offset> << <shift>], where <offset>
>     is given by the OFFSET parameter and where <shift> is SELF's operand-
>     dependent value.  fields[0] specifies the base register field <base>.  */
> diff --git a/opcodes/aarch64-dis.h b/opcodes/aarch64-dis.h
> index 0ce2d89..5619877 100644
> --- a/opcodes/aarch64-dis.h
> +++ b/opcodes/aarch64-dis.h
> @@ -91,6 +91,9 @@ AARCH64_DECL_OPD_EXTRACTOR (ext_hint);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_prfop);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_reg_extended);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_reg_shifted);
> +AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_ri_s4xvl);
> +AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_ri_s6xvl);
> +AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_ri_s9xvl);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_ri_u6);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_rr_lsl);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_rz_xtw);
> diff --git a/opcodes/aarch64-opc-2.c b/opcodes/aarch64-opc-2.c
> index ed2b70b..a72f577 100644
> --- a/opcodes/aarch64-opc-2.c
> +++ b/opcodes/aarch64-opc-2.c
> @@ -113,6 +113,12 @@ const struct aarch64_operand aarch64_operands[] =
>    {AARCH64_OPND_CLASS_SYSTEM, "BARRIER_ISB", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {}, "the ISB option name SY or an optional 4-bit unsigned immediate"},
>    {AARCH64_OPND_CLASS_SYSTEM, "PRFOP", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {}, "a prefetch operation specifier"},
>    {AARCH64_OPND_CLASS_SYSTEM, "BARRIER_PSB", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {}, "the PSB option name CSYNC"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RI_S4xVL", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn}, "an address with a 4-bit signed offset, multiplied by VL"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RI_S4x2xVL", 1 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn}, "an address with a 4-bit signed offset, multiplied by 2*VL"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RI_S4x3xVL", 2 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn}, "an address with a 4-bit signed offset, multiplied by 3*VL"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RI_S4x4xVL", 3 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn}, "an address with a 4-bit signed offset, multiplied by 4*VL"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RI_S6xVL", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn}, "an address with a 6-bit signed offset, multiplied by VL"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RI_S9xVL", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn}, "an address with a 9-bit signed offset, multiplied by VL"},
>    {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RI_U6", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn}, "an address with a 6-bit unsigned offset"},
>    {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RI_U6x2", 1 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn}, "an address with a 6-bit unsigned offset, multiplied by 2"},
>    {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RI_U6x4", 2 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn}, "an address with a 6-bit unsigned offset, multiplied by 4"},
> diff --git a/opcodes/aarch64-opc.c b/opcodes/aarch64-opc.c
> index 6617e28..d0959b5 100644
> --- a/opcodes/aarch64-opc.c
> +++ b/opcodes/aarch64-opc.c
> @@ -365,6 +365,7 @@ const struct aarch64_name_value_pair aarch64_operand_modifiers [] =
>      {"sxtw", 0x6},
>      {"sxtx", 0x7},
>      {"mul", 0x0},
> +    {"mul vl", 0x0},
>      {NULL, 0},
>  };
>  
> @@ -486,10 +487,11 @@ value_in_range_p (int64_t value, int low, int high)
>    return (value >= low && value <= high) ? 1 : 0;
>  }
>  
> +/* Return true if VALUE is a multiple of ALIGN.  */
>  static inline int
>  value_aligned_p (int64_t value, int align)
>  {
> -  return ((value & (align - 1)) == 0) ? 1 : 0;
> +  return (value % align) == 0;
>  }
>  
>  /* A signed value fits in a field.  */
> @@ -1666,6 +1668,49 @@ operand_general_constraint_met_p (const aarch64_opnd_info *opnds, int idx,
>  	    }
>  	  break;
>  
> +	case AARCH64_OPND_SVE_ADDR_RI_S4xVL:
> +	case AARCH64_OPND_SVE_ADDR_RI_S4x2xVL:
> +	case AARCH64_OPND_SVE_ADDR_RI_S4x3xVL:
> +	case AARCH64_OPND_SVE_ADDR_RI_S4x4xVL:
> +	  min_value = -8;
> +	  max_value = 7;
> +	sve_imm_offset_vl:
> +	  assert (!opnd->addr.offset.is_reg);
> +	  assert (opnd->addr.preind);
> +	  num = 1 + get_operand_specific_data (&aarch64_operands[type]);
> +	  min_value *= num;
> +	  max_value *= num;
> +	  if ((opnd->addr.offset.imm != 0 && !opnd->shifter.operator_present)
> +	      || (opnd->shifter.operator_present
> +		  && opnd->shifter.kind != AARCH64_MOD_MUL_VL))
> +	    {
> +	      set_other_error (mismatch_detail, idx,
> +			       _("invalid addressing mode"));
> +	      return 0;
> +	    }
> +	  if (!value_in_range_p (opnd->addr.offset.imm, min_value, max_value))
> +	    {
> +	      set_offset_out_of_range_error (mismatch_detail, idx,
> +					     min_value, max_value);
> +	      return 0;
> +	    }
> +	  if (!value_aligned_p (opnd->addr.offset.imm, num))
> +	    {
> +	      set_unaligned_error (mismatch_detail, idx, num);
> +	      return 0;
> +	    }
> +	  break;
> +
> +	case AARCH64_OPND_SVE_ADDR_RI_S6xVL:
> +	  min_value = -32;
> +	  max_value = 31;
> +	  goto sve_imm_offset_vl;
> +
> +	case AARCH64_OPND_SVE_ADDR_RI_S9xVL:
> +	  min_value = -256;
> +	  max_value = 255;
> +	  goto sve_imm_offset_vl;
> +
>  	case AARCH64_OPND_SVE_ADDR_RI_U6:
>  	case AARCH64_OPND_SVE_ADDR_RI_U6x2:
>  	case AARCH64_OPND_SVE_ADDR_RI_U6x4:
> @@ -2645,7 +2690,13 @@ print_immediate_offset_address (char *buf, size_t size,
>      }
>    else
>      {
> -      if (opnd->addr.offset.imm)
> +      if (opnd->shifter.operator_present)
> +	{
> +	  assert (opnd->shifter.kind == AARCH64_MOD_MUL_VL);
> +	  snprintf (buf, size, "[%s,#%d,mul vl]",
> +		    base, opnd->addr.offset.imm);
> +	}
> +      else if (opnd->addr.offset.imm)
>  	snprintf (buf, size, "[%s,#%d]", base, opnd->addr.offset.imm);
>        else
>  	snprintf (buf, size, "[%s]", base);
> @@ -3114,6 +3165,12 @@ aarch64_print_operand (char *buf, size_t size, bfd_vma pc,
>      case AARCH64_OPND_ADDR_SIMM7:
>      case AARCH64_OPND_ADDR_SIMM9:
>      case AARCH64_OPND_ADDR_SIMM9_2:
> +    case AARCH64_OPND_SVE_ADDR_RI_S4xVL:
> +    case AARCH64_OPND_SVE_ADDR_RI_S4x2xVL:
> +    case AARCH64_OPND_SVE_ADDR_RI_S4x3xVL:
> +    case AARCH64_OPND_SVE_ADDR_RI_S4x4xVL:
> +    case AARCH64_OPND_SVE_ADDR_RI_S6xVL:
> +    case AARCH64_OPND_SVE_ADDR_RI_S9xVL:
>      case AARCH64_OPND_SVE_ADDR_RI_U6:
>      case AARCH64_OPND_SVE_ADDR_RI_U6x2:
>      case AARCH64_OPND_SVE_ADDR_RI_U6x4:
> diff --git a/opcodes/aarch64-tbl.h b/opcodes/aarch64-tbl.h
> index ef32e19..ac7ccf0 100644
> --- a/opcodes/aarch64-tbl.h
> +++ b/opcodes/aarch64-tbl.h
> @@ -2820,6 +2820,24 @@ struct aarch64_opcode aarch64_opcode_table[] =
>        "a prefetch operation specifier")					\
>      Y (SYSTEM, hint, "BARRIER_PSB", 0, F (),				\
>        "the PSB option name CSYNC")					\
> +    Y(ADDRESS, sve_addr_ri_s4xvl, "SVE_ADDR_RI_S4xVL",			\
> +      0 << OPD_F_OD_LSB, F(FLD_Rn),					\
> +      "an address with a 4-bit signed offset, multiplied by VL")	\
> +    Y(ADDRESS, sve_addr_ri_s4xvl, "SVE_ADDR_RI_S4x2xVL",		\
> +      1 << OPD_F_OD_LSB, F(FLD_Rn),					\
> +      "an address with a 4-bit signed offset, multiplied by 2*VL")	\
> +    Y(ADDRESS, sve_addr_ri_s4xvl, "SVE_ADDR_RI_S4x3xVL",		\
> +      2 << OPD_F_OD_LSB, F(FLD_Rn),					\
> +      "an address with a 4-bit signed offset, multiplied by 3*VL")	\
> +    Y(ADDRESS, sve_addr_ri_s4xvl, "SVE_ADDR_RI_S4x4xVL",		\
> +      3 << OPD_F_OD_LSB, F(FLD_Rn),					\
> +      "an address with a 4-bit signed offset, multiplied by 4*VL")	\
> +    Y(ADDRESS, sve_addr_ri_s6xvl, "SVE_ADDR_RI_S6xVL",			\
> +      0 << OPD_F_OD_LSB, F(FLD_Rn),					\
> +      "an address with a 6-bit signed offset, multiplied by VL")	\
> +    Y(ADDRESS, sve_addr_ri_s9xvl, "SVE_ADDR_RI_S9xVL",			\
> +      0 << OPD_F_OD_LSB, F(FLD_Rn),					\
> +      "an address with a 9-bit signed offset, multiplied by VL")	\
>      Y(ADDRESS, sve_addr_ri_u6, "SVE_ADDR_RI_U6", 0 << OPD_F_OD_LSB,	\
>        F(FLD_Rn), "an address with a 6-bit unsigned offset")		\
>      Y(ADDRESS, sve_addr_ri_u6, "SVE_ADDR_RI_U6x2", 1 << OPD_F_OD_LSB,	\
> 
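(Editorial aside, not part of the patch: the SVE_ADDR_RI_S4x<F>xVL operands added above scale a 4-bit signed immediate by a factor of 1-4, so for example SVE_ADDR_RI_S4x3xVL accepts multiples of 3 in the range [-24, 21].  The standalone sketch below restates the range/alignment rule that the patch adds to operand_general_constraint_met_p; the helper name is invented purely for illustration.)

#include <stdint.h>

/* Illustrative only: return nonzero if IMM is a valid offset for one of
   the SVE_ADDR_RI_S4x<FACTOR>xVL operands, i.e. a multiple of FACTOR in
   the range [-8*FACTOR, 7*FACTOR].  FACTOR is 1, 2, 3 or 4.  */
static int
sve_s4xvl_offset_ok (int64_t imm, int factor)
{
  return (imm >= -8 * factor
	  && imm <= 7 * factor
	  && imm % factor == 0);
}

For instance, with factor 4 the accepted offsets run from -32 to 28 in steps of 4, matching an assembly operand such as [x0, #-32, mul vl].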

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [AArch64][SVE 27/32] Add SVE integer immediate operands
  2016-08-23  9:24 ` [AArch64][SVE 27/32] Add SVE integer immediate operands Richard Sandiford
@ 2016-08-25 14:51   ` Richard Earnshaw (lists)
  0 siblings, 0 replies; 76+ messages in thread
From: Richard Earnshaw (lists) @ 2016-08-25 14:51 UTC (permalink / raw)
  To: binutils, richard.sandiford

On 23/08/16 10:24, Richard Sandiford wrote:
> This patch adds the new SVE integer immediate operands.  There are
> three kinds:
> 
> - simple signed and unsigned ranges, but with new widths and positions.
> 
> - 13-bit logical immediates.  These have the same form as in base AArch64,
>   but at a different bit position.
> 
>   In the case of the "MOV Zn.<T>, #<limm>" alias of DUPM, the logical
>   immediate <limm> is not allowed to be a valid DUP immediate, since DUP
>   is preferred over DUPM for constants that both instructions can handle.
> 
> - a new 9-bit arithmetic immediate, of the form "<imm8>{, LSL #8}".
>   In some contexts the operand is signed and in others it's unsigned.
>   As an extension, we allow shifted immediates to be written as a single
>   integer, e.g. "#256" is equivalent to "#1, LSL #8".  We also use the
>   shiftless form as the preferred disassembly, except for the special
>   case of "#0, LSL #8" (a redundant encoding of 0).
> 
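(Editorial aside, not part of the patch: the single-integer shorthand for the arithmetic immediate can be read as a simple normalisation into the <imm8>/shift pair.  The sketch below shows that rule for the unsigned SVE_AIMM case only; the function name and return convention are invented for illustration, and the signed SVE_ASIMM case differs in its ranges.)

#include <stdint.h>

/* Illustrative only: split VALUE into an 8-bit payload *IMM8 and a left
   shift *SHIFT of 0 or 8, as in the "<imm8>{, LSL #8}" form described
   above.  Returns 1 on success, 0 if VALUE has no such representation.  */
static int
split_sve_aimm (int64_t value, unsigned int *imm8, unsigned int *shift)
{
  if (value >= 0 && value <= 0xff)
    {
      *imm8 = value;
      *shift = 0;
      return 1;
    }
  if (value > 0xff && value <= 0xff00 && (value & 0xff) == 0)
    {
      *imm8 = value >> 8;
      *shift = 8;
      return 1;
    }
  return 0;
}

Under this rule #256 splits into imm8 = 1 with shift = 8, which is why the patch treats "#256" and "#1, LSL #8" as the same operand.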
> OK to install?
> 
> Thanks,
> Richard
> 
> 
> include/opcode/
> 	* aarch64.h (AARCH64_OPND_SIMM5): New aarch64_opnd.
> 	(AARCH64_OPND_SVE_AIMM, AARCH64_OPND_SVE_ASIMM)
> 	(AARCH64_OPND_SVE_INV_LIMM, AARCH64_OPND_SVE_LIMM)
> 	(AARCH64_OPND_SVE_LIMM_MOV, AARCH64_OPND_SVE_SHLIMM_PRED)
> 	(AARCH64_OPND_SVE_SHLIMM_UNPRED, AARCH64_OPND_SVE_SHRIMM_PRED)
> 	(AARCH64_OPND_SVE_SHRIMM_UNPRED, AARCH64_OPND_SVE_SIMM5)
> 	(AARCH64_OPND_SVE_SIMM5B, AARCH64_OPND_SVE_SIMM6)
> 	(AARCH64_OPND_SVE_SIMM8, AARCH64_OPND_SVE_UIMM3)
> 	(AARCH64_OPND_SVE_UIMM7, AARCH64_OPND_SVE_UIMM8)
> 	(AARCH64_OPND_SVE_UIMM8_53): Likewise.
> 	(aarch64_sve_dupm_mov_immediate_p): Declare.
> 
> opcodes/
> 	* aarch64-tbl.h (AARCH64_OPERANDS): Add entries for the new SVE
> 	integer immediate operands.
> 	* aarch64-opc.h (FLD_SVE_immN, FLD_SVE_imm3, FLD_SVE_imm5)
> 	(FLD_SVE_imm5b, FLD_SVE_imm7, FLD_SVE_imm8, FLD_SVE_imm9)
> 	(FLD_SVE_immr, FLD_SVE_imms, FLD_SVE_tszh): New aarch64_field_kinds.
> 	* aarch64-opc.c (fields): Add corresponding entries.
> 	(operand_general_constraint_met_p): Handle the new SVE integer
> 	immediate operands.
> 	(aarch64_print_operand): Likewise.
> 	(aarch64_sve_dupm_mov_immediate_p): New function.
> 	* aarch64-opc-2.c: Regenerate.
> 	* aarch64-asm.h (ins_inv_limm, ins_sve_aimm, ins_sve_asimm)
> 	(ins_sve_limm_mov, ins_sve_shlimm, ins_sve_shrimm): New inserters.
> 	* aarch64-asm.c (aarch64_ins_limm_1): New function, split out from...
> 	(aarch64_ins_limm): ...here.
> 	(aarch64_ins_inv_limm): New function.
> 	(aarch64_ins_sve_aimm): Likewise.
> 	(aarch64_ins_sve_asimm): Likewise.
> 	(aarch64_ins_sve_limm_mov): Likewise.
> 	(aarch64_ins_sve_shlimm): Likewise.
> 	(aarch64_ins_sve_shrimm): Likewise.
> 	* aarch64-asm-2.c: Regenerate.
> 	* aarch64-dis.h (ext_inv_limm, ext_sve_aimm, ext_sve_asimm)
> 	(ext_sve_limm_mov, ext_sve_shlimm, ext_sve_shrimm): New extractors.
> 	* aarch64-dis.c (decode_limm): New function, split out from...
> 	(aarch64_ext_limm): ...here.
> 	(aarch64_ext_inv_limm): New function.
> 	(decode_sve_aimm): Likewise.
> 	(aarch64_ext_sve_aimm): Likewise.
> 	(aarch64_ext_sve_asimm): Likewise.
> 	(aarch64_ext_sve_limm_mov): Likewise.
> 	(aarch64_top_bit): Likewise.
> 	(aarch64_ext_sve_shlimm): Likewise.
> 	(aarch64_ext_sve_shrimm): Likewise.
> 	* aarch64-dis-2.c: Regenerate.
> 
> gas/
> 	* config/tc-aarch64.c (parse_operands): Handle the new SVE integer
> 	immediate operands.

+		  set_other_error (mismatch_detail, idx,
+				   _("shift amount should be 0 or 8"));

I think the error message should use 'must' rather than 'should'.
'Should' implies a degree of optionality that just doesn't apply here.

OK with that change.

R.

> 
> diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
> index 37fce5b..cb39cf8 100644
> --- a/gas/config/tc-aarch64.c
> +++ b/gas/config/tc-aarch64.c
> @@ -5553,6 +5553,7 @@ parse_operands (char *str, const aarch64_opcode *opcode)
>  	  break;
>  
>  	case AARCH64_OPND_CCMP_IMM:
> +	case AARCH64_OPND_SIMM5:
>  	case AARCH64_OPND_FBITS:
>  	case AARCH64_OPND_UIMM4:
>  	case AARCH64_OPND_UIMM3_OP1:
> @@ -5560,10 +5561,36 @@ parse_operands (char *str, const aarch64_opcode *opcode)
>  	case AARCH64_OPND_IMM_VLSL:
>  	case AARCH64_OPND_IMM:
>  	case AARCH64_OPND_WIDTH:
> +	case AARCH64_OPND_SVE_INV_LIMM:
> +	case AARCH64_OPND_SVE_LIMM:
> +	case AARCH64_OPND_SVE_LIMM_MOV:
> +	case AARCH64_OPND_SVE_SHLIMM_PRED:
> +	case AARCH64_OPND_SVE_SHLIMM_UNPRED:
> +	case AARCH64_OPND_SVE_SHRIMM_PRED:
> +	case AARCH64_OPND_SVE_SHRIMM_UNPRED:
> +	case AARCH64_OPND_SVE_SIMM5:
> +	case AARCH64_OPND_SVE_SIMM5B:
> +	case AARCH64_OPND_SVE_SIMM6:
> +	case AARCH64_OPND_SVE_SIMM8:
> +	case AARCH64_OPND_SVE_UIMM3:
> +	case AARCH64_OPND_SVE_UIMM7:
> +	case AARCH64_OPND_SVE_UIMM8:
> +	case AARCH64_OPND_SVE_UIMM8_53:
>  	  po_imm_nc_or_fail ();
>  	  info->imm.value = val;
>  	  break;
>  
> +	case AARCH64_OPND_SVE_AIMM:
> +	case AARCH64_OPND_SVE_ASIMM:
> +	  po_imm_nc_or_fail ();
> +	  info->imm.value = val;
> +	  skip_whitespace (str);
> +	  if (skip_past_comma (&str))
> +	    po_misc_or_fail (parse_shift (&str, info, SHIFTED_LSL));
> +	  else
> +	    inst.base.operands[i].shifter.kind = AARCH64_MOD_LSL;
> +	  break;
> +
>  	case AARCH64_OPND_SVE_PATTERN:
>  	  po_enum_or_fail (aarch64_sve_pattern_array);
>  	  info->imm.value = val;
> diff --git a/include/opcode/aarch64.h b/include/opcode/aarch64.h
> index 837d6bd..36e95b4 100644
> --- a/include/opcode/aarch64.h
> +++ b/include/opcode/aarch64.h
> @@ -200,6 +200,7 @@ enum aarch64_opnd
>    AARCH64_OPND_BIT_NUM,	/* Immediate.  */
>    AARCH64_OPND_EXCEPTION,/* imm16 operand in exception instructions.  */
>    AARCH64_OPND_CCMP_IMM,/* Immediate in conditional compare instructions.  */
> +  AARCH64_OPND_SIMM5,	/* 5-bit signed immediate in the imm5 field.  */
>    AARCH64_OPND_NZCV,	/* Flag bit specifier giving an alternative value for
>  			   each condition flag.  */
>  
> @@ -289,6 +290,11 @@ enum aarch64_opnd
>    AARCH64_OPND_SVE_ADDR_ZZ_LSL,     /* SVE [Zn.<T>, Zm,<T>, LSL #<msz>].  */
>    AARCH64_OPND_SVE_ADDR_ZZ_SXTW,    /* SVE [Zn.<T>, Zm,<T>, SXTW #<msz>].  */
>    AARCH64_OPND_SVE_ADDR_ZZ_UXTW,    /* SVE [Zn.<T>, Zm,<T>, UXTW #<msz>].  */
> +  AARCH64_OPND_SVE_AIMM,	/* SVE unsigned arithmetic immediate.  */
> +  AARCH64_OPND_SVE_ASIMM,	/* SVE signed arithmetic immediate.  */
> +  AARCH64_OPND_SVE_INV_LIMM,	/* SVE inverted logical immediate.  */
> +  AARCH64_OPND_SVE_LIMM,	/* SVE logical immediate.  */
> +  AARCH64_OPND_SVE_LIMM_MOV,	/* SVE logical immediate for MOV.  */
>    AARCH64_OPND_SVE_PATTERN,	/* SVE vector pattern enumeration.  */
>    AARCH64_OPND_SVE_PATTERN_SCALED, /* Likewise, with additional MUL factor.  */
>    AARCH64_OPND_SVE_PRFOP,	/* SVE prefetch operation.  */
> @@ -300,6 +306,18 @@ enum aarch64_opnd
>    AARCH64_OPND_SVE_Pm,		/* SVE p0-p15 in Pm.  */
>    AARCH64_OPND_SVE_Pn,		/* SVE p0-p15 in Pn.  */
>    AARCH64_OPND_SVE_Pt,		/* SVE p0-p15 in Pt.  */
> +  AARCH64_OPND_SVE_SHLIMM_PRED,	  /* SVE shift left amount (predicated).  */
> +  AARCH64_OPND_SVE_SHLIMM_UNPRED, /* SVE shift left amount (unpredicated).  */
> +  AARCH64_OPND_SVE_SHRIMM_PRED,	  /* SVE shift right amount (predicated).  */
> +  AARCH64_OPND_SVE_SHRIMM_UNPRED, /* SVE shift right amount (unpredicated).  */
> +  AARCH64_OPND_SVE_SIMM5,	/* SVE signed 5-bit immediate.  */
> +  AARCH64_OPND_SVE_SIMM5B,	/* SVE secondary signed 5-bit immediate.  */
> +  AARCH64_OPND_SVE_SIMM6,	/* SVE signed 6-bit immediate.  */
> +  AARCH64_OPND_SVE_SIMM8,	/* SVE signed 8-bit immediate.  */
> +  AARCH64_OPND_SVE_UIMM3,	/* SVE unsigned 3-bit immediate.  */
> +  AARCH64_OPND_SVE_UIMM7,	/* SVE unsigned 7-bit immediate.  */
> +  AARCH64_OPND_SVE_UIMM8,	/* SVE unsigned 8-bit immediate.  */
> +  AARCH64_OPND_SVE_UIMM8_53,	/* SVE split unsigned 8-bit immediate.  */
>    AARCH64_OPND_SVE_Za_5,	/* SVE vector register in Za, bits [9,5].  */
>    AARCH64_OPND_SVE_Za_16,	/* SVE vector register in Za, bits [20,16].  */
>    AARCH64_OPND_SVE_Zd,		/* SVE vector register in Zd.  */
> @@ -1065,6 +1083,9 @@ aarch64_get_operand_name (enum aarch64_opnd);
>  extern const char *
>  aarch64_get_operand_desc (enum aarch64_opnd);
>  
> +extern bfd_boolean
> +aarch64_sve_dupm_mov_immediate_p (uint64_t, int);
> +
>  #ifdef DEBUG_AARCH64
>  extern int debug_dump;
>  
> diff --git a/opcodes/aarch64-asm-2.c b/opcodes/aarch64-asm-2.c
> index da590ca..491ea53 100644
> --- a/opcodes/aarch64-asm-2.c
> +++ b/opcodes/aarch64-asm-2.c
> @@ -480,12 +480,6 @@ aarch64_insert_operand (const aarch64_operand *self,
>      case 27:
>      case 35:
>      case 36:
> -    case 129:
> -    case 130:
> -    case 131:
> -    case 132:
> -    case 133:
> -    case 134:
>      case 135:
>      case 136:
>      case 137:
> @@ -494,7 +488,13 @@ aarch64_insert_operand (const aarch64_operand *self,
>      case 140:
>      case 141:
>      case 142:
> -    case 145:
> +    case 155:
> +    case 156:
> +    case 157:
> +    case 158:
> +    case 159:
> +    case 160:
> +    case 163:
>        return aarch64_ins_regno (self, info, code, inst);
>      case 12:
>        return aarch64_ins_reg_extended (self, info, code, inst);
> @@ -527,12 +527,21 @@ aarch64_insert_operand (const aarch64_operand *self,
>      case 56:
>      case 57:
>      case 58:
> -    case 67:
> +    case 59:
>      case 68:
>      case 69:
>      case 70:
> -    case 126:
> -    case 128:
> +    case 71:
> +    case 132:
> +    case 134:
> +    case 147:
> +    case 148:
> +    case 149:
> +    case 150:
> +    case 151:
> +    case 152:
> +    case 153:
> +    case 154:
>        return aarch64_ins_imm (self, info, code, inst);
>      case 38:
>      case 39:
> @@ -543,61 +552,61 @@ aarch64_insert_operand (const aarch64_operand *self,
>        return aarch64_ins_advsimd_imm_modified (self, info, code, inst);
>      case 46:
>        return aarch64_ins_fpimm (self, info, code, inst);
> -    case 59:
> -      return aarch64_ins_limm (self, info, code, inst);
>      case 60:
> -      return aarch64_ins_aimm (self, info, code, inst);
> +    case 130:
> +      return aarch64_ins_limm (self, info, code, inst);
>      case 61:
> -      return aarch64_ins_imm_half (self, info, code, inst);
> +      return aarch64_ins_aimm (self, info, code, inst);
>      case 62:
> +      return aarch64_ins_imm_half (self, info, code, inst);
> +    case 63:
>        return aarch64_ins_fbits (self, info, code, inst);
> -    case 64:
>      case 65:
> +    case 66:
>        return aarch64_ins_cond (self, info, code, inst);
> -    case 71:
> -    case 77:
> -      return aarch64_ins_addr_simple (self, info, code, inst);
>      case 72:
> -      return aarch64_ins_addr_regoff (self, info, code, inst);
> +    case 78:
> +      return aarch64_ins_addr_simple (self, info, code, inst);
>      case 73:
> +      return aarch64_ins_addr_regoff (self, info, code, inst);
>      case 74:
>      case 75:
> -      return aarch64_ins_addr_simm (self, info, code, inst);
>      case 76:
> +      return aarch64_ins_addr_simm (self, info, code, inst);
> +    case 77:
>        return aarch64_ins_addr_uimm12 (self, info, code, inst);
> -    case 78:
> -      return aarch64_ins_simd_addr_post (self, info, code, inst);
>      case 79:
> -      return aarch64_ins_sysreg (self, info, code, inst);
> +      return aarch64_ins_simd_addr_post (self, info, code, inst);
>      case 80:
> -      return aarch64_ins_pstatefield (self, info, code, inst);
> +      return aarch64_ins_sysreg (self, info, code, inst);
>      case 81:
> +      return aarch64_ins_pstatefield (self, info, code, inst);
>      case 82:
>      case 83:
>      case 84:
> -      return aarch64_ins_sysins_op (self, info, code, inst);
>      case 85:
> +      return aarch64_ins_sysins_op (self, info, code, inst);
>      case 86:
> -      return aarch64_ins_barrier (self, info, code, inst);
>      case 87:
> -      return aarch64_ins_prfop (self, info, code, inst);
> +      return aarch64_ins_barrier (self, info, code, inst);
>      case 88:
> -      return aarch64_ins_hint (self, info, code, inst);
> +      return aarch64_ins_prfop (self, info, code, inst);
>      case 89:
> +      return aarch64_ins_hint (self, info, code, inst);
>      case 90:
>      case 91:
>      case 92:
> -      return aarch64_ins_sve_addr_ri_s4xvl (self, info, code, inst);
>      case 93:
> -      return aarch64_ins_sve_addr_ri_s6xvl (self, info, code, inst);
> +      return aarch64_ins_sve_addr_ri_s4xvl (self, info, code, inst);
>      case 94:
> -      return aarch64_ins_sve_addr_ri_s9xvl (self, info, code, inst);
> +      return aarch64_ins_sve_addr_ri_s6xvl (self, info, code, inst);
>      case 95:
> +      return aarch64_ins_sve_addr_ri_s9xvl (self, info, code, inst);
>      case 96:
>      case 97:
>      case 98:
> -      return aarch64_ins_sve_addr_ri_u6 (self, info, code, inst);
>      case 99:
> +      return aarch64_ins_sve_addr_ri_u6 (self, info, code, inst);
>      case 100:
>      case 101:
>      case 102:
> @@ -609,8 +618,8 @@ aarch64_insert_operand (const aarch64_operand *self,
>      case 108:
>      case 109:
>      case 110:
> -      return aarch64_ins_sve_addr_rr_lsl (self, info, code, inst);
>      case 111:
> +      return aarch64_ins_sve_addr_rr_lsl (self, info, code, inst);
>      case 112:
>      case 113:
>      case 114:
> @@ -618,24 +627,39 @@ aarch64_insert_operand (const aarch64_operand *self,
>      case 116:
>      case 117:
>      case 118:
> -      return aarch64_ins_sve_addr_rz_xtw (self, info, code, inst);
>      case 119:
> +      return aarch64_ins_sve_addr_rz_xtw (self, info, code, inst);
>      case 120:
>      case 121:
>      case 122:
> -      return aarch64_ins_sve_addr_zi_u5 (self, info, code, inst);
>      case 123:
> -      return aarch64_ins_sve_addr_zz_lsl (self, info, code, inst);
> +      return aarch64_ins_sve_addr_zi_u5 (self, info, code, inst);
>      case 124:
> -      return aarch64_ins_sve_addr_zz_sxtw (self, info, code, inst);
> +      return aarch64_ins_sve_addr_zz_lsl (self, info, code, inst);
>      case 125:
> +      return aarch64_ins_sve_addr_zz_sxtw (self, info, code, inst);
> +    case 126:
>        return aarch64_ins_sve_addr_zz_uxtw (self, info, code, inst);
>      case 127:
> +      return aarch64_ins_sve_aimm (self, info, code, inst);
> +    case 128:
> +      return aarch64_ins_sve_asimm (self, info, code, inst);
> +    case 129:
> +      return aarch64_ins_inv_limm (self, info, code, inst);
> +    case 131:
> +      return aarch64_ins_sve_limm_mov (self, info, code, inst);
> +    case 133:
>        return aarch64_ins_sve_scale (self, info, code, inst);
>      case 143:
> -      return aarch64_ins_sve_index (self, info, code, inst);
>      case 144:
> +      return aarch64_ins_sve_shlimm (self, info, code, inst);
> +    case 145:
>      case 146:
> +      return aarch64_ins_sve_shrimm (self, info, code, inst);
> +    case 161:
> +      return aarch64_ins_sve_index (self, info, code, inst);
> +    case 162:
> +    case 164:
>        return aarch64_ins_sve_reglist (self, info, code, inst);
>      default: assert (0); abort ();
>      }
> diff --git a/opcodes/aarch64-asm.c b/opcodes/aarch64-asm.c
> index 944a9eb..61d0d95 100644
> --- a/opcodes/aarch64-asm.c
> +++ b/opcodes/aarch64-asm.c
> @@ -452,17 +452,18 @@ aarch64_ins_aimm (const aarch64_operand *self, const aarch64_opnd_info *info,
>    return NULL;
>  }
>  
> -/* Insert logical/bitmask immediate for e.g. the last operand in
> -     ORR <Wd|WSP>, <Wn>, #<imm>.  */
> -const char *
> -aarch64_ins_limm (const aarch64_operand *self, const aarch64_opnd_info *info,
> -		  aarch64_insn *code, const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +/* Common routine shared by aarch64_ins{,_inv}_limm.  INVERT_P says whether
> +   the operand should be inverted before encoding.  */
> +static const char *
> +aarch64_ins_limm_1 (const aarch64_operand *self,
> +		    const aarch64_opnd_info *info, aarch64_insn *code,
> +		    const aarch64_inst *inst, bfd_boolean invert_p)
>  {
>    aarch64_insn value;
>    uint64_t imm = info->imm.value;
>    int esize = aarch64_get_qualifier_esize (inst->operands[0].qualifier);
>  
> -  if (inst->opcode->op == OP_BIC)
> +  if (invert_p)
>      imm = ~imm;
>    if (aarch64_logical_immediate_p (imm, esize, &value) == FALSE)
>      /* The constraint check should have guaranteed this wouldn't happen.  */
> @@ -473,6 +474,25 @@ aarch64_ins_limm (const aarch64_operand *self, const aarch64_opnd_info *info,
>    return NULL;
>  }
>  
> +/* Insert logical/bitmask immediate for e.g. the last operand in
> +     ORR <Wd|WSP>, <Wn>, #<imm>.  */
> +const char *
> +aarch64_ins_limm (const aarch64_operand *self, const aarch64_opnd_info *info,
> +		  aarch64_insn *code, const aarch64_inst *inst)
> +{
> +  return aarch64_ins_limm_1 (self, info, code, inst,
> +			     inst->opcode->op == OP_BIC);
> +}
> +
> +/* Insert a logical/bitmask immediate for the BIC alias of AND (etc.).  */
> +const char *
> +aarch64_ins_inv_limm (const aarch64_operand *self,
> +		      const aarch64_opnd_info *info, aarch64_insn *code,
> +		      const aarch64_inst *inst)
> +{
> +  return aarch64_ins_limm_1 (self, info, code, inst, TRUE);
> +}
> +
>  /* Encode Ft for e.g. STR <Qt>, [<Xn|SP>, <R><m>{, <extend> {<amount>}}]
>     or LDP <Qt1>, <Qt2>, [<Xn|SP>], #<imm>.  */
>  const char *
> @@ -903,6 +923,30 @@ aarch64_ins_sve_addr_zz_uxtw (const aarch64_operand *self,
>    return aarch64_ext_sve_addr_zz (self, info, code);
>  }
>  
> +/* Encode an SVE ADD/SUB immediate.  */
> +const char *
> +aarch64_ins_sve_aimm (const aarch64_operand *self,
> +		      const aarch64_opnd_info *info, aarch64_insn *code,
> +		      const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  if (info->shifter.amount == 8)
> +    insert_all_fields (self, code, (info->imm.value & 0xff) | 256);
> +  else if (info->imm.value != 0 && (info->imm.value & 0xff) == 0)
> +    insert_all_fields (self, code, ((info->imm.value / 256) & 0xff) | 256);
> +  else
> +    insert_all_fields (self, code, info->imm.value & 0xff);
> +  return NULL;
> +}
> +
> +/* Encode an SVE CPY/DUP immediate.  */
> +const char *
> +aarch64_ins_sve_asimm (const aarch64_operand *self,
> +		       const aarch64_opnd_info *info, aarch64_insn *code,
> +		       const aarch64_inst *inst)
> +{
> +  return aarch64_ins_sve_aimm (self, info, code, inst);
> +}
> +
>  /* Encode Zn[MM], where MM has a 7-bit triangular encoding.  The fields
>     array specifies which field to use for Zn.  MM is encoded in the
>     concatenation of imm5 and SVE_tszh, with imm5 being the less
> @@ -919,6 +963,15 @@ aarch64_ins_sve_index (const aarch64_operand *self,
>    return NULL;
>  }
>  
> +/* Encode a logical/bitmask immediate for the MOV alias of SVE DUPM.  */
> +const char *
> +aarch64_ins_sve_limm_mov (const aarch64_operand *self,
> +			  const aarch64_opnd_info *info, aarch64_insn *code,
> +			  const aarch64_inst *inst)
> +{
> +  return aarch64_ins_limm (self, info, code, inst);
> +}
> +
>  /* Encode {Zn.<T> - Zm.<T>}.  The fields array specifies which field
>     to use for Zn.  */
>  const char *
> @@ -943,6 +996,38 @@ aarch64_ins_sve_scale (const aarch64_operand *self,
>    return NULL;
>  }
>  
> +/* Encode an SVE shift left immediate.  */
> +const char *
> +aarch64_ins_sve_shlimm (const aarch64_operand *self,
> +			const aarch64_opnd_info *info, aarch64_insn *code,
> +			const aarch64_inst *inst)
> +{
> +  const aarch64_opnd_info *prev_operand;
> +  unsigned int esize;
> +
> +  assert (info->idx > 0);
> +  prev_operand = &inst->operands[info->idx - 1];
> +  esize = aarch64_get_qualifier_esize (prev_operand->qualifier);
> +  insert_all_fields (self, code, 8 * esize + info->imm.value);
> +  return NULL;
> +}
> +
> +/* Encode an SVE shift right immediate.  */
> +const char *
> +aarch64_ins_sve_shrimm (const aarch64_operand *self,
> +			const aarch64_opnd_info *info, aarch64_insn *code,
> +			const aarch64_inst *inst)
> +{
> +  const aarch64_opnd_info *prev_operand;
> +  unsigned int esize;
> +
> +  assert (info->idx > 0);
> +  prev_operand = &inst->operands[info->idx - 1];
> +  esize = aarch64_get_qualifier_esize (prev_operand->qualifier);
> +  insert_all_fields (self, code, 16 * esize - info->imm.value);
> +  return NULL;
> +}
> +
>  /* Miscellaneous encoding functions.  */
>  
>  /* Encode size[0], i.e. bit 22, for
> diff --git a/opcodes/aarch64-asm.h b/opcodes/aarch64-asm.h
> index 5e13de0..bbd320e 100644
> --- a/opcodes/aarch64-asm.h
> +++ b/opcodes/aarch64-asm.h
> @@ -54,6 +54,7 @@ AARCH64_DECL_OPD_INSERTER (ins_fpimm);
>  AARCH64_DECL_OPD_INSERTER (ins_fbits);
>  AARCH64_DECL_OPD_INSERTER (ins_aimm);
>  AARCH64_DECL_OPD_INSERTER (ins_limm);
> +AARCH64_DECL_OPD_INSERTER (ins_inv_limm);
>  AARCH64_DECL_OPD_INSERTER (ins_ft);
>  AARCH64_DECL_OPD_INSERTER (ins_addr_simple);
>  AARCH64_DECL_OPD_INSERTER (ins_addr_regoff);
> @@ -79,9 +80,14 @@ AARCH64_DECL_OPD_INSERTER (ins_sve_addr_zi_u5);
>  AARCH64_DECL_OPD_INSERTER (ins_sve_addr_zz_lsl);
>  AARCH64_DECL_OPD_INSERTER (ins_sve_addr_zz_sxtw);
>  AARCH64_DECL_OPD_INSERTER (ins_sve_addr_zz_uxtw);
> +AARCH64_DECL_OPD_INSERTER (ins_sve_aimm);
> +AARCH64_DECL_OPD_INSERTER (ins_sve_asimm);
>  AARCH64_DECL_OPD_INSERTER (ins_sve_index);
> +AARCH64_DECL_OPD_INSERTER (ins_sve_limm_mov);
>  AARCH64_DECL_OPD_INSERTER (ins_sve_reglist);
>  AARCH64_DECL_OPD_INSERTER (ins_sve_scale);
> +AARCH64_DECL_OPD_INSERTER (ins_sve_shlimm);
> +AARCH64_DECL_OPD_INSERTER (ins_sve_shrimm);
>  
>  #undef AARCH64_DECL_OPD_INSERTER
>  
> diff --git a/opcodes/aarch64-dis-2.c b/opcodes/aarch64-dis-2.c
> index 48d6ce7..4527456 100644
> --- a/opcodes/aarch64-dis-2.c
> +++ b/opcodes/aarch64-dis-2.c
> @@ -10426,12 +10426,6 @@ aarch64_extract_operand (const aarch64_operand *self,
>      case 27:
>      case 35:
>      case 36:
> -    case 129:
> -    case 130:
> -    case 131:
> -    case 132:
> -    case 133:
> -    case 134:
>      case 135:
>      case 136:
>      case 137:
> @@ -10440,7 +10434,13 @@ aarch64_extract_operand (const aarch64_operand *self,
>      case 140:
>      case 141:
>      case 142:
> -    case 145:
> +    case 155:
> +    case 156:
> +    case 157:
> +    case 158:
> +    case 159:
> +    case 160:
> +    case 163:
>        return aarch64_ext_regno (self, info, code, inst);
>      case 8:
>        return aarch64_ext_regrt_sysins (self, info, code, inst);
> @@ -10477,13 +10477,22 @@ aarch64_extract_operand (const aarch64_operand *self,
>      case 56:
>      case 57:
>      case 58:
> -    case 66:
> +    case 59:
>      case 67:
>      case 68:
>      case 69:
>      case 70:
> -    case 126:
> -    case 128:
> +    case 71:
> +    case 132:
> +    case 134:
> +    case 147:
> +    case 148:
> +    case 149:
> +    case 150:
> +    case 151:
> +    case 152:
> +    case 153:
> +    case 154:
>        return aarch64_ext_imm (self, info, code, inst);
>      case 38:
>      case 39:
> @@ -10496,61 +10505,61 @@ aarch64_extract_operand (const aarch64_operand *self,
>        return aarch64_ext_shll_imm (self, info, code, inst);
>      case 46:
>        return aarch64_ext_fpimm (self, info, code, inst);
> -    case 59:
> -      return aarch64_ext_limm (self, info, code, inst);
>      case 60:
> -      return aarch64_ext_aimm (self, info, code, inst);
> +    case 130:
> +      return aarch64_ext_limm (self, info, code, inst);
>      case 61:
> -      return aarch64_ext_imm_half (self, info, code, inst);
> +      return aarch64_ext_aimm (self, info, code, inst);
>      case 62:
> +      return aarch64_ext_imm_half (self, info, code, inst);
> +    case 63:
>        return aarch64_ext_fbits (self, info, code, inst);
> -    case 64:
>      case 65:
> +    case 66:
>        return aarch64_ext_cond (self, info, code, inst);
> -    case 71:
> -    case 77:
> -      return aarch64_ext_addr_simple (self, info, code, inst);
>      case 72:
> -      return aarch64_ext_addr_regoff (self, info, code, inst);
> +    case 78:
> +      return aarch64_ext_addr_simple (self, info, code, inst);
>      case 73:
> +      return aarch64_ext_addr_regoff (self, info, code, inst);
>      case 74:
>      case 75:
> -      return aarch64_ext_addr_simm (self, info, code, inst);
>      case 76:
> +      return aarch64_ext_addr_simm (self, info, code, inst);
> +    case 77:
>        return aarch64_ext_addr_uimm12 (self, info, code, inst);
> -    case 78:
> -      return aarch64_ext_simd_addr_post (self, info, code, inst);
>      case 79:
> -      return aarch64_ext_sysreg (self, info, code, inst);
> +      return aarch64_ext_simd_addr_post (self, info, code, inst);
>      case 80:
> -      return aarch64_ext_pstatefield (self, info, code, inst);
> +      return aarch64_ext_sysreg (self, info, code, inst);
>      case 81:
> +      return aarch64_ext_pstatefield (self, info, code, inst);
>      case 82:
>      case 83:
>      case 84:
> -      return aarch64_ext_sysins_op (self, info, code, inst);
>      case 85:
> +      return aarch64_ext_sysins_op (self, info, code, inst);
>      case 86:
> -      return aarch64_ext_barrier (self, info, code, inst);
>      case 87:
> -      return aarch64_ext_prfop (self, info, code, inst);
> +      return aarch64_ext_barrier (self, info, code, inst);
>      case 88:
> -      return aarch64_ext_hint (self, info, code, inst);
> +      return aarch64_ext_prfop (self, info, code, inst);
>      case 89:
> +      return aarch64_ext_hint (self, info, code, inst);
>      case 90:
>      case 91:
>      case 92:
> -      return aarch64_ext_sve_addr_ri_s4xvl (self, info, code, inst);
>      case 93:
> -      return aarch64_ext_sve_addr_ri_s6xvl (self, info, code, inst);
> +      return aarch64_ext_sve_addr_ri_s4xvl (self, info, code, inst);
>      case 94:
> -      return aarch64_ext_sve_addr_ri_s9xvl (self, info, code, inst);
> +      return aarch64_ext_sve_addr_ri_s6xvl (self, info, code, inst);
>      case 95:
> +      return aarch64_ext_sve_addr_ri_s9xvl (self, info, code, inst);
>      case 96:
>      case 97:
>      case 98:
> -      return aarch64_ext_sve_addr_ri_u6 (self, info, code, inst);
>      case 99:
> +      return aarch64_ext_sve_addr_ri_u6 (self, info, code, inst);
>      case 100:
>      case 101:
>      case 102:
> @@ -10562,8 +10571,8 @@ aarch64_extract_operand (const aarch64_operand *self,
>      case 108:
>      case 109:
>      case 110:
> -      return aarch64_ext_sve_addr_rr_lsl (self, info, code, inst);
>      case 111:
> +      return aarch64_ext_sve_addr_rr_lsl (self, info, code, inst);
>      case 112:
>      case 113:
>      case 114:
> @@ -10571,24 +10580,39 @@ aarch64_extract_operand (const aarch64_operand *self,
>      case 116:
>      case 117:
>      case 118:
> -      return aarch64_ext_sve_addr_rz_xtw (self, info, code, inst);
>      case 119:
> +      return aarch64_ext_sve_addr_rz_xtw (self, info, code, inst);
>      case 120:
>      case 121:
>      case 122:
> -      return aarch64_ext_sve_addr_zi_u5 (self, info, code, inst);
>      case 123:
> -      return aarch64_ext_sve_addr_zz_lsl (self, info, code, inst);
> +      return aarch64_ext_sve_addr_zi_u5 (self, info, code, inst);
>      case 124:
> -      return aarch64_ext_sve_addr_zz_sxtw (self, info, code, inst);
> +      return aarch64_ext_sve_addr_zz_lsl (self, info, code, inst);
>      case 125:
> +      return aarch64_ext_sve_addr_zz_sxtw (self, info, code, inst);
> +    case 126:
>        return aarch64_ext_sve_addr_zz_uxtw (self, info, code, inst);
>      case 127:
> +      return aarch64_ext_sve_aimm (self, info, code, inst);
> +    case 128:
> +      return aarch64_ext_sve_asimm (self, info, code, inst);
> +    case 129:
> +      return aarch64_ext_inv_limm (self, info, code, inst);
> +    case 131:
> +      return aarch64_ext_sve_limm_mov (self, info, code, inst);
> +    case 133:
>        return aarch64_ext_sve_scale (self, info, code, inst);
>      case 143:
> -      return aarch64_ext_sve_index (self, info, code, inst);
>      case 144:
> +      return aarch64_ext_sve_shlimm (self, info, code, inst);
> +    case 145:
>      case 146:
> +      return aarch64_ext_sve_shrimm (self, info, code, inst);
> +    case 161:
> +      return aarch64_ext_sve_index (self, info, code, inst);
> +    case 162:
> +    case 164:
>        return aarch64_ext_sve_reglist (self, info, code, inst);
>      default: assert (0); abort ();
>      }
> diff --git a/opcodes/aarch64-dis.c b/opcodes/aarch64-dis.c
> index ba6befd..ed050cd 100644
> --- a/opcodes/aarch64-dis.c
> +++ b/opcodes/aarch64-dis.c
> @@ -734,32 +734,21 @@ aarch64_ext_aimm (const aarch64_operand *self ATTRIBUTE_UNUSED,
>    return 1;
>  }
>  
> -/* Decode logical immediate for e.g. ORR <Wd|WSP>, <Wn>, #<imm>.  */
> -
> -int
> -aarch64_ext_limm (const aarch64_operand *self ATTRIBUTE_UNUSED,
> -		  aarch64_opnd_info *info, const aarch64_insn code,
> -		  const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +/* Return true if VALUE is a valid logical immediate encoding, storing the
> +   decoded value in *RESULT if so.  ESIZE is the number of bytes in the
> +   decoded immediate.  */
> +static int
> +decode_limm (uint32_t esize, aarch64_insn value, int64_t *result)
>  {
>    uint64_t imm, mask;
> -  uint32_t sf;
>    uint32_t N, R, S;
>    unsigned simd_size;
> -  aarch64_insn value;
> -
> -  value = extract_fields (code, 0, 3, FLD_N, FLD_immr, FLD_imms);
> -  assert (inst->operands[0].qualifier == AARCH64_OPND_QLF_W
> -	  || inst->operands[0].qualifier == AARCH64_OPND_QLF_X);
> -  sf = aarch64_get_qualifier_esize (inst->operands[0].qualifier) != 4;
>  
>    /* value is N:immr:imms.  */
>    S = value & 0x3f;
>    R = (value >> 6) & 0x3f;
>    N = (value >> 12) & 0x1;
>  
> -  if (sf == 0 && N == 1)
> -    return 0;
> -
>    /* The immediate value is S+1 bits to 1, left rotated by SIMDsize - R
>       (in other words, right rotated by R), then replicated.  */
>    if (N != 0)
> @@ -782,6 +771,10 @@ aarch64_ext_limm (const aarch64_operand *self ATTRIBUTE_UNUSED,
>        /* Top bits are IGNORED.  */
>        R &= simd_size - 1;
>      }
> +
> +  if (simd_size > esize * 8)
> +    return 0;
> +
>    /* NOTE: if S = simd_size - 1 we get 0xf..f which is rejected.  */
>    if (S == simd_size - 1)
>      return 0;
> @@ -803,8 +796,35 @@ aarch64_ext_limm (const aarch64_operand *self ATTRIBUTE_UNUSED,
>      default: assert (0); return 0;
>      }
>  
> -  info->imm.value = sf ? imm : imm & 0xffffffff;
> +  *result = imm & ~((uint64_t) -1 << (esize * 4) << (esize * 4));
> +
> +  return 1;
> +}
> +
> +/* Decode a logical immediate for e.g. ORR <Wd|WSP>, <Wn>, #<imm>.  */
> +int
> +aarch64_ext_limm (const aarch64_operand *self,
> +		  aarch64_opnd_info *info, const aarch64_insn code,
> +		  const aarch64_inst *inst)
> +{
> +  uint32_t esize;
> +  aarch64_insn value;
> +
> +  value = extract_fields (code, 0, 3, self->fields[0], self->fields[1],
> +			  self->fields[2]);
> +  esize = aarch64_get_qualifier_esize (inst->operands[0].qualifier);
> +  return decode_limm (esize, value, &info->imm.value);
> +}
>  
> +/* Decode a logical immediate for the BIC alias of AND (etc.).  */
> +int
> +aarch64_ext_inv_limm (const aarch64_operand *self,
> +		      aarch64_opnd_info *info, const aarch64_insn code,
> +		      const aarch64_inst *inst)
> +{
> +  if (!aarch64_ext_limm (self, info, code, inst))
> +    return 0;
> +  info->imm.value = ~info->imm.value;
>    return 1;
>  }
>  
> @@ -1404,6 +1424,47 @@ aarch64_ext_sve_addr_zz_uxtw (const aarch64_operand *self,
>    return aarch64_ext_sve_addr_zz (self, info, code, AARCH64_MOD_UXTW);
>  }
>  
> +/* Finish decoding an SVE arithmetic immediate, given that INFO already
> +   has the raw field value and that the low 8 bits decode to VALUE.  */
> +static int
> +decode_sve_aimm (aarch64_opnd_info *info, int64_t value)
> +{
> +  info->shifter.kind = AARCH64_MOD_LSL;
> +  info->shifter.amount = 0;
> +  if (info->imm.value & 0x100)
> +    {
> +      if (value == 0)
> +	/* Decode 0x100 as #0, LSL #8.  */
> +	info->shifter.amount = 8;
> +      else
> +	value *= 256;
> +    }
> +  info->shifter.operator_present = (info->shifter.amount != 0);
> +  info->shifter.amount_present = (info->shifter.amount != 0);
> +  info->imm.value = value;
> +  return 1;
> +}
> +
> +/* Decode an SVE ADD/SUB immediate.  */
> +int
> +aarch64_ext_sve_aimm (const aarch64_operand *self,
> +		      aarch64_opnd_info *info, const aarch64_insn code,
> +		      const aarch64_inst *inst)
> +{
> +  return (aarch64_ext_imm (self, info, code, inst)
> +	  && decode_sve_aimm (info, (uint8_t) info->imm.value));
> +}
> +
> +/* Decode an SVE CPY/DUP immediate.  */
> +int
> +aarch64_ext_sve_asimm (const aarch64_operand *self,
> +		       aarch64_opnd_info *info, const aarch64_insn code,
> +		       const aarch64_inst *inst)
> +{
> +  return (aarch64_ext_imm (self, info, code, inst)
> +	  && decode_sve_aimm (info, (int8_t) info->imm.value));
> +}
> +
>  /* Decode Zn[MM], where MM has a 7-bit triangular encoding.  The fields
>     array specifies which field to use for Zn.  MM is encoded in the
>     concatenation of imm5 and SVE_tszh, with imm5 being the less
> @@ -1425,6 +1486,17 @@ aarch64_ext_sve_index (const aarch64_operand *self,
>    return 1;
>  }
>  
> +/* Decode a logical immediate for the MOV alias of SVE DUPM.  */
> +int
> +aarch64_ext_sve_limm_mov (const aarch64_operand *self,
> +			  aarch64_opnd_info *info, const aarch64_insn code,
> +			  const aarch64_inst *inst)
> +{
> +  int esize = aarch64_get_qualifier_esize (inst->operands[0].qualifier);
> +  return (aarch64_ext_limm (self, info, code, inst)
> +	  && aarch64_sve_dupm_mov_immediate_p (info->imm.value, esize));
> +}
> +
>  /* Decode {Zn.<T> - Zm.<T>}.  The fields array specifies which field
>     to use for Zn.  The opcode-dependent value specifies the number
>     of registers in the list.  */
> @@ -1457,6 +1529,44 @@ aarch64_ext_sve_scale (const aarch64_operand *self,
>    info->shifter.amount_present = (val != 0);
>    return 1;
>  }
> +
> +/* Return the top set bit in VALUE, which is expected to be relatively
> +   small.  */
> +static uint64_t
> +get_top_bit (uint64_t value)
> +{
> +  while ((value & -value) != value)
> +    value -= value & -value;
> +  return value;
> +}
> +
> +/* Decode an SVE shift-left immediate.  */
> +int
> +aarch64_ext_sve_shlimm (const aarch64_operand *self,
> +			aarch64_opnd_info *info, const aarch64_insn code,
> +			const aarch64_inst *inst)
> +{
> +  if (!aarch64_ext_imm (self, info, code, inst)
> +      || info->imm.value == 0)
> +    return 0;
> +
> +  info->imm.value -= get_top_bit (info->imm.value);
> +  return 1;
> +}
> +
> +/* Decode an SVE shift-right immediate.  */
> +int
> +aarch64_ext_sve_shrimm (const aarch64_operand *self,
> +			aarch64_opnd_info *info, const aarch64_insn code,
> +			const aarch64_inst *inst)
> +{
> +  if (!aarch64_ext_imm (self, info, code, inst)
> +      || info->imm.value == 0)
> +    return 0;
> +
> +  info->imm.value = get_top_bit (info->imm.value) * 2 - info->imm.value;
> +  return 1;
> +}
>  \f
>  /* Bitfields that are commonly used to encode certain operands' information
>     may be partially used as part of the base opcode in some instructions.
> diff --git a/opcodes/aarch64-dis.h b/opcodes/aarch64-dis.h
> index 5619877..10983d1 100644
> --- a/opcodes/aarch64-dis.h
> +++ b/opcodes/aarch64-dis.h
> @@ -76,6 +76,7 @@ AARCH64_DECL_OPD_EXTRACTOR (ext_fpimm);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_fbits);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_aimm);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_limm);
> +AARCH64_DECL_OPD_EXTRACTOR (ext_inv_limm);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_ft);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_addr_simple);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_addr_regoff);
> @@ -101,9 +102,14 @@ AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_zi_u5);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_zz_lsl);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_zz_sxtw);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_zz_uxtw);
> +AARCH64_DECL_OPD_EXTRACTOR (ext_sve_aimm);
> +AARCH64_DECL_OPD_EXTRACTOR (ext_sve_asimm);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_sve_index);
> +AARCH64_DECL_OPD_EXTRACTOR (ext_sve_limm_mov);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_sve_reglist);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_sve_scale);
> +AARCH64_DECL_OPD_EXTRACTOR (ext_sve_shlimm);
> +AARCH64_DECL_OPD_EXTRACTOR (ext_sve_shrimm);
>  
>  #undef AARCH64_DECL_OPD_EXTRACTOR
>  
> diff --git a/opcodes/aarch64-opc-2.c b/opcodes/aarch64-opc-2.c
> index a72f577..d86e7dc 100644
> --- a/opcodes/aarch64-opc-2.c
> +++ b/opcodes/aarch64-opc-2.c
> @@ -82,6 +82,7 @@ const struct aarch64_operand aarch64_operands[] =
>    {AARCH64_OPND_CLASS_IMMEDIATE, "BIT_NUM", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_b5, FLD_b40}, "the bit number to be tested"},
>    {AARCH64_OPND_CLASS_IMMEDIATE, "EXCEPTION", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_imm16}, "a 16-bit unsigned immediate"},
>    {AARCH64_OPND_CLASS_IMMEDIATE, "CCMP_IMM", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_imm5}, "a 5-bit unsigned immediate"},
> +  {AARCH64_OPND_CLASS_IMMEDIATE, "SIMM5", OPD_F_SEXT | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_imm5}, "a 5-bit signed immediate"},
>    {AARCH64_OPND_CLASS_IMMEDIATE, "NZCV", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_nzcv}, "a flag bit specifier giving an alternative value for each flag"},
>    {AARCH64_OPND_CLASS_IMMEDIATE, "LIMM", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_N,FLD_immr,FLD_imms}, "Logical immediate"},
>    {AARCH64_OPND_CLASS_IMMEDIATE, "AIMM", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_shift,FLD_imm12}, "a 12-bit unsigned immediate with optional left shift of 12 bits"},
> @@ -150,6 +151,11 @@ const struct aarch64_operand aarch64_operands[] =
>    {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_ZZ_LSL", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zn,FLD_SVE_Zm_16}, "an address with a vector register offset"},
>    {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_ZZ_SXTW", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zn,FLD_SVE_Zm_16}, "an address with a vector register offset"},
>    {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_ZZ_UXTW", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zn,FLD_SVE_Zm_16}, "an address with a vector register offset"},
> +  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_AIMM", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_imm9}, "a 9-bit unsigned arithmetic operand"},
> +  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_ASIMM", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_imm9}, "a 9-bit signed arithmetic operand"},
> +  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_INV_LIMM", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_N,FLD_SVE_immr,FLD_SVE_imms}, "an inverted 13-bit logical immediate"},
> +  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_LIMM", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_N,FLD_SVE_immr,FLD_SVE_imms}, "a 13-bit logical immediate"},
> +  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_LIMM_MOV", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_N,FLD_SVE_immr,FLD_SVE_imms}, "a 13-bit logical move immediate"},
>    {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_PATTERN", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_pattern}, "an enumeration value such as POW2"},
>    {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_PATTERN_SCALED", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_pattern}, "an enumeration value such as POW2"},
>    {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_PRFOP", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_prfop}, "an enumeration value such as PLDL1KEEP"},
> @@ -161,6 +167,18 @@ const struct aarch64_operand aarch64_operands[] =
>    {AARCH64_OPND_CLASS_PRED_REG, "SVE_Pm", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Pm}, "an SVE predicate register"},
>    {AARCH64_OPND_CLASS_PRED_REG, "SVE_Pn", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Pn}, "an SVE predicate register"},
>    {AARCH64_OPND_CLASS_PRED_REG, "SVE_Pt", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Pt}, "an SVE predicate register"},
> +  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_SHLIMM_PRED", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_tszh,FLD_SVE_imm5}, "a shift-left immediate operand"},
> +  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_SHLIMM_UNPRED", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_tszh,FLD_imm5}, "a shift-left immediate operand"},
> +  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_SHRIMM_PRED", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_tszh,FLD_SVE_imm5}, "a shift-right immediate operand"},
> +  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_SHRIMM_UNPRED", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_tszh,FLD_imm5}, "a shift-right immediate operand"},
> +  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_SIMM5", OPD_F_SEXT | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_imm5}, "a 5-bit signed immediate"},
> +  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_SIMM5B", OPD_F_SEXT | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_imm5b}, "a 5-bit signed immediate"},
> +  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_SIMM6", OPD_F_SEXT | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_imms}, "a 6-bit signed immediate"},
> +  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_SIMM8", OPD_F_SEXT | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_imm8}, "an 8-bit signed immediate"},
> +  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_UIMM3", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_imm3}, "a 3-bit unsigned immediate"},
> +  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_UIMM7", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_imm7}, "a 7-bit unsigned immediate"},
> +  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_UIMM8", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_imm8}, "an 8-bit unsigned immediate"},
> +  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_UIMM8_53", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_imm5,FLD_imm3}, "an 8-bit unsigned immediate"},
>    {AARCH64_OPND_CLASS_SVE_REG, "SVE_Za_5", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Za_5}, "an SVE vector register"},
>    {AARCH64_OPND_CLASS_SVE_REG, "SVE_Za_16", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Za_16}, "an SVE vector register"},
>    {AARCH64_OPND_CLASS_SVE_REG, "SVE_Zd", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zd}, "an SVE vector register"},
> diff --git a/opcodes/aarch64-opc.c b/opcodes/aarch64-opc.c
> index d0959b5..dec7e06 100644
> --- a/opcodes/aarch64-opc.c
> +++ b/opcodes/aarch64-opc.c
> @@ -264,6 +264,7 @@ const aarch64_field fields[] =
>      { 31,  1 },	/* b5: in the test bit and branch instructions.  */
>      { 19,  5 },	/* b40: in the test bit and branch instructions.  */
>      { 10,  6 },	/* scale: in the fixed-point scalar to fp converting inst.  */
> +    { 17,  1 }, /* SVE_N: SVE equivalent of N.  */
>      {  0,  4 }, /* SVE_Pd: p0-p15, bits [3,0].  */
>      { 10,  3 }, /* SVE_Pg3: p0-p7, bits [12,10].  */
>      {  5,  4 }, /* SVE_Pg4_5: p0-p15, bits [8,5].  */
> @@ -279,8 +280,16 @@ const aarch64_field fields[] =
>      { 16,  5 }, /* SVE_Zm_16: SVE vector register, bits [20,16]. */
>      {  5,  5 }, /* SVE_Zn: SVE vector register, bits [9,5].  */
>      {  0,  5 }, /* SVE_Zt: SVE vector register, bits [4,0].  */
> +    { 16,  3 }, /* SVE_imm3: 3-bit immediate field.  */
>      { 16,  4 }, /* SVE_imm4: 4-bit immediate field.  */
> +    {  5,  5 }, /* SVE_imm5: 5-bit immediate field.  */
> +    { 16,  5 }, /* SVE_imm5b: secondary 5-bit immediate field.  */
>      { 16,  6 }, /* SVE_imm6: 6-bit immediate field.  */
> +    { 14,  7 }, /* SVE_imm7: 7-bit immediate field.  */
> +    {  5,  8 }, /* SVE_imm8: 8-bit immediate field.  */
> +    {  5,  9 }, /* SVE_imm9: 9-bit immediate field.  */
> +    { 11,  6 }, /* SVE_immr: SVE equivalent of immr.  */
> +    {  5,  6 }, /* SVE_imms: SVE equivalent of imms.  */
>      { 10,  2 }, /* SVE_msz: 2-bit shift amount for ADR.  */
>      {  5,  5 }, /* SVE_pattern: vector pattern enumeration.  */
>      {  0,  4 }, /* SVE_prfop: prefetch operation for SVE PRF[BHWD].  */
> @@ -1374,9 +1383,10 @@ operand_general_constraint_met_p (const aarch64_opnd_info *opnds, int idx,
>  				  const aarch64_opcode *opcode,
>  				  aarch64_operand_error *mismatch_detail)
>  {
> -  unsigned num, modifiers;
> +  unsigned num, modifiers, shift;
>    unsigned char size;
>    int64_t imm, min_value, max_value;
> +  uint64_t uvalue, mask;
>    const aarch64_opnd_info *opnd = opnds + idx;
>    aarch64_opnd_qualifier_t qualifier = opnd->qualifier;
>  
> @@ -1977,6 +1987,10 @@ operand_general_constraint_met_p (const aarch64_opnd_info *opnds, int idx,
>  	case AARCH64_OPND_UIMM7:
>  	case AARCH64_OPND_UIMM3_OP1:
>  	case AARCH64_OPND_UIMM3_OP2:
> +	case AARCH64_OPND_SVE_UIMM3:
> +	case AARCH64_OPND_SVE_UIMM7:
> +	case AARCH64_OPND_SVE_UIMM8:
> +	case AARCH64_OPND_SVE_UIMM8_53:
>  	  size = get_operand_fields_width (get_operand_from_code (type));
>  	  assert (size < 32);
>  	  if (!value_fit_unsigned_field_p (opnd->imm.value, size))
> @@ -1987,6 +2001,22 @@ operand_general_constraint_met_p (const aarch64_opnd_info *opnds, int idx,
>  	    }
>  	  break;
>  
> +	case AARCH64_OPND_SIMM5:
> +	case AARCH64_OPND_SVE_SIMM5:
> +	case AARCH64_OPND_SVE_SIMM5B:
> +	case AARCH64_OPND_SVE_SIMM6:
> +	case AARCH64_OPND_SVE_SIMM8:
> +	  size = get_operand_fields_width (get_operand_from_code (type));
> +	  assert (size < 32);
> +	  if (!value_fit_signed_field_p (opnd->imm.value, size))
> +	    {
> +	      set_imm_out_of_range_error (mismatch_detail, idx,
> +					  -(1 << (size - 1)),
> +					  (1 << (size - 1)) - 1);
> +	      return 0;
> +	    }
> +	  break;
> +
>  	case AARCH64_OPND_WIDTH:
>  	  assert (idx > 1 && opnds[idx-1].type == AARCH64_OPND_IMM
>  		  && opnds[0].type == AARCH64_OPND_Rd);
> @@ -2001,6 +2031,7 @@ operand_general_constraint_met_p (const aarch64_opnd_info *opnds, int idx,
>  	  break;
>  
>  	case AARCH64_OPND_LIMM:
> +	case AARCH64_OPND_SVE_LIMM:
>  	  {
>  	    int esize = aarch64_get_qualifier_esize (opnds[0].qualifier);
>  	    uint64_t uimm = opnd->imm.value;
> @@ -2171,6 +2202,90 @@ operand_general_constraint_met_p (const aarch64_opnd_info *opnds, int idx,
>  	    }
>  	  break;
>  
> +	case AARCH64_OPND_SVE_AIMM:
> +	  min_value = 0;
> +	sve_aimm:
> +	  assert (opnd->shifter.kind == AARCH64_MOD_LSL);
> +	  size = aarch64_get_qualifier_esize (opnds[0].qualifier);
> +	  mask = ~((uint64_t) -1 << (size * 4) << (size * 4));
> +	  uvalue = opnd->imm.value;
> +	  shift = opnd->shifter.amount;
> +	  if (size == 1)
> +	    {
> +	      if (shift != 0)
> +		{
> +		  set_other_error (mismatch_detail, idx,
> +				   _("no shift amount allowed for"
> +				     " 8-bit constants"));
> +		  return 0;
> +		}
> +	    }
> +	  else
> +	    {
> +	      if (shift != 0 && shift != 8)
> +		{
> +		  set_other_error (mismatch_detail, idx,
> +				   _("shift amount should be 0 or 8"));
> +		  return 0;
> +		}
> +	      if (shift == 0 && (uvalue & 0xff) == 0)
> +		{
> +		  shift = 8;
> +		  uvalue = (int64_t) uvalue / 256;
> +		}
> +	    }
> +	  mask >>= shift;
> +	  if ((uvalue & mask) != uvalue && (uvalue | ~mask) != uvalue)
> +	    {
> +	      set_other_error (mismatch_detail, idx,
> +			       _("immediate too big for element size"));
> +	      return 0;
> +	    }
> +	  uvalue = (uvalue - min_value) & mask;
> +	  if (uvalue > 0xff)
> +	    {
> +	      set_other_error (mismatch_detail, idx,
> +			       _("invalid arithmetic immediate"));
> +	      return 0;
> +	    }
> +	  break;
> +
> +	case AARCH64_OPND_SVE_ASIMM:
> +	  min_value = -128;
> +	  goto sve_aimm;
> +
> +	case AARCH64_OPND_SVE_INV_LIMM:
> +	  {
> +	    int esize = aarch64_get_qualifier_esize (opnds[0].qualifier);
> +	    uint64_t uimm = ~opnd->imm.value;
> +	    if (!aarch64_logical_immediate_p (uimm, esize, NULL))
> +	      {
> +		set_other_error (mismatch_detail, idx,
> +				 _("immediate out of range"));
> +		return 0;
> +	      }
> +	  }
> +	  break;
> +
> +	case AARCH64_OPND_SVE_LIMM_MOV:
> +	  {
> +	    int esize = aarch64_get_qualifier_esize (opnds[0].qualifier);
> +	    uint64_t uimm = opnd->imm.value;
> +	    if (!aarch64_logical_immediate_p (uimm, esize, NULL))
> +	      {
> +		set_other_error (mismatch_detail, idx,
> +				 _("immediate out of range"));
> +		return 0;
> +	      }
> +	    if (!aarch64_sve_dupm_mov_immediate_p (uimm, esize))
> +	      {
> +		set_other_error (mismatch_detail, idx,
> +				 _("invalid replicated MOV immediate"));
> +		return 0;
> +	      }
> +	  }
> +	  break;
> +
>  	case AARCH64_OPND_SVE_PATTERN_SCALED:
>  	  assert (opnd->shifter.kind == AARCH64_MOD_MUL);
>  	  if (!value_in_range_p (opnd->shifter.amount, 1, 16))
> @@ -2180,6 +2295,27 @@ operand_general_constraint_met_p (const aarch64_opnd_info *opnds, int idx,
>  	    }
>  	  break;
>  
> +	case AARCH64_OPND_SVE_SHLIMM_PRED:
> +	case AARCH64_OPND_SVE_SHLIMM_UNPRED:
> +	  size = aarch64_get_qualifier_esize (opnds[idx - 1].qualifier);
> +	  if (!value_in_range_p (opnd->imm.value, 0, 8 * size - 1))
> +	    {
> +	      set_imm_out_of_range_error (mismatch_detail, idx,
> +					  0, 8 * size - 1);
> +	      return 0;
> +	    }
> +	  break;
> +
> +	case AARCH64_OPND_SVE_SHRIMM_PRED:
> +	case AARCH64_OPND_SVE_SHRIMM_UNPRED:
> +	  size = aarch64_get_qualifier_esize (opnds[idx - 1].qualifier);
> +	  if (!value_in_range_p (opnd->imm.value, 1, 8 * size))
> +	    {
> +	      set_imm_out_of_range_error (mismatch_detail, idx, 1, 8 * size);
> +	      return 0;
> +	    }
> +	  break;
> +
>  	default:
>  	  break;
>  	}
> @@ -2953,6 +3089,19 @@ aarch64_print_operand (char *buf, size_t size, bfd_vma pc,
>      case AARCH64_OPND_IMMR:
>      case AARCH64_OPND_IMMS:
>      case AARCH64_OPND_FBITS:
> +    case AARCH64_OPND_SIMM5:
> +    case AARCH64_OPND_SVE_SHLIMM_PRED:
> +    case AARCH64_OPND_SVE_SHLIMM_UNPRED:
> +    case AARCH64_OPND_SVE_SHRIMM_PRED:
> +    case AARCH64_OPND_SVE_SHRIMM_UNPRED:
> +    case AARCH64_OPND_SVE_SIMM5:
> +    case AARCH64_OPND_SVE_SIMM5B:
> +    case AARCH64_OPND_SVE_SIMM6:
> +    case AARCH64_OPND_SVE_SIMM8:
> +    case AARCH64_OPND_SVE_UIMM3:
> +    case AARCH64_OPND_SVE_UIMM7:
> +    case AARCH64_OPND_SVE_UIMM8:
> +    case AARCH64_OPND_SVE_UIMM8_53:
>        snprintf (buf, size, "#%" PRIi64, opnd->imm.value);
>        break;
>  
> @@ -3021,6 +3170,9 @@ aarch64_print_operand (char *buf, size_t size, bfd_vma pc,
>      case AARCH64_OPND_LIMM:
>      case AARCH64_OPND_AIMM:
>      case AARCH64_OPND_HALF:
> +    case AARCH64_OPND_SVE_INV_LIMM:
> +    case AARCH64_OPND_SVE_LIMM:
> +    case AARCH64_OPND_SVE_LIMM_MOV:
>        if (opnd->shifter.amount)
>  	snprintf (buf, size, "#0x%" PRIx64 ", lsl #%" PRIi64, opnd->imm.value,
>  		  opnd->shifter.amount);
> @@ -3039,6 +3191,15 @@ aarch64_print_operand (char *buf, size_t size, bfd_vma pc,
>  		  opnd->shifter.amount);
>        break;
>  
> +    case AARCH64_OPND_SVE_AIMM:
> +    case AARCH64_OPND_SVE_ASIMM:
> +      if (opnd->shifter.amount)
> +	snprintf (buf, size, "#%" PRIi64 ", lsl #%" PRIi64, opnd->imm.value,
> +		  opnd->shifter.amount);
> +      else
> +	snprintf (buf, size, "#%" PRIi64, opnd->imm.value);
> +      break;
> +
>      case AARCH64_OPND_FPIMM:
>      case AARCH64_OPND_SIMD_FPIMM:
>        switch (aarch64_get_qualifier_esize (opnds[0].qualifier))
> @@ -3967,6 +4128,33 @@ verify_ldpsw (const struct aarch64_opcode * opcode ATTRIBUTE_UNUSED,
>    return TRUE;
>  }
>  
> +/* Return true if VALUE cannot be moved into an SVE register using DUP
> +   (with any element size, not just ESIZE) and if using DUPM would
> +   therefore be OK.  ESIZE is the number of bytes in the immediate.  */
> +
> +bfd_boolean
> +aarch64_sve_dupm_mov_immediate_p (uint64_t uvalue, int esize)
> +{
> +  int64_t svalue = uvalue;
> +  uint64_t upper = (uint64_t) -1 << (esize * 4) << (esize * 4);
> +
> +  if ((uvalue & ~upper) != uvalue && (uvalue | upper) != uvalue)
> +    return FALSE;
> +  if (esize <= 4 || (uint32_t) uvalue == (uint32_t) (uvalue >> 32))
> +    {
> +      svalue = (int32_t) uvalue;
> +      if (esize <= 2 || (uint16_t) uvalue == (uint16_t) (uvalue >> 16))
> +	{
> +	  svalue = (int16_t) uvalue;
> +	  if (esize == 1 || (uint8_t) uvalue == (uint8_t) (uvalue >> 8))
> +	    return FALSE;
> +	}
> +    }
> +  if ((svalue & 0xff) == 0)
> +    svalue /= 256;
> +  return svalue < -128 || svalue >= 128;
> +}
> +
>  /* Include the opcode description table as well as the operand description
>     table.  */
>  #define VERIFIER(x) verify_##x
> diff --git a/opcodes/aarch64-opc.h b/opcodes/aarch64-opc.h
> index e823146..087376e 100644
> --- a/opcodes/aarch64-opc.h
> +++ b/opcodes/aarch64-opc.h
> @@ -91,6 +91,7 @@ enum aarch64_field_kind
>    FLD_b5,
>    FLD_b40,
>    FLD_scale,
> +  FLD_SVE_N,
>    FLD_SVE_Pd,
>    FLD_SVE_Pg3,
>    FLD_SVE_Pg4_5,
> @@ -106,8 +107,16 @@ enum aarch64_field_kind
>    FLD_SVE_Zm_16,
>    FLD_SVE_Zn,
>    FLD_SVE_Zt,
> +  FLD_SVE_imm3,
>    FLD_SVE_imm4,
> +  FLD_SVE_imm5,
> +  FLD_SVE_imm5b,
>    FLD_SVE_imm6,
> +  FLD_SVE_imm7,
> +  FLD_SVE_imm8,
> +  FLD_SVE_imm9,
> +  FLD_SVE_immr,
> +  FLD_SVE_imms,
>    FLD_SVE_msz,
>    FLD_SVE_pattern,
>    FLD_SVE_prfop,
> diff --git a/opcodes/aarch64-tbl.h b/opcodes/aarch64-tbl.h
> index ac7ccf0..d743e3b 100644
> --- a/opcodes/aarch64-tbl.h
> +++ b/opcodes/aarch64-tbl.h
> @@ -2761,6 +2761,8 @@ struct aarch64_opcode aarch64_opcode_table[] =
>        "a 16-bit unsigned immediate")					\
>      Y(IMMEDIATE, imm, "CCMP_IMM", 0, F(FLD_imm5),			\
>        "a 5-bit unsigned immediate")					\
> +    Y(IMMEDIATE, imm, "SIMM5", OPD_F_SEXT, F(FLD_imm5),			\
> +      "a 5-bit signed immediate")					\
>      Y(IMMEDIATE, imm, "NZCV", 0, F(FLD_nzcv),				\
>        "a flag bit specifier giving an alternative value for each flag")	\
>      Y(IMMEDIATE, limm, "LIMM", 0, F(FLD_N,FLD_immr,FLD_imms),		\
> @@ -2925,6 +2927,19 @@ struct aarch64_opcode aarch64_opcode_table[] =
>      Y(ADDRESS, sve_addr_zz_uxtw, "SVE_ADDR_ZZ_UXTW", 0,			\
>        F(FLD_SVE_Zn,FLD_SVE_Zm_16),					\
>        "an address with a vector register offset")			\
> +    Y(IMMEDIATE, sve_aimm, "SVE_AIMM", 0, F(FLD_SVE_imm9),		\
> +      "a 9-bit unsigned arithmetic operand")				\
> +    Y(IMMEDIATE, sve_asimm, "SVE_ASIMM", 0, F(FLD_SVE_imm9),		\
> +      "a 9-bit signed arithmetic operand")				\
> +    Y(IMMEDIATE, inv_limm, "SVE_INV_LIMM", 0,				\
> +      F(FLD_SVE_N,FLD_SVE_immr,FLD_SVE_imms),				\
> +      "an inverted 13-bit logical immediate")				\
> +    Y(IMMEDIATE, limm, "SVE_LIMM", 0,					\
> +      F(FLD_SVE_N,FLD_SVE_immr,FLD_SVE_imms),				\
> +      "a 13-bit logical immediate")					\
> +    Y(IMMEDIATE, sve_limm_mov, "SVE_LIMM_MOV", 0,			\
> +      F(FLD_SVE_N,FLD_SVE_immr,FLD_SVE_imms),				\
> +      "a 13-bit logical move immediate")				\
>      Y(IMMEDIATE, imm, "SVE_PATTERN", 0, F(FLD_SVE_pattern),		\
>        "an enumeration value such as POW2")				\
>      Y(IMMEDIATE, sve_scale, "SVE_PATTERN_SCALED", 0,			\
> @@ -2947,6 +2962,30 @@ struct aarch64_opcode aarch64_opcode_table[] =
>        "an SVE predicate register")					\
>      Y(PRED_REG, regno, "SVE_Pt", 0, F(FLD_SVE_Pt),			\
>        "an SVE predicate register")					\
> +    Y(IMMEDIATE, sve_shlimm, "SVE_SHLIMM_PRED", 0,			\
> +      F(FLD_SVE_tszh,FLD_SVE_imm5), "a shift-left immediate operand")	\
> +    Y(IMMEDIATE, sve_shlimm, "SVE_SHLIMM_UNPRED", 0,			\
> +      F(FLD_SVE_tszh,FLD_imm5), "a shift-left immediate operand")	\
> +    Y(IMMEDIATE, sve_shrimm, "SVE_SHRIMM_PRED", 0,			\
> +      F(FLD_SVE_tszh,FLD_SVE_imm5), "a shift-right immediate operand")	\
> +    Y(IMMEDIATE, sve_shrimm, "SVE_SHRIMM_UNPRED", 0,			\
> +      F(FLD_SVE_tszh,FLD_imm5), "a shift-right immediate operand")	\
> +    Y(IMMEDIATE, imm, "SVE_SIMM5", OPD_F_SEXT, F(FLD_SVE_imm5),		\
> +      "a 5-bit signed immediate")					\
> +    Y(IMMEDIATE, imm, "SVE_SIMM5B", OPD_F_SEXT, F(FLD_SVE_imm5b),	\
> +      "a 5-bit signed immediate")					\
> +    Y(IMMEDIATE, imm, "SVE_SIMM6", OPD_F_SEXT, F(FLD_SVE_imms),		\
> +      "a 6-bit signed immediate")					\
> +    Y(IMMEDIATE, imm, "SVE_SIMM8", OPD_F_SEXT, F(FLD_SVE_imm8),		\
> +      "an 8-bit signed immediate")					\
> +    Y(IMMEDIATE, imm, "SVE_UIMM3", 0, F(FLD_SVE_imm3),			\
> +      "a 3-bit unsigned immediate")					\
> +    Y(IMMEDIATE, imm, "SVE_UIMM7", 0, F(FLD_SVE_imm7),			\
> +      "a 7-bit unsigned immediate")					\
> +    Y(IMMEDIATE, imm, "SVE_UIMM8", 0, F(FLD_SVE_imm8),			\
> +      "an 8-bit unsigned immediate")					\
> +    Y(IMMEDIATE, imm, "SVE_UIMM8_53", 0, F(FLD_imm5,FLD_imm3),		\
> +      "an 8-bit unsigned immediate")					\
>      Y(SVE_REG, regno, "SVE_Za_5", 0, F(FLD_SVE_Za_5),			\
>        "an SVE vector register")						\
>      Y(SVE_REG, regno, "SVE_Za_16", 0, F(FLD_SVE_Za_16),			\
> 

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [AArch64][SVE 28/32] Add SVE FP immediate operands
  2016-08-23  9:25 ` [AArch64][SVE 28/32] Add SVE FP immediate operands Richard Sandiford
@ 2016-08-25 14:59   ` Richard Earnshaw (lists)
  0 siblings, 0 replies; 76+ messages in thread
From: Richard Earnshaw (lists) @ 2016-08-25 14:59 UTC (permalink / raw)
  To: binutils, richard.sandiford

On 23/08/16 10:25, Richard Sandiford wrote:
> This patch adds support for the new SVE floating-point immediate
> operands.  One operand uses the same 8-bit encoding as base AArch64,
> but in a different position.  The others use a single bit to select
> between two values.
> 
> One of the single-bit operands is a choice between 0.0 and 1.0, where
> 0.0 is not a valid 8-bit encoding.  I think the cleanest way of handling
> these single-bit immediates is therefore to use the IEEE float encoding
> itself as the immediate value and select between the two possible values
> when encoding and decoding.
> 
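As a rough stand-alone illustration of that scheme (this sketch is not part of the patch; the real inserters and extractors, aarch64_ins_sve_float_half_one and friends, appear in the diff further down), the operand value is simply the IEEE single-precision bit pattern, so encoding reduces to comparing against the two allowed patterns and decoding to picking one of them from the selector bit.  The constants below are the standard encodings of 0.5, 1.0 and 2.0; the helper names are invented for the example.

#include <stdint.h>
#include <stdio.h>

#define FP32_HALF 0x3f000000u	/* IEEE single-precision 0.5.  */
#define FP32_ONE  0x3f800000u	/* IEEE single-precision 1.0.  */
#define FP32_TWO  0x40000000u	/* IEEE single-precision 2.0.  */

/* Encode IMM (already validated to be one of the two legal patterns)
   as the single selector bit; HI is the pattern that sets the bit.  */
static unsigned int
encode_i1 (uint32_t imm, uint32_t hi)
{
  return imm == hi ? 1u : 0u;
}

/* Decode the selector BIT back to the IEEE bit pattern it stands for.  */
static uint32_t
decode_i1 (unsigned int bit, uint32_t lo, uint32_t hi)
{
  return bit ? hi : lo;
}

int
main (void)
{
  /* A "0.5 or 2.0" style operand with the immediate #2.0.  */
  unsigned int bit = encode_i1 (FP32_TWO, FP32_TWO);
  printf ("selector bit %u decodes back to %#x\n",
	  bit, decode_i1 (bit, FP32_HALF, FP32_TWO));
  return 0;
}

Keeping the full bit pattern as the immediate also means the printer and the constraint checker can work on ordinary IEEE values rather than on a special one-bit representation.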
> As described in the covering note for the patch that added F_STRICT,
> we get better error messages by accepting unsuffixed vector registers
> and leaving the qualifier matching code to report an error.  This means
> that we carry on parsing the other operands, and so can try to parse FP
> immediates for invalid instructions like:
> 
> 	fcpy	z0, #2.5
> 
> In this case there is no suffix to tell us whether the immediate should
> be treated as single or double precision.  Again, we get better error
> messages by picking one (arbitrary) immediate size and reporting an error
> for the missing suffix later.
> 
> OK to install?
> 
> Thanks,
> Richard
> 
> 
> include/opcode/
> 	* aarch64.h (AARCH64_OPND_SVE_FPIMM8): New aarch64_opnd.
> 	(AARCH64_OPND_SVE_I1_HALF_ONE, AARCH64_OPND_SVE_I1_HALF_TWO)
> 	(AARCH64_OPND_SVE_I1_ZERO_ONE): Likewise.
> 
> opcodes/
> 	* aarch64-tbl.h (AARCH64_OPERANDS): Add entries for the new SVE FP
> 	immediate operands.
> 	* aarch64-opc.h (FLD_SVE_i1): New aarch64_field_kind.
> 	* aarch64-opc.c (fields): Add corresponding entry.
> 	(operand_general_constraint_met_p): Handle the new SVE FP immediate
> 	operands.
> 	(aarch64_print_operand): Likewise.
> 	* aarch64-opc-2.c: Regenerate.
> 	* aarch64-asm.h (ins_sve_float_half_one, ins_sve_float_half_two)
> 	(ins_sve_float_zero_one): New inserters.
> 	* aarch64-asm.c (aarch64_ins_sve_float_half_one): New function.
> 	(aarch64_ins_sve_float_half_two): Likewise.
> 	(aarch64_ins_sve_float_zero_one): Likewise.
> 	* aarch64-asm-2.c: Regenerate.
> 	* aarch64-dis.h (ext_sve_float_half_one, ext_sve_float_half_two)
> 	(ext_sve_float_zero_one): New extractors.
> 	* aarch64-dis.c (aarch64_ext_sve_float_half_one): New function.
> 	(aarch64_ext_sve_float_half_two): Likewise.
> 	(aarch64_ext_sve_float_zero_one): Likewise.
> 	* aarch64-dis-2.c: Regenerate.
> 
> gas/
> 	* config/tc-aarch64.c (double_precision_operand_p): New function.
> 	(parse_operands): Use it to calculate the dp_p input to
> 	parse_aarch64_imm_float.  Handle the new SVE FP immediate operands.

OK.

R.

> 
> diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
> index cb39cf8..eddc6f8 100644
> --- a/gas/config/tc-aarch64.c
> +++ b/gas/config/tc-aarch64.c
> @@ -2252,6 +2252,20 @@ can_convert_double_to_float (uint64_t imm, uint32_t *fpword)
>    return TRUE;
>  }
>  
> +/* Return true if we should treat OPERAND as a double-precision
> +   floating-point operand rather than a single-precision one.  */
> +static bfd_boolean
> +double_precision_operand_p (const aarch64_opnd_info *operand)
> +{
> +  /* Check for unsuffixed SVE registers, which are allowed
> +     for LDR and STR but not in instructions that require an
> +     immediate.  We get better error messages if we arbitrarily
> +     pick one size, parse the immediate normally, and then
> +     report the match failure in the normal way.  */
> +  return (operand->qualifier == AARCH64_OPND_QLF_NIL
> +	  || aarch64_get_qualifier_esize (operand->qualifier) == 8);
> +}
> +
>  /* Parse a floating-point immediate.  Return TRUE on success and return the
>     value in *IMMED in the format of IEEE754 single-precision encoding.
>     *CCP points to the start of the string; DP_P is TRUE when the immediate
> @@ -5707,11 +5721,12 @@ parse_operands (char *str, const aarch64_opcode *opcode)
>  
>  	case AARCH64_OPND_FPIMM:
>  	case AARCH64_OPND_SIMD_FPIMM:
> +	case AARCH64_OPND_SVE_FPIMM8:
>  	  {
>  	    int qfloat;
> -	    bfd_boolean dp_p
> -	      = (aarch64_get_qualifier_esize (inst.base.operands[0].qualifier)
> -		 == 8);
> +	    bfd_boolean dp_p;
> +
> +	    dp_p = double_precision_operand_p (&inst.base.operands[0]);
>  	    if (!parse_aarch64_imm_float (&str, &qfloat, dp_p, imm_reg_type)
>  		|| !aarch64_imm_float_p (qfloat))
>  	      {
> @@ -5725,6 +5740,26 @@ parse_operands (char *str, const aarch64_opcode *opcode)
>  	  }
>  	  break;
>  
> +	case AARCH64_OPND_SVE_I1_HALF_ONE:
> +	case AARCH64_OPND_SVE_I1_HALF_TWO:
> +	case AARCH64_OPND_SVE_I1_ZERO_ONE:
> +	  {
> +	    int qfloat;
> +	    bfd_boolean dp_p;
> +
> +	    dp_p = double_precision_operand_p (&inst.base.operands[0]);
> +	    if (!parse_aarch64_imm_float (&str, &qfloat, dp_p, imm_reg_type))
> +	      {
> +		if (!error_p ())
> +		  set_fatal_syntax_error (_("invalid floating-point"
> +					    " constant"));
> +		goto failure;
> +	      }
> +	    inst.base.operands[i].imm.value = qfloat;
> +	    inst.base.operands[i].imm.is_fp = 1;
> +	  }
> +	  break;
> +
>  	case AARCH64_OPND_LIMM:
>  	  po_misc_or_fail (parse_shifter_operand (&str, info,
>  						  SHIFTED_LOGIC_IMM));
> diff --git a/include/opcode/aarch64.h b/include/opcode/aarch64.h
> index 36e95b4..9e7f5b5 100644
> --- a/include/opcode/aarch64.h
> +++ b/include/opcode/aarch64.h
> @@ -292,6 +292,10 @@ enum aarch64_opnd
>    AARCH64_OPND_SVE_ADDR_ZZ_UXTW,    /* SVE [Zn.<T>, Zm,<T>, UXTW #<msz>].  */
>    AARCH64_OPND_SVE_AIMM,	/* SVE unsigned arithmetic immediate.  */
>    AARCH64_OPND_SVE_ASIMM,	/* SVE signed arithmetic immediate.  */
> +  AARCH64_OPND_SVE_FPIMM8,	/* SVE 8-bit floating-point immediate.  */
> +  AARCH64_OPND_SVE_I1_HALF_ONE,	/* SVE choice between 0.5 and 1.0.  */
> +  AARCH64_OPND_SVE_I1_HALF_TWO,	/* SVE choice between 0.5 and 2.0.  */
> +  AARCH64_OPND_SVE_I1_ZERO_ONE,	/* SVE choice between 0.0 and 1.0.  */
>    AARCH64_OPND_SVE_INV_LIMM,	/* SVE inverted logical immediate.  */
>    AARCH64_OPND_SVE_LIMM,	/* SVE logical immediate.  */
>    AARCH64_OPND_SVE_LIMM_MOV,	/* SVE logical immediate for MOV.  */
> diff --git a/opcodes/aarch64-asm-2.c b/opcodes/aarch64-asm-2.c
> index 491ea53..d9d1981 100644
> --- a/opcodes/aarch64-asm-2.c
> +++ b/opcodes/aarch64-asm-2.c
> @@ -480,21 +480,21 @@ aarch64_insert_operand (const aarch64_operand *self,
>      case 27:
>      case 35:
>      case 36:
> -    case 135:
> -    case 136:
> -    case 137:
> -    case 138:
>      case 139:
>      case 140:
>      case 141:
>      case 142:
> -    case 155:
> -    case 156:
> -    case 157:
> -    case 158:
> +    case 143:
> +    case 144:
> +    case 145:
> +    case 146:
>      case 159:
>      case 160:
> +    case 161:
> +    case 162:
>      case 163:
> +    case 164:
> +    case 167:
>        return aarch64_ins_regno (self, info, code, inst);
>      case 12:
>        return aarch64_ins_reg_extended (self, info, code, inst);
> @@ -532,16 +532,16 @@ aarch64_insert_operand (const aarch64_operand *self,
>      case 69:
>      case 70:
>      case 71:
> -    case 132:
> -    case 134:
> -    case 147:
> -    case 148:
> -    case 149:
> -    case 150:
> +    case 136:
> +    case 138:
>      case 151:
>      case 152:
>      case 153:
>      case 154:
> +    case 155:
> +    case 156:
> +    case 157:
> +    case 158:
>        return aarch64_ins_imm (self, info, code, inst);
>      case 38:
>      case 39:
> @@ -551,9 +551,10 @@ aarch64_insert_operand (const aarch64_operand *self,
>      case 42:
>        return aarch64_ins_advsimd_imm_modified (self, info, code, inst);
>      case 46:
> +    case 129:
>        return aarch64_ins_fpimm (self, info, code, inst);
>      case 60:
> -    case 130:
> +    case 134:
>        return aarch64_ins_limm (self, info, code, inst);
>      case 61:
>        return aarch64_ins_aimm (self, info, code, inst);
> @@ -644,22 +645,28 @@ aarch64_insert_operand (const aarch64_operand *self,
>        return aarch64_ins_sve_aimm (self, info, code, inst);
>      case 128:
>        return aarch64_ins_sve_asimm (self, info, code, inst);
> -    case 129:
> -      return aarch64_ins_inv_limm (self, info, code, inst);
> +    case 130:
> +      return aarch64_ins_sve_float_half_one (self, info, code, inst);
>      case 131:
> -      return aarch64_ins_sve_limm_mov (self, info, code, inst);
> +      return aarch64_ins_sve_float_half_two (self, info, code, inst);
> +    case 132:
> +      return aarch64_ins_sve_float_zero_one (self, info, code, inst);
>      case 133:
> +      return aarch64_ins_inv_limm (self, info, code, inst);
> +    case 135:
> +      return aarch64_ins_sve_limm_mov (self, info, code, inst);
> +    case 137:
>        return aarch64_ins_sve_scale (self, info, code, inst);
> -    case 143:
> -    case 144:
> +    case 147:
> +    case 148:
>        return aarch64_ins_sve_shlimm (self, info, code, inst);
> -    case 145:
> -    case 146:
> +    case 149:
> +    case 150:
>        return aarch64_ins_sve_shrimm (self, info, code, inst);
> -    case 161:
> +    case 165:
>        return aarch64_ins_sve_index (self, info, code, inst);
> -    case 162:
> -    case 164:
> +    case 166:
> +    case 168:
>        return aarch64_ins_sve_reglist (self, info, code, inst);
>      default: assert (0); abort ();
>      }
> diff --git a/opcodes/aarch64-asm.c b/opcodes/aarch64-asm.c
> index 61d0d95..fd356f4 100644
> --- a/opcodes/aarch64-asm.c
> +++ b/opcodes/aarch64-asm.c
> @@ -1028,6 +1028,51 @@ aarch64_ins_sve_shrimm (const aarch64_operand *self,
>    return NULL;
>  }
>  
> +/* Encode a single-bit immediate that selects between #0.5 and #1.0.
> +   The fields array specifies which field to use.  */
> +const char *
> +aarch64_ins_sve_float_half_one (const aarch64_operand *self,
> +				const aarch64_opnd_info *info,
> +				aarch64_insn *code,
> +				const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  if (info->imm.value == 0x3f000000)
> +    insert_field (self->fields[0], code, 0, 0);
> +  else
> +    insert_field (self->fields[0], code, 1, 0);
> +  return NULL;
> +}
> +
> +/* Encode a single-bit immediate that selects between #0.5 and #2.0.
> +   The fields array specifies which field to use.  */
> +const char *
> +aarch64_ins_sve_float_half_two (const aarch64_operand *self,
> +				const aarch64_opnd_info *info,
> +				aarch64_insn *code,
> +				const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  if (info->imm.value == 0x3f000000)
> +    insert_field (self->fields[0], code, 0, 0);
> +  else
> +    insert_field (self->fields[0], code, 1, 0);
> +  return NULL;
> +}
> +
> +/* Encode a single-bit immediate that selects between #0.0 and #1.0.
> +   The fields array specifies which field to use.  */
> +const char *
> +aarch64_ins_sve_float_zero_one (const aarch64_operand *self,
> +				const aarch64_opnd_info *info,
> +				aarch64_insn *code,
> +				const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  if (info->imm.value == 0)
> +    insert_field (self->fields[0], code, 0, 0);
> +  else
> +    insert_field (self->fields[0], code, 1, 0);
> +  return NULL;
> +}
> +
>  /* Miscellaneous encoding functions.  */
>  
>  /* Encode size[0], i.e. bit 22, for
> diff --git a/opcodes/aarch64-asm.h b/opcodes/aarch64-asm.h
> index bbd320e..0cce71c 100644
> --- a/opcodes/aarch64-asm.h
> +++ b/opcodes/aarch64-asm.h
> @@ -82,6 +82,9 @@ AARCH64_DECL_OPD_INSERTER (ins_sve_addr_zz_sxtw);
>  AARCH64_DECL_OPD_INSERTER (ins_sve_addr_zz_uxtw);
>  AARCH64_DECL_OPD_INSERTER (ins_sve_aimm);
>  AARCH64_DECL_OPD_INSERTER (ins_sve_asimm);
> +AARCH64_DECL_OPD_INSERTER (ins_sve_float_half_one);
> +AARCH64_DECL_OPD_INSERTER (ins_sve_float_half_two);
> +AARCH64_DECL_OPD_INSERTER (ins_sve_float_zero_one);
>  AARCH64_DECL_OPD_INSERTER (ins_sve_index);
>  AARCH64_DECL_OPD_INSERTER (ins_sve_limm_mov);
>  AARCH64_DECL_OPD_INSERTER (ins_sve_reglist);
> diff --git a/opcodes/aarch64-dis-2.c b/opcodes/aarch64-dis-2.c
> index 4527456..110cf2e 100644
> --- a/opcodes/aarch64-dis-2.c
> +++ b/opcodes/aarch64-dis-2.c
> @@ -10426,21 +10426,21 @@ aarch64_extract_operand (const aarch64_operand *self,
>      case 27:
>      case 35:
>      case 36:
> -    case 135:
> -    case 136:
> -    case 137:
> -    case 138:
>      case 139:
>      case 140:
>      case 141:
>      case 142:
> -    case 155:
> -    case 156:
> -    case 157:
> -    case 158:
> +    case 143:
> +    case 144:
> +    case 145:
> +    case 146:
>      case 159:
>      case 160:
> +    case 161:
> +    case 162:
>      case 163:
> +    case 164:
> +    case 167:
>        return aarch64_ext_regno (self, info, code, inst);
>      case 8:
>        return aarch64_ext_regrt_sysins (self, info, code, inst);
> @@ -10483,16 +10483,16 @@ aarch64_extract_operand (const aarch64_operand *self,
>      case 69:
>      case 70:
>      case 71:
> -    case 132:
> -    case 134:
> -    case 147:
> -    case 148:
> -    case 149:
> -    case 150:
> +    case 136:
> +    case 138:
>      case 151:
>      case 152:
>      case 153:
>      case 154:
> +    case 155:
> +    case 156:
> +    case 157:
> +    case 158:
>        return aarch64_ext_imm (self, info, code, inst);
>      case 38:
>      case 39:
> @@ -10504,9 +10504,10 @@ aarch64_extract_operand (const aarch64_operand *self,
>      case 43:
>        return aarch64_ext_shll_imm (self, info, code, inst);
>      case 46:
> +    case 129:
>        return aarch64_ext_fpimm (self, info, code, inst);
>      case 60:
> -    case 130:
> +    case 134:
>        return aarch64_ext_limm (self, info, code, inst);
>      case 61:
>        return aarch64_ext_aimm (self, info, code, inst);
> @@ -10597,22 +10598,28 @@ aarch64_extract_operand (const aarch64_operand *self,
>        return aarch64_ext_sve_aimm (self, info, code, inst);
>      case 128:
>        return aarch64_ext_sve_asimm (self, info, code, inst);
> -    case 129:
> -      return aarch64_ext_inv_limm (self, info, code, inst);
> +    case 130:
> +      return aarch64_ext_sve_float_half_one (self, info, code, inst);
>      case 131:
> -      return aarch64_ext_sve_limm_mov (self, info, code, inst);
> +      return aarch64_ext_sve_float_half_two (self, info, code, inst);
> +    case 132:
> +      return aarch64_ext_sve_float_zero_one (self, info, code, inst);
>      case 133:
> +      return aarch64_ext_inv_limm (self, info, code, inst);
> +    case 135:
> +      return aarch64_ext_sve_limm_mov (self, info, code, inst);
> +    case 137:
>        return aarch64_ext_sve_scale (self, info, code, inst);
> -    case 143:
> -    case 144:
> +    case 147:
> +    case 148:
>        return aarch64_ext_sve_shlimm (self, info, code, inst);
> -    case 145:
> -    case 146:
> +    case 149:
> +    case 150:
>        return aarch64_ext_sve_shrimm (self, info, code, inst);
> -    case 161:
> +    case 165:
>        return aarch64_ext_sve_index (self, info, code, inst);
> -    case 162:
> -    case 164:
> +    case 166:
> +    case 168:
>        return aarch64_ext_sve_reglist (self, info, code, inst);
>      default: assert (0); abort ();
>      }
> diff --git a/opcodes/aarch64-dis.c b/opcodes/aarch64-dis.c
> index ed050cd..385286c 100644
> --- a/opcodes/aarch64-dis.c
> +++ b/opcodes/aarch64-dis.c
> @@ -1465,6 +1465,51 @@ aarch64_ext_sve_asimm (const aarch64_operand *self,
>  	  && decode_sve_aimm (info, (int8_t) info->imm.value));
>  }
>  
> +/* Decode a single-bit immediate that selects between #0.5 and #1.0.
> +   The fields array specifies which field to use.  */
> +int
> +aarch64_ext_sve_float_half_one (const aarch64_operand *self,
> +				aarch64_opnd_info *info, aarch64_insn code,
> +				const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  if (extract_field (self->fields[0], code, 0))
> +    info->imm.value = 0x3f800000;
> +  else
> +    info->imm.value = 0x3f000000;
> +  info->imm.is_fp = TRUE;
> +  return 1;
> +}
> +
> +/* Decode a single-bit immediate that selects between #0.5 and #2.0.
> +   The fields array specifies which field to use.  */
> +int
> +aarch64_ext_sve_float_half_two (const aarch64_operand *self,
> +				aarch64_opnd_info *info, aarch64_insn code,
> +				const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  if (extract_field (self->fields[0], code, 0))
> +    info->imm.value = 0x40000000;
> +  else
> +    info->imm.value = 0x3f000000;
> +  info->imm.is_fp = TRUE;
> +  return 1;
> +}
> +
> +/* Decode a single-bit immediate that selects between #0.0 and #1.0.
> +   The fields array specifies which field to use.  */
> +int
> +aarch64_ext_sve_float_zero_one (const aarch64_operand *self,
> +				aarch64_opnd_info *info, aarch64_insn code,
> +				const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  if (extract_field (self->fields[0], code, 0))
> +    info->imm.value = 0x3f800000;
> +  else
> +    info->imm.value = 0x0;
> +  info->imm.is_fp = TRUE;
> +  return 1;
> +}
> +
>  /* Decode Zn[MM], where MM has a 7-bit triangular encoding.  The fields
>     array specifies which field to use for Zn.  MM is encoded in the
>     concatenation of imm5 and SVE_tszh, with imm5 being the less
> diff --git a/opcodes/aarch64-dis.h b/opcodes/aarch64-dis.h
> index 10983d1..474bc45 100644
> --- a/opcodes/aarch64-dis.h
> +++ b/opcodes/aarch64-dis.h
> @@ -104,6 +104,9 @@ AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_zz_sxtw);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_zz_uxtw);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_sve_aimm);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_sve_asimm);
> +AARCH64_DECL_OPD_EXTRACTOR (ext_sve_float_half_one);
> +AARCH64_DECL_OPD_EXTRACTOR (ext_sve_float_half_two);
> +AARCH64_DECL_OPD_EXTRACTOR (ext_sve_float_zero_one);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_sve_index);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_sve_limm_mov);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_sve_reglist);
> diff --git a/opcodes/aarch64-opc-2.c b/opcodes/aarch64-opc-2.c
> index d86e7dc..58c3aed 100644
> --- a/opcodes/aarch64-opc-2.c
> +++ b/opcodes/aarch64-opc-2.c
> @@ -153,6 +153,10 @@ const struct aarch64_operand aarch64_operands[] =
>    {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_ZZ_UXTW", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zn,FLD_SVE_Zm_16}, "an address with a vector register offset"},
>    {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_AIMM", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_imm9}, "a 9-bit unsigned arithmetic operand"},
>    {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_ASIMM", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_imm9}, "a 9-bit signed arithmetic operand"},
> +  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_FPIMM8", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_imm8}, "an 8-bit floating-point immediate"},
> +  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_I1_HALF_ONE", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_i1}, "either 0.5 or 1.0"},
> +  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_I1_HALF_TWO", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_i1}, "either 0.5 or 2.0"},
> +  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_I1_ZERO_ONE", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_i1}, "either 0.0 or 1.0"},
>    {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_INV_LIMM", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_N,FLD_SVE_immr,FLD_SVE_imms}, "an inverted 13-bit logical immediate"},
>    {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_LIMM", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_N,FLD_SVE_immr,FLD_SVE_imms}, "a 13-bit logical immediate"},
>    {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_LIMM_MOV", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_N,FLD_SVE_immr,FLD_SVE_imms}, "a 13-bit logical move immediate"},
> diff --git a/opcodes/aarch64-opc.c b/opcodes/aarch64-opc.c
> index dec7e06..3b0279c 100644
> --- a/opcodes/aarch64-opc.c
> +++ b/opcodes/aarch64-opc.c
> @@ -280,6 +280,7 @@ const aarch64_field fields[] =
>      { 16,  5 }, /* SVE_Zm_16: SVE vector register, bits [20,16]. */
>      {  5,  5 }, /* SVE_Zn: SVE vector register, bits [9,5].  */
>      {  0,  5 }, /* SVE_Zt: SVE vector register, bits [4,0].  */
> +    {  5,  1 }, /* SVE_i1: single-bit immediate.  */
>      { 16,  3 }, /* SVE_imm3: 3-bit immediate field.  */
>      { 16,  4 }, /* SVE_imm4: 4-bit immediate field.  */
>      {  5,  5 }, /* SVE_imm5: 5-bit immediate field.  */
> @@ -2178,6 +2179,7 @@ operand_general_constraint_met_p (const aarch64_opnd_info *opnds, int idx,
>  
>  	case AARCH64_OPND_FPIMM:
>  	case AARCH64_OPND_SIMD_FPIMM:
> +	case AARCH64_OPND_SVE_FPIMM8:
>  	  if (opnd->imm.is_fp == 0)
>  	    {
>  	      set_other_error (mismatch_detail, idx,
> @@ -2254,6 +2256,36 @@ operand_general_constraint_met_p (const aarch64_opnd_info *opnds, int idx,
>  	  min_value = -128;
>  	  goto sve_aimm;
>  
> +	case AARCH64_OPND_SVE_I1_HALF_ONE:
> +	  assert (opnd->imm.is_fp);
> +	  if (opnd->imm.value != 0x3f000000 && opnd->imm.value != 0x3f800000)
> +	    {
> +	      set_other_error (mismatch_detail, idx,
> +			       _("floating-point value must be 0.5 or 1.0"));
> +	      return 0;
> +	    }
> +	  break;
> +
> +	case AARCH64_OPND_SVE_I1_HALF_TWO:
> +	  assert (opnd->imm.is_fp);
> +	  if (opnd->imm.value != 0x3f000000 && opnd->imm.value != 0x40000000)
> +	    {
> +	      set_other_error (mismatch_detail, idx,
> +			       _("floating-point value must be 0.5 or 2.0"));
> +	      return 0;
> +	    }
> +	  break;
> +
> +	case AARCH64_OPND_SVE_I1_ZERO_ONE:
> +	  assert (opnd->imm.is_fp);
> +	  if (opnd->imm.value != 0 && opnd->imm.value != 0x3f800000)
> +	    {
> +	      set_other_error (mismatch_detail, idx,
> +			       _("floating-point value must be 0.0 or 1.0"));
> +	      return 0;
> +	    }
> +	  break;
> +
>  	case AARCH64_OPND_SVE_INV_LIMM:
>  	  {
>  	    int esize = aarch64_get_qualifier_esize (opnds[0].qualifier);
> @@ -3105,6 +3137,16 @@ aarch64_print_operand (char *buf, size_t size, bfd_vma pc,
>        snprintf (buf, size, "#%" PRIi64, opnd->imm.value);
>        break;
>  
> +    case AARCH64_OPND_SVE_I1_HALF_ONE:
> +    case AARCH64_OPND_SVE_I1_HALF_TWO:
> +    case AARCH64_OPND_SVE_I1_ZERO_ONE:
> +      {
> +	single_conv_t c;
> +	c.i = opnd->imm.value;
> +	snprintf (buf, size, "#%.1f", c.f);
> +	break;
> +      }
> +
>      case AARCH64_OPND_SVE_PATTERN:
>        if (optional_operand_p (opcode, idx)
>  	  && opnd->imm.value == get_optional_operand_default_value (opcode))
> @@ -3202,6 +3244,7 @@ aarch64_print_operand (char *buf, size_t size, bfd_vma pc,
>  
>      case AARCH64_OPND_FPIMM:
>      case AARCH64_OPND_SIMD_FPIMM:
> +    case AARCH64_OPND_SVE_FPIMM8:
>        switch (aarch64_get_qualifier_esize (opnds[0].qualifier))
>  	{
>  	case 2:	/* e.g. FMOV <Hd>, #<imm>.  */
> diff --git a/opcodes/aarch64-opc.h b/opcodes/aarch64-opc.h
> index 087376e..6c67786 100644
> --- a/opcodes/aarch64-opc.h
> +++ b/opcodes/aarch64-opc.h
> @@ -107,6 +107,7 @@ enum aarch64_field_kind
>    FLD_SVE_Zm_16,
>    FLD_SVE_Zn,
>    FLD_SVE_Zt,
> +  FLD_SVE_i1,
>    FLD_SVE_imm3,
>    FLD_SVE_imm4,
>    FLD_SVE_imm5,
> diff --git a/opcodes/aarch64-tbl.h b/opcodes/aarch64-tbl.h
> index d743e3b..562eea7 100644
> --- a/opcodes/aarch64-tbl.h
> +++ b/opcodes/aarch64-tbl.h
> @@ -2931,6 +2931,14 @@ struct aarch64_opcode aarch64_opcode_table[] =
>        "a 9-bit unsigned arithmetic operand")				\
>      Y(IMMEDIATE, sve_asimm, "SVE_ASIMM", 0, F(FLD_SVE_imm9),		\
>        "a 9-bit signed arithmetic operand")				\
> +    Y(IMMEDIATE, fpimm, "SVE_FPIMM8", 0, F(FLD_SVE_imm8),		\
> +      "an 8-bit floating-point immediate")				\
> +    Y(IMMEDIATE, sve_float_half_one, "SVE_I1_HALF_ONE", 0,		\
> +      F(FLD_SVE_i1), "either 0.5 or 1.0")				\
> +    Y(IMMEDIATE, sve_float_half_two, "SVE_I1_HALF_TWO", 0,		\
> +      F(FLD_SVE_i1), "either 0.5 or 2.0")				\
> +    Y(IMMEDIATE, sve_float_zero_one, "SVE_I1_ZERO_ONE", 0,		\
> +      F(FLD_SVE_i1), "either 0.0 or 1.0")				\
>      Y(IMMEDIATE, inv_limm, "SVE_INV_LIMM", 0,				\
>        F(FLD_SVE_N,FLD_SVE_immr,FLD_SVE_imms),				\
>        "an inverted 13-bit logical immediate")				\
> 

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [AArch64][SVE 29/32] Add new SVE core & FP register operands
  2016-08-23  9:25 ` [AArch64][SVE 29/32] Add new SVE core & FP register operands Richard Sandiford
@ 2016-08-25 15:01   ` Richard Earnshaw (lists)
  0 siblings, 0 replies; 76+ messages in thread
From: Richard Earnshaw (lists) @ 2016-08-25 15:01 UTC (permalink / raw)
  To: binutils, richard.sandiford

On 23/08/16 10:25, Richard Sandiford wrote:
> SVE uses some new fields to store W, X and scalar FP registers.
> This patch adds corresponding operands.
> 
> OK to install?
> 
> Thanks,
> Richard
> 
> 
> include/opcode/
> 	* aarch64.h (AARCH64_OPND_SVE_Rm): New aarch64_opnd.
> 	(AARCH64_OPND_SVE_Rn_SP, AARCH64_OPND_SVE_VZn, AARCH64_OPND_SVE_Vd)
> 	(AARCH64_OPND_SVE_Vm, AARCH64_OPND_SVE_Vn): Likewise.
> 
> opcodes/
> 	* aarch64-tbl.h (AARCH64_OPERANDS): Add entries for the new SVE core
> 	and FP register operands.
> 	* aarch64-opc.h (FLD_SVE_Rm, FLD_SVE_Rn, FLD_SVE_Vd, FLD_SVE_Vm)
> 	(FLD_SVE_Vn): New aarch64_field_kinds.
> 	* aarch64-opc.c (fields): Add corresponding entries.
> 	(aarch64_print_operand): Handle the new SVE core and FP register
> 	operands.
> 	* aarch64-opc-2.c: Regenerate.
> 	* aarch64-asm-2.c: Likewise.
> 	* aarch64-dis-2.c: Likewise.
> 
> gas/
> 	* config/tc-aarch64.c (parse_operands): Handle the new SVE core
> 	and FP register operands.

OK.

R.

> 
> diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
> index eddc6f8..15652fa 100644
> --- a/gas/config/tc-aarch64.c
> +++ b/gas/config/tc-aarch64.c
> @@ -5344,11 +5344,13 @@ parse_operands (char *str, const aarch64_opcode *opcode)
>  	case AARCH64_OPND_Ra:
>  	case AARCH64_OPND_Rt_SYS:
>  	case AARCH64_OPND_PAIRREG:
> +	case AARCH64_OPND_SVE_Rm:
>  	  po_int_reg_or_fail (FALSE, TRUE);
>  	  break;
>  
>  	case AARCH64_OPND_Rd_SP:
>  	case AARCH64_OPND_Rn_SP:
> +	case AARCH64_OPND_SVE_Rn_SP:
>  	  po_int_reg_or_fail (TRUE, FALSE);
>  	  break;
>  
> @@ -5380,6 +5382,10 @@ parse_operands (char *str, const aarch64_opcode *opcode)
>  	case AARCH64_OPND_Sd:
>  	case AARCH64_OPND_Sn:
>  	case AARCH64_OPND_Sm:
> +	case AARCH64_OPND_SVE_VZn:
> +	case AARCH64_OPND_SVE_Vd:
> +	case AARCH64_OPND_SVE_Vm:
> +	case AARCH64_OPND_SVE_Vn:
>  	  val = aarch64_reg_parse (&str, REG_TYPE_BHSDQ, &rtype, NULL);
>  	  if (val == PARSE_FAIL)
>  	    {
> diff --git a/include/opcode/aarch64.h b/include/opcode/aarch64.h
> index 9e7f5b5..8d3fb21 100644
> --- a/include/opcode/aarch64.h
> +++ b/include/opcode/aarch64.h
> @@ -310,6 +310,8 @@ enum aarch64_opnd
>    AARCH64_OPND_SVE_Pm,		/* SVE p0-p15 in Pm.  */
>    AARCH64_OPND_SVE_Pn,		/* SVE p0-p15 in Pn.  */
>    AARCH64_OPND_SVE_Pt,		/* SVE p0-p15 in Pt.  */
> +  AARCH64_OPND_SVE_Rm,		/* Integer Rm or ZR, alt. SVE position.  */
> +  AARCH64_OPND_SVE_Rn_SP,	/* Integer Rn or SP, alt. SVE position.  */
>    AARCH64_OPND_SVE_SHLIMM_PRED,	  /* SVE shift left amount (predicated).  */
>    AARCH64_OPND_SVE_SHLIMM_UNPRED, /* SVE shift left amount (unpredicated).  */
>    AARCH64_OPND_SVE_SHRIMM_PRED,	  /* SVE shift right amount (predicated).  */
> @@ -322,6 +324,10 @@ enum aarch64_opnd
>    AARCH64_OPND_SVE_UIMM7,	/* SVE unsigned 7-bit immediate.  */
>    AARCH64_OPND_SVE_UIMM8,	/* SVE unsigned 8-bit immediate.  */
>    AARCH64_OPND_SVE_UIMM8_53,	/* SVE split unsigned 8-bit immediate.  */
> +  AARCH64_OPND_SVE_VZn,		/* Scalar SIMD&FP register in Zn field.  */
> +  AARCH64_OPND_SVE_Vd,		/* Scalar SIMD&FP register in Vd.  */
> +  AARCH64_OPND_SVE_Vm,		/* Scalar SIMD&FP register in Vm.  */
> +  AARCH64_OPND_SVE_Vn,		/* Scalar SIMD&FP register in Vn.  */
>    AARCH64_OPND_SVE_Za_5,	/* SVE vector register in Za, bits [9,5].  */
>    AARCH64_OPND_SVE_Za_16,	/* SVE vector register in Za, bits [20,16].  */
>    AARCH64_OPND_SVE_Zd,		/* SVE vector register in Zd.  */
> diff --git a/opcodes/aarch64-asm-2.c b/opcodes/aarch64-asm-2.c
> index d9d1981..5dd6a81 100644
> --- a/opcodes/aarch64-asm-2.c
> +++ b/opcodes/aarch64-asm-2.c
> @@ -488,13 +488,19 @@ aarch64_insert_operand (const aarch64_operand *self,
>      case 144:
>      case 145:
>      case 146:
> -    case 159:
> -    case 160:
> +    case 147:
> +    case 148:
>      case 161:
>      case 162:
>      case 163:
>      case 164:
> +    case 165:
> +    case 166:
>      case 167:
> +    case 168:
> +    case 169:
> +    case 170:
> +    case 173:
>        return aarch64_ins_regno (self, info, code, inst);
>      case 12:
>        return aarch64_ins_reg_extended (self, info, code, inst);
> @@ -534,14 +540,14 @@ aarch64_insert_operand (const aarch64_operand *self,
>      case 71:
>      case 136:
>      case 138:
> -    case 151:
> -    case 152:
>      case 153:
>      case 154:
>      case 155:
>      case 156:
>      case 157:
>      case 158:
> +    case 159:
> +    case 160:
>        return aarch64_ins_imm (self, info, code, inst);
>      case 38:
>      case 39:
> @@ -657,16 +663,16 @@ aarch64_insert_operand (const aarch64_operand *self,
>        return aarch64_ins_sve_limm_mov (self, info, code, inst);
>      case 137:
>        return aarch64_ins_sve_scale (self, info, code, inst);
> -    case 147:
> -    case 148:
> -      return aarch64_ins_sve_shlimm (self, info, code, inst);
>      case 149:
>      case 150:
> +      return aarch64_ins_sve_shlimm (self, info, code, inst);
> +    case 151:
> +    case 152:
>        return aarch64_ins_sve_shrimm (self, info, code, inst);
> -    case 165:
> +    case 171:
>        return aarch64_ins_sve_index (self, info, code, inst);
> -    case 166:
> -    case 168:
> +    case 172:
> +    case 174:
>        return aarch64_ins_sve_reglist (self, info, code, inst);
>      default: assert (0); abort ();
>      }
> diff --git a/opcodes/aarch64-dis-2.c b/opcodes/aarch64-dis-2.c
> index 110cf2e..c3bcfdb 100644
> --- a/opcodes/aarch64-dis-2.c
> +++ b/opcodes/aarch64-dis-2.c
> @@ -10434,13 +10434,19 @@ aarch64_extract_operand (const aarch64_operand *self,
>      case 144:
>      case 145:
>      case 146:
> -    case 159:
> -    case 160:
> +    case 147:
> +    case 148:
>      case 161:
>      case 162:
>      case 163:
>      case 164:
> +    case 165:
> +    case 166:
>      case 167:
> +    case 168:
> +    case 169:
> +    case 170:
> +    case 173:
>        return aarch64_ext_regno (self, info, code, inst);
>      case 8:
>        return aarch64_ext_regrt_sysins (self, info, code, inst);
> @@ -10485,14 +10491,14 @@ aarch64_extract_operand (const aarch64_operand *self,
>      case 71:
>      case 136:
>      case 138:
> -    case 151:
> -    case 152:
>      case 153:
>      case 154:
>      case 155:
>      case 156:
>      case 157:
>      case 158:
> +    case 159:
> +    case 160:
>        return aarch64_ext_imm (self, info, code, inst);
>      case 38:
>      case 39:
> @@ -10610,16 +10616,16 @@ aarch64_extract_operand (const aarch64_operand *self,
>        return aarch64_ext_sve_limm_mov (self, info, code, inst);
>      case 137:
>        return aarch64_ext_sve_scale (self, info, code, inst);
> -    case 147:
> -    case 148:
> -      return aarch64_ext_sve_shlimm (self, info, code, inst);
>      case 149:
>      case 150:
> +      return aarch64_ext_sve_shlimm (self, info, code, inst);
> +    case 151:
> +    case 152:
>        return aarch64_ext_sve_shrimm (self, info, code, inst);
> -    case 165:
> +    case 171:
>        return aarch64_ext_sve_index (self, info, code, inst);
> -    case 166:
> -    case 168:
> +    case 172:
> +    case 174:
>        return aarch64_ext_sve_reglist (self, info, code, inst);
>      default: assert (0); abort ();
>      }
> diff --git a/opcodes/aarch64-opc-2.c b/opcodes/aarch64-opc-2.c
> index 58c3aed..6028be4 100644
> --- a/opcodes/aarch64-opc-2.c
> +++ b/opcodes/aarch64-opc-2.c
> @@ -171,6 +171,8 @@ const struct aarch64_operand aarch64_operands[] =
>    {AARCH64_OPND_CLASS_PRED_REG, "SVE_Pm", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Pm}, "an SVE predicate register"},
>    {AARCH64_OPND_CLASS_PRED_REG, "SVE_Pn", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Pn}, "an SVE predicate register"},
>    {AARCH64_OPND_CLASS_PRED_REG, "SVE_Pt", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Pt}, "an SVE predicate register"},
> +  {AARCH64_OPND_CLASS_INT_REG, "SVE_Rm", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Rm}, "an integer register or zero"},
> +  {AARCH64_OPND_CLASS_INT_REG, "SVE_Rn_SP", OPD_F_MAYBE_SP | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Rn}, "an integer register or SP"},
>    {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_SHLIMM_PRED", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_tszh,FLD_SVE_imm5}, "a shift-left immediate operand"},
>    {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_SHLIMM_UNPRED", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_tszh,FLD_imm5}, "a shift-left immediate operand"},
>    {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_SHRIMM_PRED", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_tszh,FLD_SVE_imm5}, "a shift-right immediate operand"},
> @@ -183,6 +185,10 @@ const struct aarch64_operand aarch64_operands[] =
>    {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_UIMM7", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_imm7}, "a 7-bit unsigned immediate"},
>    {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_UIMM8", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_imm8}, "an 8-bit unsigned immediate"},
>    {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_UIMM8_53", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_imm5,FLD_imm3}, "an 8-bit unsigned immediate"},
> +  {AARCH64_OPND_CLASS_SIMD_REG, "SVE_VZn", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zn}, "a SIMD register"},
> +  {AARCH64_OPND_CLASS_SIMD_REG, "SVE_Vd", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Vd}, "a SIMD register"},
> +  {AARCH64_OPND_CLASS_SIMD_REG, "SVE_Vm", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Vm}, "a SIMD register"},
> +  {AARCH64_OPND_CLASS_SIMD_REG, "SVE_Vn", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Vn}, "a SIMD register"},
>    {AARCH64_OPND_CLASS_SVE_REG, "SVE_Za_5", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Za_5}, "an SVE vector register"},
>    {AARCH64_OPND_CLASS_SVE_REG, "SVE_Za_16", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Za_16}, "an SVE vector register"},
>    {AARCH64_OPND_CLASS_SVE_REG, "SVE_Zd", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zd}, "an SVE vector register"},
> diff --git a/opcodes/aarch64-opc.c b/opcodes/aarch64-opc.c
> index 3b0279c..1ad4ccf 100644
> --- a/opcodes/aarch64-opc.c
> +++ b/opcodes/aarch64-opc.c
> @@ -273,6 +273,11 @@ const aarch64_field fields[] =
>      { 16,  4 }, /* SVE_Pm: p0-p15, bits [19,16].  */
>      {  5,  4 }, /* SVE_Pn: p0-p15, bits [8,5].  */
>      {  0,  4 }, /* SVE_Pt: p0-p15, bits [3,0].  */
> +    {  5,  5 }, /* SVE_Rm: SVE alternative position for Rm.  */
> +    { 16,  5 }, /* SVE_Rn: SVE alternative position for Rn.  */
> +    {  0,  5 }, /* SVE_Vd: Scalar SIMD&FP register, bits [4,0].  */
> +    {  5,  5 }, /* SVE_Vm: Scalar SIMD&FP register, bits [9,5].  */
> +    {  5,  5 }, /* SVE_Vn: Scalar SIMD&FP register, bits [9,5].  */
>      {  5,  5 }, /* SVE_Za_5: SVE vector register, bits [9,5].  */
>      { 16,  5 }, /* SVE_Za_16: SVE vector register, bits [20,16].  */
>      {  0,  5 }, /* SVE_Zd: SVE vector register. bits [4,0].  */
> @@ -2949,6 +2954,7 @@ aarch64_print_operand (char *buf, size_t size, bfd_vma pc,
>      case AARCH64_OPND_Ra:
>      case AARCH64_OPND_Rt_SYS:
>      case AARCH64_OPND_PAIRREG:
> +    case AARCH64_OPND_SVE_Rm:
>        /* The optional-ness of <Xt> in e.g. IC <ic_op>{, <Xt>} is determined by
>  	 the <ic_op>, therefore we we use opnd->present to override the
>  	 generic optional-ness information.  */
> @@ -2966,6 +2972,7 @@ aarch64_print_operand (char *buf, size_t size, bfd_vma pc,
>  
>      case AARCH64_OPND_Rd_SP:
>      case AARCH64_OPND_Rn_SP:
> +    case AARCH64_OPND_SVE_Rn_SP:
>        assert (opnd->qualifier == AARCH64_OPND_QLF_W
>  	      || opnd->qualifier == AARCH64_OPND_QLF_WSP
>  	      || opnd->qualifier == AARCH64_OPND_QLF_X
> @@ -3028,6 +3035,10 @@ aarch64_print_operand (char *buf, size_t size, bfd_vma pc,
>      case AARCH64_OPND_Sd:
>      case AARCH64_OPND_Sn:
>      case AARCH64_OPND_Sm:
> +    case AARCH64_OPND_SVE_VZn:
> +    case AARCH64_OPND_SVE_Vd:
> +    case AARCH64_OPND_SVE_Vm:
> +    case AARCH64_OPND_SVE_Vn:
>        snprintf (buf, size, "%s%d", aarch64_get_qualifier_name (opnd->qualifier),
>  		opnd->reg.regno);
>        break;
> diff --git a/opcodes/aarch64-opc.h b/opcodes/aarch64-opc.h
> index 6c67786..a7654d0 100644
> --- a/opcodes/aarch64-opc.h
> +++ b/opcodes/aarch64-opc.h
> @@ -100,6 +100,11 @@ enum aarch64_field_kind
>    FLD_SVE_Pm,
>    FLD_SVE_Pn,
>    FLD_SVE_Pt,
> +  FLD_SVE_Rm,
> +  FLD_SVE_Rn,
> +  FLD_SVE_Vd,
> +  FLD_SVE_Vm,
> +  FLD_SVE_Vn,
>    FLD_SVE_Za_5,
>    FLD_SVE_Za_16,
>    FLD_SVE_Zd,
> diff --git a/opcodes/aarch64-tbl.h b/opcodes/aarch64-tbl.h
> index 562eea7..988c239 100644
> --- a/opcodes/aarch64-tbl.h
> +++ b/opcodes/aarch64-tbl.h
> @@ -2970,6 +2970,10 @@ struct aarch64_opcode aarch64_opcode_table[] =
>        "an SVE predicate register")					\
>      Y(PRED_REG, regno, "SVE_Pt", 0, F(FLD_SVE_Pt),			\
>        "an SVE predicate register")					\
> +    Y(INT_REG, regno, "SVE_Rm", 0, F(FLD_SVE_Rm),			\
> +      "an integer register or zero")					\
> +    Y(INT_REG, regno, "SVE_Rn_SP", OPD_F_MAYBE_SP, F(FLD_SVE_Rn),	\
> +      "an integer register or SP")					\
>      Y(IMMEDIATE, sve_shlimm, "SVE_SHLIMM_PRED", 0,			\
>        F(FLD_SVE_tszh,FLD_SVE_imm5), "a shift-left immediate operand")	\
>      Y(IMMEDIATE, sve_shlimm, "SVE_SHLIMM_UNPRED", 0,			\
> @@ -2994,6 +2998,10 @@ struct aarch64_opcode aarch64_opcode_table[] =
>        "an 8-bit unsigned immediate")					\
>      Y(IMMEDIATE, imm, "SVE_UIMM8_53", 0, F(FLD_imm5,FLD_imm3),		\
>        "an 8-bit unsigned immediate")					\
> +    Y(SIMD_REG, regno, "SVE_VZn", 0, F(FLD_SVE_Zn), "a SIMD register")	\
> +    Y(SIMD_REG, regno, "SVE_Vd", 0, F(FLD_SVE_Vd), "a SIMD register")	\
> +    Y(SIMD_REG, regno, "SVE_Vm", 0, F(FLD_SVE_Vm), "a SIMD register")	\
> +    Y(SIMD_REG, regno, "SVE_Vn", 0, F(FLD_SVE_Vn), "a SIMD register")	\
>      Y(SVE_REG, regno, "SVE_Za_5", 0, F(FLD_SVE_Za_5),			\
>        "an SVE vector register")						\
>      Y(SVE_REG, regno, "SVE_Za_16", 0, F(FLD_SVE_Za_16),			\
> 

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [AArch64][SVE 30/32] Add SVE instruction classes
  2016-08-23  9:26 ` [AArch64][SVE 30/32] Add SVE instruction classes Richard Sandiford
@ 2016-08-25 15:07   ` Richard Earnshaw (lists)
  0 siblings, 0 replies; 76+ messages in thread
From: Richard Earnshaw (lists) @ 2016-08-25 15:07 UTC (permalink / raw)
  To: binutils, richard.sandiford

On 23/08/16 10:25, Richard Sandiford wrote:
> The main purpose of the SVE aarch64_insn_classes is to describe how
> an index into an aarch64_opnd_qualifier_seq_t is represented in the
> instruction encoding.  Other instructions usually use flags for this
> information, but (a) we're running out of those and (b) the iclass
> would otherwise be unused for SVE.
> 
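To make the qualifier-index idea concrete outside the diff context, here is a small self-contained sketch; the qualifier set, table shape and bit position are stand-ins chosen for illustration, and the real lookup and encoding are done by aarch64_get_variant and aarch64_encode_variant_using_iclass in the patch below.

#include <stdio.h>
#include <string.h>

enum qualifier { QLF_B, QLF_H, QLF_S, QLF_D };

/* One row per size variant, one column per operand.  */
static const enum qualifier qualifiers_list[4][2] = {
  { QLF_B, QLF_B },	/* variant 0: .B form.  */
  { QLF_H, QLF_H },	/* variant 1: .H form.  */
  { QLF_S, QLF_S },	/* variant 2: .S form.  */
  { QLF_D, QLF_D },	/* variant 3: .D form.  */
};

/* Return the row of qualifiers_list matched by the chosen operand
   qualifiers, mirroring the shape of aarch64_get_variant.  */
static int
get_variant (const enum qualifier *opnds, int nops)
{
  int v;

  for (v = 0; v < 4; v++)
    if (memcmp (qualifiers_list[v], opnds, nops * sizeof *opnds) == 0)
      return v;
  return -1;
}

int
main (void)
{
  enum qualifier chosen[2] = { QLF_S, QLF_S };
  int variant = get_variant (chosen, 2);

  /* For a "size selects the variant" class, the index goes straight
     into a two-bit field; bit 22 here is only a stand-in position.  */
  unsigned int insn = (unsigned int) variant << 22;
  printf ("variant %d -> size bits %#x\n", variant, insn);
  return 0;
}

The point of hanging this off the instruction class rather than a flag is visible in the switch below: each class names the field (or states that the index is folded into the immediate), so no per-opcode flag bits are consumed.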
> OK to install?
> 
> Thanks,
> Richard
> 
> 
> include/opcode/
> 	* aarch64.h (sve_cpy, sve_index, sve_limm, sve_misc, sve_movprfx)
> 	(sve_pred_zm, sve_shift_pred, sve_shift_unpred, sve_size_bhs)
> 	(sve_size_bhsd, sve_size_hsd, sve_size_sd): New aarch64_insn_classes.
> 
> opcodes/
> 	* aarch64-opc.h (FLD_SVE_M_4, FLD_SVE_M_14, FLD_SVE_M_16)
> 	(FLD_SVE_sz, FLD_SVE_tsz, FLD_SVE_tszl_8, FLD_SVE_tszl_19): New
> 	aarch64_field_kinds.
> 	* aarch64-opc.c (fields): Add corresponding entries.
> 	* aarch64-asm.c (aarch64_get_variant): New function.
> 	(aarch64_encode_variant_using_iclass): Likewise.
> 	(aarch64_opcode_encode): Call it.
> 	* aarch64-dis.c (aarch64_decode_variant_using_iclass): New function.
> 	(aarch64_opcode_decode): Call it.
> 

OK

R.
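
As a rough, self-contained sketch of the mechanism described above (the
qualifier table and the 2-bit field position are invented for illustration;
the real encoder picks a per-iclass field such as FLD_size or FLD_SVE_sz),
the idea is simply to store a qualifiers_list index in an instruction field
and read it back when decoding:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-in for one qualifier per variant; the real table holds a full
   qualifier sequence per operand for each variant.  */
enum qual { QLF_B, QLF_H, QLF_S, QLF_D };
static const enum qual variants[4] = { QLF_B, QLF_H, QLF_S, QLF_D };

/* Made-up 2-bit "size" field at bits [23:22].  */
#define SIZE_LSB 22

static uint32_t
encode_variant (uint32_t insn, enum qual chosen)
{
  unsigned int variant;

  /* Find the variant index that matches the chosen qualifier...  */
  for (variant = 0; variant < 4 && variants[variant] != chosen; variant++)
    continue;
  assert (variant < 4);
  /* ...and store that index in the size field.  */
  return insn | ((uint32_t) variant << SIZE_LSB);
}

static enum qual
decode_variant (uint32_t insn)
{
  /* Decoding is the reverse: extract the field and index the table.  */
  return variants[(insn >> SIZE_LSB) & 3];
}

int
main (void)
{
  uint32_t insn = encode_variant (0x04000000, QLF_S);
  printf ("size field %u -> qualifier %d\n",
	  (unsigned int) ((insn >> SIZE_LSB) & 3), (int) decode_variant (insn));
  return 0;
}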

> diff --git a/include/opcode/aarch64.h b/include/opcode/aarch64.h
> index 8d3fb21..01e6b2c 100644
> --- a/include/opcode/aarch64.h
> +++ b/include/opcode/aarch64.h
> @@ -485,6 +485,18 @@ enum aarch64_insn_class
>    movewide,
>    pcreladdr,
>    ic_system,
> +  sve_cpy,
> +  sve_index,
> +  sve_limm,
> +  sve_misc,
> +  sve_movprfx,
> +  sve_pred_zm,
> +  sve_shift_pred,
> +  sve_shift_unpred,
> +  sve_size_bhs,
> +  sve_size_bhsd,
> +  sve_size_hsd,
> +  sve_size_sd,
>    testbranch,
>  };
>  
> diff --git a/opcodes/aarch64-asm.c b/opcodes/aarch64-asm.c
> index fd356f4..78fd272 100644
> --- a/opcodes/aarch64-asm.c
> +++ b/opcodes/aarch64-asm.c
> @@ -1140,6 +1140,27 @@ encode_fcvt (aarch64_inst *inst)
>    return;
>  }
>  
> +/* Return the index in qualifiers_list that INST is using.  Should only
> +   be called once the qualifiers are known to be valid.  */
> +
> +static int
> +aarch64_get_variant (struct aarch64_inst *inst)
> +{
> +  int i, nops, variant;
> +
> +  nops = aarch64_num_of_operands (inst->opcode);
> +  for (variant = 0; variant < AARCH64_MAX_QLF_SEQ_NUM; ++variant)
> +    {
> +      for (i = 0; i < nops; ++i)
> +	if (inst->opcode->qualifiers_list[variant][i]
> +	    != inst->operands[i].qualifier)
> +	  break;
> +      if (i == nops)
> +	return variant;
> +    }
> +  abort ();
> +}
> +
>  /* Do miscellaneous encodings that are not common enough to be driven by
>     flags.  */
>  
> @@ -1318,6 +1339,65 @@ do_special_encoding (struct aarch64_inst *inst)
>    DEBUG_TRACE ("exit with coding 0x%x", (uint32_t) inst->value);
>  }
>  
> +/* Some instructions (including all SVE ones) use the instruction class
> +   to describe how a qualifiers_list index is represented in the instruction
> +   encoding.  If INST is such an instruction, encode the chosen qualifier
> +   variant.  */
> +
> +static void
> +aarch64_encode_variant_using_iclass (struct aarch64_inst *inst)
> +{
> +  switch (inst->opcode->iclass)
> +    {
> +    case sve_cpy:
> +      insert_fields (&inst->value, aarch64_get_variant (inst),
> +		     0, 2, FLD_SVE_M_14, FLD_size);
> +      break;
> +
> +    case sve_index:
> +    case sve_shift_pred:
> +    case sve_shift_unpred:
> +      /* For indices and shift amounts, the variant is encoded as
> +	 part of the immediate.  */
> +      break;
> +
> +    case sve_limm:
> +      /* For sve_limm, the .B, .H, and .S forms are just a convenience
> +	 and depend on the immediate.  They don't have a separate
> +	 encoding.  */
> +      break;
> +
> +    case sve_misc:
> +      /* sve_misc instructions have only a single variant.  */
> +      break;
> +
> +    case sve_movprfx:
> +      insert_fields (&inst->value, aarch64_get_variant (inst),
> +		     0, 2, FLD_SVE_M_16, FLD_size);
> +      break;
> +
> +    case sve_pred_zm:
> +      insert_field (FLD_SVE_M_4, &inst->value, aarch64_get_variant (inst), 0);
> +      break;
> +
> +    case sve_size_bhs:
> +    case sve_size_bhsd:
> +      insert_field (FLD_size, &inst->value, aarch64_get_variant (inst), 0);
> +      break;
> +
> +    case sve_size_hsd:
> +      insert_field (FLD_size, &inst->value, aarch64_get_variant (inst) + 1, 0);
> +      break;
> +
> +    case sve_size_sd:
> +      insert_field (FLD_SVE_sz, &inst->value, aarch64_get_variant (inst), 0);
> +      break;
> +
> +    default:
> +      break;
> +    }
> +}
> +
>  /* Converters converting an alias opcode instruction to its real form.  */
>  
>  /* ROR <Wd>, <Ws>, #<shift>
> @@ -1686,6 +1766,10 @@ aarch64_opcode_encode (const aarch64_opcode *opcode,
>    if (opcode_has_special_coder (opcode))
>      do_special_encoding (inst);
>  
> +  /* Possibly use the instruction class to encode the chosen qualifier
> +     variant.  */
> +  aarch64_encode_variant_using_iclass (inst);
> +
>  encoding_exit:
>    DEBUG_TRACE ("exit with %s", opcode->name);
>  
> diff --git a/opcodes/aarch64-dis.c b/opcodes/aarch64-dis.c
> index 385286c..f84f216 100644
> --- a/opcodes/aarch64-dis.c
> +++ b/opcodes/aarch64-dis.c
> @@ -2444,6 +2444,105 @@ determine_disassembling_preference (struct aarch64_inst *inst)
>      }
>  }
>  
> +/* Some instructions (including all SVE ones) use the instruction class
> +   to describe how a qualifiers_list index is represented in the instruction
> +   encoding.  If INST is such an instruction, decode the appropriate fields
> +   and fill in the operand qualifiers accordingly.  Return true if no
> +   problems are found.  */
> +
> +static bfd_boolean
> +aarch64_decode_variant_using_iclass (aarch64_inst *inst)
> +{
> +  int i, variant;
> +
> +  variant = 0;
> +  switch (inst->opcode->iclass)
> +    {
> +    case sve_cpy:
> +      variant = extract_fields (inst->value, 0, 2, FLD_size, FLD_SVE_M_14);
> +      break;
> +
> +    case sve_index:
> +      i = extract_field (FLD_SVE_tsz, inst->value, 0);
> +      if (i == 0)
> +	return FALSE;
> +      while ((i & 1) == 0)
> +	{
> +	  i >>= 1;
> +	  variant += 1;
> +	}
> +      break;
> +
> +    case sve_limm:
> +      /* Pick the smallest applicable element size.  */
> +      if ((inst->value & 0x20600) == 0x600)
> +	variant = 0;
> +      else if ((inst->value & 0x20400) == 0x400)
> +	variant = 1;
> +      else if ((inst->value & 0x20000) == 0)
> +	variant = 2;
> +      else
> +	variant = 3;
> +      break;
> +
> +    case sve_misc:
> +      /* sve_misc instructions have only a single variant.  */
> +      break;
> +
> +    case sve_movprfx:
> +      variant = extract_fields (inst->value, 0, 2, FLD_size, FLD_SVE_M_16);
> +      break;
> +
> +    case sve_pred_zm:
> +      variant = extract_field (FLD_SVE_M_4, inst->value, 0);
> +      break;
> +
> +    case sve_shift_pred:
> +      i = extract_fields (inst->value, 0, 2, FLD_SVE_tszh, FLD_SVE_tszl_8);
> +    sve_shift:
> +      if (i == 0)
> +	return FALSE;
> +      while (i != 1)
> +	{
> +	  i >>= 1;
> +	  variant += 1;
> +	}
> +      break;
> +
> +    case sve_shift_unpred:
> +      i = extract_fields (inst->value, 0, 2, FLD_SVE_tszh, FLD_SVE_tszl_19);
> +      goto sve_shift;
> +
> +    case sve_size_bhs:
> +      variant = extract_field (FLD_size, inst->value, 0);
> +      if (variant >= 3)
> +	return FALSE;
> +      break;
> +
> +    case sve_size_bhsd:
> +      variant = extract_field (FLD_size, inst->value, 0);
> +      break;
> +
> +    case sve_size_hsd:
> +      i = extract_field (FLD_size, inst->value, 0);
> +      if (i < 1)
> +	return FALSE;
> +      variant = i - 1;
> +      break;
> +
> +    case sve_size_sd:
> +      variant = extract_field (FLD_SVE_sz, inst->value, 0);
> +      break;
> +
> +    default:
> +      /* No mapping between instruction class and qualifiers.  */
> +      return TRUE;
> +    }
> +
> +  for (i = 0; i < AARCH64_MAX_OPND_NUM; ++i)
> +    inst->operands[i].qualifier = inst->opcode->qualifiers_list[variant][i];
> +  return TRUE;
> +}
>  /* Decode the CODE according to OPCODE; fill INST.  Return 0 if the decoding
>     fails, which meanes that CODE is not an instruction of OPCODE; otherwise
>     return 1.
> @@ -2491,6 +2590,14 @@ aarch64_opcode_decode (const aarch64_opcode *opcode, const aarch64_insn code,
>        goto decode_fail;
>      }
>  
> +  /* Possibly use the instruction class to determine the correct
> +     qualifier.  */
> +  if (!aarch64_decode_variant_using_iclass (inst))
> +    {
> +      DEBUG_TRACE ("iclass-based decoder FAIL");
> +      goto decode_fail;
> +    }
> +
>    /* Call operand decoders.  */
>    for (i = 0; i < AARCH64_MAX_OPND_NUM; ++i)
>      {
> diff --git a/opcodes/aarch64-opc.c b/opcodes/aarch64-opc.c
> index 1ad4ccf..2eb2a81 100644
> --- a/opcodes/aarch64-opc.c
> +++ b/opcodes/aarch64-opc.c
> @@ -264,6 +264,9 @@ const aarch64_field fields[] =
>      { 31,  1 },	/* b5: in the test bit and branch instructions.  */
>      { 19,  5 },	/* b40: in the test bit and branch instructions.  */
>      { 10,  6 },	/* scale: in the fixed-point scalar to fp converting inst.  */
> +    {  4,  1 }, /* SVE_M_4: Merge/zero select, bit 4.  */
> +    { 14,  1 }, /* SVE_M_14: Merge/zero select, bit 14.  */
> +    { 16,  1 }, /* SVE_M_16: Merge/zero select, bit 16.  */
>      { 17,  1 }, /* SVE_N: SVE equivalent of N.  */
>      {  0,  4 }, /* SVE_Pd: p0-p15, bits [3,0].  */
>      { 10,  3 }, /* SVE_Pg3: p0-p7, bits [12,10].  */
> @@ -299,7 +302,11 @@ const aarch64_field fields[] =
>      { 10,  2 }, /* SVE_msz: 2-bit shift amount for ADR.  */
>      {  5,  5 }, /* SVE_pattern: vector pattern enumeration.  */
>      {  0,  4 }, /* SVE_prfop: prefetch operation for SVE PRF[BHWD].  */
> +    { 22,  1 }, /* SVE_sz: 1-bit element size select.  */
> +    { 16,  4 }, /* SVE_tsz: triangular size select.  */
>      { 22,  2 }, /* SVE_tszh: triangular size select high, bits [23,22].  */
> +    {  8,  2 }, /* SVE_tszl_8: triangular size select low, bits [9,8].  */
> +    { 19,  2 }, /* SVE_tszl_19: triangular size select low, bits [20,19].  */
>      { 14,  1 }, /* SVE_xs_14: UXTW/SXTW select (bit 14).  */
>      { 22,  1 }  /* SVE_xs_22: UXTW/SXTW select (bit 22).  */
>  };
> diff --git a/opcodes/aarch64-opc.h b/opcodes/aarch64-opc.h
> index a7654d0..0c3d90e 100644
> --- a/opcodes/aarch64-opc.h
> +++ b/opcodes/aarch64-opc.h
> @@ -91,6 +91,9 @@ enum aarch64_field_kind
>    FLD_b5,
>    FLD_b40,
>    FLD_scale,
> +  FLD_SVE_M_4,
> +  FLD_SVE_M_14,
> +  FLD_SVE_M_16,
>    FLD_SVE_N,
>    FLD_SVE_Pd,
>    FLD_SVE_Pg3,
> @@ -126,7 +129,11 @@ enum aarch64_field_kind
>    FLD_SVE_msz,
>    FLD_SVE_pattern,
>    FLD_SVE_prfop,
> +  FLD_SVE_sz,
> +  FLD_SVE_tsz,
>    FLD_SVE_tszh,
> +  FLD_SVE_tszl_8,
> +  FLD_SVE_tszl_19,
>    FLD_SVE_xs_14,
>    FLD_SVE_xs_22,
>  };
> 
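
One detail of the decoder above that may be worth spelling out: for
sve_index the variant is recovered from the position of the lowest set bit
of the tsz field, while the two shift classes use the highest set bit
instead, because the low bits of tsz there carry part of the immediate.
A standalone sketch of the sve_index mapping (an illustration only, not
the gas code itself):

#include <stdio.h>

/* The variant is the number of trailing zero bits in the 4-bit tsz
   field: xxx1 -> 0 (.B), xx10 -> 1 (.H), x100 -> 2 (.S), 1000 -> 3 (.D).
   An all-zero tsz has no meaning and makes the decoder reject the
   instruction.  */
static int
tsz_to_variant (unsigned int tsz)
{
  int variant = 0;

  if (tsz == 0)
    return -1;
  while ((tsz & 1) == 0)
    {
      tsz >>= 1;
      variant += 1;
    }
  return variant;
}

int
main (void)
{
  unsigned int tsz;

  for (tsz = 0; tsz < 16; tsz++)
    printf ("tsz 0x%x -> variant %d\n", tsz, tsz_to_variant (tsz));
  return 0;
}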

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [AArch64][SVE 31/32] Add SVE instructions
  2016-08-23  9:29 ` [AArch64][SVE 31/32] Add SVE instructions Richard Sandiford
@ 2016-08-25 15:18   ` Richard Earnshaw (lists)
  0 siblings, 0 replies; 76+ messages in thread
From: Richard Earnshaw (lists) @ 2016-08-25 15:18 UTC (permalink / raw)
  To: binutils, richard.sandiford

On 23/08/16 10:29, Richard Sandiford wrote:
> This patch adds the SVE instruction definitions and associated OP_*
> enum values.
> 
> OK to install?
> 
> Thanks,
> Richard
> 
> 
> include/opcode/
> 	* aarch64.h (AARCH64_FEATURE_SVE): New macro.
> 	(OP_MOV_P_P, OP_MOV_Z_P_Z, OP_MOV_Z_V, OP_MOV_Z_Z, OP_MOV_Z_Zi)
> 	(OP_MOVM_P_P_P, OP_MOVS_P_P, OP_MOVZS_P_P_P, OP_MOVZ_P_P_P)
> 	(OP_NOTS_P_P_P_Z, OP_NOT_P_P_P_Z): New aarch64_ops.
> 
> opcodes/
> 	* aarch64-tbl.h (OP_SVE_B, OP_SVE_BB, OP_SVE_BBBU, OP_SVE_BMB)
> 	(OP_SVE_BPB, OP_SVE_BUB, OP_SVE_BUBB, OP_SVE_BUU, OP_SVE_BZ)
> 	(OP_SVE_BZB, OP_SVE_BZBB, OP_SVE_BZU, OP_SVE_DD, OP_SVE_DDD)
> 	(OP_SVE_DMD, OP_SVE_DMH, OP_SVE_DMS, OP_SVE_DU, OP_SVE_DUD, OP_SVE_DUU)
> 	(OP_SVE_DUV_BHS, OP_SVE_DUV_BHSD, OP_SVE_DZD, OP_SVE_DZU, OP_SVE_HB)
> 	(OP_SVE_HMD, OP_SVE_HMS, OP_SVE_HU, OP_SVE_HUU, OP_SVE_HZU, OP_SVE_RR)
> 	(OP_SVE_RURV_BHSD, OP_SVE_RUV_BHSD, OP_SVE_SMD, OP_SVE_SMH, OP_SVE_SMS)
> 	(OP_SVE_SU, OP_SVE_SUS, OP_SVE_SUU, OP_SVE_SZS, OP_SVE_SZU, OP_SVE_UB)
> 	(OP_SVE_UUD, OP_SVE_UUS, OP_SVE_VMR_BHSD, OP_SVE_VMU_SD)
> 	(OP_SVE_VMVD_BHS, OP_SVE_VMVU_BHSD, OP_SVE_VMVU_SD, OP_SVE_VMVV_BHSD)
> 	(OP_SVE_VMVV_SD, OP_SVE_VMV_BHSD, OP_SVE_VMV_HSD, OP_SVE_VMV_SD)
> 	(OP_SVE_VM_SD, OP_SVE_VPU_BHSD, OP_SVE_VPV_BHSD, OP_SVE_VRR_BHSD)
> 	(OP_SVE_VRU_BHSD, OP_SVE_VR_BHSD, OP_SVE_VUR_BHSD, OP_SVE_VUU_BHSD)
> 	(OP_SVE_VUVV_BHSD, OP_SVE_VUVV_SD, OP_SVE_VUV_BHSD, OP_SVE_VUV_SD)
> 	(OP_SVE_VU_BHSD, OP_SVE_VU_HSD, OP_SVE_VU_SD, OP_SVE_VVD_BHS)
> 	(OP_SVE_VVU_BHSD, OP_SVE_VVVU_SD, OP_SVE_VVV_BHSD, OP_SVE_VVV_SD)
> 	(OP_SVE_VV_BHSD, OP_SVE_VV_HSD_BHS, OP_SVE_VV_SD, OP_SVE_VWW_BHSD)
> 	(OP_SVE_VXX_BHSD, OP_SVE_VZVD_BHS, OP_SVE_VZVU_BHSD, OP_SVE_VZVV_BHSD)
> 	(OP_SVE_VZVV_SD, OP_SVE_VZV_SD, OP_SVE_V_SD, OP_SVE_WU, OP_SVE_WV_BHSD)
> 	(OP_SVE_XU, OP_SVE_XUV_BHSD, OP_SVE_XVW_BHSD, OP_SVE_XV_BHSD)
> 	(OP_SVE_XWU, OP_SVE_XXU): New macros.
> 	(aarch64_feature_sve): New variable.
> 	(SVE): New macro.
> 	(_SVE_INSN): Likewise.
> 	(aarch64_opcode_table): Add SVE instructions.
> 	* aarch64-opc.h (extract_fields): Declare.
> 	* aarch64-opc-2.c: Regenerate.
> 	* aarch64-asm.c (do_misc_encoding): Handle the new SVE aarch64_ops.
> 	* aarch64-asm-2.c: Regenerate.
> 	* aarch64-dis.c (extract_fields): Make global.
> 	(do_misc_decoding): Handle the new SVE aarch64_ops.
> 	* aarch64-dis-2.c: Regenerate.
> 
> gas/
> 	* doc/c-aarch64.texi: Document the "sve" feature.
> 	* config/tc-aarch64.c (REG_TYPE_R_Z_BHSDQ_VZP): New register type.
> 	(get_reg_expected_msg): Handle it.
> 	(aarch64_check_reg_type): Likewise.
> 	(parse_operands): When parsing operands of an SVE instruction,
> 	disallow immediates that match REG_TYPE_R_Z_BHSDQ_VZP.
> 	(aarch64_features): Add an entry for SVE.
> 


I presume the idea of using _SVE_INSN (like _LSE_INSN and others of
similar form) is to make the macro name have a fixed width.  I'll point
out that such names encroach upon the compiler and library reserved name
spaces so aren't strictly portable.  I think it would generally be
better if we used SVE__INSN and similar if we really want a fixed length
name.

OK, but we should look into fixing the above at some point.

R.
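
For reference, the portability point is that C reserves every identifier
beginning with an underscore followed by an uppercase letter (or another
underscore) for the implementation, so a conforming system header could
legitimately define _SVE_INSN.  A minimal illustration, with SVE__INSN
shown only as one possible fixed-width alternative:

/* C11 7.1.3: identifiers that begin with an underscore followed by an
   uppercase letter (or another underscore) are always reserved for the
   implementation, so _SVE_INSN and _LSE_INSN are formally in that space.
   A name such as SVE__INSN keeps the fixed width without being reserved.  */

#define _SVE_INSN(x) (x)   /* formally reserved */
#define SVE__INSN(x) (x)   /* same width, not reserved */

int
main (void)
{
  return SVE__INSN (0);
}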

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [AArch64][SVE 32/32] Add SVE tests
  2016-08-23  9:31 ` [AArch64][SVE 32/32] Add SVE tests Richard Sandiford
@ 2016-08-25 15:23   ` Richard Earnshaw (lists)
  2016-08-30 21:23     ` Richard Sandiford
  0 siblings, 1 reply; 76+ messages in thread
From: Richard Earnshaw (lists) @ 2016-08-25 15:23 UTC (permalink / raw)
  To: binutils, richard.sandiford

On 23/08/16 10:31, Richard Sandiford wrote:
> This patch adds new tests for SVE.  It also extends diagnostic.[sl] with
> checks for some inappropriate uses of MUL and MUL VL in base AArch64
> instructions.
> 
> OK to install?
> 
> Thanks,
> Richard
> 
> 
> gas/testsuite/
> 	* gas/aarch64/diagnostic.s, gas/aarch64/diagnostic.l: Add tests for
> 	invalid uses of MUL VL and MUL in base AArch64 instructions.
> 	* gas/aarch64/sve-add.s, gas/aarch64/sve-add.d, gas/aarch64/sve-dup.s,
> 	gas/aarch64/sve-dup.d, gas/aarch64/sve-invalid.s,
> 	gas/aarch64/sve-invalid.d, gas/aarch64/sve-invalid.l,
> 	gas/aarch64/sve-reg-diagnostic.s, gas/aarch64/sve-reg-diagnostic.d,
> 	gas/aarch64/sve-reg-diagnostic.l, gas/aarch64/sve.s,
> 	gas/aarch64/sve.d: New tests.
> 

I noticed while quickly going over this patch that there are some more
cases where error messages use 'should' when 'must' is more appropriate.
 Can we please ensure that these are fixed as well.

OK apart from that issue.

R.

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [AArch64][SVE 00/32] Add support for the ARMv8-A Scalable Vector Extension
  2016-08-23  9:05 [AArch64][SVE 00/32] Add support for the ARMv8-A Scalable Vector Extension Richard Sandiford
                   ` (31 preceding siblings ...)
  2016-08-23  9:31 ` [AArch64][SVE 32/32] Add SVE tests Richard Sandiford
@ 2016-08-30 13:04 ` Nick Clifton
  32 siblings, 0 replies; 76+ messages in thread
From: Nick Clifton @ 2016-08-30 13:04 UTC (permalink / raw)
  To: binutils, richard.sandiford

Hi Richard,

> This series of patches adds support for the ARMv8-A Scalable Vector
> Extension (SVE), which was announced at Hot Chips yesterday.  

It occurs to me that an extensive enhancement like this deserves an entry in gas/NEWS...

Cheers
  Nick

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [AArch64][SVE 32/32] Add SVE tests
  2016-08-25 15:23   ` Richard Earnshaw (lists)
@ 2016-08-30 21:23     ` Richard Sandiford
  2016-08-31  9:47       ` Richard Earnshaw (lists)
  0 siblings, 1 reply; 76+ messages in thread
From: Richard Sandiford @ 2016-08-30 21:23 UTC (permalink / raw)
  To: Richard Earnshaw (lists); +Cc: binutils

"Richard Earnshaw (lists)" <Richard.Earnshaw@arm.com> writes:
> On 23/08/16 10:31, Richard Sandiford wrote:
>> This patch adds new tests for SVE.  It also extends diagnostic.[sl] with
>> checks for some inappropriate uses of MUL and MUL VL in base AArch64
>> instructions.
>> 
>> OK to install?
>> 
>> Thanks,
>> Richard
>> 
>> 
>> gas/testsuite/
>> 	* gas/aarch64/diagnostic.s, gas/aarch64/diagnostic.l: Add tests for
>> 	invalid uses of MUL VL and MUL in base AArch64 instructions.
>> 	* gas/aarch64/sve-add.s, gas/aarch64/sve-add.d, gas/aarch64/sve-dup.s,
>> 	gas/aarch64/sve-dup.d, gas/aarch64/sve-invalid.s,
>> 	gas/aarch64/sve-invalid.d, gas/aarch64/sve-invalid.l,
>> 	gas/aarch64/sve-reg-diagnostic.s, gas/aarch64/sve-reg-diagnostic.d,
>> 	gas/aarch64/sve-reg-diagnostic.l, gas/aarch64/sve.s,
>> 	gas/aarch64/sve.d: New tests.
>> 
>
> I noticed while quickly going over this patch that there are some more
> cases where error messages use 'should' when 'must' is more appropriate.
>  Can we please ensure that these are fixed as well.

Apart from the one you noticed in patch 27 (which I'll fix), these
come from pre-existing messages that are automatically extended to new
operand types.  Is it OK to change them as a follow-on patch?  If so,
I'll do it for all messages, rather than just touch the ones that
affect SVE.

Thanks,
Richard

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [AArch64][SVE 32/32] Add SVE tests
  2016-08-30 21:23     ` Richard Sandiford
@ 2016-08-31  9:47       ` Richard Earnshaw (lists)
  0 siblings, 0 replies; 76+ messages in thread
From: Richard Earnshaw (lists) @ 2016-08-31  9:47 UTC (permalink / raw)
  To: binutils, richard.sandiford

On 30/08/16 22:23, Richard Sandiford wrote:
> "Richard Earnshaw (lists)" <Richard.Earnshaw@arm.com> writes:
>> On 23/08/16 10:31, Richard Sandiford wrote:
>>> This patch adds new tests for SVE.  It also extends diagnostic.[sl] with
>>> checks for some inappropriate uses of MUL and MUL VL in base AArch64
>>> instructions.
>>>
>>> OK to install?
>>>
>>> Thanks,
>>> Richard
>>>
>>>
>>> gas/testsuite/
>>> 	* gas/aarch64/diagnostic.s, gas/aarch64/diagnostic.l: Add tests for
>>> 	invalid uses of MUL VL and MUL in base AArch64 instructions.
>>> 	* gas/aarch64/sve-add.s, gas/aarch64/sve-add.d, gas/aarch64/sve-dup.s,
>>> 	gas/aarch64/sve-dup.d, gas/aarch64/sve-invalid.s,
>>> 	gas/aarch64/sve-invalid.d, gas/aarch64/sve-invalid.l,
>>> 	gas/aarch64/sve-reg-diagnostic.s, gas/aarch64/sve-reg-diagnostic.d,
>>> 	gas/aarch64/sve-reg-diagnostic.l, gas/aarch64/sve.s,
>>> 	gas/aarch64/sve.d: New tests.
>>>
>>
>> I noticed while quickly going over this patch that there are some more
>> cases where error messages use 'should' when 'must' is more appropriate.
>>  Can we please ensure that these are fixed as well.
> 
> Apart from the one you noticed in patch 27 (which I'll fix), these
> come from pre-existing messages that are automatically extended to new
> operand types.  Is it OK to change them as a follow-on patch?  If so,
> I'll do it for all messages, rather than just touch the ones that
> affect SVE.
> 

Yes, that's fine.

R.

> Thanks,
> Richard
> 

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [AArch64][SVE 11/32] Tweak aarch64_reg_parse_32_64 interface
  2016-08-25 13:27   ` Richard Earnshaw (lists)
@ 2016-09-16 11:51     ` Richard Sandiford
  2016-09-20 10:47       ` Richard Earnshaw (lists)
  0 siblings, 1 reply; 76+ messages in thread
From: Richard Sandiford @ 2016-09-16 11:51 UTC (permalink / raw)
  To: Richard Earnshaw (lists); +Cc: binutils

"Richard Earnshaw (lists)" <Richard.Earnshaw@arm.com> writes:
> On 23/08/16 10:12, Richard Sandiford wrote:
>> aarch64_reg_parse_32_64 is currently used to parse address registers,
>> among other things.  It returns two bits of information about the
>> register: whether it's W rather than X, and whether it's a zero register.
>> 
>> SVE adds addressing modes in which the base or offset can be a vector
>> register instead of a scalar, so a choice between W and X is no longer
>> enough.  It's more convenient to pass the type of register around as
>> a qualifier instead.
>> 
>> As it happens, two callers of aarch64_reg_parse_32_64 already wanted
>> the information in the form of a qualifier, so the change feels pretty
>> natural even without SVE.
>> 
>> Also, the function took two parameters to control whether {W}SP
>> and (W|X)ZR should be accepted.  These parameters were negative
>> "reject" parameters, but the closely-related parse_address_main
>> had a positive "accept" parameter (for post-indexed addressing).
>> One of the SVE patches adds a parameter to parse_address_main
>> that needs to be passed down alongside the aarch64_reg_parse_32_64
>> parameters, which as things stood led to an awkward mix of positive
>> and negative bools.  The patch therefore changes the
>> aarch64_reg_parse_32_64 parameters to "accept_sp" and "accept_rz"
>> instead.
>> 
>> Finally, the two input parameters and isregzero return value were
>> all ints but logically bools.  The patch changes the types to
>> bfd_boolean.
>> 
>> OK to install?
>> 
>> Thanks,
>> Richard
>> 
>> 
>> gas/
>> 	* config/tc-aarch64.c (aarch64_reg_parse_32_64): Return the register
>> 	type as a qualifier rather than an "isreg32" boolean.  Turn the
>> 	SP/ZR control parameters from negative "reject" to positive
>> 	"accept".  Make them and *ISREGZERO bfd_booleans rather than ints.
>> 	(parse_shifter_operand): Update accordingly.
>> 	(parse_address_main): Likewise.
>> 	(po_int_reg_or_fail): Likewise.  Make the same reject->accept
>> 	change to the macro parameters.
>> 	(parse_operands): Update after the above changes, replacing
>> 	the "isreg32" local variable with one called "qualifier".
>
> I'm not a big fan of parameters that simply take 'true' or 'false',
> especially when there is more than one such parameter: it's too easy to
> get the order mixed up.
>
> Furthermore, I'm not sure these two parameters are really independent.
> Are there any cases where both can be true?
>
> Given the above concerns I wonder whether a single enum with the
> permitted states might be better.  It certainly makes the code clearer
> at the caller as to which register types are acceptable.

In the end it seemed easier to remove the parameters entirely,
return a reg_entry, and get the caller to do the checking.
This leads to slightly better error messages in some cases.

This does create a corner case where:

	.equ	sp, 1
	ldr	w0, [x0, sp]

was previously an acceptable way of writing "ldr w0, [x0, #1]",
but I don't think it's important to continue supporting that.
We already rejected things like:

	.equ	sp, 1
	add	x0, x1, sp

To ensure these new error messages "win" when matching against
several candidate instruction entries, we need to use the same
address-parsing code for all addresses, including ADDR_SIMPLE
and SIMD_ADDR_SIMPLE.  The next patch also relies on this.

Finally, aarch64_check_reg_type was written in a pretty
conservative way.  It should always be equivalent to a single
bit test.
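
As a stripped-down illustration of that last point (the register types and
masks here are invented for the example rather than copied from
tc-aarch64.c), giving each basic type its own bit makes every multi-type
check a table lookup plus one AND:

#include <stdio.h>

enum reg_type { RT_R_32, RT_R_64, RT_SP_64, RT_Z_64 };

#define MASK(t) (1u << (t))

/* "R_Z": general registers or the zero register, but not SP.  */
static const unsigned int r_z_mask
  = MASK (RT_R_32) | MASK (RT_R_64) | MASK (RT_Z_64);
/* "R_SP": general registers or SP, but not the zero register.  */
static const unsigned int r_sp_mask
  = MASK (RT_R_32) | MASK (RT_R_64) | MASK (RT_SP_64);

static int
check_reg_type (enum reg_type type, unsigned int allowed)
{
  return (allowed & MASK (type)) != 0;
}

int
main (void)
{
  printf ("sp against R_Z:  %d\n", check_reg_type (RT_SP_64, r_z_mask));
  printf ("sp against R_SP: %d\n", check_reg_type (RT_SP_64, r_sp_mask));
  return 0;
}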

Tested on aarch64-linux-gnu.  OK to install?

Thanks,
Richard


gas/
	* config/tc-aarch64.c (REG_TYPE_R_Z, REG_TYPE_R_SP): New register
	types.
	(get_reg_expected_msg): Handle them and REG_TYPE_R64_SP.
	(aarch64_check_reg_type): Simplify.
	(aarch64_reg_parse_32_64): Return the reg_entry instead of the
	register number.  Return the type as a qualifier rather than an
	"isreg32" boolean.  Remove reject_sp, reject_rz and isregzero
	parameters.
	(parse_shifter_operand): Update call to aarch64_reg_parse_32_64.
	Use get_reg_expected_msg.
	(parse_address_main): Likewise.  Use aarch64_check_reg_type.
	(po_int_reg_or_fail): Replace reject_sp and reject_rz parameters
	with a reg_type parameter.  Update call to aarch64_reg_parse_32_64.
	Use aarch64_check_reg_type to test the result.
	(parse_operands): Update after the above changes.  Parse ADDR_SIMPLE
	addresses normally before enforcing the syntax restrictions.
	* testsuite/gas/aarch64/diagnostic.s: Add tests for a post-index
	zero register and for a stack pointer index.
	* testsuite/gas/aarch64/diagnostic.l: Update accordingly.
	Also update existing diagnostic messages after the above changes.
	* testsuite/gas/aarch64/illegal-lse.l: Update the error message
	for 32-bit register bases.

diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
index 2489d5b..7b5be8b 100644
--- a/gas/config/tc-aarch64.c
+++ b/gas/config/tc-aarch64.c
@@ -265,16 +265,22 @@ struct reloc_entry
   BASIC_REG_TYPE(FP_Q)	/* q[0-31] */	\
   BASIC_REG_TYPE(CN)	/* c[0-7]  */	\
   BASIC_REG_TYPE(VN)	/* v[0-31] */	\
-  /* Typecheck: any 64-bit int reg         (inc SP exc XZR) */		\
+  /* Typecheck: any 64-bit int reg         (inc SP exc XZR).  */	\
   MULTI_REG_TYPE(R64_SP, REG_TYPE(R_64) | REG_TYPE(SP_64))		\
-  /* Typecheck: any int                    (inc {W}SP inc [WX]ZR) */	\
+  /* Typecheck: x[0-30], w[0-30] or [xw]zr.  */				\
+  MULTI_REG_TYPE(R_Z, REG_TYPE(R_32) | REG_TYPE(R_64)			\
+		 | REG_TYPE(Z_32) | REG_TYPE(Z_64))			\
+  /* Typecheck: x[0-30], w[0-30] or {w}sp.  */				\
+  MULTI_REG_TYPE(R_SP, REG_TYPE(R_32) | REG_TYPE(R_64)			\
+		 | REG_TYPE(SP_32) | REG_TYPE(SP_64))			\
+  /* Typecheck: any int                    (inc {W}SP inc [WX]ZR).  */	\
   MULTI_REG_TYPE(R_Z_SP, REG_TYPE(R_32) | REG_TYPE(R_64)		\
 		 | REG_TYPE(SP_32) | REG_TYPE(SP_64)			\
 		 | REG_TYPE(Z_32) | REG_TYPE(Z_64)) 			\
   /* Typecheck: any [BHSDQ]P FP.  */					\
   MULTI_REG_TYPE(BHSDQ, REG_TYPE(FP_B) | REG_TYPE(FP_H)			\
 		 | REG_TYPE(FP_S) | REG_TYPE(FP_D) | REG_TYPE(FP_Q))	\
-  /* Typecheck: any int or [BHSDQ]P FP or V reg (exc SP inc [WX]ZR)  */	\
+  /* Typecheck: any int or [BHSDQ]P FP or V reg (exc SP inc [WX]ZR).  */ \
   MULTI_REG_TYPE(R_Z_BHSDQ_V, REG_TYPE(R_32) | REG_TYPE(R_64)		\
 		 | REG_TYPE(Z_32) | REG_TYPE(Z_64) | REG_TYPE(VN)	\
 		 | REG_TYPE(FP_B) | REG_TYPE(FP_H)			\
@@ -344,6 +350,15 @@ get_reg_expected_msg (aarch64_reg_type reg_type)
     case REG_TYPE_R_N:
       msg = N_("integer register expected");
       break;
+    case REG_TYPE_R64_SP:
+      msg = N_("64-bit integer or SP register expected");
+      break;
+    case REG_TYPE_R_Z:
+      msg = N_("integer or zero register expected");
+      break;
+    case REG_TYPE_R_SP:
+      msg = N_("integer or SP register expected");
+      break;
     case REG_TYPE_R_Z_SP:
       msg = N_("integer, zero or SP register expected");
       break;
@@ -390,9 +405,6 @@ get_reg_expected_msg (aarch64_reg_type reg_type)
 /* Instructions take 4 bytes in the object file.  */
 #define INSN_SIZE	4
 
-/* Define some common error messages.  */
-#define BAD_SP          _("SP not allowed here")
-
 static struct hash_control *aarch64_ops_hsh;
 static struct hash_control *aarch64_cond_hsh;
 static struct hash_control *aarch64_shift_hsh;
@@ -671,72 +683,45 @@ parse_reg (char **ccp)
 static bfd_boolean
 aarch64_check_reg_type (const reg_entry *reg, aarch64_reg_type type)
 {
-  if (reg->type == type)
-    return TRUE;
-
-  switch (type)
-    {
-    case REG_TYPE_R64_SP:	/* 64-bit integer reg (inc SP exc XZR).  */
-    case REG_TYPE_R_Z_SP:	/* Integer reg (inc {X}SP inc [WX]ZR).  */
-    case REG_TYPE_R_Z_BHSDQ_V:	/* Any register apart from Cn.  */
-    case REG_TYPE_BHSDQ:	/* Any [BHSDQ]P FP or SIMD scalar register.  */
-    case REG_TYPE_VN:		/* Vector register.  */
-      gas_assert (reg->type < REG_TYPE_MAX && type < REG_TYPE_MAX);
-      return ((reg_type_masks[reg->type] & reg_type_masks[type])
-	      == reg_type_masks[reg->type]);
-    default:
-      as_fatal ("unhandled type %d", type);
-      abort ();
-    }
+  return (reg_type_masks[type] & (1 << reg->type)) != 0;
 }
 
-/* Parse a register and return PARSE_FAIL if the register is not of type R_Z_SP.
-   Return the register number otherwise.  *ISREG32 is set to one if the
-   register is 32-bit wide; *ISREGZERO is set to one if the register is
-   of type Z_32 or Z_64.
+/* Try to parse a base or offset register.  Return the register entry
+   on success, setting *QUALIFIER to the register qualifier.  Return null
+   otherwise.
+
    Note that this function does not issue any diagnostics.  */
 
-static int
-aarch64_reg_parse_32_64 (char **ccp, int reject_sp, int reject_rz,
-			 int *isreg32, int *isregzero)
+static const reg_entry *
+aarch64_reg_parse_32_64 (char **ccp, aarch64_opnd_qualifier_t *qualifier)
 {
   char *str = *ccp;
   const reg_entry *reg = parse_reg (&str);
 
   if (reg == NULL)
-    return PARSE_FAIL;
-
-  if (! aarch64_check_reg_type (reg, REG_TYPE_R_Z_SP))
-    return PARSE_FAIL;
+    return NULL;
 
   switch (reg->type)
     {
+    case REG_TYPE_R_32:
     case REG_TYPE_SP_32:
-    case REG_TYPE_SP_64:
-      if (reject_sp)
-	return PARSE_FAIL;
-      *isreg32 = reg->type == REG_TYPE_SP_32;
-      *isregzero = 0;
+    case REG_TYPE_Z_32:
+      *qualifier = AARCH64_OPND_QLF_W;
       break;
-    case REG_TYPE_R_32:
+
     case REG_TYPE_R_64:
-      *isreg32 = reg->type == REG_TYPE_R_32;
-      *isregzero = 0;
-      break;
-    case REG_TYPE_Z_32:
+    case REG_TYPE_SP_64:
     case REG_TYPE_Z_64:
-      if (reject_rz)
-	return PARSE_FAIL;
-      *isreg32 = reg->type == REG_TYPE_Z_32;
-      *isregzero = 1;
+      *qualifier = AARCH64_OPND_QLF_X;
       break;
+
     default:
-      return PARSE_FAIL;
+      return NULL;
     }
 
   *ccp = str;
 
-  return reg->number;
+  return reg;
 }
 
 /* Parse the qualifier of a SIMD vector register or a SIMD vector element.
@@ -3032,13 +3017,13 @@ static bfd_boolean
 parse_shifter_operand (char **str, aarch64_opnd_info *operand,
 		       enum parse_shift_mode mode)
 {
-  int reg;
-  int isreg32, isregzero;
+  const reg_entry *reg;
+  aarch64_opnd_qualifier_t qualifier;
   enum aarch64_operand_class opd_class
     = aarch64_get_operand_class (operand->type);
 
-  if ((reg =
-       aarch64_reg_parse_32_64 (str, 0, 0, &isreg32, &isregzero)) != PARSE_FAIL)
+  reg = aarch64_reg_parse_32_64 (str, &qualifier);
+  if (reg)
     {
       if (opd_class == AARCH64_OPND_CLASS_IMMEDIATE)
 	{
@@ -3046,14 +3031,14 @@ parse_shifter_operand (char **str, aarch64_opnd_info *operand,
 	  return FALSE;
 	}
 
-      if (!isregzero && reg == REG_SP)
+      if (!aarch64_check_reg_type (reg, REG_TYPE_R_Z))
 	{
-	  set_syntax_error (BAD_SP);
+	  set_syntax_error (_(get_reg_expected_msg (REG_TYPE_R_Z)));
 	  return FALSE;
 	}
 
-      operand->reg.regno = reg;
-      operand->qualifier = isreg32 ? AARCH64_OPND_QLF_W : AARCH64_OPND_QLF_X;
+      operand->reg.regno = reg->number;
+      operand->qualifier = qualifier;
 
       /* Accept optional shift operation on register.  */
       if (! skip_past_comma (str))
@@ -3192,8 +3177,9 @@ parse_address_main (char **str, aarch64_opnd_info *operand, int reloc,
 		    int accept_reg_post_index)
 {
   char *p = *str;
-  int reg;
-  int isreg32, isregzero;
+  const reg_entry *reg;
+  aarch64_opnd_qualifier_t base_qualifier;
+  aarch64_opnd_qualifier_t offset_qualifier;
   expressionS *exp = &inst.reloc.exp;
 
   if (! skip_past_char (&p, '['))
@@ -3270,14 +3256,13 @@ parse_address_main (char **str, aarch64_opnd_info *operand, int reloc,
 
   /* [ */
 
-  /* Accept SP and reject ZR */
-  reg = aarch64_reg_parse_32_64 (&p, 0, 1, &isreg32, &isregzero);
-  if (reg == PARSE_FAIL || isreg32)
+  reg = aarch64_reg_parse_32_64 (&p, &base_qualifier);
+  if (!reg || !aarch64_check_reg_type (reg, REG_TYPE_R64_SP))
     {
-      set_syntax_error (_(get_reg_expected_msg (REG_TYPE_R_64)));
+      set_syntax_error (_(get_reg_expected_msg (REG_TYPE_R64_SP)));
       return FALSE;
     }
-  operand->addr.base_regno = reg;
+  operand->addr.base_regno = reg->number;
 
   /* [Xn */
   if (skip_past_comma (&p))
@@ -3285,12 +3270,17 @@ parse_address_main (char **str, aarch64_opnd_info *operand, int reloc,
       /* [Xn, */
       operand->addr.preind = 1;
 
-      /* Reject SP and accept ZR */
-      reg = aarch64_reg_parse_32_64 (&p, 1, 0, &isreg32, &isregzero);
-      if (reg != PARSE_FAIL)
+      reg = aarch64_reg_parse_32_64 (&p, &offset_qualifier);
+      if (reg)
 	{
+	  if (!aarch64_check_reg_type (reg, REG_TYPE_R_Z))
+	    {
+	      set_syntax_error (_(get_reg_expected_msg (REG_TYPE_R_Z)));
+	      return FALSE;
+	    }
+
 	  /* [Xn,Rm  */
-	  operand->addr.offset.regno = reg;
+	  operand->addr.offset.regno = reg->number;
 	  operand->addr.offset.is_reg = 1;
 	  /* Shifted index.  */
 	  if (skip_past_comma (&p))
@@ -3309,13 +3299,13 @@ parse_address_main (char **str, aarch64_opnd_info *operand, int reloc,
 	      || operand->shifter.kind == AARCH64_MOD_LSL
 	      || operand->shifter.kind == AARCH64_MOD_SXTX)
 	    {
-	      if (isreg32)
+	      if (offset_qualifier == AARCH64_OPND_QLF_W)
 		{
 		  set_syntax_error (_("invalid use of 32-bit register offset"));
 		  return FALSE;
 		}
 	    }
-	  else if (!isreg32)
+	  else if (offset_qualifier == AARCH64_OPND_QLF_X)
 	    {
 	      set_syntax_error (_("invalid use of 64-bit register offset"));
 	      return FALSE;
@@ -3399,16 +3389,16 @@ parse_address_main (char **str, aarch64_opnd_info *operand, int reloc,
 	}
 
       if (accept_reg_post_index
-	  && (reg = aarch64_reg_parse_32_64 (&p, 1, 1, &isreg32,
-					     &isregzero)) != PARSE_FAIL)
+	  && (reg = aarch64_reg_parse_32_64 (&p, &offset_qualifier)))
 	{
 	  /* [Xn],Xm */
-	  if (isreg32)
+	  if (!aarch64_check_reg_type (reg, REG_TYPE_R_64))
 	    {
-	      set_syntax_error (_("invalid 32-bit register offset"));
+	      set_syntax_error (_(get_reg_expected_msg (REG_TYPE_R_64)));
 	      return FALSE;
 	    }
-	  operand->addr.offset.regno = reg;
+
+	  operand->addr.offset.regno = reg->number;
 	  operand->addr.offset.is_reg = 1;
 	}
       else if (! my_get_expression (exp, &p, GE_OPT_PREFIX, 1))
@@ -3723,19 +3713,15 @@ parse_sys_ins_reg (char **str, struct hash_control *sys_ins_regs)
       }								\
   } while (0)
 
-#define po_int_reg_or_fail(reject_sp, reject_rz) do {		\
-    val = aarch64_reg_parse_32_64 (&str, reject_sp, reject_rz,	\
-                                   &isreg32, &isregzero);	\
-    if (val == PARSE_FAIL)					\
+#define po_int_reg_or_fail(reg_type) do {			\
+    reg = aarch64_reg_parse_32_64 (&str, &qualifier);		\
+    if (!reg || !aarch64_check_reg_type (reg, reg_type))	\
       {								\
 	set_default_error ();					\
 	goto failure;						\
       }								\
-    info->reg.regno = val;					\
-    if (isreg32)						\
-      info->qualifier = AARCH64_OPND_QLF_W;			\
-    else							\
-      info->qualifier = AARCH64_OPND_QLF_X;			\
+    info->reg.regno = reg->number;				\
+    info->qualifier = qualifier;				\
   } while (0)
 
 #define po_imm_nc_or_fail() do {				\
@@ -4993,10 +4979,11 @@ parse_operands (char *str, const aarch64_opcode *opcode)
   for (i = 0; operands[i] != AARCH64_OPND_NIL; i++)
     {
       int64_t val;
-      int isreg32, isregzero;
+      const reg_entry *reg;
       int comma_skipped_p = 0;
       aarch64_reg_type rtype;
       struct vector_type_el vectype;
+      aarch64_opnd_qualifier_t qualifier;
       aarch64_opnd_info *info = &inst.base.operands[i];
 
       DEBUG_TRACE ("parse operand %d", i);
@@ -5032,12 +5019,12 @@ parse_operands (char *str, const aarch64_opcode *opcode)
 	case AARCH64_OPND_Ra:
 	case AARCH64_OPND_Rt_SYS:
 	case AARCH64_OPND_PAIRREG:
-	  po_int_reg_or_fail (1, 0);
+	  po_int_reg_or_fail (REG_TYPE_R_Z);
 	  break;
 
 	case AARCH64_OPND_Rd_SP:
 	case AARCH64_OPND_Rn_SP:
-	  po_int_reg_or_fail (0, 1);
+	  po_int_reg_or_fail (REG_TYPE_R_SP);
 	  break;
 
 	case AARCH64_OPND_Rm_EXT:
@@ -5498,24 +5485,39 @@ parse_operands (char *str, const aarch64_opcode *opcode)
 
 	case AARCH64_OPND_ADDR_SIMPLE:
 	case AARCH64_OPND_SIMD_ADDR_SIMPLE:
-	  /* [<Xn|SP>{, #<simm>}]  */
-	  po_char_or_fail ('[');
-	  po_reg_or_fail (REG_TYPE_R64_SP);
-	  /* Accept optional ", #0".  */
-	  if (operands[i] == AARCH64_OPND_ADDR_SIMPLE
-	      && skip_past_char (&str, ','))
-	    {
-	      skip_past_char (&str, '#');
-	      if (! skip_past_char (&str, '0'))
-		{
-		  set_fatal_syntax_error
-		    (_("the optional immediate offset can only be 0"));
-		  goto failure;
-		}
-	    }
-	  po_char_or_fail (']');
-	  info->addr.base_regno = val;
-	  break;
+	  {
+	    /* [<Xn|SP>{, #<simm>}]  */
+	    char *start = str;
+	    /* First use the normal address-parsing routines, to get
+	       the usual syntax errors.  */
+	    po_misc_or_fail (parse_address (&str, info, 0));
+	    if (info->addr.pcrel || info->addr.offset.is_reg
+		|| !info->addr.preind || info->addr.postind
+		|| info->addr.writeback)
+	      {
+		set_syntax_error (_("invalid addressing mode"));
+		goto failure;
+	      }
+
+	    /* Then retry, matching the specific syntax of these addresses.  */
+	    str = start;
+	    po_char_or_fail ('[');
+	    po_reg_or_fail (REG_TYPE_R64_SP);
+	    /* Accept optional ", #0".  */
+	    if (operands[i] == AARCH64_OPND_ADDR_SIMPLE
+		&& skip_past_char (&str, ','))
+	      {
+		skip_past_char (&str, '#');
+		if (! skip_past_char (&str, '0'))
+		  {
+		    set_fatal_syntax_error
+		      (_("the optional immediate offset can only be 0"));
+		    goto failure;
+		  }
+	      }
+	    po_char_or_fail (']');
+	    break;
+	  }
 
 	case AARCH64_OPND_ADDR_REGOFF:
 	  /* [<Xn|SP>, <R><m>{, <extend> {<amount>}}]  */
diff --git a/gas/testsuite/gas/aarch64/diagnostic.l b/gas/testsuite/gas/aarch64/diagnostic.l
index 67ef484..ef23577 100644
--- a/gas/testsuite/gas/aarch64/diagnostic.l
+++ b/gas/testsuite/gas/aarch64/diagnostic.l
@@ -54,7 +54,7 @@
 [^:]*:56: Error: operand 2 should be a floating-point register -- `fcmp d0,x0'
 [^:]*:57: Error: immediate zero expected at operand 3 -- `cmgt v0.4s,v2.4s,#1'
 [^:]*:58: Error: unexpected characters following instruction at operand 2 -- `fmov d3,1.00,lsl#3'
-[^:]*:59: Error: writeback value should be an immediate constant at operand 2 -- `st2 {v0.4s,v1.4s},\[sp\],sp'
+[^:]*:59: Error: integer 64-bit register expected at operand 2 -- `st2 {v0.4s,v1.4s},\[sp\],sp'
 [^:]*:60: Error: writeback value should be an immediate constant at operand 2 -- `st2 {v0.4s,v1.4s},\[sp\],zr'
 [^:]*:61: Error: invalid shift for the register offset addressing mode at operand 2 -- `ldr q0,\[x0,w0,lsr#4\]'
 [^:]*:62: Error: only 'LSL' shift is permitted at operand 3 -- `adds x1,sp,2134,uxtw#12'
@@ -116,10 +116,10 @@
 [^:]*:125: Warning: unpredictable transfer with writeback -- `ldp x0,x1,\[x1\],#16'
 [^:]*:126: Error: this relocation modifier is not allowed on this instruction at operand 2 -- `adr x2,:got:s1'
 [^:]*:127: Error: this relocation modifier is not allowed on this instruction at operand 2 -- `ldr x0,\[x0,:got:s1\]'
-[^:]*:130: Error: integer 64-bit register expected at operand 2 -- `ldr x1,\[wsp,#8\]!'
-[^:]*:131: Error: integer 64-bit register expected at operand 3 -- `ldp x6,x29,\[w7,#8\]!'
-[^:]*:132: Error: integer 64-bit register expected at operand 2 -- `str x30,\[w11,#8\]!'
-[^:]*:133: Error: integer 64-bit register expected at operand 3 -- `stp x8,x27,\[wsp,#8\]!'
+[^:]*:130: Error: 64-bit integer or SP register expected at operand 2 -- `ldr x1,\[wsp,#8\]!'
+[^:]*:131: Error: 64-bit integer or SP register expected at operand 3 -- `ldp x6,x29,\[w7,#8\]!'
+[^:]*:132: Error: 64-bit integer or SP register expected at operand 2 -- `str x30,\[w11,#8\]!'
+[^:]*:133: Error: 64-bit integer or SP register expected at operand 3 -- `stp x8,x27,\[wsp,#8\]!'
 [^:]*:213: Error: register element index out of range 0 to 1 at operand 2 -- `dup v0\.2d,v1\.2d\[-1\]'
 [^:]*:216: Error: register element index out of range 0 to 1 at operand 2 -- `dup v0\.2d,v1\.2d\[2\]'
 [^:]*:217: Error: register element index out of range 0 to 1 at operand 2 -- `dup v0\.2d,v1\.2d\[64\]'
@@ -148,3 +148,5 @@
 [^:]*:262: Error: invalid floating-point constant at operand 2 -- `fmov d0,#-2'
 [^:]*:263: Error: invalid floating-point constant at operand 2 -- `fmov s0,2'
 [^:]*:264: Error: invalid floating-point constant at operand 2 -- `fmov s0,-2'
+[^:]*:266: Error: integer 64-bit register expected at operand 2 -- `st2 {v0.4s,v1.4s},\[sp\],xzr'
+[^:]*:267: Error: integer or zero register expected at operand 2 -- `str x1,\[x2,sp\]'
diff --git a/gas/testsuite/gas/aarch64/diagnostic.s b/gas/testsuite/gas/aarch64/diagnostic.s
index 3092b9b..8dbb542 100644
--- a/gas/testsuite/gas/aarch64/diagnostic.s
+++ b/gas/testsuite/gas/aarch64/diagnostic.s
@@ -262,3 +262,6 @@
 	fmov	d0, #-2
 	fmov	s0, 2
 	fmov	s0, -2
+
+	st2	{v0.4s, v1.4s}, [sp], xzr
+	str	x1, [x2, sp]
diff --git a/gas/testsuite/gas/aarch64/illegal-lse.l b/gas/testsuite/gas/aarch64/illegal-lse.l
index ed70065..dd57f99 100644
--- a/gas/testsuite/gas/aarch64/illegal-lse.l
+++ b/gas/testsuite/gas/aarch64/illegal-lse.l
@@ -1,433 +1,433 @@
 [^:]*: Assembler messages:
 [^:]*:68: Error: operand mismatch -- `cas w0,x1,\[x2\]'
-[^:]*:68: Error: operand 3 should be an address with base register \(no offset\) -- `cas w2,w3,\[w4\]'
+[^:]*:68: Error: 64-bit integer or SP register expected at operand 3 -- `cas w2,w3,\[w4\]'
 [^:]*:68: Error: operand mismatch -- `casa w0,x1,\[x2\]'
-[^:]*:68: Error: operand 3 should be an address with base register \(no offset\) -- `casa w2,w3,\[w4\]'
+[^:]*:68: Error: 64-bit integer or SP register expected at operand 3 -- `casa w2,w3,\[w4\]'
 [^:]*:68: Error: operand mismatch -- `casl w0,x1,\[x2\]'
-[^:]*:68: Error: operand 3 should be an address with base register \(no offset\) -- `casl w2,w3,\[w4\]'
+[^:]*:68: Error: 64-bit integer or SP register expected at operand 3 -- `casl w2,w3,\[w4\]'
 [^:]*:68: Error: operand mismatch -- `casal w0,x1,\[x2\]'
-[^:]*:68: Error: operand 3 should be an address with base register \(no offset\) -- `casal w2,w3,\[w4\]'
+[^:]*:68: Error: 64-bit integer or SP register expected at operand 3 -- `casal w2,w3,\[w4\]'
 [^:]*:68: Error: operand mismatch -- `casb w0,x1,\[x2\]'
-[^:]*:68: Error: operand 3 should be an address with base register \(no offset\) -- `casb w2,w3,\[w4\]'
+[^:]*:68: Error: 64-bit integer or SP register expected at operand 3 -- `casb w2,w3,\[w4\]'
 [^:]*:68: Error: operand mismatch -- `cash w0,x1,\[x2\]'
-[^:]*:68: Error: operand 3 should be an address with base register \(no offset\) -- `cash w2,w3,\[w4\]'
+[^:]*:68: Error: 64-bit integer or SP register expected at operand 3 -- `cash w2,w3,\[w4\]'
 [^:]*:68: Error: operand mismatch -- `casab w0,x1,\[x2\]'
-[^:]*:68: Error: operand 3 should be an address with base register \(no offset\) -- `casab w2,w3,\[w4\]'
+[^:]*:68: Error: 64-bit integer or SP register expected at operand 3 -- `casab w2,w3,\[w4\]'
 [^:]*:68: Error: operand mismatch -- `caslb w0,x1,\[x2\]'
-[^:]*:68: Error: operand 3 should be an address with base register \(no offset\) -- `caslb w2,w3,\[w4\]'
+[^:]*:68: Error: 64-bit integer or SP register expected at operand 3 -- `caslb w2,w3,\[w4\]'
 [^:]*:68: Error: operand mismatch -- `casalb w0,x1,\[x2\]'
-[^:]*:68: Error: operand 3 should be an address with base register \(no offset\) -- `casalb w2,w3,\[w4\]'
+[^:]*:68: Error: 64-bit integer or SP register expected at operand 3 -- `casalb w2,w3,\[w4\]'
 [^:]*:68: Error: operand mismatch -- `casah w0,x1,\[x2\]'
-[^:]*:68: Error: operand 3 should be an address with base register \(no offset\) -- `casah w2,w3,\[w4\]'
+[^:]*:68: Error: 64-bit integer or SP register expected at operand 3 -- `casah w2,w3,\[w4\]'
 [^:]*:68: Error: operand mismatch -- `caslh w0,x1,\[x2\]'
-[^:]*:68: Error: operand 3 should be an address with base register \(no offset\) -- `caslh w2,w3,\[w4\]'
+[^:]*:68: Error: 64-bit integer or SP register expected at operand 3 -- `caslh w2,w3,\[w4\]'
 [^:]*:68: Error: operand mismatch -- `casalh w0,x1,\[x2\]'
-[^:]*:68: Error: operand 3 should be an address with base register \(no offset\) -- `casalh w2,w3,\[w4\]'
+[^:]*:68: Error: 64-bit integer or SP register expected at operand 3 -- `casalh w2,w3,\[w4\]'
 [^:]*:68: Error: operand mismatch -- `cas w0,x1,\[x2\]'
-[^:]*:68: Error: operand 3 should be an address with base register \(no offset\) -- `cas x2,x3,\[w4\]'
+[^:]*:68: Error: 64-bit integer or SP register expected at operand 3 -- `cas x2,x3,\[w4\]'
 [^:]*:68: Error: operand mismatch -- `casa w0,x1,\[x2\]'
-[^:]*:68: Error: operand 3 should be an address with base register \(no offset\) -- `casa x2,x3,\[w4\]'
+[^:]*:68: Error: 64-bit integer or SP register expected at operand 3 -- `casa x2,x3,\[w4\]'
 [^:]*:68: Error: operand mismatch -- `casl w0,x1,\[x2\]'
-[^:]*:68: Error: operand 3 should be an address with base register \(no offset\) -- `casl x2,x3,\[w4\]'
+[^:]*:68: Error: 64-bit integer or SP register expected at operand 3 -- `casl x2,x3,\[w4\]'
 [^:]*:68: Error: operand mismatch -- `casal w0,x1,\[x2\]'
-[^:]*:68: Error: operand 3 should be an address with base register \(no offset\) -- `casal x2,x3,\[w4\]'
+[^:]*:68: Error: 64-bit integer or SP register expected at operand 3 -- `casal x2,x3,\[w4\]'
 [^:]*:69: Error: operand mismatch -- `swp w0,x1,\[x2\]'
-[^:]*:69: Error: operand 3 should be an address with base register \(no offset\) -- `swp w2,w3,\[w4\]'
+[^:]*:69: Error: 64-bit integer or SP register expected at operand 3 -- `swp w2,w3,\[w4\]'
 [^:]*:69: Error: operand mismatch -- `swpa w0,x1,\[x2\]'
-[^:]*:69: Error: operand 3 should be an address with base register \(no offset\) -- `swpa w2,w3,\[w4\]'
+[^:]*:69: Error: 64-bit integer or SP register expected at operand 3 -- `swpa w2,w3,\[w4\]'
 [^:]*:69: Error: operand mismatch -- `swpl w0,x1,\[x2\]'
-[^:]*:69: Error: operand 3 should be an address with base register \(no offset\) -- `swpl w2,w3,\[w4\]'
+[^:]*:69: Error: 64-bit integer or SP register expected at operand 3 -- `swpl w2,w3,\[w4\]'
 [^:]*:69: Error: operand mismatch -- `swpal w0,x1,\[x2\]'
-[^:]*:69: Error: operand 3 should be an address with base register \(no offset\) -- `swpal w2,w3,\[w4\]'
+[^:]*:69: Error: 64-bit integer or SP register expected at operand 3 -- `swpal w2,w3,\[w4\]'
 [^:]*:69: Error: operand mismatch -- `swpb w0,x1,\[x2\]'
-[^:]*:69: Error: operand 3 should be an address with base register \(no offset\) -- `swpb w2,w3,\[w4\]'
+[^:]*:69: Error: 64-bit integer or SP register expected at operand 3 -- `swpb w2,w3,\[w4\]'
 [^:]*:69: Error: operand mismatch -- `swph w0,x1,\[x2\]'
-[^:]*:69: Error: operand 3 should be an address with base register \(no offset\) -- `swph w2,w3,\[w4\]'
+[^:]*:69: Error: 64-bit integer or SP register expected at operand 3 -- `swph w2,w3,\[w4\]'
 [^:]*:69: Error: operand mismatch -- `swpab w0,x1,\[x2\]'
-[^:]*:69: Error: operand 3 should be an address with base register \(no offset\) -- `swpab w2,w3,\[w4\]'
+[^:]*:69: Error: 64-bit integer or SP register expected at operand 3 -- `swpab w2,w3,\[w4\]'
 [^:]*:69: Error: operand mismatch -- `swplb w0,x1,\[x2\]'
-[^:]*:69: Error: operand 3 should be an address with base register \(no offset\) -- `swplb w2,w3,\[w4\]'
+[^:]*:69: Error: 64-bit integer or SP register expected at operand 3 -- `swplb w2,w3,\[w4\]'
 [^:]*:69: Error: operand mismatch -- `swpalb w0,x1,\[x2\]'
-[^:]*:69: Error: operand 3 should be an address with base register \(no offset\) -- `swpalb w2,w3,\[w4\]'
+[^:]*:69: Error: 64-bit integer or SP register expected at operand 3 -- `swpalb w2,w3,\[w4\]'
 [^:]*:69: Error: operand mismatch -- `swpah w0,x1,\[x2\]'
-[^:]*:69: Error: operand 3 should be an address with base register \(no offset\) -- `swpah w2,w3,\[w4\]'
+[^:]*:69: Error: 64-bit integer or SP register expected at operand 3 -- `swpah w2,w3,\[w4\]'
 [^:]*:69: Error: operand mismatch -- `swplh w0,x1,\[x2\]'
-[^:]*:69: Error: operand 3 should be an address with base register \(no offset\) -- `swplh w2,w3,\[w4\]'
+[^:]*:69: Error: 64-bit integer or SP register expected at operand 3 -- `swplh w2,w3,\[w4\]'
 [^:]*:69: Error: operand mismatch -- `swpalh w0,x1,\[x2\]'
-[^:]*:69: Error: operand 3 should be an address with base register \(no offset\) -- `swpalh w2,w3,\[w4\]'
+[^:]*:69: Error: 64-bit integer or SP register expected at operand 3 -- `swpalh w2,w3,\[w4\]'
 [^:]*:69: Error: operand mismatch -- `swp w0,x1,\[x2\]'
-[^:]*:69: Error: operand 3 should be an address with base register \(no offset\) -- `swp x2,x3,\[w4\]'
+[^:]*:69: Error: 64-bit integer or SP register expected at operand 3 -- `swp x2,x3,\[w4\]'
 [^:]*:69: Error: operand mismatch -- `swpa w0,x1,\[x2\]'
-[^:]*:69: Error: operand 3 should be an address with base register \(no offset\) -- `swpa x2,x3,\[w4\]'
+[^:]*:69: Error: 64-bit integer or SP register expected at operand 3 -- `swpa x2,x3,\[w4\]'
 [^:]*:69: Error: operand mismatch -- `swpl w0,x1,\[x2\]'
-[^:]*:69: Error: operand 3 should be an address with base register \(no offset\) -- `swpl x2,x3,\[w4\]'
+[^:]*:69: Error: 64-bit integer or SP register expected at operand 3 -- `swpl x2,x3,\[w4\]'
 [^:]*:69: Error: operand mismatch -- `swpal w0,x1,\[x2\]'
-[^:]*:69: Error: operand 3 should be an address with base register \(no offset\) -- `swpal x2,x3,\[w4\]'
+[^:]*:69: Error: 64-bit integer or SP register expected at operand 3 -- `swpal x2,x3,\[w4\]'
 [^:]*:70: Error: reg pair must start from even reg at operand 1 -- `casp w1,w1,w2,w3,\[x5\]'
 [^:]*:70: Error: reg pair must be contiguous at operand 2 -- `casp w4,w4,w6,w7,\[sp\]'
 [^:]*:70: Error: operand mismatch -- `casp w0,x1,x2,x3,\[x2\]'
-[^:]*:70: Error: operand 5 should be an address with base register \(no offset\) -- `casp x4,x5,x6,x7,\[w8\]'
+[^:]*:70: Error: 64-bit integer or SP register expected at operand 5 -- `casp x4,x5,x6,x7,\[w8\]'
 [^:]*:70: Error: reg pair must start from even reg at operand 1 -- `caspa w1,w1,w2,w3,\[x5\]'
 [^:]*:70: Error: reg pair must be contiguous at operand 2 -- `caspa w4,w4,w6,w7,\[sp\]'
 [^:]*:70: Error: operand mismatch -- `caspa w0,x1,x2,x3,\[x2\]'
-[^:]*:70: Error: operand 5 should be an address with base register \(no offset\) -- `caspa x4,x5,x6,x7,\[w8\]'
+[^:]*:70: Error: 64-bit integer or SP register expected at operand 5 -- `caspa x4,x5,x6,x7,\[w8\]'
 [^:]*:70: Error: reg pair must start from even reg at operand 1 -- `caspl w1,w1,w2,w3,\[x5\]'
 [^:]*:70: Error: reg pair must be contiguous at operand 2 -- `caspl w4,w4,w6,w7,\[sp\]'
 [^:]*:70: Error: operand mismatch -- `caspl w0,x1,x2,x3,\[x2\]'
-[^:]*:70: Error: operand 5 should be an address with base register \(no offset\) -- `caspl x4,x5,x6,x7,\[w8\]'
+[^:]*:70: Error: 64-bit integer or SP register expected at operand 5 -- `caspl x4,x5,x6,x7,\[w8\]'
 [^:]*:70: Error: reg pair must start from even reg at operand 1 -- `caspal w1,w1,w2,w3,\[x5\]'
 [^:]*:70: Error: reg pair must be contiguous at operand 2 -- `caspal w4,w4,w6,w7,\[sp\]'
 [^:]*:70: Error: operand mismatch -- `caspal w0,x1,x2,x3,\[x2\]'
-[^:]*:70: Error: operand 5 should be an address with base register \(no offset\) -- `caspal x4,x5,x6,x7,\[w8\]'
+[^:]*:70: Error: 64-bit integer or SP register expected at operand 5 -- `caspal x4,x5,x6,x7,\[w8\]'
 [^:]*:71: Error: operand mismatch -- `ldadd w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldadd w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldadd w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldadda w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldadda w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldadda w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldaddl w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldaddl w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldaddl w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldaddal w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldaddal w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldaddal w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldaddb w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldaddb w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldaddb w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldaddh w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldaddh w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldaddh w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldaddab w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldaddab w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldaddab w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldaddlb w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldaddlb w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldaddlb w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldaddalb w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldaddalb w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldaddalb w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldaddah w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldaddah w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldaddah w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldaddlh w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldaddlh w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldaddlh w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldaddalh w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldaddalh w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldaddalh w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldadd w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldadd x2,x3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldadd x2,x3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldadda w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldadda x2,x3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldadda x2,x3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldaddl w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldaddl x2,x3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldaddl x2,x3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldaddal w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldaddal x2,x3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldaddal x2,x3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldclr w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldclr w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldclr w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldclra w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldclra w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldclra w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldclrl w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldclrl w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldclrl w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldclral w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldclral w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldclral w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldclrb w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldclrb w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldclrb w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldclrh w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldclrh w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldclrh w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldclrab w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldclrab w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldclrab w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldclrlb w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldclrlb w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldclrlb w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldclralb w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldclralb w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldclralb w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldclrah w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldclrah w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldclrah w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldclrlh w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldclrlh w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldclrlh w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldclralh w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldclralh w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldclralh w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldclr w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldclr x2,x3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldclr x2,x3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldclra w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldclra x2,x3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldclra x2,x3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldclrl w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldclrl x2,x3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldclrl x2,x3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldclral w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldclral x2,x3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldclral x2,x3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldeor w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldeor w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldeor w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldeora w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldeora w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldeora w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldeorl w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldeorl w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldeorl w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldeoral w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldeoral w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldeoral w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldeorb w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldeorb w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldeorb w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldeorh w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldeorh w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldeorh w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldeorab w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldeorab w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldeorab w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldeorlb w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldeorlb w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldeorlb w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldeoralb w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldeoralb w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldeoralb w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldeorah w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldeorah w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldeorah w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldeorlh w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldeorlh w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldeorlh w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldeoralh w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldeoralh w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldeoralh w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldeor w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldeor x2,x3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldeor x2,x3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldeora w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldeora x2,x3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldeora x2,x3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldeorl w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldeorl x2,x3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldeorl x2,x3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldeoral w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldeoral x2,x3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldeoral x2,x3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldset w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldset w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldset w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldseta w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldseta w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldseta w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldsetl w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsetl w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsetl w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldsetal w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsetal w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsetal w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldsetb w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsetb w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsetb w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldseth w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldseth w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldseth w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldsetab w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsetab w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsetab w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldsetlb w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsetlb w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsetlb w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldsetalb w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsetalb w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsetalb w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldsetah w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsetah w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsetah w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldsetlh w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsetlh w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsetlh w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldsetalh w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsetalh w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsetalh w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldset w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldset x2,x3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldset x2,x3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldseta w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldseta x2,x3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldseta x2,x3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldsetl w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsetl x2,x3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsetl x2,x3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldsetal w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsetal x2,x3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsetal x2,x3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldsmax w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsmax w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsmax w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldsmaxa w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsmaxa w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsmaxa w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldsmaxl w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsmaxl w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsmaxl w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldsmaxal w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsmaxal w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsmaxal w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldsmaxb w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsmaxb w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsmaxb w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldsmaxh w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsmaxh w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsmaxh w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldsmaxab w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsmaxab w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsmaxab w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldsmaxlb w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsmaxlb w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsmaxlb w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldsmaxalb w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsmaxalb w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsmaxalb w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldsmaxah w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsmaxah w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsmaxah w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldsmaxlh w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsmaxlh w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsmaxlh w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldsmaxalh w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsmaxalh w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsmaxalh w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldsmax w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsmax x2,x3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsmax x2,x3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldsmaxa w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsmaxa x2,x3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsmaxa x2,x3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldsmaxl w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsmaxl x2,x3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsmaxl x2,x3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldsmaxal w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsmaxal x2,x3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsmaxal x2,x3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldsmin w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsmin w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsmin w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldsmina w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsmina w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsmina w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldsminl w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsminl w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsminl w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldsminal w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsminal w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsminal w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldsminb w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsminb w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsminb w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldsminh w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsminh w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsminh w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldsminab w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsminab w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsminab w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldsminlb w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsminlb w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsminlb w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldsminalb w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsminalb w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsminalb w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldsminah w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsminah w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsminah w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldsminlh w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsminlh w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsminlh w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldsminalh w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsminalh w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsminalh w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldsmin w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsmin x2,x3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsmin x2,x3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldsmina w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsmina x2,x3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsmina x2,x3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldsminl w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsminl x2,x3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsminl x2,x3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldsminal w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsminal x2,x3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsminal x2,x3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldumax w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldumax w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldumax w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldumaxa w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldumaxa w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldumaxa w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldumaxl w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldumaxl w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldumaxl w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldumaxal w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldumaxal w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldumaxal w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldumaxb w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldumaxb w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldumaxb w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldumaxh w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldumaxh w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldumaxh w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldumaxab w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldumaxab w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldumaxab w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldumaxlb w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldumaxlb w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldumaxlb w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldumaxalb w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldumaxalb w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldumaxalb w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldumaxah w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldumaxah w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldumaxah w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldumaxlh w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldumaxlh w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldumaxlh w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldumaxalh w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldumaxalh w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldumaxalh w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldumax w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldumax x2,x3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldumax x2,x3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldumaxa w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldumaxa x2,x3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldumaxa x2,x3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldumaxl w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldumaxl x2,x3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldumaxl x2,x3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldumaxal w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldumaxal x2,x3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldumaxal x2,x3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldumin w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldumin w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldumin w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldumina w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldumina w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldumina w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `lduminl w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `lduminl w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `lduminl w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `lduminal w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `lduminal w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `lduminal w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `lduminb w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `lduminb w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `lduminb w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `lduminh w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `lduminh w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `lduminh w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `lduminab w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `lduminab w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `lduminab w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `lduminlb w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `lduminlb w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `lduminlb w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `lduminalb w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `lduminalb w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `lduminalb w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `lduminah w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `lduminah w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `lduminah w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `lduminlh w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `lduminlh w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `lduminlh w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `lduminalh w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `lduminalh w2,w3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `lduminalh w2,w3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldumin w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldumin x2,x3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldumin x2,x3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `ldumina w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldumina x2,x3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldumina x2,x3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `lduminl w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `lduminl x2,x3,\[w4\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `lduminl x2,x3,\[w4\]'
 [^:]*:71: Error: operand mismatch -- `lduminal w0,x1,\[x2\]'
-[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `lduminal x2,x3,\[w4\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stadd w2,\[w3\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `staddl w2,\[w3\]'
+[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `lduminal x2,x3,\[w4\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stadd w2,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `staddl w2,\[w3\]'
 [^:]*:72: Error: operand mismatch -- `staddb x0,\[x2\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `staddb w2,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `staddb w2,\[w3\]'
 [^:]*:72: Error: operand mismatch -- `staddh x0,\[x2\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `staddh w2,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `staddh w2,\[w3\]'
 [^:]*:72: Error: operand mismatch -- `staddlb x0,\[x2\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `staddlb w2,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `staddlb w2,\[w3\]'
 [^:]*:72: Error: operand mismatch -- `staddlh x0,\[x2\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `staddlh w2,\[w3\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stadd x0,\[w3\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `staddl x0,\[w3\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stclr w2,\[w3\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stclrl w2,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `staddlh w2,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stadd x0,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `staddl x0,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stclr w2,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stclrl w2,\[w3\]'
 [^:]*:72: Error: operand mismatch -- `stclrb x0,\[x2\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stclrb w2,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stclrb w2,\[w3\]'
 [^:]*:72: Error: operand mismatch -- `stclrh x0,\[x2\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stclrh w2,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stclrh w2,\[w3\]'
 [^:]*:72: Error: operand mismatch -- `stclrlb x0,\[x2\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stclrlb w2,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stclrlb w2,\[w3\]'
 [^:]*:72: Error: operand mismatch -- `stclrlh x0,\[x2\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stclrlh w2,\[w3\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stclr x0,\[w3\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stclrl x0,\[w3\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `steor w2,\[w3\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `steorl w2,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stclrlh w2,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stclr x0,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stclrl x0,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `steor w2,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `steorl w2,\[w3\]'
 [^:]*:72: Error: operand mismatch -- `steorb x0,\[x2\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `steorb w2,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `steorb w2,\[w3\]'
 [^:]*:72: Error: operand mismatch -- `steorh x0,\[x2\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `steorh w2,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `steorh w2,\[w3\]'
 [^:]*:72: Error: operand mismatch -- `steorlb x0,\[x2\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `steorlb w2,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `steorlb w2,\[w3\]'
 [^:]*:72: Error: operand mismatch -- `steorlh x0,\[x2\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `steorlh w2,\[w3\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `steor x0,\[w3\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `steorl x0,\[w3\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stset w2,\[w3\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stsetl w2,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `steorlh w2,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `steor x0,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `steorl x0,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stset w2,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stsetl w2,\[w3\]'
 [^:]*:72: Error: operand mismatch -- `stsetb x0,\[x2\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stsetb w2,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stsetb w2,\[w3\]'
 [^:]*:72: Error: operand mismatch -- `stseth x0,\[x2\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stseth w2,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stseth w2,\[w3\]'
 [^:]*:72: Error: operand mismatch -- `stsetlb x0,\[x2\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stsetlb w2,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stsetlb w2,\[w3\]'
 [^:]*:72: Error: operand mismatch -- `stsetlh x0,\[x2\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stsetlh w2,\[w3\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stset x0,\[w3\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stsetl x0,\[w3\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stsmax w2,\[w3\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stsmaxl w2,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stsetlh w2,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stset x0,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stsetl x0,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stsmax w2,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stsmaxl w2,\[w3\]'
 [^:]*:72: Error: operand mismatch -- `stsmaxb x0,\[x2\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stsmaxb w2,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stsmaxb w2,\[w3\]'
 [^:]*:72: Error: operand mismatch -- `stsmaxh x0,\[x2\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stsmaxh w2,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stsmaxh w2,\[w3\]'
 [^:]*:72: Error: operand mismatch -- `stsmaxlb x0,\[x2\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stsmaxlb w2,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stsmaxlb w2,\[w3\]'
 [^:]*:72: Error: operand mismatch -- `stsmaxlh x0,\[x2\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stsmaxlh w2,\[w3\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stsmax x0,\[w3\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stsmaxl x0,\[w3\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stsmin w2,\[w3\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stsminl w2,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stsmaxlh w2,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stsmax x0,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stsmaxl x0,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stsmin w2,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stsminl w2,\[w3\]'
 [^:]*:72: Error: operand mismatch -- `stsminb x0,\[x2\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stsminb w2,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stsminb w2,\[w3\]'
 [^:]*:72: Error: operand mismatch -- `stsminh x0,\[x2\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stsminh w2,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stsminh w2,\[w3\]'
 [^:]*:72: Error: operand mismatch -- `stsminlb x0,\[x2\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stsminlb w2,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stsminlb w2,\[w3\]'
 [^:]*:72: Error: operand mismatch -- `stsminlh x0,\[x2\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stsminlh w2,\[w3\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stsmin x0,\[w3\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stsminl x0,\[w3\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stumax w2,\[w3\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stumaxl w2,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stsminlh w2,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stsmin x0,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stsminl x0,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stumax w2,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stumaxl w2,\[w3\]'
 [^:]*:72: Error: operand mismatch -- `stumaxb x0,\[x2\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stumaxb w2,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stumaxb w2,\[w3\]'
 [^:]*:72: Error: operand mismatch -- `stumaxh x0,\[x2\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stumaxh w2,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stumaxh w2,\[w3\]'
 [^:]*:72: Error: operand mismatch -- `stumaxlb x0,\[x2\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stumaxlb w2,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stumaxlb w2,\[w3\]'
 [^:]*:72: Error: operand mismatch -- `stumaxlh x0,\[x2\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stumaxlh w2,\[w3\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stumax x0,\[w3\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stumaxl x0,\[w3\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stumin w2,\[w3\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stuminl w2,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stumaxlh w2,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stumax x0,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stumaxl x0,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stumin w2,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stuminl w2,\[w3\]'
 [^:]*:72: Error: operand mismatch -- `stuminb x0,\[x2\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stuminb w2,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stuminb w2,\[w3\]'
 [^:]*:72: Error: operand mismatch -- `stuminh x0,\[x2\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stuminh w2,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stuminh w2,\[w3\]'
 [^:]*:72: Error: operand mismatch -- `stuminlb x0,\[x2\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stuminlb w2,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stuminlb w2,\[w3\]'
 [^:]*:72: Error: operand mismatch -- `stuminlh x0,\[x2\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stuminlh w2,\[w3\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stumin x0,\[w3\]'
-[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stuminl x0,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stuminlh w2,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stumin x0,\[w3\]'
+[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stuminl x0,\[w3\]'

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [AArch64][SVE 12/32] Make more use of bfd_boolean
  2016-08-25 13:39   ` Richard Earnshaw (lists)
@ 2016-09-16 11:56     ` Richard Sandiford
  2016-09-20 12:39       ` Richard Earnshaw (lists)
  0 siblings, 1 reply; 76+ messages in thread
From: Richard Sandiford @ 2016-09-16 11:56 UTC (permalink / raw)
  To: Richard Earnshaw (lists); +Cc: binutils

"Richard Earnshaw (lists)" <Richard.Earnshaw@arm.com> writes:
> On 23/08/16 10:13, Richard Sandiford wrote:
>> Following on from the previous patch, which converted the
>> aarch64_reg_parse_32_64 parameters to bfd_booleans, this one
>> does the same for parse_address_main and parse_address.
>> It also documents the parameters.
>> 
>> This isn't an attempt to convert the whole file to use bfd_booleans
>> more often.  It's simply trying to avoid inconsistencies with new
>> SVE parameters.
>> 
>> OK to install?
>> 
>> Thanks,
>> Richard
>> 
>> 
>> gas/
>> 	* config/tc-aarch64.c (parse_address_main): Turn reloc and
>> 	accept_reg_post_index into bfd_booleans.  Add commentary.
>> 	(parse_address_reloc): Update accordingly.  Add commentary.
>> 	(parse_address): Likewise.  Also change accept_reg_post_index
>> 	into a bfd_boolean here.
>> 	(parse_operands): Update calls accordingly.
>
> My comment on the previous patch applies somewhat here too, although the
> two bools are not as closely related here.  In particular statements
> such as
>
>   return parse_address_main (str, operand, TRUE, FALSE);
>
> are not intuitively obvious to the reader of the code.

Yeah...

I think here too we can just get rid of the parameters and leave the
callers to check the addressing modes.  As it happens, the handling of
ADDR_SIMM9{,_2} already did this for relocation operators (i.e. it used
parse_address_reloc and then rejected relocations).

The callers are already set up to reject invalid register post-indexed
addressing, so we can simply remove the accept_reg_post_index parameter
without adding any more checks.  This again creates a corner case where:

	.equ	x2, 1
	ldr	w0, [x1], x2

was previously an acceptable way of writing "ldr w0, [x1], #1" but
is now rejected.

Removing the "reloc" parameter means that two cases need to check
explicitly for relocation operators.
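
The shape of those checks (taken straight from the ADDR_SIMM7 hunk in
the patch below, so nothing new is being added here) is:

  if (inst.reloc.type != BFD_RELOC_UNUSED)
    {
      set_syntax_error (_("relocation not allowed"));
      goto failure;
    }

i.e. parse_address_main still records any relocation operator it parses
in inst.reloc.type, and the operand cases that must not accept one
reject it after the call.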

ADDR_SIMM9_2 appears to be unused.  I'll send a separate patch
to remove it.

This patch makes parse_address temporarily equivalent to
parse_address_main, but later patches in the series will need
to keep the distinction.

Tested on aarch64-linux-gnu.  OK to install?

Thanks,
Richard


gas/
	* config/tc-aarch64.c (parse_address_main): Remove reloc and
	accept_reg_post_index parameters.  Parse relocations and register
	post indexes unconditionally.
	(parse_address): Remove accept_reg_post_index parameter.
	Update call to parse_address_main.
	(parse_address_reloc): Delete.
	(parse_operands): Call parse_address instead of parse_address_main.
	Update existing callers of parse_address and make them check
	inst.reloc.type where appropriate.
	* testsuite/gas/aarch64/diagnostic.s: Add tests for relocations
	in ADDR_SIMPLE, SIMD_ADDR_SIMPLE, ADDR_SIMM7 and ADDR_SIMM9 addresses.
	Also test for invalid uses of post-index register addressing.
	* testsuite/gas/aarch64/diagnostic.l: Update accordingly.

diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
index 7b5be8b..f82fdb9 100644
--- a/gas/config/tc-aarch64.c
+++ b/gas/config/tc-aarch64.c
@@ -3173,8 +3173,7 @@ parse_shifter_operand_reloc (char **str, aarch64_opnd_info *operand,
    supported by the instruction, and to set inst.reloc.type.  */
 
 static bfd_boolean
-parse_address_main (char **str, aarch64_opnd_info *operand, int reloc,
-		    int accept_reg_post_index)
+parse_address_main (char **str, aarch64_opnd_info *operand)
 {
   char *p = *str;
   const reg_entry *reg;
@@ -3190,7 +3189,7 @@ parse_address_main (char **str, aarch64_opnd_info *operand, int reloc,
 
       /* #:<reloc_op>:<symbol>  */
       skip_past_char (&p, '#');
-      if (reloc && skip_past_char (&p, ':'))
+      if (skip_past_char (&p, ':'))
 	{
 	  bfd_reloc_code_real_type ty;
 	  struct reloc_table_entry *entry;
@@ -3315,7 +3314,7 @@ parse_address_main (char **str, aarch64_opnd_info *operand, int reloc,
 	{
 	  /* [Xn,#:<reloc_op>:<symbol>  */
 	  skip_past_char (&p, '#');
-	  if (reloc && skip_past_char (&p, ':'))
+	  if (skip_past_char (&p, ':'))
 	    {
 	      struct reloc_table_entry *entry;
 
@@ -3388,8 +3387,8 @@ parse_address_main (char **str, aarch64_opnd_info *operand, int reloc,
 	  return FALSE;
 	}
 
-      if (accept_reg_post_index
-	  && (reg = aarch64_reg_parse_32_64 (&p, &offset_qualifier)))
+      reg = aarch64_reg_parse_32_64 (&p, &offset_qualifier);
+      if (reg)
 	{
 	  /* [Xn],Xm */
 	  if (!aarch64_check_reg_type (reg, REG_TYPE_R_64))
@@ -3428,19 +3427,12 @@ parse_address_main (char **str, aarch64_opnd_info *operand, int reloc,
   return TRUE;
 }
 
-/* Return TRUE on success; otherwise return FALSE.  */
-static bfd_boolean
-parse_address (char **str, aarch64_opnd_info *operand,
-	       int accept_reg_post_index)
-{
-  return parse_address_main (str, operand, 0, accept_reg_post_index);
-}
-
-/* Return TRUE on success; otherwise return FALSE.  */
+/* Parse a base AArch64 address (as opposed to an SVE one).  Return TRUE
+   on success.  */
 static bfd_boolean
-parse_address_reloc (char **str, aarch64_opnd_info *operand)
+parse_address (char **str, aarch64_opnd_info *operand)
 {
-  return parse_address_main (str, operand, 1, 0);
+  return parse_address_main (str, operand);
 }
 
 /* Parse an operand for a MOVZ, MOVN or MOVK instruction.
@@ -5419,7 +5411,7 @@ parse_operands (char *str, const aarch64_opcode *opcode)
 	case AARCH64_OPND_ADDR_PCREL19:
 	case AARCH64_OPND_ADDR_PCREL21:
 	case AARCH64_OPND_ADDR_PCREL26:
-	  po_misc_or_fail (parse_address_reloc (&str, info));
+	  po_misc_or_fail (parse_address (&str, info));
 	  if (!info->addr.pcrel)
 	    {
 	      set_syntax_error (_("invalid pc-relative address"));
@@ -5490,7 +5482,7 @@ parse_operands (char *str, const aarch64_opcode *opcode)
 	    char *start = str;
 	    /* First use the normal address-parsing routines, to get
 	       the usual syntax errors.  */
-	    po_misc_or_fail (parse_address (&str, info, 0));
+	    po_misc_or_fail (parse_address (&str, info));
 	    if (info->addr.pcrel || info->addr.offset.is_reg
 		|| !info->addr.preind || info->addr.postind
 		|| info->addr.writeback)
@@ -5521,7 +5513,7 @@ parse_operands (char *str, const aarch64_opcode *opcode)
 
 	case AARCH64_OPND_ADDR_REGOFF:
 	  /* [<Xn|SP>, <R><m>{, <extend> {<amount>}}]  */
-	  po_misc_or_fail (parse_address (&str, info, 0));
+	  po_misc_or_fail (parse_address (&str, info));
 	  if (info->addr.pcrel || !info->addr.offset.is_reg
 	      || !info->addr.preind || info->addr.postind
 	      || info->addr.writeback)
@@ -5540,13 +5532,18 @@ parse_operands (char *str, const aarch64_opcode *opcode)
 	  break;
 
 	case AARCH64_OPND_ADDR_SIMM7:
-	  po_misc_or_fail (parse_address (&str, info, 0));
+	  po_misc_or_fail (parse_address (&str, info));
 	  if (info->addr.pcrel || info->addr.offset.is_reg
 	      || (!info->addr.preind && !info->addr.postind))
 	    {
 	      set_syntax_error (_("invalid addressing mode"));
 	      goto failure;
 	    }
+	  if (inst.reloc.type != BFD_RELOC_UNUSED)
+	    {
+	      set_syntax_error (_("relocation not allowed"));
+	      goto failure;
+	    }
 	  assign_imm_if_const_or_fixup_later (&inst.reloc, info,
 					      /* addr_off_p */ 1,
 					      /* need_libopcodes_p */ 1,
@@ -5555,7 +5552,7 @@ parse_operands (char *str, const aarch64_opcode *opcode)
 
 	case AARCH64_OPND_ADDR_SIMM9:
 	case AARCH64_OPND_ADDR_SIMM9_2:
-	  po_misc_or_fail (parse_address_reloc (&str, info));
+	  po_misc_or_fail (parse_address (&str, info));
 	  if (info->addr.pcrel || info->addr.offset.is_reg
 	      || (!info->addr.preind && !info->addr.postind)
 	      || (operands[i] == AARCH64_OPND_ADDR_SIMM9_2
@@ -5576,7 +5573,7 @@ parse_operands (char *str, const aarch64_opcode *opcode)
 	  break;
 
 	case AARCH64_OPND_ADDR_UIMM12:
-	  po_misc_or_fail (parse_address_reloc (&str, info));
+	  po_misc_or_fail (parse_address (&str, info));
 	  if (info->addr.pcrel || info->addr.offset.is_reg
 	      || !info->addr.preind || info->addr.writeback)
 	    {
@@ -5596,7 +5593,7 @@ parse_operands (char *str, const aarch64_opcode *opcode)
 
 	case AARCH64_OPND_SIMD_ADDR_POST:
 	  /* [<Xn|SP>], <Xm|#<amount>>  */
-	  po_misc_or_fail (parse_address (&str, info, 1));
+	  po_misc_or_fail (parse_address (&str, info));
 	  if (!info->addr.postind || !info->addr.writeback)
 	    {
 	      set_syntax_error (_("invalid addressing mode"));
diff --git a/gas/testsuite/gas/aarch64/diagnostic.l b/gas/testsuite/gas/aarch64/diagnostic.l
index ef23577..0fb4db9 100644
--- a/gas/testsuite/gas/aarch64/diagnostic.l
+++ b/gas/testsuite/gas/aarch64/diagnostic.l
@@ -150,3 +150,11 @@
 [^:]*:264: Error: invalid floating-point constant at operand 2 -- `fmov s0,-2'
 [^:]*:266: Error: integer 64-bit register expected at operand 2 -- `st2 {v0.4s,v1.4s},\[sp\],xzr'
 [^:]*:267: Error: integer or zero register expected at operand 2 -- `str x1,\[x2,sp\]'
+[^:]*:270: Error: relocation not allowed at operand 3 -- `ldnp x1,x2,\[x3,#:lo12:foo\]'
+[^:]*:271: Error: invalid addressing mode at operand 2 -- `ld1 {v0\.4s},\[x3,#:lo12:foo\]'
+[^:]*:272: Error: the optional immediate offset can only be 0 at operand 2 -- `stuminl x0,\[x3,#:lo12:foo\]'
+[^:]*:273: Error: relocation not allowed at operand 2 -- `prfum pldl1keep,\[x3,#:lo12:foo\]'
+[^:]*:275: Error: invalid addressing mode at operand 2 -- `ldr x0,\[x3\],x4'
+[^:]*:276: Error: invalid addressing mode at operand 3 -- `ldnp x1,x2,\[x3\],x4'
+[^:]*:278: Error: invalid addressing mode at operand 2 -- `stuminl x0,\[x3\],x4'
+[^:]*:279: Error: invalid addressing mode at operand 2 -- `prfum pldl1keep,\[x3\],x4'
diff --git a/gas/testsuite/gas/aarch64/diagnostic.s b/gas/testsuite/gas/aarch64/diagnostic.s
index 8dbb542..a9cd124 100644
--- a/gas/testsuite/gas/aarch64/diagnostic.s
+++ b/gas/testsuite/gas/aarch64/diagnostic.s
@@ -265,3 +265,15 @@
 
 	st2	{v0.4s, v1.4s}, [sp], xzr
 	str	x1, [x2, sp]
+
+	ldr	x0, [x1, #:lo12:foo] // OK
+	ldnp	x1, x2, [x3, #:lo12:foo]
+	ld1	{v0.4s}, [x3, #:lo12:foo]
+	stuminl x0, [x3, #:lo12:foo]
+	prfum	pldl1keep, [x3, #:lo12:foo]
+
+	ldr	x0, [x3], x4
+	ldnp	x1, x2, [x3], x4
+	ld1	{v0.4s}, [x3], x4 // OK
+	stuminl x0, [x3], x4
+	prfum	pldl1keep, [x3], x4

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [AArch64][SVE 25/32] Add support for SVE addressing modes
  2016-08-25 14:38   ` Richard Earnshaw (lists)
@ 2016-09-16 12:06     ` Richard Sandiford
  2016-09-20 13:40       ` Richard Earnshaw (lists)
  0 siblings, 1 reply; 76+ messages in thread
From: Richard Sandiford @ 2016-09-16 12:06 UTC (permalink / raw)
  To: Richard Earnshaw (lists); +Cc: binutils

"Richard Earnshaw (lists)" <Richard.Earnshaw@arm.com> writes:
> On 23/08/16 10:21, Richard Sandiford wrote:
>> This patch adds most of the new SVE addressing modes and associated
>> operands.  A follow-on patch adds MUL VL, since handling it separately
>> makes the changes easier to read.
>> 
>> The patch also introduces a new "operand-dependent data" field to the
>> operand flags, based closely on the existing one for opcode flags.
>> For SVE this new field needs only 2 bits, but it could be widened
>> in future if necessary.
>> 
>> OK to install?
>> 
>> Thanks,
>> Richard
>> 
>> 
>> include/opcode/
>> 	* aarch64.h (AARCH64_OPND_SVE_ADDR_RI_U6): New aarch64_opnd.
>> 	(AARCH64_OPND_SVE_ADDR_RI_U6x2, AARCH64_OPND_SVE_ADDR_RI_U6x4)
>> 	(AARCH64_OPND_SVE_ADDR_RI_U6x8, AARCH64_OPND_SVE_ADDR_RR)
>> 	(AARCH64_OPND_SVE_ADDR_RR_LSL1, AARCH64_OPND_SVE_ADDR_RR_LSL2)
>> 	(AARCH64_OPND_SVE_ADDR_RR_LSL3, AARCH64_OPND_SVE_ADDR_RX)
>> 	(AARCH64_OPND_SVE_ADDR_RX_LSL1, AARCH64_OPND_SVE_ADDR_RX_LSL2)
>> 	(AARCH64_OPND_SVE_ADDR_RX_LSL3, AARCH64_OPND_SVE_ADDR_RZ)
>> 	(AARCH64_OPND_SVE_ADDR_RZ_LSL1, AARCH64_OPND_SVE_ADDR_RZ_LSL2)
>> 	(AARCH64_OPND_SVE_ADDR_RZ_LSL3, AARCH64_OPND_SVE_ADDR_RZ_XTW_14)
>> 	(AARCH64_OPND_SVE_ADDR_RZ_XTW_22, AARCH64_OPND_SVE_ADDR_RZ_XTW1_14)
>> 	(AARCH64_OPND_SVE_ADDR_RZ_XTW1_22, AARCH64_OPND_SVE_ADDR_RZ_XTW2_14)
>> 	(AARCH64_OPND_SVE_ADDR_RZ_XTW2_22, AARCH64_OPND_SVE_ADDR_RZ_XTW3_14)
>> 	(AARCH64_OPND_SVE_ADDR_RZ_XTW3_22, AARCH64_OPND_SVE_ADDR_ZI_U5)
>> 	(AARCH64_OPND_SVE_ADDR_ZI_U5x2, AARCH64_OPND_SVE_ADDR_ZI_U5x4)
>> 	(AARCH64_OPND_SVE_ADDR_ZI_U5x8, AARCH64_OPND_SVE_ADDR_ZZ_LSL)
>> 	(AARCH64_OPND_SVE_ADDR_ZZ_SXTW, AARCH64_OPND_SVE_ADDR_ZZ_UXTW):
>> 	Likewise.
>> 
>> opcodes/
>> 	* aarch64-tbl.h (AARCH64_OPERANDS): Add entries for the new SVE
>> 	address operands.
>> 	* aarch64-opc.h (FLD_SVE_imm6, FLD_SVE_msz, FLD_SVE_xs_14)
>> 	(FLD_SVE_xs_22): New aarch64_field_kinds.
>> 	(OPD_F_OD_MASK, OPD_F_OD_LSB, OPD_F_NO_ZR): New flags.
>> 	(get_operand_specific_data): New function.
>> 	* aarch64-opc.c (fields): Add entries for FLD_SVE_imm6, FLD_SVE_msz,
>> 	FLD_SVE_xs_14 and FLD_SVE_xs_22.
>> 	(operand_general_constraint_met_p): Handle the new SVE address
>> 	operands.
>> 	(sve_reg): New array.
>> 	(get_addr_sve_reg_name): New function.
>> 	(aarch64_print_operand): Handle the new SVE address operands.
>> 	* aarch64-opc-2.c: Regenerate.
>> 	* aarch64-asm.h (ins_sve_addr_ri_u6, ins_sve_addr_rr_lsl)
>> 	(ins_sve_addr_rz_xtw, ins_sve_addr_zi_u5, ins_sve_addr_zz_lsl)
>> 	(ins_sve_addr_zz_sxtw, ins_sve_addr_zz_uxtw): New inserters.
>> 	* aarch64-asm.c (aarch64_ins_sve_addr_ri_u6): New function.
>> 	(aarch64_ins_sve_addr_rr_lsl): Likewise.
>> 	(aarch64_ins_sve_addr_rz_xtw): Likewise.
>> 	(aarch64_ins_sve_addr_zi_u5): Likewise.
>> 	(aarch64_ins_sve_addr_zz): Likewise.
>> 	(aarch64_ins_sve_addr_zz_lsl): Likewise.
>> 	(aarch64_ins_sve_addr_zz_sxtw): Likewise.
>> 	(aarch64_ins_sve_addr_zz_uxtw): Likewise.
>> 	* aarch64-asm-2.c: Regenerate.
>> 	* aarch64-dis.h (ext_sve_addr_ri_u6, ext_sve_addr_rr_lsl)
>> 	(ext_sve_addr_rz_xtw, ext_sve_addr_zi_u5, ext_sve_addr_zz_lsl)
>> 	(ext_sve_addr_zz_sxtw, ext_sve_addr_zz_uxtw): New extractors.
>> 	* aarch64-dis.c (aarch64_ext_sve_add_reg_imm): New function.
>> 	(aarch64_ext_sve_addr_ri_u6): Likewise.
>> 	(aarch64_ext_sve_addr_rr_lsl): Likewise.
>> 	(aarch64_ext_sve_addr_rz_xtw): Likewise.
>> 	(aarch64_ext_sve_addr_zi_u5): Likewise.
>> 	(aarch64_ext_sve_addr_zz): Likewise.
>> 	(aarch64_ext_sve_addr_zz_lsl): Likewise.
>> 	(aarch64_ext_sve_addr_zz_sxtw): Likewise.
>> 	(aarch64_ext_sve_addr_zz_uxtw): Likewise.
>> 	* aarch64-dis-2.c: Regenerate.
>> 
>> gas/
>> 	* config/tc-aarch64.c (aarch64_addr_reg_parse): New function,
>> 	split out from aarch64_reg_parse_32_64.  Handle Z registers too.
>> 	(aarch64_reg_parse_32_64): Call it.
>> 	(parse_address_main): Add base_qualifier, offset_qualifier
>> 	and accept_sve parameters.  Handle SVE base and offset registers.
>
> Ug!  Another bool parameter.

Here's an updated version, based on the new versions of patches
11 and 12.  It adds register type enums for the registers that
can be used as bases and offsets in SVE instructions (which is
the normal set plus Zn.D and Zn.S).  We can then use register
types instead of boolean parameters to say which registers
are acceptable.
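
To make that concrete, here is a minimal, self-contained sketch of the
idea (not the gas code itself; the enum and array names below are made
up for illustration, in the same spirit as reg_type_masks and
aarch64_check_reg_type in the patch).  Each basic register class gets
one bit, a composite type ORs together the classes it accepts, and the
same parser can then serve both plain and SVE addresses simply by being
passed a different register type:

  /* Illustrative only: composite register types as bitmasks.  */
  enum basic_reg { BASIC_R_64, BASIC_SP_64, BASIC_ZN, BASIC_NUM };
  enum composite_reg { COMP_R64_SP, COMP_SVE_BASE, COMP_NUM };

  static const unsigned int reg_type_masks_sketch[COMP_NUM] = {
    /* COMP_R64_SP: 64-bit integer registers or SP.  */
    (1u << BASIC_R_64) | (1u << BASIC_SP_64),
    /* COMP_SVE_BASE: the same set plus SVE Z registers.  */
    (1u << BASIC_R_64) | (1u << BASIC_SP_64) | (1u << BASIC_ZN)
  };

  /* Return nonzero if a register of class BASIC is acceptable where a
     composite type COMP is expected.  */
  static int
  reg_acceptable_p (enum basic_reg basic, enum composite_reg comp)
  {
    return (reg_type_masks_sketch[comp] & (1u << basic)) != 0;
  }

With that shape, parse_address_main only needs base_type and offset_type
arguments rather than a bool, and rejecting the wrong kind of register
is a single mask test.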

Tested on aarch64-linux-gnu.  OK to install?

Thanks,
Richard


include/opcode/
	* aarch64.h (AARCH64_OPND_SVE_ADDR_RI_U6): New aarch64_opnd.
	(AARCH64_OPND_SVE_ADDR_RI_U6x2, AARCH64_OPND_SVE_ADDR_RI_U6x4)
	(AARCH64_OPND_SVE_ADDR_RI_U6x8, AARCH64_OPND_SVE_ADDR_RR)
	(AARCH64_OPND_SVE_ADDR_RR_LSL1, AARCH64_OPND_SVE_ADDR_RR_LSL2)
	(AARCH64_OPND_SVE_ADDR_RR_LSL3, AARCH64_OPND_SVE_ADDR_RX)
	(AARCH64_OPND_SVE_ADDR_RX_LSL1, AARCH64_OPND_SVE_ADDR_RX_LSL2)
	(AARCH64_OPND_SVE_ADDR_RX_LSL3, AARCH64_OPND_SVE_ADDR_RZ)
	(AARCH64_OPND_SVE_ADDR_RZ_LSL1, AARCH64_OPND_SVE_ADDR_RZ_LSL2)
	(AARCH64_OPND_SVE_ADDR_RZ_LSL3, AARCH64_OPND_SVE_ADDR_RZ_XTW_14)
	(AARCH64_OPND_SVE_ADDR_RZ_XTW_22, AARCH64_OPND_SVE_ADDR_RZ_XTW1_14)
	(AARCH64_OPND_SVE_ADDR_RZ_XTW1_22, AARCH64_OPND_SVE_ADDR_RZ_XTW2_14)
	(AARCH64_OPND_SVE_ADDR_RZ_XTW2_22, AARCH64_OPND_SVE_ADDR_RZ_XTW3_14)
	(AARCH64_OPND_SVE_ADDR_RZ_XTW3_22, AARCH64_OPND_SVE_ADDR_ZI_U5)
	(AARCH64_OPND_SVE_ADDR_ZI_U5x2, AARCH64_OPND_SVE_ADDR_ZI_U5x4)
	(AARCH64_OPND_SVE_ADDR_ZI_U5x8, AARCH64_OPND_SVE_ADDR_ZZ_LSL)
	(AARCH64_OPND_SVE_ADDR_ZZ_SXTW, AARCH64_OPND_SVE_ADDR_ZZ_UXTW):
	Likewise.

opcodes/
	* aarch64-tbl.h (AARCH64_OPERANDS): Add entries for the new SVE
	address operands.
	* aarch64-opc.h (FLD_SVE_imm6, FLD_SVE_msz, FLD_SVE_xs_14)
	(FLD_SVE_xs_22): New aarch64_field_kinds.
	(OPD_F_OD_MASK, OPD_F_OD_LSB, OPD_F_NO_ZR): New flags.
	(get_operand_specific_data): New function.
	* aarch64-opc.c (fields): Add entries for FLD_SVE_imm6, FLD_SVE_msz,
	FLD_SVE_xs_14 and FLD_SVE_xs_22.
	(operand_general_constraint_met_p): Handle the new SVE address
	operands.
	(sve_reg): New array.
	(get_addr_sve_reg_name): New function.
	(aarch64_print_operand): Handle the new SVE address operands.
	* aarch64-opc-2.c: Regenerate.
	* aarch64-asm.h (ins_sve_addr_ri_u6, ins_sve_addr_rr_lsl)
	(ins_sve_addr_rz_xtw, ins_sve_addr_zi_u5, ins_sve_addr_zz_lsl)
	(ins_sve_addr_zz_sxtw, ins_sve_addr_zz_uxtw): New inserters.
	* aarch64-asm.c (aarch64_ins_sve_addr_ri_u6): New function.
	(aarch64_ins_sve_addr_rr_lsl): Likewise.
	(aarch64_ins_sve_addr_rz_xtw): Likewise.
	(aarch64_ins_sve_addr_zi_u5): Likewise.
	(aarch64_ins_sve_addr_zz): Likewise.
	(aarch64_ins_sve_addr_zz_lsl): Likewise.
	(aarch64_ins_sve_addr_zz_sxtw): Likewise.
	(aarch64_ins_sve_addr_zz_uxtw): Likewise.
	* aarch64-asm-2.c: Regenerate.
	* aarch64-dis.h (ext_sve_addr_ri_u6, ext_sve_addr_rr_lsl)
	(ext_sve_addr_rz_xtw, ext_sve_addr_zi_u5, ext_sve_addr_zz_lsl)
	(ext_sve_addr_zz_sxtw, ext_sve_addr_zz_uxtw): New extractors.
	* aarch64-dis.c (aarch64_ext_sve_addr_reg_imm): New function.
	(aarch64_ext_sve_addr_ri_u6): Likewise.
	(aarch64_ext_sve_addr_rr_lsl): Likewise.
	(aarch64_ext_sve_addr_rz_xtw): Likewise.
	(aarch64_ext_sve_addr_zi_u5): Likewise.
	(aarch64_ext_sve_addr_zz): Likewise.
	(aarch64_ext_sve_addr_zz_lsl): Likewise.
	(aarch64_ext_sve_addr_zz_sxtw): Likewise.
	(aarch64_ext_sve_addr_zz_uxtw): Likewise.
	* aarch64-dis-2.c: Regenerate.

gas/
	* config/tc-aarch64.c (REG_TYPE_SVE_BASE, REG_TYPE_SVE_OFFSET): New
	register types.
	(get_reg_expected_msg): Handle them.
	(aarch64_addr_reg_parse): New function, split out from
	aarch64_reg_parse_32_64.  Handle Z registers too.
	(aarch64_reg_parse_32_64): Call it.
	(parse_address_main): Add base_qualifier, offset_qualifier,
	base_type and offset_type parameters.  Handle SVE base and offset
	registers.
	(parse_address): Update call to parse_address_main.
	(parse_sve_address): New function.
	(parse_operands): Parse the new SVE address operands.
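
As background for the OPD_F_OD_* flags listed above: the
"operand-dependent data" field is just two bits of the operand flags
word, and for the SVE address operands it holds the log2 of the scale
applied to the immediate or register offset.  A standalone sketch
(mirroring, not reproducing, the definitions added to aarch64-opc.h in
the patch; the names here are illustrative):

  #define OD_MASK 0x00000060u	/* operand-dependent data field */
  #define OD_LSB  5

  static unsigned int
  od_value (unsigned int flags)
  {
    return (flags & OD_MASK) >> OD_LSB;
  }

  /* For example, an "...x8" operand is defined with 3 << OD_LSB in its
     flags, so the inserter and extractor scale by 1 << 3 == 8.  */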

diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
index 79ee054..e59333f 100644
--- a/gas/config/tc-aarch64.c
+++ b/gas/config/tc-aarch64.c
@@ -272,9 +272,16 @@ struct reloc_entry
   BASIC_REG_TYPE(PN)	/* p[0-15] */	\
   /* Typecheck: any 64-bit int reg         (inc SP exc XZR).  */	\
   MULTI_REG_TYPE(R64_SP, REG_TYPE(R_64) | REG_TYPE(SP_64))		\
+  /* Typecheck: same, plus SVE registers.  */				\
+  MULTI_REG_TYPE(SVE_BASE, REG_TYPE(R_64) | REG_TYPE(SP_64)		\
+		 | REG_TYPE(ZN))					\
   /* Typecheck: x[0-30], w[0-30] or [xw]zr.  */				\
   MULTI_REG_TYPE(R_Z, REG_TYPE(R_32) | REG_TYPE(R_64)			\
 		 | REG_TYPE(Z_32) | REG_TYPE(Z_64))			\
+  /* Typecheck: same, plus SVE registers.  */				\
+  MULTI_REG_TYPE(SVE_OFFSET, REG_TYPE(R_32) | REG_TYPE(R_64)		\
+		 | REG_TYPE(Z_32) | REG_TYPE(Z_64)			\
+		 | REG_TYPE(ZN))					\
   /* Typecheck: x[0-30], w[0-30] or {w}sp.  */				\
   MULTI_REG_TYPE(R_SP, REG_TYPE(R_32) | REG_TYPE(R_64)			\
 		 | REG_TYPE(SP_32) | REG_TYPE(SP_64))			\
@@ -358,9 +365,15 @@ get_reg_expected_msg (aarch64_reg_type reg_type)
     case REG_TYPE_R64_SP:
       msg = N_("64-bit integer or SP register expected");
       break;
+    case REG_TYPE_SVE_BASE:
+      msg = N_("base register expected");
+      break;
     case REG_TYPE_R_Z:
       msg = N_("integer or zero register expected");
       break;
+    case REG_TYPE_SVE_OFFSET:
+      msg = N_("offset register expected");
+      break;
     case REG_TYPE_R_SP:
       msg = N_("integer or SP register expected");
       break;
@@ -697,14 +710,16 @@ aarch64_check_reg_type (const reg_entry *reg, aarch64_reg_type type)
   return (reg_type_masks[type] & (1 << reg->type)) != 0;
 }
 
-/* Try to parse a base or offset register.  Return the register entry
-   on success, setting *QUALIFIER to the register qualifier.  Return null
-   otherwise.
+/* Try to parse a base or offset register.  Allow SVE base and offset
+   registers if REG_TYPE includes SVE registers.  Return the register
+   entry on success, setting *QUALIFIER to the register qualifier.
+   Return null otherwise.
 
    Note that this function does not issue any diagnostics.  */
 
 static const reg_entry *
-aarch64_reg_parse_32_64 (char **ccp, aarch64_opnd_qualifier_t *qualifier)
+aarch64_addr_reg_parse (char **ccp, aarch64_reg_type reg_type,
+			aarch64_opnd_qualifier_t *qualifier)
 {
   char *str = *ccp;
   const reg_entry *reg = parse_reg (&str);
@@ -726,6 +741,24 @@ aarch64_reg_parse_32_64 (char **ccp, aarch64_opnd_qualifier_t *qualifier)
       *qualifier = AARCH64_OPND_QLF_X;
       break;
 
+    case REG_TYPE_ZN:
+      if ((reg_type_masks[reg_type] & (1 << REG_TYPE_ZN)) == 0
+	  || str[0] != '.')
+	return NULL;
+      switch (TOLOWER (str[1]))
+	{
+	case 's':
+	  *qualifier = AARCH64_OPND_QLF_S_S;
+	  break;
+	case 'd':
+	  *qualifier = AARCH64_OPND_QLF_S_D;
+	  break;
+	default:
+	  return NULL;
+	}
+      str += 2;
+      break;
+
     default:
       return NULL;
     }
@@ -735,6 +768,18 @@ aarch64_reg_parse_32_64 (char **ccp, aarch64_opnd_qualifier_t *qualifier)
   return reg;
 }
 
+/* Try to parse a base or offset register.  Return the register entry
+   on success, setting *QUALIFIER to the register qualifier.  Return null
+   otherwise.
+
+   Note that this function does not issue any diagnostics.  */
+
+static const reg_entry *
+aarch64_reg_parse_32_64 (char **ccp, aarch64_opnd_qualifier_t *qualifier)
+{
+  return aarch64_addr_reg_parse (ccp, REG_TYPE_R_Z_SP, qualifier);
+}
+
 /* Parse the qualifier of a vector register or vector element of type
    REG_TYPE.  Fill in *PARSED_TYPE and return TRUE if the parsing
    succeeds; otherwise return FALSE.
@@ -3209,8 +3254,8 @@ parse_shifter_operand_reloc (char **str, aarch64_opnd_info *operand,
    The A64 instruction set has the following addressing modes:
 
    Offset
-     [base]			// in SIMD ld/st structure
-     [base{,#0}]		// in ld/st exclusive
+     [base]			 // in SIMD ld/st structure
+     [base{,#0}]		 // in ld/st exclusive
      [base{,#imm}]
      [base,Xm{,LSL #imm}]
      [base,Xm,SXTX {#imm}]
@@ -3219,10 +3264,18 @@ parse_shifter_operand_reloc (char **str, aarch64_opnd_info *operand,
      [base,#imm]!
    Post-indexed
      [base],#imm
-     [base],Xm			// in SIMD ld/st structure
+     [base],Xm			 // in SIMD ld/st structure
    PC-relative (literal)
      label
-     =immediate
+   SVE:
+     [base,Zm.D{,LSL #imm}]
+     [base,Zm.S,(S|U)XTW {#imm}]
+     [base,Zm.D,(S|U)XTW {#imm}] // ignores top 32 bits of Zm.D elements
+     [Zn.S,#imm]
+     [Zn.D,#imm]
+     [Zn.S,Zm.S{,LSL #imm}]      // in ADR
+     [Zn.D,Zm.D{,LSL #imm}]      // in ADR
+     [Zn.D,Zm.D,(S|U)XTW {#imm}] // in ADR
 
    (As a convenience, the notation "=immediate" is permitted in conjunction
    with the pc-relative literal load instructions to automatically place an
@@ -3249,19 +3302,27 @@ parse_shifter_operand_reloc (char **str, aarch64_opnd_info *operand,
      .pcrel=1; .preind=1; .postind=0; .writeback=0
 
    The shift/extension information, if any, will be stored in .shifter.
+   The base and offset qualifiers will be stored in *BASE_QUALIFIER and
+   *OFFSET_QUALIFIER respectively, with NIL being used if there's no
+   corresponding register.
 
-   It is the caller's responsibility to check for addressing modes not
+   BASE_TYPE says which types of base register should be accepted and
+   OFFSET_TYPE says the same for offset registers.  In all other respects,
+   it is the caller's responsibility to check for addressing modes not
    supported by the instruction, and to set inst.reloc.type.  */
 
 static bfd_boolean
-parse_address_main (char **str, aarch64_opnd_info *operand)
+parse_address_main (char **str, aarch64_opnd_info *operand,
+		    aarch64_opnd_qualifier_t *base_qualifier,
+		    aarch64_opnd_qualifier_t *offset_qualifier,
+		    aarch64_reg_type base_type, aarch64_reg_type offset_type)
 {
   char *p = *str;
   const reg_entry *reg;
-  aarch64_opnd_qualifier_t base_qualifier;
-  aarch64_opnd_qualifier_t offset_qualifier;
   expressionS *exp = &inst.reloc.exp;
 
+  *base_qualifier = AARCH64_OPND_QLF_NIL;
+  *offset_qualifier = AARCH64_OPND_QLF_NIL;
   if (! skip_past_char (&p, '['))
     {
       /* =immediate or label.  */
@@ -3336,10 +3397,10 @@ parse_address_main (char **str, aarch64_opnd_info *operand)
 
   /* [ */
 
-  reg = aarch64_reg_parse_32_64 (&p, &base_qualifier);
-  if (!reg || !aarch64_check_reg_type (reg, REG_TYPE_R64_SP))
+  reg = aarch64_addr_reg_parse (&p, base_type, base_qualifier);
+  if (!reg || !aarch64_check_reg_type (reg, base_type))
     {
-      set_syntax_error (_(get_reg_expected_msg (REG_TYPE_R64_SP)));
+      set_syntax_error (_(get_reg_expected_msg (base_type)));
       return FALSE;
     }
   operand->addr.base_regno = reg->number;
@@ -3350,12 +3411,12 @@ parse_address_main (char **str, aarch64_opnd_info *operand)
       /* [Xn, */
       operand->addr.preind = 1;
 
-      reg = aarch64_reg_parse_32_64 (&p, &offset_qualifier);
+      reg = aarch64_addr_reg_parse (&p, offset_type, offset_qualifier);
       if (reg)
 	{
-	  if (!aarch64_check_reg_type (reg, REG_TYPE_R_Z))
+	  if (!aarch64_check_reg_type (reg, offset_type))
 	    {
-	      set_syntax_error (_(get_reg_expected_msg (REG_TYPE_R_Z)));
+	      set_syntax_error (_(get_reg_expected_msg (offset_type)));
 	      return FALSE;
 	    }
 
@@ -3379,13 +3440,19 @@ parse_address_main (char **str, aarch64_opnd_info *operand)
 	      || operand->shifter.kind == AARCH64_MOD_LSL
 	      || operand->shifter.kind == AARCH64_MOD_SXTX)
 	    {
-	      if (offset_qualifier == AARCH64_OPND_QLF_W)
+	      if (*offset_qualifier == AARCH64_OPND_QLF_W)
 		{
 		  set_syntax_error (_("invalid use of 32-bit register offset"));
 		  return FALSE;
 		}
+	      if (aarch64_get_qualifier_esize (*base_qualifier)
+		  != aarch64_get_qualifier_esize (*offset_qualifier))
+		{
+		  set_syntax_error (_("offset has different size from base"));
+		  return FALSE;
+		}
 	    }
-	  else if (offset_qualifier == AARCH64_OPND_QLF_X)
+	  else if (*offset_qualifier == AARCH64_OPND_QLF_X)
 	    {
 	      set_syntax_error (_("invalid use of 64-bit register offset"));
 	      return FALSE;
@@ -3468,7 +3535,7 @@ parse_address_main (char **str, aarch64_opnd_info *operand)
 	  return FALSE;
 	}
 
-      reg = aarch64_reg_parse_32_64 (&p, &offset_qualifier);
+      reg = aarch64_reg_parse_32_64 (&p, offset_qualifier);
       if (reg)
 	{
 	  /* [Xn],Xm */
@@ -3513,7 +3580,21 @@ parse_address_main (char **str, aarch64_opnd_info *operand)
 static bfd_boolean
 parse_address (char **str, aarch64_opnd_info *operand)
 {
-  return parse_address_main (str, operand);
+  aarch64_opnd_qualifier_t base_qualifier, offset_qualifier;
+  return parse_address_main (str, operand, &base_qualifier, &offset_qualifier,
+			     REG_TYPE_R64_SP, REG_TYPE_R_Z);
+}
+
+/* Parse an address in which SVE vector registers are allowed.
+   The arguments have the same meaning as for parse_address_main.
+   Return TRUE on success.  */
+static bfd_boolean
+parse_sve_address (char **str, aarch64_opnd_info *operand,
+		   aarch64_opnd_qualifier_t *base_qualifier,
+		   aarch64_opnd_qualifier_t *offset_qualifier)
+{
+  return parse_address_main (str, operand, base_qualifier, offset_qualifier,
+			     REG_TYPE_SVE_BASE, REG_TYPE_SVE_OFFSET);
 }
 
 /* Parse an operand for a MOVZ, MOVN or MOVK instruction.
@@ -5123,7 +5204,7 @@ parse_operands (char *str, const aarch64_opcode *opcode)
       int comma_skipped_p = 0;
       aarch64_reg_type rtype;
       struct vector_type_el vectype;
-      aarch64_opnd_qualifier_t qualifier;
+      aarch64_opnd_qualifier_t qualifier, base_qualifier, offset_qualifier;
       aarch64_opnd_info *info = &inst.base.operands[i];
       aarch64_reg_type reg_type;
 
@@ -5757,6 +5838,7 @@ parse_operands (char *str, const aarch64_opcode *opcode)
 	case AARCH64_OPND_ADDR_REGOFF:
 	  /* [<Xn|SP>, <R><m>{, <extend> {<amount>}}]  */
 	  po_misc_or_fail (parse_address (&str, info));
+	regoff_addr:
 	  if (info->addr.pcrel || !info->addr.offset.is_reg
 	      || !info->addr.preind || info->addr.postind
 	      || info->addr.writeback)
@@ -5856,6 +5938,123 @@ parse_operands (char *str, const aarch64_opcode *opcode)
 	  /* No qualifier.  */
 	  break;
 
+	case AARCH64_OPND_SVE_ADDR_RI_U6:
+	case AARCH64_OPND_SVE_ADDR_RI_U6x2:
+	case AARCH64_OPND_SVE_ADDR_RI_U6x4:
+	case AARCH64_OPND_SVE_ADDR_RI_U6x8:
+	  /* [X<n>{, #imm}]
+	     but recognizing SVE registers.  */
+	  po_misc_or_fail (parse_sve_address (&str, info, &base_qualifier,
+					      &offset_qualifier));
+	  if (base_qualifier != AARCH64_OPND_QLF_X)
+	    {
+	      set_syntax_error (_("invalid addressing mode"));
+	      goto failure;
+	    }
+	sve_regimm:
+	  if (info->addr.pcrel || info->addr.offset.is_reg
+	      || !info->addr.preind || info->addr.writeback)
+	    {
+	      set_syntax_error (_("invalid addressing mode"));
+	      goto failure;
+	    }
+	  if (inst.reloc.type != BFD_RELOC_UNUSED
+	      || inst.reloc.exp.X_op != O_constant)
+	    {
+	      /* Make sure this has priority over
+		 "invalid addressing mode".  */
+	      set_fatal_syntax_error (_("constant offset required"));
+	      goto failure;
+	    }
+	  info->addr.offset.imm = inst.reloc.exp.X_add_number;
+	  break;
+
+	case AARCH64_OPND_SVE_ADDR_RR:
+	case AARCH64_OPND_SVE_ADDR_RR_LSL1:
+	case AARCH64_OPND_SVE_ADDR_RR_LSL2:
+	case AARCH64_OPND_SVE_ADDR_RR_LSL3:
+	case AARCH64_OPND_SVE_ADDR_RX:
+	case AARCH64_OPND_SVE_ADDR_RX_LSL1:
+	case AARCH64_OPND_SVE_ADDR_RX_LSL2:
+	case AARCH64_OPND_SVE_ADDR_RX_LSL3:
+	  /* [<Xn|SP>, <R><m>{, lsl #<amount>}]
+	     but recognizing SVE registers.  */
+	  po_misc_or_fail (parse_sve_address (&str, info, &base_qualifier,
+					      &offset_qualifier));
+	  if (base_qualifier != AARCH64_OPND_QLF_X
+	      || offset_qualifier != AARCH64_OPND_QLF_X)
+	    {
+	      set_syntax_error (_("invalid addressing mode"));
+	      goto failure;
+	    }
+	  goto regoff_addr;
+
+	case AARCH64_OPND_SVE_ADDR_RZ:
+	case AARCH64_OPND_SVE_ADDR_RZ_LSL1:
+	case AARCH64_OPND_SVE_ADDR_RZ_LSL2:
+	case AARCH64_OPND_SVE_ADDR_RZ_LSL3:
+	case AARCH64_OPND_SVE_ADDR_RZ_XTW_14:
+	case AARCH64_OPND_SVE_ADDR_RZ_XTW_22:
+	case AARCH64_OPND_SVE_ADDR_RZ_XTW1_14:
+	case AARCH64_OPND_SVE_ADDR_RZ_XTW1_22:
+	case AARCH64_OPND_SVE_ADDR_RZ_XTW2_14:
+	case AARCH64_OPND_SVE_ADDR_RZ_XTW2_22:
+	case AARCH64_OPND_SVE_ADDR_RZ_XTW3_14:
+	case AARCH64_OPND_SVE_ADDR_RZ_XTW3_22:
+	  /* [<Xn|SP>, Z<m>.D{, LSL #<amount>}]
+	     [<Xn|SP>, Z<m>.<T>, <extend> {#<amount>}]  */
+	  po_misc_or_fail (parse_sve_address (&str, info, &base_qualifier,
+					      &offset_qualifier));
+	  if (base_qualifier != AARCH64_OPND_QLF_X
+	      || (offset_qualifier != AARCH64_OPND_QLF_S_S
+		  && offset_qualifier != AARCH64_OPND_QLF_S_D))
+	    {
+	      set_syntax_error (_("invalid addressing mode"));
+	      goto failure;
+	    }
+	  info->qualifier = offset_qualifier;
+	  goto regoff_addr;
+
+	case AARCH64_OPND_SVE_ADDR_ZI_U5:
+	case AARCH64_OPND_SVE_ADDR_ZI_U5x2:
+	case AARCH64_OPND_SVE_ADDR_ZI_U5x4:
+	case AARCH64_OPND_SVE_ADDR_ZI_U5x8:
+	  /* [Z<n>.<T>{, #imm}]  */
+	  po_misc_or_fail (parse_sve_address (&str, info, &base_qualifier,
+					      &offset_qualifier));
+	  if (base_qualifier != AARCH64_OPND_QLF_S_S
+	      && base_qualifier != AARCH64_OPND_QLF_S_D)
+	    {
+	      set_syntax_error (_("invalid addressing mode"));
+	      goto failure;
+	    }
+	  info->qualifier = base_qualifier;
+	  goto sve_regimm;
+
+	case AARCH64_OPND_SVE_ADDR_ZZ_LSL:
+	case AARCH64_OPND_SVE_ADDR_ZZ_SXTW:
+	case AARCH64_OPND_SVE_ADDR_ZZ_UXTW:
+	  /* [Z<n>.<T>, Z<m>.<T>{, LSL #<amount>}]
+	     [Z<n>.D, Z<m>.D, <extend> {#<amount>}]
+
+	     We don't reject:
+
+	     [Z<n>.S, Z<m>.S, <extend> {#<amount>}]
+
+	     here since we get better error messages by leaving it to
+	     the qualifier checking routines.  */
+	  po_misc_or_fail (parse_sve_address (&str, info, &base_qualifier,
+					      &offset_qualifier));
+	  if ((base_qualifier != AARCH64_OPND_QLF_S_S
+	       && base_qualifier != AARCH64_OPND_QLF_S_D)
+	      || offset_qualifier != base_qualifier)
+	    {
+	      set_syntax_error (_("invalid addressing mode"));
+	      goto failure;
+	    }
+	  info->qualifier = base_qualifier;
+	  goto regoff_addr;
+
 	case AARCH64_OPND_SYSREG:
 	  if ((val = parse_sys_reg (&str, aarch64_sys_regs_hsh, 1, 0))
 	      == PARSE_FAIL)
diff --git a/include/opcode/aarch64.h b/include/opcode/aarch64.h
index 49b4413..e61ac9c 100644
--- a/include/opcode/aarch64.h
+++ b/include/opcode/aarch64.h
@@ -244,6 +244,45 @@ enum aarch64_opnd
   AARCH64_OPND_PRFOP,		/* Prefetch operation.  */
   AARCH64_OPND_BARRIER_PSB,	/* Barrier operand for PSB.  */
 
+  AARCH64_OPND_SVE_ADDR_RI_U6,	    /* SVE [<Xn|SP>, #<uimm6>].  */
+  AARCH64_OPND_SVE_ADDR_RI_U6x2,    /* SVE [<Xn|SP>, #<uimm6>*2].  */
+  AARCH64_OPND_SVE_ADDR_RI_U6x4,    /* SVE [<Xn|SP>, #<uimm6>*4].  */
+  AARCH64_OPND_SVE_ADDR_RI_U6x8,    /* SVE [<Xn|SP>, #<uimm6>*8].  */
+  AARCH64_OPND_SVE_ADDR_RR,	    /* SVE [<Xn|SP>, <Xm|XZR>].  */
+  AARCH64_OPND_SVE_ADDR_RR_LSL1,    /* SVE [<Xn|SP>, <Xm|XZR>, LSL #1].  */
+  AARCH64_OPND_SVE_ADDR_RR_LSL2,    /* SVE [<Xn|SP>, <Xm|XZR>, LSL #2].  */
+  AARCH64_OPND_SVE_ADDR_RR_LSL3,    /* SVE [<Xn|SP>, <Xm|XZR>, LSL #3].  */
+  AARCH64_OPND_SVE_ADDR_RX,	    /* SVE [<Xn|SP>, <Xm>].  */
+  AARCH64_OPND_SVE_ADDR_RX_LSL1,    /* SVE [<Xn|SP>, <Xm>, LSL #1].  */
+  AARCH64_OPND_SVE_ADDR_RX_LSL2,    /* SVE [<Xn|SP>, <Xm>, LSL #2].  */
+  AARCH64_OPND_SVE_ADDR_RX_LSL3,    /* SVE [<Xn|SP>, <Xm>, LSL #3].  */
+  AARCH64_OPND_SVE_ADDR_RZ,	    /* SVE [<Xn|SP>, Zm.D].  */
+  AARCH64_OPND_SVE_ADDR_RZ_LSL1,    /* SVE [<Xn|SP>, Zm.D, LSL #1].  */
+  AARCH64_OPND_SVE_ADDR_RZ_LSL2,    /* SVE [<Xn|SP>, Zm.D, LSL #2].  */
+  AARCH64_OPND_SVE_ADDR_RZ_LSL3,    /* SVE [<Xn|SP>, Zm.D, LSL #3].  */
+  AARCH64_OPND_SVE_ADDR_RZ_XTW_14,  /* SVE [<Xn|SP>, Zm.<T>, (S|U)XTW].
+				       Bit 14 controls S/U choice.  */
+  AARCH64_OPND_SVE_ADDR_RZ_XTW_22,  /* SVE [<Xn|SP>, Zm.<T>, (S|U)XTW].
+				       Bit 22 controls S/U choice.  */
+  AARCH64_OPND_SVE_ADDR_RZ_XTW1_14, /* SVE [<Xn|SP>, Zm.<T>, (S|U)XTW #1].
+				       Bit 14 controls S/U choice.  */
+  AARCH64_OPND_SVE_ADDR_RZ_XTW1_22, /* SVE [<Xn|SP>, Zm.<T>, (S|U)XTW #1].
+				       Bit 22 controls S/U choice.  */
+  AARCH64_OPND_SVE_ADDR_RZ_XTW2_14, /* SVE [<Xn|SP>, Zm.<T>, (S|U)XTW #2].
+				       Bit 14 controls S/U choice.  */
+  AARCH64_OPND_SVE_ADDR_RZ_XTW2_22, /* SVE [<Xn|SP>, Zm.<T>, (S|U)XTW #2].
+				       Bit 22 controls S/U choice.  */
+  AARCH64_OPND_SVE_ADDR_RZ_XTW3_14, /* SVE [<Xn|SP>, Zm.<T>, (S|U)XTW #3].
+				       Bit 14 controls S/U choice.  */
+  AARCH64_OPND_SVE_ADDR_RZ_XTW3_22, /* SVE [<Xn|SP>, Zm.<T>, (S|U)XTW #3].
+				       Bit 22 controls S/U choice.  */
+  AARCH64_OPND_SVE_ADDR_ZI_U5,	    /* SVE [Zn.<T>, #<uimm5>].  */
+  AARCH64_OPND_SVE_ADDR_ZI_U5x2,    /* SVE [Zn.<T>, #<uimm5>*2].  */
+  AARCH64_OPND_SVE_ADDR_ZI_U5x4,    /* SVE [Zn.<T>, #<uimm5>*4].  */
+  AARCH64_OPND_SVE_ADDR_ZI_U5x8,    /* SVE [Zn.<T>, #<uimm5>*8].  */
+  AARCH64_OPND_SVE_ADDR_ZZ_LSL,     /* SVE [Zn.<T>, Zm.<T>, LSL #<msz>].  */
+  AARCH64_OPND_SVE_ADDR_ZZ_SXTW,    /* SVE [Zn.<T>, Zm.<T>, SXTW #<msz>].  */
+  AARCH64_OPND_SVE_ADDR_ZZ_UXTW,    /* SVE [Zn.<T>, Zm.<T>, UXTW #<msz>].  */
   AARCH64_OPND_SVE_PATTERN,	/* SVE vector pattern enumeration.  */
   AARCH64_OPND_SVE_PATTERN_SCALED, /* Likewise, with additional MUL factor.  */
   AARCH64_OPND_SVE_PRFOP,	/* SVE prefetch operation.  */
diff --git a/opcodes/aarch64-asm-2.c b/opcodes/aarch64-asm-2.c
index 039b9be..47a414c 100644
--- a/opcodes/aarch64-asm-2.c
+++ b/opcodes/aarch64-asm-2.c
@@ -480,21 +480,21 @@ aarch64_insert_operand (const aarch64_operand *self,
     case 27:
     case 35:
     case 36:
-    case 92:
-    case 93:
-    case 94:
-    case 95:
-    case 96:
-    case 97:
-    case 98:
-    case 99:
-    case 100:
-    case 101:
-    case 102:
-    case 103:
-    case 104:
-    case 105:
-    case 108:
+    case 123:
+    case 124:
+    case 125:
+    case 126:
+    case 127:
+    case 128:
+    case 129:
+    case 130:
+    case 131:
+    case 132:
+    case 133:
+    case 134:
+    case 135:
+    case 136:
+    case 139:
       return aarch64_ins_regno (self, info, code, inst);
     case 12:
       return aarch64_ins_reg_extended (self, info, code, inst);
@@ -531,8 +531,8 @@ aarch64_insert_operand (const aarch64_operand *self,
     case 68:
     case 69:
     case 70:
-    case 89:
-    case 91:
+    case 120:
+    case 122:
       return aarch64_ins_imm (self, info, code, inst);
     case 38:
     case 39:
@@ -583,12 +583,50 @@ aarch64_insert_operand (const aarch64_operand *self,
       return aarch64_ins_prfop (self, info, code, inst);
     case 88:
       return aarch64_ins_hint (self, info, code, inst);
+    case 89:
     case 90:
-      return aarch64_ins_sve_scale (self, info, code, inst);
+    case 91:
+    case 92:
+      return aarch64_ins_sve_addr_ri_u6 (self, info, code, inst);
+    case 93:
+    case 94:
+    case 95:
+    case 96:
+    case 97:
+    case 98:
+    case 99:
+    case 100:
+    case 101:
+    case 102:
+    case 103:
+    case 104:
+      return aarch64_ins_sve_addr_rr_lsl (self, info, code, inst);
+    case 105:
     case 106:
-      return aarch64_ins_sve_index (self, info, code, inst);
     case 107:
+    case 108:
     case 109:
+    case 110:
+    case 111:
+    case 112:
+      return aarch64_ins_sve_addr_rz_xtw (self, info, code, inst);
+    case 113:
+    case 114:
+    case 115:
+    case 116:
+      return aarch64_ins_sve_addr_zi_u5 (self, info, code, inst);
+    case 117:
+      return aarch64_ins_sve_addr_zz_lsl (self, info, code, inst);
+    case 118:
+      return aarch64_ins_sve_addr_zz_sxtw (self, info, code, inst);
+    case 119:
+      return aarch64_ins_sve_addr_zz_uxtw (self, info, code, inst);
+    case 121:
+      return aarch64_ins_sve_scale (self, info, code, inst);
+    case 137:
+      return aarch64_ins_sve_index (self, info, code, inst);
+    case 138:
+    case 140:
       return aarch64_ins_sve_reglist (self, info, code, inst);
     default: assert (0); abort ();
     }
diff --git a/opcodes/aarch64-asm.c b/opcodes/aarch64-asm.c
index 117a3c6..0d3b2c7 100644
--- a/opcodes/aarch64-asm.c
+++ b/opcodes/aarch64-asm.c
@@ -745,6 +745,114 @@ aarch64_ins_reg_shifted (const aarch64_operand *self ATTRIBUTE_UNUSED,
   return NULL;
 }
 
+/* Encode an SVE address [X<n>, #<SVE_imm6> << <shift>], where <SVE_imm6>
+   is a 6-bit unsigned number and where <shift> is SELF's operand-dependent
+   value.  fields[0] specifies the base register field.  */
+const char *
+aarch64_ins_sve_addr_ri_u6 (const aarch64_operand *self,
+			    const aarch64_opnd_info *info, aarch64_insn *code,
+			    const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  int factor = 1 << get_operand_specific_data (self);
+  insert_field (self->fields[0], code, info->addr.base_regno, 0);
+  insert_field (FLD_SVE_imm6, code, info->addr.offset.imm / factor, 0);
+  return NULL;
+}
+
+/* Encode an SVE address [X<n>, X<m>{, LSL #<shift>}], where <shift>
+   is SELF's operand-dependent value.  fields[0] specifies the base
+   register field and fields[1] specifies the offset register field.  */
+const char *
+aarch64_ins_sve_addr_rr_lsl (const aarch64_operand *self,
+			     const aarch64_opnd_info *info, aarch64_insn *code,
+			     const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  insert_field (self->fields[0], code, info->addr.base_regno, 0);
+  insert_field (self->fields[1], code, info->addr.offset.regno, 0);
+  return NULL;
+}
+
+/* Encode an SVE address [X<n>, Z<m>.<T>, (S|U)XTW {#<shift>}], where
+   <shift> is SELF's operand-dependent value.  fields[0] specifies the
+   base register field, fields[1] specifies the offset register field and
+   fields[2] is a single-bit field that selects SXTW over UXTW.  */
+const char *
+aarch64_ins_sve_addr_rz_xtw (const aarch64_operand *self,
+			     const aarch64_opnd_info *info, aarch64_insn *code,
+			     const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  insert_field (self->fields[0], code, info->addr.base_regno, 0);
+  insert_field (self->fields[1], code, info->addr.offset.regno, 0);
+  if (info->shifter.kind == AARCH64_MOD_UXTW)
+    insert_field (self->fields[2], code, 0, 0);
+  else
+    insert_field (self->fields[2], code, 1, 0);
+  return NULL;
+}
+
+/* Encode an SVE address [Z<n>.<T>, #<imm5> << <shift>], where <imm5> is a
+   5-bit unsigned number and where <shift> is SELF's operand-dependent value.
+   fields[0] specifies the base register field.  */
+const char *
+aarch64_ins_sve_addr_zi_u5 (const aarch64_operand *self,
+			    const aarch64_opnd_info *info, aarch64_insn *code,
+			    const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  int factor = 1 << get_operand_specific_data (self);
+  insert_field (self->fields[0], code, info->addr.base_regno, 0);
+  insert_field (FLD_imm5, code, info->addr.offset.imm / factor, 0);
+  return NULL;
+}
+
+/* Encode an SVE address [Z<n>.<T>, Z<m>.<T>{, <modifier> {#<msz>}}],
+   where <modifier> is fixed by the instruction and where <msz> is a
+   2-bit unsigned number.  fields[0] specifies the base register field
+   and fields[1] specifies the offset register field.  */
+static const char *
+aarch64_ext_sve_addr_zz (const aarch64_operand *self,
+			 const aarch64_opnd_info *info, aarch64_insn *code)
+{
+  insert_field (self->fields[0], code, info->addr.base_regno, 0);
+  insert_field (self->fields[1], code, info->addr.offset.regno, 0);
+  insert_field (FLD_SVE_msz, code, info->shifter.amount, 0);
+  return NULL;
+}
+
+/* Encode an SVE address [Z<n>.<T>, Z<m>.<T>{, LSL #<msz>}], where
+   <msz> is a 2-bit unsigned number.  fields[0] specifies the base register
+   field and fields[1] specifies the offset register field.  */
+const char *
+aarch64_ins_sve_addr_zz_lsl (const aarch64_operand *self,
+			     const aarch64_opnd_info *info, aarch64_insn *code,
+			     const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  return aarch64_ext_sve_addr_zz (self, info, code);
+}
+
+/* Encode an SVE address [Z<n>.<T>, Z<m>.<T>, SXTW {#<msz>}], where
+   <msz> is a 2-bit unsigned number.  fields[0] specifies the base register
+   field and fields[1] specifies the offset register field.  */
+const char *
+aarch64_ins_sve_addr_zz_sxtw (const aarch64_operand *self,
+			      const aarch64_opnd_info *info,
+			      aarch64_insn *code,
+			      const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  return aarch64_ext_sve_addr_zz (self, info, code);
+}
+
+/* Encode an SVE address [Z<n>.<T>, Z<m>.<T>, UXTW {#<msz>}], where
+   <msz> is a 2-bit unsigned number.  fields[0] specifies the base register
+   field and fields[1] specifies the offset register field.  */
+const char *
+aarch64_ins_sve_addr_zz_uxtw (const aarch64_operand *self,
+			      const aarch64_opnd_info *info,
+			      aarch64_insn *code,
+			      const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  return aarch64_ext_sve_addr_zz (self, info, code);
+}
+
 /* Encode Zn[MM], where MM has a 7-bit triangular encoding.  The fields
    array specifies which field to use for Zn.  MM is encoded in the
    concatenation of imm5 and SVE_tszh, with imm5 being the less
diff --git a/opcodes/aarch64-asm.h b/opcodes/aarch64-asm.h
index ac5faeb..b81cfa1 100644
--- a/opcodes/aarch64-asm.h
+++ b/opcodes/aarch64-asm.h
@@ -69,6 +69,13 @@ AARCH64_DECL_OPD_INSERTER (ins_hint);
 AARCH64_DECL_OPD_INSERTER (ins_prfop);
 AARCH64_DECL_OPD_INSERTER (ins_reg_extended);
 AARCH64_DECL_OPD_INSERTER (ins_reg_shifted);
+AARCH64_DECL_OPD_INSERTER (ins_sve_addr_ri_u6);
+AARCH64_DECL_OPD_INSERTER (ins_sve_addr_rr_lsl);
+AARCH64_DECL_OPD_INSERTER (ins_sve_addr_rz_xtw);
+AARCH64_DECL_OPD_INSERTER (ins_sve_addr_zi_u5);
+AARCH64_DECL_OPD_INSERTER (ins_sve_addr_zz_lsl);
+AARCH64_DECL_OPD_INSERTER (ins_sve_addr_zz_sxtw);
+AARCH64_DECL_OPD_INSERTER (ins_sve_addr_zz_uxtw);
 AARCH64_DECL_OPD_INSERTER (ins_sve_index);
 AARCH64_DECL_OPD_INSERTER (ins_sve_reglist);
 AARCH64_DECL_OPD_INSERTER (ins_sve_scale);
diff --git a/opcodes/aarch64-dis-2.c b/opcodes/aarch64-dis-2.c
index 124385d..3dd714f 100644
--- a/opcodes/aarch64-dis-2.c
+++ b/opcodes/aarch64-dis-2.c
@@ -10426,21 +10426,21 @@ aarch64_extract_operand (const aarch64_operand *self,
     case 27:
     case 35:
     case 36:
-    case 92:
-    case 93:
-    case 94:
-    case 95:
-    case 96:
-    case 97:
-    case 98:
-    case 99:
-    case 100:
-    case 101:
-    case 102:
-    case 103:
-    case 104:
-    case 105:
-    case 108:
+    case 123:
+    case 124:
+    case 125:
+    case 126:
+    case 127:
+    case 128:
+    case 129:
+    case 130:
+    case 131:
+    case 132:
+    case 133:
+    case 134:
+    case 135:
+    case 136:
+    case 139:
       return aarch64_ext_regno (self, info, code, inst);
     case 8:
       return aarch64_ext_regrt_sysins (self, info, code, inst);
@@ -10482,8 +10482,8 @@ aarch64_extract_operand (const aarch64_operand *self,
     case 68:
     case 69:
     case 70:
-    case 89:
-    case 91:
+    case 120:
+    case 122:
       return aarch64_ext_imm (self, info, code, inst);
     case 38:
     case 39:
@@ -10536,12 +10536,50 @@ aarch64_extract_operand (const aarch64_operand *self,
       return aarch64_ext_prfop (self, info, code, inst);
     case 88:
       return aarch64_ext_hint (self, info, code, inst);
+    case 89:
     case 90:
-      return aarch64_ext_sve_scale (self, info, code, inst);
+    case 91:
+    case 92:
+      return aarch64_ext_sve_addr_ri_u6 (self, info, code, inst);
+    case 93:
+    case 94:
+    case 95:
+    case 96:
+    case 97:
+    case 98:
+    case 99:
+    case 100:
+    case 101:
+    case 102:
+    case 103:
+    case 104:
+      return aarch64_ext_sve_addr_rr_lsl (self, info, code, inst);
+    case 105:
     case 106:
-      return aarch64_ext_sve_index (self, info, code, inst);
     case 107:
+    case 108:
     case 109:
+    case 110:
+    case 111:
+    case 112:
+      return aarch64_ext_sve_addr_rz_xtw (self, info, code, inst);
+    case 113:
+    case 114:
+    case 115:
+    case 116:
+      return aarch64_ext_sve_addr_zi_u5 (self, info, code, inst);
+    case 117:
+      return aarch64_ext_sve_addr_zz_lsl (self, info, code, inst);
+    case 118:
+      return aarch64_ext_sve_addr_zz_sxtw (self, info, code, inst);
+    case 119:
+      return aarch64_ext_sve_addr_zz_uxtw (self, info, code, inst);
+    case 121:
+      return aarch64_ext_sve_scale (self, info, code, inst);
+    case 137:
+      return aarch64_ext_sve_index (self, info, code, inst);
+    case 138:
+    case 140:
       return aarch64_ext_sve_reglist (self, info, code, inst);
     default: assert (0); abort ();
     }
diff --git a/opcodes/aarch64-dis.c b/opcodes/aarch64-dis.c
index 1d00c0a..ed77b4d 100644
--- a/opcodes/aarch64-dis.c
+++ b/opcodes/aarch64-dis.c
@@ -1186,6 +1186,152 @@ aarch64_ext_reg_shifted (const aarch64_operand *self ATTRIBUTE_UNUSED,
   return 1;
 }
 
+/* Decode an SVE address [<base>, #<offset> << <shift>], where <offset>
+   is given by the OFFSET parameter and where <shift> is SELF's operand-
+   dependent value.  fields[0] specifies the base register field <base>.  */
+static int
+aarch64_ext_sve_addr_reg_imm (const aarch64_operand *self,
+			      aarch64_opnd_info *info, aarch64_insn code,
+			      int64_t offset)
+{
+  info->addr.base_regno = extract_field (self->fields[0], code, 0);
+  info->addr.offset.imm = offset * (1 << get_operand_specific_data (self));
+  info->addr.offset.is_reg = FALSE;
+  info->addr.writeback = FALSE;
+  info->addr.preind = TRUE;
+  info->shifter.operator_present = FALSE;
+  info->shifter.amount_present = FALSE;
+  return 1;
+}
+
+/* Decode an SVE address [X<n>, #<SVE_imm6> << <shift>], where <SVE_imm6>
+   is a 6-bit unsigned number and where <shift> is SELF's operand-dependent
+   value.  fields[0] specifies the base register field.  */
+int
+aarch64_ext_sve_addr_ri_u6 (const aarch64_operand *self,
+			    aarch64_opnd_info *info, aarch64_insn code,
+			    const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  int offset = extract_field (FLD_SVE_imm6, code, 0);
+  return aarch64_ext_sve_addr_reg_imm (self, info, code, offset);
+}
+
+/* Decode an SVE address [X<n>, X<m>{, LSL #<shift>}], where <shift>
+   is SELF's operand-dependent value.  fields[0] specifies the base
+   register field and fields[1] specifies the offset register field.  */
+int
+aarch64_ext_sve_addr_rr_lsl (const aarch64_operand *self,
+			     aarch64_opnd_info *info, aarch64_insn code,
+			     const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  int index;
+
+  index = extract_field (self->fields[1], code, 0);
+  if (index == 31 && (self->flags & OPD_F_NO_ZR) != 0)
+    return 0;
+
+  info->addr.base_regno = extract_field (self->fields[0], code, 0);
+  info->addr.offset.regno = index;
+  info->addr.offset.is_reg = TRUE;
+  info->addr.writeback = FALSE;
+  info->addr.preind = TRUE;
+  info->shifter.kind = AARCH64_MOD_LSL;
+  info->shifter.amount = get_operand_specific_data (self);
+  info->shifter.operator_present = (info->shifter.amount != 0);
+  info->shifter.amount_present = (info->shifter.amount != 0);
+  return 1;
+}
+
+/* Decode an SVE address [X<n>, Z<m>.<T>, (S|U)XTW {#<shift>}], where
+   <shift> is SELF's operand-dependent value.  fields[0] specifies the
+   base register field, fields[1] specifies the offset register field and
+   fields[2] is a single-bit field that selects SXTW over UXTW.  */
+int
+aarch64_ext_sve_addr_rz_xtw (const aarch64_operand *self,
+			     aarch64_opnd_info *info, aarch64_insn code,
+			     const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  info->addr.base_regno = extract_field (self->fields[0], code, 0);
+  info->addr.offset.regno = extract_field (self->fields[1], code, 0);
+  info->addr.offset.is_reg = TRUE;
+  info->addr.writeback = FALSE;
+  info->addr.preind = TRUE;
+  if (extract_field (self->fields[2], code, 0))
+    info->shifter.kind = AARCH64_MOD_SXTW;
+  else
+    info->shifter.kind = AARCH64_MOD_UXTW;
+  info->shifter.amount = get_operand_specific_data (self);
+  info->shifter.operator_present = TRUE;
+  info->shifter.amount_present = (info->shifter.amount != 0);
+  return 1;
+}
+
+/* Decode an SVE address [Z<n>.<T>, #<imm5> << <shift>], where <imm5> is a
+   5-bit unsigned number and where <shift> is SELF's operand-dependent value.
+   fields[0] specifies the base register field.  */
+int
+aarch64_ext_sve_addr_zi_u5 (const aarch64_operand *self,
+			    aarch64_opnd_info *info, aarch64_insn code,
+			    const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  int offset = extract_field (FLD_imm5, code, 0);
+  return aarch64_ext_sve_addr_reg_imm (self, info, code, offset);
+}
+
+/* Decode an SVE address [Z<n>.<T>, Z<m>.<T>{, <modifier> {#<msz>}}],
+   where <modifier> is given by KIND and where <msz> is a 2-bit unsigned
+   number.  fields[0] specifies the base register field and fields[1]
+   specifies the offset register field.  */
+static int
+aarch64_ext_sve_addr_zz (const aarch64_operand *self, aarch64_opnd_info *info,
+			 aarch64_insn code, enum aarch64_modifier_kind kind)
+{
+  info->addr.base_regno = extract_field (self->fields[0], code, 0);
+  info->addr.offset.regno = extract_field (self->fields[1], code, 0);
+  info->addr.offset.is_reg = TRUE;
+  info->addr.writeback = FALSE;
+  info->addr.preind = TRUE;
+  info->shifter.kind = kind;
+  info->shifter.amount = extract_field (FLD_SVE_msz, code, 0);
+  info->shifter.operator_present = (kind != AARCH64_MOD_LSL
+				    || info->shifter.amount != 0);
+  info->shifter.amount_present = (info->shifter.amount != 0);
+  return 1;
+}
+
+/* Decode an SVE address [Z<n>.<T>, Z<m>.<T>{, LSL #<msz>}], where
+   <msz> is a 2-bit unsigned number.  fields[0] specifies the base register
+   field and fields[1] specifies the offset register field.  */
+int
+aarch64_ext_sve_addr_zz_lsl (const aarch64_operand *self,
+			     aarch64_opnd_info *info, aarch64_insn code,
+			     const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  return aarch64_ext_sve_addr_zz (self, info, code, AARCH64_MOD_LSL);
+}
+
+/* Decode an SVE address [Z<n>.<T>, Z<m>.<T>, SXTW {#<msz>}], where
+   <msz> is a 2-bit unsigned number.  fields[0] specifies the base register
+   field and fields[1] specifies the offset register field.  */
+int
+aarch64_ext_sve_addr_zz_sxtw (const aarch64_operand *self,
+			      aarch64_opnd_info *info, aarch64_insn code,
+			      const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  return aarch64_ext_sve_addr_zz (self, info, code, AARCH64_MOD_SXTW);
+}
+
+/* Decode an SVE address [Z<n>.<T>, Z<m>.<T>, UXTW {#<msz>}], where
+   <msz> is a 2-bit unsigned number.  fields[0] specifies the base register
+   field and fields[1] specifies the offset register field.  */
+int
+aarch64_ext_sve_addr_zz_uxtw (const aarch64_operand *self,
+			      aarch64_opnd_info *info, aarch64_insn code,
+			      const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  return aarch64_ext_sve_addr_zz (self, info, code, AARCH64_MOD_UXTW);
+}
+
 /* Decode Zn[MM], where MM has a 7-bit triangular encoding.  The fields
    array specifies which field to use for Zn.  MM is encoded in the
    concatenation of imm5 and SVE_tszh, with imm5 being the less
diff --git a/opcodes/aarch64-dis.h b/opcodes/aarch64-dis.h
index 92f5ad4..0ce2d89 100644
--- a/opcodes/aarch64-dis.h
+++ b/opcodes/aarch64-dis.h
@@ -91,6 +91,13 @@ AARCH64_DECL_OPD_EXTRACTOR (ext_hint);
 AARCH64_DECL_OPD_EXTRACTOR (ext_prfop);
 AARCH64_DECL_OPD_EXTRACTOR (ext_reg_extended);
 AARCH64_DECL_OPD_EXTRACTOR (ext_reg_shifted);
+AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_ri_u6);
+AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_rr_lsl);
+AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_rz_xtw);
+AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_zi_u5);
+AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_zz_lsl);
+AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_zz_sxtw);
+AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_zz_uxtw);
 AARCH64_DECL_OPD_EXTRACTOR (ext_sve_index);
 AARCH64_DECL_OPD_EXTRACTOR (ext_sve_reglist);
 AARCH64_DECL_OPD_EXTRACTOR (ext_sve_scale);
diff --git a/opcodes/aarch64-opc-2.c b/opcodes/aarch64-opc-2.c
index 8f221b8..ed2b70b 100644
--- a/opcodes/aarch64-opc-2.c
+++ b/opcodes/aarch64-opc-2.c
@@ -113,6 +113,37 @@ const struct aarch64_operand aarch64_operands[] =
   {AARCH64_OPND_CLASS_SYSTEM, "BARRIER_ISB", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {}, "the ISB option name SY or an optional 4-bit unsigned immediate"},
   {AARCH64_OPND_CLASS_SYSTEM, "PRFOP", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {}, "a prefetch operation specifier"},
   {AARCH64_OPND_CLASS_SYSTEM, "BARRIER_PSB", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {}, "the PSB option name CSYNC"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RI_U6", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn}, "an address with a 6-bit unsigned offset"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RI_U6x2", 1 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn}, "an address with a 6-bit unsigned offset, multiplied by 2"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RI_U6x4", 2 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn}, "an address with a 6-bit unsigned offset, multiplied by 4"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RI_U6x8", 3 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn}, "an address with a 6-bit unsigned offset, multiplied by 8"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RR", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_Rm}, "an address with a scalar register offset"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RR_LSL1", 1 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_Rm}, "an address with a scalar register offset"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RR_LSL2", 2 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_Rm}, "an address with a scalar register offset"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RR_LSL3", 3 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_Rm}, "an address with a scalar register offset"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RX", (0 << OPD_F_OD_LSB) | OPD_F_NO_ZR | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_Rm}, "an address with a scalar register offset"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RX_LSL1", (1 << OPD_F_OD_LSB) | OPD_F_NO_ZR | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_Rm}, "an address with a scalar register offset"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RX_LSL2", (2 << OPD_F_OD_LSB) | OPD_F_NO_ZR | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_Rm}, "an address with a scalar register offset"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RX_LSL3", (3 << OPD_F_OD_LSB) | OPD_F_NO_ZR | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_Rm}, "an address with a scalar register offset"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RZ", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_SVE_Zm_16}, "an address with a vector register offset"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RZ_LSL1", 1 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_SVE_Zm_16}, "an address with a vector register offset"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RZ_LSL2", 2 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_SVE_Zm_16}, "an address with a vector register offset"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RZ_LSL3", 3 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_SVE_Zm_16}, "an address with a vector register offset"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RZ_XTW_14", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_14}, "an address with a vector register offset"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RZ_XTW_22", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_22}, "an address with a vector register offset"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RZ_XTW1_14", 1 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_14}, "an address with a vector register offset"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RZ_XTW1_22", 1 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_22}, "an address with a vector register offset"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RZ_XTW2_14", 2 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_14}, "an address with a vector register offset"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RZ_XTW2_22", 2 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_22}, "an address with a vector register offset"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RZ_XTW3_14", 3 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_14}, "an address with a vector register offset"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RZ_XTW3_22", 3 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_22}, "an address with a vector register offset"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_ZI_U5", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zn}, "an address with a 5-bit unsigned offset"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_ZI_U5x2", 1 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zn}, "an address with a 5-bit unsigned offset, multiplied by 2"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_ZI_U5x4", 2 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zn}, "an address with a 5-bit unsigned offset, multiplied by 4"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_ZI_U5x8", 3 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zn}, "an address with a 5-bit unsigned offset, multiplied by 8"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_ZZ_LSL", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zn,FLD_SVE_Zm_16}, "an address with a vector register offset"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_ZZ_SXTW", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zn,FLD_SVE_Zm_16}, "an address with a vector register offset"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_ZZ_UXTW", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zn,FLD_SVE_Zm_16}, "an address with a vector register offset"},
   {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_PATTERN", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_pattern}, "an enumeration value such as POW2"},
   {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_PATTERN_SCALED", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_pattern}, "an enumeration value such as POW2"},
   {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_PRFOP", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_prfop}, "an enumeration value such as PLDL1KEEP"},
diff --git a/opcodes/aarch64-opc.c b/opcodes/aarch64-opc.c
index 326b94e..6617e28 100644
--- a/opcodes/aarch64-opc.c
+++ b/opcodes/aarch64-opc.c
@@ -280,9 +280,13 @@ const aarch64_field fields[] =
     {  5,  5 }, /* SVE_Zn: SVE vector register, bits [9,5].  */
     {  0,  5 }, /* SVE_Zt: SVE vector register, bits [4,0].  */
     { 16,  4 }, /* SVE_imm4: 4-bit immediate field.  */
+    { 16,  6 }, /* SVE_imm6: 6-bit immediate field.  */
+    { 10,  2 }, /* SVE_msz: 2-bit shift amount for ADR.  */
     {  5,  5 }, /* SVE_pattern: vector pattern enumeration.  */
     {  0,  4 }, /* SVE_prfop: prefetch operation for SVE PRF[BHWD].  */
     { 22,  2 }, /* SVE_tszh: triangular size select high, bits [23,22].  */
+    { 14,  1 }, /* SVE_xs_14: UXTW/SXTW select (bit 14).  */
+    { 22,  1 }  /* SVE_xs_22: UXTW/SXTW select (bit 22).  */
 };
 
 enum aarch64_operand_class
@@ -1368,9 +1372,9 @@ operand_general_constraint_met_p (const aarch64_opnd_info *opnds, int idx,
 				  const aarch64_opcode *opcode,
 				  aarch64_operand_error *mismatch_detail)
 {
-  unsigned num;
+  unsigned num, modifiers;
   unsigned char size;
-  int64_t imm;
+  int64_t imm, min_value, max_value;
   const aarch64_opnd_info *opnd = opnds + idx;
   aarch64_opnd_qualifier_t qualifier = opnd->qualifier;
 
@@ -1662,6 +1666,113 @@ operand_general_constraint_met_p (const aarch64_opnd_info *opnds, int idx,
 	    }
 	  break;
 
+	case AARCH64_OPND_SVE_ADDR_RI_U6:
+	case AARCH64_OPND_SVE_ADDR_RI_U6x2:
+	case AARCH64_OPND_SVE_ADDR_RI_U6x4:
+	case AARCH64_OPND_SVE_ADDR_RI_U6x8:
+	  min_value = 0;
+	  max_value = 63;
+	sve_imm_offset:
+	  assert (!opnd->addr.offset.is_reg);
+	  assert (opnd->addr.preind);
+	  num = 1 << get_operand_specific_data (&aarch64_operands[type]);
+	  min_value *= num;
+	  max_value *= num;
+	  if (opnd->shifter.operator_present
+	      || opnd->shifter.amount_present)
+	    {
+	      set_other_error (mismatch_detail, idx,
+			       _("invalid addressing mode"));
+	      return 0;
+	    }
+	  if (!value_in_range_p (opnd->addr.offset.imm, min_value, max_value))
+	    {
+	      set_offset_out_of_range_error (mismatch_detail, idx,
+					     min_value, max_value);
+	      return 0;
+	    }
+	  if (!value_aligned_p (opnd->addr.offset.imm, num))
+	    {
+	      set_unaligned_error (mismatch_detail, idx, num);
+	      return 0;
+	    }
+	  break;
+
+	case AARCH64_OPND_SVE_ADDR_RR:
+	case AARCH64_OPND_SVE_ADDR_RR_LSL1:
+	case AARCH64_OPND_SVE_ADDR_RR_LSL2:
+	case AARCH64_OPND_SVE_ADDR_RR_LSL3:
+	case AARCH64_OPND_SVE_ADDR_RX:
+	case AARCH64_OPND_SVE_ADDR_RX_LSL1:
+	case AARCH64_OPND_SVE_ADDR_RX_LSL2:
+	case AARCH64_OPND_SVE_ADDR_RX_LSL3:
+	case AARCH64_OPND_SVE_ADDR_RZ:
+	case AARCH64_OPND_SVE_ADDR_RZ_LSL1:
+	case AARCH64_OPND_SVE_ADDR_RZ_LSL2:
+	case AARCH64_OPND_SVE_ADDR_RZ_LSL3:
+	  modifiers = 1 << AARCH64_MOD_LSL;
+	sve_rr_operand:
+	  assert (opnd->addr.offset.is_reg);
+	  assert (opnd->addr.preind);
+	  if ((aarch64_operands[type].flags & OPD_F_NO_ZR) != 0
+	      && opnd->addr.offset.regno == 31)
+	    {
+	      set_other_error (mismatch_detail, idx,
+			       _("index register xzr is not allowed"));
+	      return 0;
+	    }
+	  if (((1 << opnd->shifter.kind) & modifiers) == 0
+	      || (opnd->shifter.amount
+		  != get_operand_specific_data (&aarch64_operands[type])))
+	    {
+	      set_other_error (mismatch_detail, idx,
+			       _("invalid addressing mode"));
+	      return 0;
+	    }
+	  break;
+
+	case AARCH64_OPND_SVE_ADDR_RZ_XTW_14:
+	case AARCH64_OPND_SVE_ADDR_RZ_XTW_22:
+	case AARCH64_OPND_SVE_ADDR_RZ_XTW1_14:
+	case AARCH64_OPND_SVE_ADDR_RZ_XTW1_22:
+	case AARCH64_OPND_SVE_ADDR_RZ_XTW2_14:
+	case AARCH64_OPND_SVE_ADDR_RZ_XTW2_22:
+	case AARCH64_OPND_SVE_ADDR_RZ_XTW3_14:
+	case AARCH64_OPND_SVE_ADDR_RZ_XTW3_22:
+	  modifiers = (1 << AARCH64_MOD_SXTW) | (1 << AARCH64_MOD_UXTW);
+	  goto sve_rr_operand;
+
+	case AARCH64_OPND_SVE_ADDR_ZI_U5:
+	case AARCH64_OPND_SVE_ADDR_ZI_U5x2:
+	case AARCH64_OPND_SVE_ADDR_ZI_U5x4:
+	case AARCH64_OPND_SVE_ADDR_ZI_U5x8:
+	  min_value = 0;
+	  max_value = 31;
+	  goto sve_imm_offset;
+
+	case AARCH64_OPND_SVE_ADDR_ZZ_LSL:
+	  modifiers = 1 << AARCH64_MOD_LSL;
+	sve_zz_operand:
+	  assert (opnd->addr.offset.is_reg);
+	  assert (opnd->addr.preind);
+	  if (((1 << opnd->shifter.kind) & modifiers) == 0
+	      || opnd->shifter.amount < 0
+	      || opnd->shifter.amount > 3)
+	    {
+	      set_other_error (mismatch_detail, idx,
+			       _("invalid addressing mode"));
+	      return 0;
+	    }
+	  break;
+
+	case AARCH64_OPND_SVE_ADDR_ZZ_SXTW:
+	  modifiers = (1 << AARCH64_MOD_SXTW);
+	  goto sve_zz_operand;
+
+	case AARCH64_OPND_SVE_ADDR_ZZ_UXTW:
+	  modifiers = 1 << AARCH64_MOD_UXTW;
+	  goto sve_zz_operand;
+
 	default:
 	  break;
 	}
@@ -2330,6 +2441,17 @@ static const char *int_reg[2][2][32] = {
 #undef R64
 #undef R32
 };
+
+/* Names of the SVE vector registers, first with .S suffixes,
+   then with .D suffixes.  */
+
+static const char *sve_reg[2][32] = {
+#define ZS(X) "z" #X ".s"
+#define ZD(X) "z" #X ".d"
+  BANK (ZS, ZS (31)), BANK (ZD, ZD (31))
+#undef ZD
+#undef ZS
+};
 #undef BANK
 
 /* Return the integer register name.
@@ -2373,6 +2495,17 @@ get_offset_int_reg_name (const aarch64_opnd_info *opnd)
     }
 }
 
+/* Get the name of SVE vector register REGNO, using QUALIFIER to decide
+   whether the suffix should be .S or .D.  */
+
+static inline const char *
+get_addr_sve_reg_name (int regno, aarch64_opnd_qualifier_t qualifier)
+{
+  assert (qualifier == AARCH64_OPND_QLF_S_S
+	  || qualifier == AARCH64_OPND_QLF_S_D);
+  return sve_reg[qualifier == AARCH64_OPND_QLF_S_D][regno];
+}
+
 /* Types for expanding an encoded 8-bit value to a floating-point value.  */
 
 typedef union
@@ -2948,18 +3081,65 @@ aarch64_print_operand (char *buf, size_t size, bfd_vma pc,
       break;
 
     case AARCH64_OPND_ADDR_REGOFF:
+    case AARCH64_OPND_SVE_ADDR_RR:
+    case AARCH64_OPND_SVE_ADDR_RR_LSL1:
+    case AARCH64_OPND_SVE_ADDR_RR_LSL2:
+    case AARCH64_OPND_SVE_ADDR_RR_LSL3:
+    case AARCH64_OPND_SVE_ADDR_RX:
+    case AARCH64_OPND_SVE_ADDR_RX_LSL1:
+    case AARCH64_OPND_SVE_ADDR_RX_LSL2:
+    case AARCH64_OPND_SVE_ADDR_RX_LSL3:
       print_register_offset_address
 	(buf, size, opnd, get_64bit_int_reg_name (opnd->addr.base_regno, 1),
 	 get_offset_int_reg_name (opnd));
       break;
 
+    case AARCH64_OPND_SVE_ADDR_RZ:
+    case AARCH64_OPND_SVE_ADDR_RZ_LSL1:
+    case AARCH64_OPND_SVE_ADDR_RZ_LSL2:
+    case AARCH64_OPND_SVE_ADDR_RZ_LSL3:
+    case AARCH64_OPND_SVE_ADDR_RZ_XTW_14:
+    case AARCH64_OPND_SVE_ADDR_RZ_XTW_22:
+    case AARCH64_OPND_SVE_ADDR_RZ_XTW1_14:
+    case AARCH64_OPND_SVE_ADDR_RZ_XTW1_22:
+    case AARCH64_OPND_SVE_ADDR_RZ_XTW2_14:
+    case AARCH64_OPND_SVE_ADDR_RZ_XTW2_22:
+    case AARCH64_OPND_SVE_ADDR_RZ_XTW3_14:
+    case AARCH64_OPND_SVE_ADDR_RZ_XTW3_22:
+      print_register_offset_address
+	(buf, size, opnd, get_64bit_int_reg_name (opnd->addr.base_regno, 1),
+	 get_addr_sve_reg_name (opnd->addr.offset.regno, opnd->qualifier));
+      break;
+
     case AARCH64_OPND_ADDR_SIMM7:
     case AARCH64_OPND_ADDR_SIMM9:
     case AARCH64_OPND_ADDR_SIMM9_2:
+    case AARCH64_OPND_SVE_ADDR_RI_U6:
+    case AARCH64_OPND_SVE_ADDR_RI_U6x2:
+    case AARCH64_OPND_SVE_ADDR_RI_U6x4:
+    case AARCH64_OPND_SVE_ADDR_RI_U6x8:
       print_immediate_offset_address
 	(buf, size, opnd, get_64bit_int_reg_name (opnd->addr.base_regno, 1));
       break;
 
+    case AARCH64_OPND_SVE_ADDR_ZI_U5:
+    case AARCH64_OPND_SVE_ADDR_ZI_U5x2:
+    case AARCH64_OPND_SVE_ADDR_ZI_U5x4:
+    case AARCH64_OPND_SVE_ADDR_ZI_U5x8:
+      print_immediate_offset_address
+	(buf, size, opnd,
+	 get_addr_sve_reg_name (opnd->addr.base_regno, opnd->qualifier));
+      break;
+
+    case AARCH64_OPND_SVE_ADDR_ZZ_LSL:
+    case AARCH64_OPND_SVE_ADDR_ZZ_SXTW:
+    case AARCH64_OPND_SVE_ADDR_ZZ_UXTW:
+      print_register_offset_address
+	(buf, size, opnd,
+	 get_addr_sve_reg_name (opnd->addr.base_regno, opnd->qualifier),
+	 get_addr_sve_reg_name (opnd->addr.offset.regno, opnd->qualifier));
+      break;
+
     case AARCH64_OPND_ADDR_UIMM12:
       name = get_64bit_int_reg_name (opnd->addr.base_regno, 1);
       if (opnd->addr.offset.imm)
diff --git a/opcodes/aarch64-opc.h b/opcodes/aarch64-opc.h
index 3406f6e..e823146 100644
--- a/opcodes/aarch64-opc.h
+++ b/opcodes/aarch64-opc.h
@@ -107,9 +107,13 @@ enum aarch64_field_kind
   FLD_SVE_Zn,
   FLD_SVE_Zt,
   FLD_SVE_imm4,
+  FLD_SVE_imm6,
+  FLD_SVE_msz,
   FLD_SVE_pattern,
   FLD_SVE_prfop,
   FLD_SVE_tszh,
+  FLD_SVE_xs_14,
+  FLD_SVE_xs_22,
 };
 
 /* Field description.  */
@@ -156,6 +160,9 @@ extern const aarch64_operand aarch64_operands[];
 						   value by 2 to get the value
 						   of an immediate operand.  */
 #define OPD_F_MAYBE_SP		0x00000010	/* May potentially be SP.  */
+#define OPD_F_OD_MASK		0x00000060	/* Operand-dependent data.  */
+#define OPD_F_OD_LSB		5
+#define OPD_F_NO_ZR		0x00000080	/* ZR index not allowed.  */
 
 static inline bfd_boolean
 operand_has_inserter (const aarch64_operand *operand)
@@ -187,6 +194,13 @@ operand_maybe_stack_pointer (const aarch64_operand *operand)
   return (operand->flags & OPD_F_MAYBE_SP) ? TRUE : FALSE;
 }
 
+/* Return the value of the operand-specific data field (OPD_F_OD_MASK).  */
+static inline unsigned int
+get_operand_specific_data (const aarch64_operand *operand)
+{
+  return (operand->flags & OPD_F_OD_MASK) >> OPD_F_OD_LSB;
+}
+
 /* Return the total width of the operand *OPERAND.  */
 static inline unsigned
 get_operand_fields_width (const aarch64_operand *operand)
diff --git a/opcodes/aarch64-tbl.h b/opcodes/aarch64-tbl.h
index 491235f..aba4b2d 100644
--- a/opcodes/aarch64-tbl.h
+++ b/opcodes/aarch64-tbl.h
@@ -2818,8 +2818,95 @@ struct aarch64_opcode aarch64_opcode_table[] =
       "the ISB option name SY or an optional 4-bit unsigned immediate")	\
     Y(SYSTEM, prfop, "PRFOP", 0, F(),					\
       "a prefetch operation specifier")					\
-    Y (SYSTEM, hint, "BARRIER_PSB", 0, F (),				\
+    Y(SYSTEM, hint, "BARRIER_PSB", 0, F (),				\
       "the PSB option name CSYNC")					\
+    Y(ADDRESS, sve_addr_ri_u6, "SVE_ADDR_RI_U6", 0 << OPD_F_OD_LSB,	\
+      F(FLD_Rn), "an address with a 6-bit unsigned offset")		\
+    Y(ADDRESS, sve_addr_ri_u6, "SVE_ADDR_RI_U6x2", 1 << OPD_F_OD_LSB,	\
+      F(FLD_Rn),							\
+      "an address with a 6-bit unsigned offset, multiplied by 2")	\
+    Y(ADDRESS, sve_addr_ri_u6, "SVE_ADDR_RI_U6x4", 2 << OPD_F_OD_LSB,	\
+      F(FLD_Rn),							\
+      "an address with a 6-bit unsigned offset, multiplied by 4")	\
+    Y(ADDRESS, sve_addr_ri_u6, "SVE_ADDR_RI_U6x8", 3 << OPD_F_OD_LSB,	\
+      F(FLD_Rn),							\
+      "an address with a 6-bit unsigned offset, multiplied by 8")	\
+    Y(ADDRESS, sve_addr_rr_lsl, "SVE_ADDR_RR", 0 << OPD_F_OD_LSB,	\
+      F(FLD_Rn,FLD_Rm), "an address with a scalar register offset")	\
+    Y(ADDRESS, sve_addr_rr_lsl, "SVE_ADDR_RR_LSL1", 1 << OPD_F_OD_LSB,	\
+      F(FLD_Rn,FLD_Rm), "an address with a scalar register offset")	\
+    Y(ADDRESS, sve_addr_rr_lsl, "SVE_ADDR_RR_LSL2", 2 << OPD_F_OD_LSB,	\
+      F(FLD_Rn,FLD_Rm), "an address with a scalar register offset")	\
+    Y(ADDRESS, sve_addr_rr_lsl, "SVE_ADDR_RR_LSL3", 3 << OPD_F_OD_LSB,	\
+      F(FLD_Rn,FLD_Rm), "an address with a scalar register offset")	\
+    Y(ADDRESS, sve_addr_rr_lsl, "SVE_ADDR_RX",				\
+      (0 << OPD_F_OD_LSB) | OPD_F_NO_ZR, F(FLD_Rn,FLD_Rm),		\
+      "an address with a scalar register offset")			\
+    Y(ADDRESS, sve_addr_rr_lsl, "SVE_ADDR_RX_LSL1",			\
+      (1 << OPD_F_OD_LSB) | OPD_F_NO_ZR, F(FLD_Rn,FLD_Rm),		\
+      "an address with a scalar register offset")			\
+    Y(ADDRESS, sve_addr_rr_lsl, "SVE_ADDR_RX_LSL2",			\
+      (2 << OPD_F_OD_LSB) | OPD_F_NO_ZR, F(FLD_Rn,FLD_Rm),		\
+      "an address with a scalar register offset")			\
+    Y(ADDRESS, sve_addr_rr_lsl, "SVE_ADDR_RX_LSL3",			\
+      (3 << OPD_F_OD_LSB) | OPD_F_NO_ZR, F(FLD_Rn,FLD_Rm),		\
+      "an address with a scalar register offset")			\
+    Y(ADDRESS, sve_addr_rr_lsl, "SVE_ADDR_RZ", 0 << OPD_F_OD_LSB,	\
+      F(FLD_Rn,FLD_SVE_Zm_16),						\
+      "an address with a vector register offset")			\
+    Y(ADDRESS, sve_addr_rr_lsl, "SVE_ADDR_RZ_LSL1", 1 << OPD_F_OD_LSB,	\
+      F(FLD_Rn,FLD_SVE_Zm_16),						\
+      "an address with a vector register offset")			\
+    Y(ADDRESS, sve_addr_rr_lsl, "SVE_ADDR_RZ_LSL2", 2 << OPD_F_OD_LSB,	\
+      F(FLD_Rn,FLD_SVE_Zm_16),						\
+      "an address with a vector register offset")			\
+    Y(ADDRESS, sve_addr_rr_lsl, "SVE_ADDR_RZ_LSL3", 3 << OPD_F_OD_LSB,	\
+      F(FLD_Rn,FLD_SVE_Zm_16),						\
+      "an address with a vector register offset")			\
+    Y(ADDRESS, sve_addr_rz_xtw, "SVE_ADDR_RZ_XTW_14",			\
+      0 << OPD_F_OD_LSB, F(FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_14),		\
+      "an address with a vector register offset")			\
+    Y(ADDRESS, sve_addr_rz_xtw, "SVE_ADDR_RZ_XTW_22",			\
+      0 << OPD_F_OD_LSB, F(FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_22),		\
+      "an address with a vector register offset")			\
+    Y(ADDRESS, sve_addr_rz_xtw, "SVE_ADDR_RZ_XTW1_14",			\
+      1 << OPD_F_OD_LSB, F(FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_14),		\
+      "an address with a vector register offset")			\
+    Y(ADDRESS, sve_addr_rz_xtw, "SVE_ADDR_RZ_XTW1_22",			\
+      1 << OPD_F_OD_LSB, F(FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_22),		\
+      "an address with a vector register offset")			\
+    Y(ADDRESS, sve_addr_rz_xtw, "SVE_ADDR_RZ_XTW2_14",			\
+      2 << OPD_F_OD_LSB, F(FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_14),		\
+      "an address with a vector register offset")			\
+    Y(ADDRESS, sve_addr_rz_xtw, "SVE_ADDR_RZ_XTW2_22",			\
+      2 << OPD_F_OD_LSB, F(FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_22),		\
+      "an address with a vector register offset")			\
+    Y(ADDRESS, sve_addr_rz_xtw, "SVE_ADDR_RZ_XTW3_14",			\
+      3 << OPD_F_OD_LSB, F(FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_14),		\
+      "an address with a vector register offset")			\
+    Y(ADDRESS, sve_addr_rz_xtw, "SVE_ADDR_RZ_XTW3_22",			\
+      3 << OPD_F_OD_LSB, F(FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_22),		\
+      "an address with a vector register offset")			\
+    Y(ADDRESS, sve_addr_zi_u5, "SVE_ADDR_ZI_U5", 0 << OPD_F_OD_LSB,	\
+      F(FLD_SVE_Zn), "an address with a 5-bit unsigned offset")		\
+    Y(ADDRESS, sve_addr_zi_u5, "SVE_ADDR_ZI_U5x2", 1 << OPD_F_OD_LSB,	\
+      F(FLD_SVE_Zn),							\
+      "an address with a 5-bit unsigned offset, multiplied by 2")	\
+    Y(ADDRESS, sve_addr_zi_u5, "SVE_ADDR_ZI_U5x4", 2 << OPD_F_OD_LSB,	\
+      F(FLD_SVE_Zn),							\
+      "an address with a 5-bit unsigned offset, multiplied by 4")	\
+    Y(ADDRESS, sve_addr_zi_u5, "SVE_ADDR_ZI_U5x8", 3 << OPD_F_OD_LSB,	\
+      F(FLD_SVE_Zn),							\
+      "an address with a 5-bit unsigned offset, multiplied by 8")	\
+    Y(ADDRESS, sve_addr_zz_lsl, "SVE_ADDR_ZZ_LSL", 0,			\
+      F(FLD_SVE_Zn,FLD_SVE_Zm_16),					\
+      "an address with a vector register offset")			\
+    Y(ADDRESS, sve_addr_zz_sxtw, "SVE_ADDR_ZZ_SXTW", 0,			\
+      F(FLD_SVE_Zn,FLD_SVE_Zm_16),					\
+      "an address with a vector register offset")			\
+    Y(ADDRESS, sve_addr_zz_uxtw, "SVE_ADDR_ZZ_UXTW", 0,			\
+      F(FLD_SVE_Zn,FLD_SVE_Zm_16),					\
+      "an address with a vector register offset")			\
     Y(IMMEDIATE, imm, "SVE_PATTERN", 0, F(FLD_SVE_pattern),		\
       "an enumeration value such as POW2")				\
     Y(IMMEDIATE, sve_scale, "SVE_PATTERN_SCALED", 0,			\

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [AArch64][SVE 26/32] Add SVE MUL VL addressing modes
  2016-08-25 14:44   ` Richard Earnshaw (lists)
@ 2016-09-16 12:10     ` Richard Sandiford
  2016-09-20 13:51       ` Richard Earnshaw (lists)
  0 siblings, 1 reply; 76+ messages in thread
From: Richard Sandiford @ 2016-09-16 12:10 UTC (permalink / raw)
  To: Richard Earnshaw (lists); +Cc: binutils

"Richard Earnshaw (lists)" <Richard.Earnshaw@arm.com> writes:
> On 23/08/16 10:23, Richard Sandiford wrote:
>> This patch adds support for addresses of the form:
>> 
>>    [<base>, #<offset>, MUL VL]
>> 
>> This involves adding a new AARCH64_MOD_MUL_VL modifier, which is
>> why I split it out from the other addressing modes.
>> 
>> For LD2, LD3 and LD4, the offset must be a multiple of the structure
>> size, so for LD3 the possible values are 0, 3, 6, ....  The patch
>> therefore extends value_aligned_p to handle non-power-of-2 alignments.
>> 
>> OK to install?
>> 
>> Thanks,
>> Richard
>> 
>> 
>> include/opcode/
>> 	* aarch64.h (AARCH64_OPND_SVE_ADDR_RI_S4xVL): New aarch64_opnd.
>> 	(AARCH64_OPND_SVE_ADDR_RI_S4x2xVL, AARCH64_OPND_SVE_ADDR_RI_S4x3xVL)
>> 	(AARCH64_OPND_SVE_ADDR_RI_S4x4xVL, AARCH64_OPND_SVE_ADDR_RI_S6xVL)
>> 	(AARCH64_OPND_SVE_ADDR_RI_S9xVL): Likewise.
>> 	(AARCH64_MOD_MUL_VL): New aarch64_modifier_kind.
>> 
>> opcodes/
>> 	* aarch64-tbl.h (AARCH64_OPERANDS): Add entries for new MUL VL
>> 	operands.
>> 	* aarch64-opc.c (aarch64_operand_modifiers): Initialize
>> 	the AARCH64_MOD_MUL_VL entry.
>> 	(value_aligned_p): Cope with non-power-of-two alignments.
>> 	(operand_general_constraint_met_p): Handle the new MUL VL addresses.
>> 	(print_immediate_offset_address): Likewise.
>> 	(aarch64_print_operand): Likewise.
>> 	* aarch64-opc-2.c: Regenerate.
>> 	* aarch64-asm.h (ins_sve_addr_ri_s4xvl, ins_sve_addr_ri_s6xvl)
>> 	(ins_sve_addr_ri_s9xvl): New inserters.
>> 	* aarch64-asm.c (aarch64_ins_sve_addr_ri_s4xvl): New function.
>> 	(aarch64_ins_sve_addr_ri_s6xvl): Likewise.
>> 	(aarch64_ins_sve_addr_ri_s9xvl): Likewise.
>> 	* aarch64-asm-2.c: Regenerate.
>> 	* aarch64-dis.h (ext_sve_addr_ri_s4xvl, ext_sve_addr_ri_s6xvl)
>> 	(ext_sve_addr_ri_s9xvl): New extractors.
>> 	* aarch64-dis.c (aarch64_ext_sve_addr_reg_mul_vl): New function.
>> 	(aarch64_ext_sve_addr_ri_s4xvl): Likewise.
>> 	(aarch64_ext_sve_addr_ri_s6xvl): Likewise.
>> 	(aarch64_ext_sve_addr_ri_s9xvl): Likewise.
>> 	* aarch64-dis-2.c: Regenerate.
>> 
>> gas/
>> 	* config/tc-aarch64.c (SHIFTED_MUL_VL): New parse_shift_mode.
>> 	(parse_shift): Handle it.
>> 	(parse_address_main): Handle the new MUL VL addresses.
>> 	(parse_operands): Likewise.
>> 
>
> OK.

Here's an update that uses a parse_shift_mode rather than a boolean
parameter to say what kinds of shift are allowed for immediate offsets.

Tested on aarch64-linux-gnu.  OK to install?

Thanks,
Richard


include/opcode/
	* aarch64.h (AARCH64_OPND_SVE_ADDR_RI_S4xVL): New aarch64_opnd.
	(AARCH64_OPND_SVE_ADDR_RI_S4x2xVL, AARCH64_OPND_SVE_ADDR_RI_S4x3xVL)
	(AARCH64_OPND_SVE_ADDR_RI_S4x4xVL, AARCH64_OPND_SVE_ADDR_RI_S6xVL)
	(AARCH64_OPND_SVE_ADDR_RI_S9xVL): Likewise.
	(AARCH64_MOD_MUL_VL): New aarch64_modifier_kind.

opcodes/
	* aarch64-tbl.h (AARCH64_OPERANDS): Add entries for new MUL VL
	operands.
	* aarch64-opc.c (aarch64_operand_modifiers): Initialize
	the AARCH64_MOD_MUL_VL entry.
	(value_aligned_p): Cope with non-power-of-two alignments.
	(operand_general_constraint_met_p): Handle the new MUL VL addresses.
	(print_immediate_offset_address): Likewise.
	(aarch64_print_operand): Likewise.
	* aarch64-opc-2.c: Regenerate.
	* aarch64-asm.h (ins_sve_addr_ri_s4xvl, ins_sve_addr_ri_s6xvl)
	(ins_sve_addr_ri_s9xvl): New inserters.
	* aarch64-asm.c (aarch64_ins_sve_addr_ri_s4xvl): New function.
	(aarch64_ins_sve_addr_ri_s6xvl): Likewise.
	(aarch64_ins_sve_addr_ri_s9xvl): Likewise.
	* aarch64-asm-2.c: Regenerate.
	* aarch64-dis.h (ext_sve_addr_ri_s4xvl, ext_sve_addr_ri_s6xvl)
	(ext_sve_addr_ri_s9xvl): New extractors.
	* aarch64-dis.c (aarch64_ext_sve_addr_reg_mul_vl): New function.
	(aarch64_ext_sve_addr_ri_s4xvl): Likewise.
	(aarch64_ext_sve_addr_ri_s6xvl): Likewise.
	(aarch64_ext_sve_addr_ri_s9xvl): Likewise.
	* aarch64-dis-2.c: Regenerate.

gas/
	* config/tc-aarch64.c (SHIFTED_NONE, SHIFTED_MUL_VL): New
	parse_shift_modes.
	(parse_shift): Handle SHIFTED_MUL_VL.
	(parse_address_main): Add an imm_shift_mode parameter.
	(parse_address, parse_sve_address): Update accordingly.
	(parse_operands): Handle MUL VL addressing modes.
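
(Not part of the patch -- just a minimal C sketch of the constraint
described in the quoted text above: for a structure load such as LD3,
the "#<imm>, MUL VL" offset must be a multiple of the structure size
and fit within the scaled signed range.  The helper below is
illustrative only and does not exist in binutils.)

#include <stdint.h>

static int
mul_vl_offset_ok (int64_t imm, int factor, int min_simm, int max_simm)
{
  /* FACTOR is 1 for LD1/ST1, 3 for LD3/ST3, and so on; MIN_SIMM and
     MAX_SIMM bound the unscaled immediate field, e.g. -8 and 7 for a
     4-bit signed field.  */
  return ((imm % factor) == 0
	  && imm >= (int64_t) min_simm * factor
	  && imm <= (int64_t) max_simm * factor);
}

For example, SVE_ADDR_RI_S4x3xVL would accept -24, -21, ..., 21 and
reject 22, matching the combined range and alignment checks added to
operand_general_constraint_met_p below.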

diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
index e59333f..930b07a 100644
--- a/gas/config/tc-aarch64.c
+++ b/gas/config/tc-aarch64.c
@@ -2922,6 +2922,7 @@ find_reloc_table_entry (char **str)
 /* Mode argument to parse_shift and parser_shifter_operand.  */
 enum parse_shift_mode
 {
+  SHIFTED_NONE,			/* no shifter allowed  */
   SHIFTED_ARITH_IMM,		/* "rn{,lsl|lsr|asl|asr|uxt|sxt #n}" or
 				   "#imm{,lsl #n}"  */
   SHIFTED_LOGIC_IMM,		/* "rn{,lsl|lsr|asl|asr|ror #n}" or
@@ -2929,6 +2930,7 @@ enum parse_shift_mode
   SHIFTED_LSL,			/* bare "lsl #n"  */
   SHIFTED_MUL,			/* bare "mul #n"  */
   SHIFTED_LSL_MSL,		/* "lsl|msl #n"  */
+  SHIFTED_MUL_VL,		/* "mul vl"  */
   SHIFTED_REG_OFFSET		/* [su]xtw|sxtx {#n} or lsl #n  */
 };
 
@@ -2970,7 +2972,8 @@ parse_shift (char **str, aarch64_opnd_info *operand, enum parse_shift_mode mode)
     }
 
   if (kind == AARCH64_MOD_MUL
-      && mode != SHIFTED_MUL)
+      && mode != SHIFTED_MUL
+      && mode != SHIFTED_MUL_VL)
     {
       set_syntax_error (_("invalid use of 'MUL'"));
       return FALSE;
@@ -3010,6 +3013,22 @@ parse_shift (char **str, aarch64_opnd_info *operand, enum parse_shift_mode mode)
 	}
       break;
 
+    case SHIFTED_MUL_VL:
+      /* "MUL VL" consists of two separate tokens.  Require the first
+	 token to be "MUL" and look for a following "VL".  */
+      if (kind == AARCH64_MOD_MUL)
+	{
+	  skip_whitespace (p);
+	  if (strncasecmp (p, "vl", 2) == 0 && !ISALPHA (p[2]))
+	    {
+	      p += 2;
+	      kind = AARCH64_MOD_MUL_VL;
+	      break;
+	    }
+	}
+      set_syntax_error (_("only 'MUL VL' is permitted"));
+      return FALSE;
+
     case SHIFTED_REG_OFFSET:
       if (kind != AARCH64_MOD_UXTW && kind != AARCH64_MOD_LSL
 	  && kind != AARCH64_MOD_SXTW && kind != AARCH64_MOD_SXTX)
@@ -3037,7 +3056,7 @@ parse_shift (char **str, aarch64_opnd_info *operand, enum parse_shift_mode mode)
 
   /* Parse shift amount.  */
   exp_has_prefix = 0;
-  if (mode == SHIFTED_REG_OFFSET && *p == ']')
+  if ((mode == SHIFTED_REG_OFFSET && *p == ']') || kind == AARCH64_MOD_MUL_VL)
     exp.X_op = O_absent;
   else
     {
@@ -3048,7 +3067,11 @@ parse_shift (char **str, aarch64_opnd_info *operand, enum parse_shift_mode mode)
 	}
       my_get_expression (&exp, &p, GE_NO_PREFIX, 0);
     }
-  if (exp.X_op == O_absent)
+  if (kind == AARCH64_MOD_MUL_VL)
+    /* For consistency, give MUL VL the same shift amount as an implicit
+       MUL #1.  */
+    operand->shifter.amount = 1;
+  else if (exp.X_op == O_absent)
     {
       if (aarch64_extend_operator_p (kind) == FALSE || exp_has_prefix)
 	{
@@ -3268,6 +3291,7 @@ parse_shifter_operand_reloc (char **str, aarch64_opnd_info *operand,
    PC-relative (literal)
      label
    SVE:
+     [base,#imm,MUL VL]
      [base,Zm.D{,LSL #imm}]
      [base,Zm.S,(S|U)XTW {#imm}]
      [base,Zm.D,(S|U)XTW {#imm}] // ignores top 32 bits of Zm.D elements
@@ -3307,15 +3331,20 @@ parse_shifter_operand_reloc (char **str, aarch64_opnd_info *operand,
    corresponding register.
 
    BASE_TYPE says which types of base register should be accepted and
-   OFFSET_TYPE says the same for offset registers.  In all other respects,
-   it is the caller's responsibility to check for addressing modes not
-   supported by the instruction, and to set inst.reloc.type.  */
+   OFFSET_TYPE says the same for offset registers.  IMM_SHIFT_MODE
+   is the type of shifter that is allowed for immediate offsets,
+   or SHIFTED_NONE if none.
+
+   In all other respects, it is the caller's responsibility to check
+   for addressing modes not supported by the instruction, and to set
+   inst.reloc.type.  */
 
 static bfd_boolean
 parse_address_main (char **str, aarch64_opnd_info *operand,
 		    aarch64_opnd_qualifier_t *base_qualifier,
 		    aarch64_opnd_qualifier_t *offset_qualifier,
-		    aarch64_reg_type base_type, aarch64_reg_type offset_type)
+		    aarch64_reg_type base_type, aarch64_reg_type offset_type,
+		    enum parse_shift_mode imm_shift_mode)
 {
   char *p = *str;
   const reg_entry *reg;
@@ -3497,12 +3526,19 @@ parse_address_main (char **str, aarch64_opnd_info *operand,
 	      inst.reloc.type = entry->ldst_type;
 	      inst.reloc.pc_rel = entry->pc_rel;
 	    }
-	  else if (! my_get_expression (exp, &p, GE_OPT_PREFIX, 1))
+	  else
 	    {
-	      set_syntax_error (_("invalid expression in the address"));
-	      return FALSE;
+	      if (! my_get_expression (exp, &p, GE_OPT_PREFIX, 1))
+		{
+		  set_syntax_error (_("invalid expression in the address"));
+		  return FALSE;
+		}
+	      /* [Xn,<expr>  */
+	      if (imm_shift_mode != SHIFTED_NONE && skip_past_comma (&p))
+		/* [Xn,<expr>,<shifter>  */
+		if (! parse_shift (&p, operand, imm_shift_mode))
+		  return FALSE;
 	    }
-	  /* [Xn,<expr>  */
 	}
     }
 
@@ -3582,10 +3618,10 @@ parse_address (char **str, aarch64_opnd_info *operand)
 {
   aarch64_opnd_qualifier_t base_qualifier, offset_qualifier;
   return parse_address_main (str, operand, &base_qualifier, &offset_qualifier,
-			     REG_TYPE_R64_SP, REG_TYPE_R_Z);
+			     REG_TYPE_R64_SP, REG_TYPE_R_Z, SHIFTED_NONE);
 }
 
-/* Parse an address in which SVE vector registers are allowed.
+/* Parse an address in which SVE vector registers and MUL VL are allowed.
    The arguments have the same meaning as for parse_address_main.
    Return TRUE on success.  */
 static bfd_boolean
@@ -3594,7 +3630,8 @@ parse_sve_address (char **str, aarch64_opnd_info *operand,
 		   aarch64_opnd_qualifier_t *offset_qualifier)
 {
   return parse_address_main (str, operand, base_qualifier, offset_qualifier,
-			     REG_TYPE_SVE_BASE, REG_TYPE_SVE_OFFSET);
+			     REG_TYPE_SVE_BASE, REG_TYPE_SVE_OFFSET,
+			     SHIFTED_MUL_VL);
 }
 
 /* Parse an operand for a MOVZ, MOVN or MOVK instruction.
@@ -5938,11 +5975,18 @@ parse_operands (char *str, const aarch64_opcode *opcode)
 	  /* No qualifier.  */
 	  break;
 
+	case AARCH64_OPND_SVE_ADDR_RI_S4xVL:
+	case AARCH64_OPND_SVE_ADDR_RI_S4x2xVL:
+	case AARCH64_OPND_SVE_ADDR_RI_S4x3xVL:
+	case AARCH64_OPND_SVE_ADDR_RI_S4x4xVL:
+	case AARCH64_OPND_SVE_ADDR_RI_S6xVL:
+	case AARCH64_OPND_SVE_ADDR_RI_S9xVL:
 	case AARCH64_OPND_SVE_ADDR_RI_U6:
 	case AARCH64_OPND_SVE_ADDR_RI_U6x2:
 	case AARCH64_OPND_SVE_ADDR_RI_U6x4:
 	case AARCH64_OPND_SVE_ADDR_RI_U6x8:
-	  /* [X<n>{, #imm}]
+	  /* [X<n>{, #imm, MUL VL}]
+	     [X<n>{, #imm}]
 	     but recognizing SVE registers.  */
 	  po_misc_or_fail (parse_sve_address (&str, info, &base_qualifier,
 					      &offset_qualifier));
diff --git a/include/opcode/aarch64.h b/include/opcode/aarch64.h
index e61ac9c..837d6bd 100644
--- a/include/opcode/aarch64.h
+++ b/include/opcode/aarch64.h
@@ -244,6 +244,12 @@ enum aarch64_opnd
   AARCH64_OPND_PRFOP,		/* Prefetch operation.  */
   AARCH64_OPND_BARRIER_PSB,	/* Barrier operand for PSB.  */
 
+  AARCH64_OPND_SVE_ADDR_RI_S4xVL,   /* SVE [<Xn|SP>, #<simm4>, MUL VL].  */
+  AARCH64_OPND_SVE_ADDR_RI_S4x2xVL, /* SVE [<Xn|SP>, #<simm4>*2, MUL VL].  */
+  AARCH64_OPND_SVE_ADDR_RI_S4x3xVL, /* SVE [<Xn|SP>, #<simm4>*3, MUL VL].  */
+  AARCH64_OPND_SVE_ADDR_RI_S4x4xVL, /* SVE [<Xn|SP>, #<simm4>*4, MUL VL].  */
+  AARCH64_OPND_SVE_ADDR_RI_S6xVL,   /* SVE [<Xn|SP>, #<simm6>, MUL VL].  */
+  AARCH64_OPND_SVE_ADDR_RI_S9xVL,   /* SVE [<Xn|SP>, #<simm9>, MUL VL].  */
   AARCH64_OPND_SVE_ADDR_RI_U6,	    /* SVE [<Xn|SP>, #<uimm6>].  */
   AARCH64_OPND_SVE_ADDR_RI_U6x2,    /* SVE [<Xn|SP>, #<uimm6>*2].  */
   AARCH64_OPND_SVE_ADDR_RI_U6x4,    /* SVE [<Xn|SP>, #<uimm6>*4].  */
@@ -786,6 +792,7 @@ enum aarch64_modifier_kind
   AARCH64_MOD_SXTW,
   AARCH64_MOD_SXTX,
   AARCH64_MOD_MUL,
+  AARCH64_MOD_MUL_VL,
 };
 
 bfd_boolean
diff --git a/opcodes/aarch64-asm-2.c b/opcodes/aarch64-asm-2.c
index 47a414c..da590ca 100644
--- a/opcodes/aarch64-asm-2.c
+++ b/opcodes/aarch64-asm-2.c
@@ -480,12 +480,6 @@ aarch64_insert_operand (const aarch64_operand *self,
     case 27:
     case 35:
     case 36:
-    case 123:
-    case 124:
-    case 125:
-    case 126:
-    case 127:
-    case 128:
     case 129:
     case 130:
     case 131:
@@ -494,7 +488,13 @@ aarch64_insert_operand (const aarch64_operand *self,
     case 134:
     case 135:
     case 136:
+    case 137:
+    case 138:
     case 139:
+    case 140:
+    case 141:
+    case 142:
+    case 145:
       return aarch64_ins_regno (self, info, code, inst);
     case 12:
       return aarch64_ins_reg_extended (self, info, code, inst);
@@ -531,8 +531,8 @@ aarch64_insert_operand (const aarch64_operand *self,
     case 68:
     case 69:
     case 70:
-    case 120:
-    case 122:
+    case 126:
+    case 128:
       return aarch64_ins_imm (self, info, code, inst);
     case 38:
     case 39:
@@ -587,46 +587,55 @@ aarch64_insert_operand (const aarch64_operand *self,
     case 90:
     case 91:
     case 92:
-      return aarch64_ins_sve_addr_ri_u6 (self, info, code, inst);
+      return aarch64_ins_sve_addr_ri_s4xvl (self, info, code, inst);
     case 93:
+      return aarch64_ins_sve_addr_ri_s6xvl (self, info, code, inst);
     case 94:
+      return aarch64_ins_sve_addr_ri_s9xvl (self, info, code, inst);
     case 95:
     case 96:
     case 97:
     case 98:
+      return aarch64_ins_sve_addr_ri_u6 (self, info, code, inst);
     case 99:
     case 100:
     case 101:
     case 102:
     case 103:
     case 104:
-      return aarch64_ins_sve_addr_rr_lsl (self, info, code, inst);
     case 105:
     case 106:
     case 107:
     case 108:
     case 109:
     case 110:
+      return aarch64_ins_sve_addr_rr_lsl (self, info, code, inst);
     case 111:
     case 112:
-      return aarch64_ins_sve_addr_rz_xtw (self, info, code, inst);
     case 113:
     case 114:
     case 115:
     case 116:
-      return aarch64_ins_sve_addr_zi_u5 (self, info, code, inst);
     case 117:
-      return aarch64_ins_sve_addr_zz_lsl (self, info, code, inst);
     case 118:
-      return aarch64_ins_sve_addr_zz_sxtw (self, info, code, inst);
+      return aarch64_ins_sve_addr_rz_xtw (self, info, code, inst);
     case 119:
-      return aarch64_ins_sve_addr_zz_uxtw (self, info, code, inst);
+    case 120:
     case 121:
+    case 122:
+      return aarch64_ins_sve_addr_zi_u5 (self, info, code, inst);
+    case 123:
+      return aarch64_ins_sve_addr_zz_lsl (self, info, code, inst);
+    case 124:
+      return aarch64_ins_sve_addr_zz_sxtw (self, info, code, inst);
+    case 125:
+      return aarch64_ins_sve_addr_zz_uxtw (self, info, code, inst);
+    case 127:
       return aarch64_ins_sve_scale (self, info, code, inst);
-    case 137:
+    case 143:
       return aarch64_ins_sve_index (self, info, code, inst);
-    case 138:
-    case 140:
+    case 144:
+    case 146:
       return aarch64_ins_sve_reglist (self, info, code, inst);
     default: assert (0); abort ();
     }
diff --git a/opcodes/aarch64-asm.c b/opcodes/aarch64-asm.c
index 0d3b2c7..944a9eb 100644
--- a/opcodes/aarch64-asm.c
+++ b/opcodes/aarch64-asm.c
@@ -745,6 +745,56 @@ aarch64_ins_reg_shifted (const aarch64_operand *self ATTRIBUTE_UNUSED,
   return NULL;
 }
 
+/* Encode an SVE address [<base>, #<simm4>*<factor>, MUL VL],
+   where <simm4> is a 4-bit signed value and where <factor> is 1 plus
+   SELF's operand-dependent value.  fields[0] specifies the field that
+   holds <base>.  <simm4> is encoded in the SVE_imm4 field.  */
+const char *
+aarch64_ins_sve_addr_ri_s4xvl (const aarch64_operand *self,
+			       const aarch64_opnd_info *info,
+			       aarch64_insn *code,
+			       const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  int factor = 1 + get_operand_specific_data (self);
+  insert_field (self->fields[0], code, info->addr.base_regno, 0);
+  insert_field (FLD_SVE_imm4, code, info->addr.offset.imm / factor, 0);
+  return NULL;
+}
+
+/* Encode an SVE address [<base>, #<simm6>*<factor>, MUL VL],
+   where <simm6> is a 6-bit signed value and where <factor> is 1 plus
+   SELF's operand-dependent value.  fields[0] specifies the field that
+   holds <base>.  <simm6> is encoded in the SVE_imm6 field.  */
+const char *
+aarch64_ins_sve_addr_ri_s6xvl (const aarch64_operand *self,
+			       const aarch64_opnd_info *info,
+			       aarch64_insn *code,
+			       const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  int factor = 1 + get_operand_specific_data (self);
+  insert_field (self->fields[0], code, info->addr.base_regno, 0);
+  insert_field (FLD_SVE_imm6, code, info->addr.offset.imm / factor, 0);
+  return NULL;
+}
+
+/* Encode an SVE address [<base>, #<simm9>*<factor>, MUL VL],
+   where <simm9> is a 9-bit signed value and where <factor> is 1 plus
+   SELF's operand-dependent value.  fields[0] specifies the field that
+   holds <base>.  <simm9> is encoded in the concatenation of the SVE_imm6
+   and imm3 fields, with imm3 being the less-significant part.  */
+const char *
+aarch64_ins_sve_addr_ri_s9xvl (const aarch64_operand *self,
+			       const aarch64_opnd_info *info,
+			       aarch64_insn *code,
+			       const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  int factor = 1 + get_operand_specific_data (self);
+  insert_field (self->fields[0], code, info->addr.base_regno, 0);
+  insert_fields (code, info->addr.offset.imm / factor, 0,
+		 2, FLD_imm3, FLD_SVE_imm6);
+  return NULL;
+}
+
 /* Encode an SVE address [X<n>, #<SVE_imm6> << <shift>], where <SVE_imm6>
    is a 6-bit unsigned number and where <shift> is SELF's operand-dependent
    value.  fields[0] specifies the base register field.  */
diff --git a/opcodes/aarch64-asm.h b/opcodes/aarch64-asm.h
index b81cfa1..5e13de0 100644
--- a/opcodes/aarch64-asm.h
+++ b/opcodes/aarch64-asm.h
@@ -69,6 +69,9 @@ AARCH64_DECL_OPD_INSERTER (ins_hint);
 AARCH64_DECL_OPD_INSERTER (ins_prfop);
 AARCH64_DECL_OPD_INSERTER (ins_reg_extended);
 AARCH64_DECL_OPD_INSERTER (ins_reg_shifted);
+AARCH64_DECL_OPD_INSERTER (ins_sve_addr_ri_s4xvl);
+AARCH64_DECL_OPD_INSERTER (ins_sve_addr_ri_s6xvl);
+AARCH64_DECL_OPD_INSERTER (ins_sve_addr_ri_s9xvl);
 AARCH64_DECL_OPD_INSERTER (ins_sve_addr_ri_u6);
 AARCH64_DECL_OPD_INSERTER (ins_sve_addr_rr_lsl);
 AARCH64_DECL_OPD_INSERTER (ins_sve_addr_rz_xtw);
diff --git a/opcodes/aarch64-dis-2.c b/opcodes/aarch64-dis-2.c
index 3dd714f..48d6ce7 100644
--- a/opcodes/aarch64-dis-2.c
+++ b/opcodes/aarch64-dis-2.c
@@ -10426,12 +10426,6 @@ aarch64_extract_operand (const aarch64_operand *self,
     case 27:
     case 35:
     case 36:
-    case 123:
-    case 124:
-    case 125:
-    case 126:
-    case 127:
-    case 128:
     case 129:
     case 130:
     case 131:
@@ -10440,7 +10434,13 @@ aarch64_extract_operand (const aarch64_operand *self,
     case 134:
     case 135:
     case 136:
+    case 137:
+    case 138:
     case 139:
+    case 140:
+    case 141:
+    case 142:
+    case 145:
       return aarch64_ext_regno (self, info, code, inst);
     case 8:
       return aarch64_ext_regrt_sysins (self, info, code, inst);
@@ -10482,8 +10482,8 @@ aarch64_extract_operand (const aarch64_operand *self,
     case 68:
     case 69:
     case 70:
-    case 120:
-    case 122:
+    case 126:
+    case 128:
       return aarch64_ext_imm (self, info, code, inst);
     case 38:
     case 39:
@@ -10540,46 +10540,55 @@ aarch64_extract_operand (const aarch64_operand *self,
     case 90:
     case 91:
     case 92:
-      return aarch64_ext_sve_addr_ri_u6 (self, info, code, inst);
+      return aarch64_ext_sve_addr_ri_s4xvl (self, info, code, inst);
     case 93:
+      return aarch64_ext_sve_addr_ri_s6xvl (self, info, code, inst);
     case 94:
+      return aarch64_ext_sve_addr_ri_s9xvl (self, info, code, inst);
     case 95:
     case 96:
     case 97:
     case 98:
+      return aarch64_ext_sve_addr_ri_u6 (self, info, code, inst);
     case 99:
     case 100:
     case 101:
     case 102:
     case 103:
     case 104:
-      return aarch64_ext_sve_addr_rr_lsl (self, info, code, inst);
     case 105:
     case 106:
     case 107:
     case 108:
     case 109:
     case 110:
+      return aarch64_ext_sve_addr_rr_lsl (self, info, code, inst);
     case 111:
     case 112:
-      return aarch64_ext_sve_addr_rz_xtw (self, info, code, inst);
     case 113:
     case 114:
     case 115:
     case 116:
-      return aarch64_ext_sve_addr_zi_u5 (self, info, code, inst);
     case 117:
-      return aarch64_ext_sve_addr_zz_lsl (self, info, code, inst);
     case 118:
-      return aarch64_ext_sve_addr_zz_sxtw (self, info, code, inst);
+      return aarch64_ext_sve_addr_rz_xtw (self, info, code, inst);
     case 119:
-      return aarch64_ext_sve_addr_zz_uxtw (self, info, code, inst);
+    case 120:
     case 121:
+    case 122:
+      return aarch64_ext_sve_addr_zi_u5 (self, info, code, inst);
+    case 123:
+      return aarch64_ext_sve_addr_zz_lsl (self, info, code, inst);
+    case 124:
+      return aarch64_ext_sve_addr_zz_sxtw (self, info, code, inst);
+    case 125:
+      return aarch64_ext_sve_addr_zz_uxtw (self, info, code, inst);
+    case 127:
       return aarch64_ext_sve_scale (self, info, code, inst);
-    case 137:
+    case 143:
       return aarch64_ext_sve_index (self, info, code, inst);
-    case 138:
-    case 140:
+    case 144:
+    case 146:
       return aarch64_ext_sve_reglist (self, info, code, inst);
     default: assert (0); abort ();
     }
diff --git a/opcodes/aarch64-dis.c b/opcodes/aarch64-dis.c
index ed77b4d..ba6befd 100644
--- a/opcodes/aarch64-dis.c
+++ b/opcodes/aarch64-dis.c
@@ -1186,6 +1186,78 @@ aarch64_ext_reg_shifted (const aarch64_operand *self ATTRIBUTE_UNUSED,
   return 1;
 }
 
+/* Decode an SVE address [<base>, #<offset>*<factor>, MUL VL],
+   where <offset> is given by the OFFSET parameter and where <factor> is
+   1 plus SELF's operand-dependent value.  fields[0] specifies the field
+   that holds <base>.  */
+static int
+aarch64_ext_sve_addr_reg_mul_vl (const aarch64_operand *self,
+				 aarch64_opnd_info *info, aarch64_insn code,
+				 int64_t offset)
+{
+  info->addr.base_regno = extract_field (self->fields[0], code, 0);
+  info->addr.offset.imm = offset * (1 + get_operand_specific_data (self));
+  info->addr.offset.is_reg = FALSE;
+  info->addr.writeback = FALSE;
+  info->addr.preind = TRUE;
+  if (offset != 0)
+    info->shifter.kind = AARCH64_MOD_MUL_VL;
+  info->shifter.amount = 1;
+  info->shifter.operator_present = (info->addr.offset.imm != 0);
+  info->shifter.amount_present = FALSE;
+  return 1;
+}
+
+/* Decode an SVE address [<base>, #<simm4>*<factor>, MUL VL],
+   where <simm4> is a 4-bit signed value and where <factor> is 1 plus
+   SELF's operand-dependent value.  fields[0] specifies the field that
+   holds <base>.  <simm4> is encoded in the SVE_imm4 field.  */
+int
+aarch64_ext_sve_addr_ri_s4xvl (const aarch64_operand *self,
+			       aarch64_opnd_info *info, aarch64_insn code,
+			       const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  int offset;
+
+  offset = extract_field (FLD_SVE_imm4, code, 0);
+  offset = ((offset + 8) & 15) - 8;
+  return aarch64_ext_sve_addr_reg_mul_vl (self, info, code, offset);
+}
+
+/* Decode an SVE address [<base>, #<simm6>*<factor>, MUL VL],
+   where <simm6> is a 6-bit signed value and where <factor> is 1 plus
+   SELF's operand-dependent value.  fields[0] specifies the field that
+   holds <base>.  <simm6> is encoded in the SVE_imm6 field.  */
+int
+aarch64_ext_sve_addr_ri_s6xvl (const aarch64_operand *self,
+			       aarch64_opnd_info *info, aarch64_insn code,
+			       const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  int offset;
+
+  offset = extract_field (FLD_SVE_imm6, code, 0);
+  offset = (((offset + 32) & 63) - 32);
+  return aarch64_ext_sve_addr_reg_mul_vl (self, info, code, offset);
+}
+
+/* Decode an SVE address [<base>, #<simm9>*<factor>, MUL VL],
+   where <simm9> is a 9-bit signed value and where <factor> is 1 plus
+   SELF's operand-dependent value.  fields[0] specifies the field that
+   holds <base>.  <simm9> is encoded in the concatenation of the SVE_imm6
+   and imm3 fields, with imm3 being the less-significant part.  */
+int
+aarch64_ext_sve_addr_ri_s9xvl (const aarch64_operand *self,
+			       aarch64_opnd_info *info,
+			       aarch64_insn code,
+			       const aarch64_inst *inst ATTRIBUTE_UNUSED)
+{
+  int offset;
+
+  offset = extract_fields (code, 0, 2, FLD_SVE_imm6, FLD_imm3);
+  offset = (((offset + 256) & 511) - 256);
+  return aarch64_ext_sve_addr_reg_mul_vl (self, info, code, offset);
+}
+
 /* Decode an SVE address [<base>, #<offset> << <shift>], where <offset>
    is given by the OFFSET parameter and where <shift> is SELF's operand-
    dependent value.  fields[0] specifies the base register field <base>.  */
diff --git a/opcodes/aarch64-dis.h b/opcodes/aarch64-dis.h
index 0ce2d89..5619877 100644
--- a/opcodes/aarch64-dis.h
+++ b/opcodes/aarch64-dis.h
@@ -91,6 +91,9 @@ AARCH64_DECL_OPD_EXTRACTOR (ext_hint);
 AARCH64_DECL_OPD_EXTRACTOR (ext_prfop);
 AARCH64_DECL_OPD_EXTRACTOR (ext_reg_extended);
 AARCH64_DECL_OPD_EXTRACTOR (ext_reg_shifted);
+AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_ri_s4xvl);
+AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_ri_s6xvl);
+AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_ri_s9xvl);
 AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_ri_u6);
 AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_rr_lsl);
 AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_rz_xtw);
diff --git a/opcodes/aarch64-opc-2.c b/opcodes/aarch64-opc-2.c
index ed2b70b..a72f577 100644
--- a/opcodes/aarch64-opc-2.c
+++ b/opcodes/aarch64-opc-2.c
@@ -113,6 +113,12 @@ const struct aarch64_operand aarch64_operands[] =
   {AARCH64_OPND_CLASS_SYSTEM, "BARRIER_ISB", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {}, "the ISB option name SY or an optional 4-bit unsigned immediate"},
   {AARCH64_OPND_CLASS_SYSTEM, "PRFOP", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {}, "a prefetch operation specifier"},
   {AARCH64_OPND_CLASS_SYSTEM, "BARRIER_PSB", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {}, "the PSB option name CSYNC"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RI_S4xVL", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn}, "an address with a 4-bit signed offset, multiplied by VL"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RI_S4x2xVL", 1 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn}, "an address with a 4-bit signed offset, multiplied by 2*VL"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RI_S4x3xVL", 2 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn}, "an address with a 4-bit signed offset, multiplied by 3*VL"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RI_S4x4xVL", 3 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn}, "an address with a 4-bit signed offset, multiplied by 4*VL"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RI_S6xVL", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn}, "an address with a 6-bit signed offset, multiplied by VL"},
+  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RI_S9xVL", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn}, "an address with a 9-bit signed offset, multiplied by VL"},
   {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RI_U6", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn}, "an address with a 6-bit unsigned offset"},
   {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RI_U6x2", 1 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn}, "an address with a 6-bit unsigned offset, multiplied by 2"},
   {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RI_U6x4", 2 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn}, "an address with a 6-bit unsigned offset, multiplied by 4"},
diff --git a/opcodes/aarch64-opc.c b/opcodes/aarch64-opc.c
index 6617e28..d0959b5 100644
--- a/opcodes/aarch64-opc.c
+++ b/opcodes/aarch64-opc.c
@@ -365,6 +365,7 @@ const struct aarch64_name_value_pair aarch64_operand_modifiers [] =
     {"sxtw", 0x6},
     {"sxtx", 0x7},
     {"mul", 0x0},
+    {"mul vl", 0x0},
     {NULL, 0},
 };
 
@@ -486,10 +487,11 @@ value_in_range_p (int64_t value, int low, int high)
   return (value >= low && value <= high) ? 1 : 0;
 }
 
+/* Return true if VALUE is a multiple of ALIGN.  */
 static inline int
 value_aligned_p (int64_t value, int align)
 {
-  return ((value & (align - 1)) == 0) ? 1 : 0;
+  return (value % align) == 0;
 }
 
 /* A signed value fits in a field.  */
@@ -1666,6 +1668,49 @@ operand_general_constraint_met_p (const aarch64_opnd_info *opnds, int idx,
 	    }
 	  break;
 
+	case AARCH64_OPND_SVE_ADDR_RI_S4xVL:
+	case AARCH64_OPND_SVE_ADDR_RI_S4x2xVL:
+	case AARCH64_OPND_SVE_ADDR_RI_S4x3xVL:
+	case AARCH64_OPND_SVE_ADDR_RI_S4x4xVL:
+	  min_value = -8;
+	  max_value = 7;
+	sve_imm_offset_vl:
+	  assert (!opnd->addr.offset.is_reg);
+	  assert (opnd->addr.preind);
+	  num = 1 + get_operand_specific_data (&aarch64_operands[type]);
+	  min_value *= num;
+	  max_value *= num;
+	  if ((opnd->addr.offset.imm != 0 && !opnd->shifter.operator_present)
+	      || (opnd->shifter.operator_present
+		  && opnd->shifter.kind != AARCH64_MOD_MUL_VL))
+	    {
+	      set_other_error (mismatch_detail, idx,
+			       _("invalid addressing mode"));
+	      return 0;
+	    }
+	  if (!value_in_range_p (opnd->addr.offset.imm, min_value, max_value))
+	    {
+	      set_offset_out_of_range_error (mismatch_detail, idx,
+					     min_value, max_value);
+	      return 0;
+	    }
+	  if (!value_aligned_p (opnd->addr.offset.imm, num))
+	    {
+	      set_unaligned_error (mismatch_detail, idx, num);
+	      return 0;
+	    }
+	  break;
+
+	case AARCH64_OPND_SVE_ADDR_RI_S6xVL:
+	  min_value = -32;
+	  max_value = 31;
+	  goto sve_imm_offset_vl;
+
+	case AARCH64_OPND_SVE_ADDR_RI_S9xVL:
+	  min_value = -256;
+	  max_value = 255;
+	  goto sve_imm_offset_vl;
+
 	case AARCH64_OPND_SVE_ADDR_RI_U6:
 	case AARCH64_OPND_SVE_ADDR_RI_U6x2:
 	case AARCH64_OPND_SVE_ADDR_RI_U6x4:
@@ -2645,7 +2690,13 @@ print_immediate_offset_address (char *buf, size_t size,
     }
   else
     {
-      if (opnd->addr.offset.imm)
+      if (opnd->shifter.operator_present)
+	{
+	  assert (opnd->shifter.kind == AARCH64_MOD_MUL_VL);
+	  snprintf (buf, size, "[%s,#%d,mul vl]",
+		    base, opnd->addr.offset.imm);
+	}
+      else if (opnd->addr.offset.imm)
 	snprintf (buf, size, "[%s,#%d]", base, opnd->addr.offset.imm);
       else
 	snprintf (buf, size, "[%s]", base);
@@ -3114,6 +3165,12 @@ aarch64_print_operand (char *buf, size_t size, bfd_vma pc,
     case AARCH64_OPND_ADDR_SIMM7:
     case AARCH64_OPND_ADDR_SIMM9:
     case AARCH64_OPND_ADDR_SIMM9_2:
+    case AARCH64_OPND_SVE_ADDR_RI_S4xVL:
+    case AARCH64_OPND_SVE_ADDR_RI_S4x2xVL:
+    case AARCH64_OPND_SVE_ADDR_RI_S4x3xVL:
+    case AARCH64_OPND_SVE_ADDR_RI_S4x4xVL:
+    case AARCH64_OPND_SVE_ADDR_RI_S6xVL:
+    case AARCH64_OPND_SVE_ADDR_RI_S9xVL:
     case AARCH64_OPND_SVE_ADDR_RI_U6:
     case AARCH64_OPND_SVE_ADDR_RI_U6x2:
     case AARCH64_OPND_SVE_ADDR_RI_U6x4:
diff --git a/opcodes/aarch64-tbl.h b/opcodes/aarch64-tbl.h
index aba4b2d..986cef6 100644
--- a/opcodes/aarch64-tbl.h
+++ b/opcodes/aarch64-tbl.h
@@ -2820,6 +2820,24 @@ struct aarch64_opcode aarch64_opcode_table[] =
       "a prefetch operation specifier")					\
     Y(SYSTEM, hint, "BARRIER_PSB", 0, F (),				\
       "the PSB option name CSYNC")					\
+    Y(ADDRESS, sve_addr_ri_s4xvl, "SVE_ADDR_RI_S4xVL",			\
+      0 << OPD_F_OD_LSB, F(FLD_Rn),					\
+      "an address with a 4-bit signed offset, multiplied by VL")	\
+    Y(ADDRESS, sve_addr_ri_s4xvl, "SVE_ADDR_RI_S4x2xVL",		\
+      1 << OPD_F_OD_LSB, F(FLD_Rn),					\
+      "an address with a 4-bit signed offset, multiplied by 2*VL")	\
+    Y(ADDRESS, sve_addr_ri_s4xvl, "SVE_ADDR_RI_S4x3xVL",		\
+      2 << OPD_F_OD_LSB, F(FLD_Rn),					\
+      "an address with a 4-bit signed offset, multiplied by 3*VL")	\
+    Y(ADDRESS, sve_addr_ri_s4xvl, "SVE_ADDR_RI_S4x4xVL",		\
+      3 << OPD_F_OD_LSB, F(FLD_Rn),					\
+      "an address with a 4-bit signed offset, multiplied by 4*VL")	\
+    Y(ADDRESS, sve_addr_ri_s6xvl, "SVE_ADDR_RI_S6xVL",			\
+      0 << OPD_F_OD_LSB, F(FLD_Rn),					\
+      "an address with a 6-bit signed offset, multiplied by VL")	\
+    Y(ADDRESS, sve_addr_ri_s9xvl, "SVE_ADDR_RI_S9xVL",			\
+      0 << OPD_F_OD_LSB, F(FLD_Rn),					\
+      "an address with a 9-bit signed offset, multiplied by VL")	\
     Y(ADDRESS, sve_addr_ri_u6, "SVE_ADDR_RI_U6", 0 << OPD_F_OD_LSB,	\
       F(FLD_Rn), "an address with a 6-bit unsigned offset")		\
     Y(ADDRESS, sve_addr_ri_u6, "SVE_ADDR_RI_U6x2", 1 << OPD_F_OD_LSB,	\

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [AArch64][SVE 11/32] Tweak aarch64_reg_parse_32_64 interface
  2016-09-16 11:51     ` Richard Sandiford
@ 2016-09-20 10:47       ` Richard Earnshaw (lists)
  0 siblings, 0 replies; 76+ messages in thread
From: Richard Earnshaw (lists) @ 2016-09-20 10:47 UTC (permalink / raw)
  To: binutils, richard.sandiford

On 16/09/16 12:51, Richard Sandiford wrote:
> "Richard Earnshaw (lists)" <Richard.Earnshaw@arm.com> writes:
>> On 23/08/16 10:12, Richard Sandiford wrote:
>>> aarch64_reg_parse_32_64 is currently used to parse address registers,
>>> among other things.  It returns two bits of information about the
>>> register: whether it's W rather than X, and whether it's a zero register.
>>>
>>> SVE adds addressing modes in which the base or offset can be a vector
>>> register instead of a scalar, so a choice between W and X is no longer
>>> enough.  It's more convenient to pass the type of register around as
>>> a qualifier instead.
>>>
>>> As it happens, two callers of aarch64_reg_parse_32_64 already wanted
>>> the information in the form of a qualifier, so the change feels pretty
>>> natural even without SVE.
>>>
>>> Also, the function took two parameters to control whether {W}SP
>>> and (W|X)ZR should be accepted.  These parameters were negative
>>> "reject" parameters, but the closely-related parse_address_main
>>> had a positive "accept" parameter (for post-indexed addressing).
>>> One of the SVE patches adds a parameter to parse_address_main
>>> that needs to be passed down alongside the aarch64_reg_parse_32_64
>>> parameters, which as things stood led to an awkward mix of positive
>>> and negative bools.  The patch therefore changes the
>>> aarch64_reg_parse_32_64 parameters to "accept_sp" and "accept_rz"
>>> instead.
>>>
>>> Finally, the two input parameters and isregzero return value were
>>> all ints but logically bools.  The patch changes the types to
>>> bfd_boolean.
>>>
>>> OK to install?
>>>
>>> Thanks,
>>> Richard
>>>
>>>
>>> gas/
>>> 	* config/tc-aarch64.c (aarch64_reg_parse_32_64): Return the register
>>> 	type as a qualifier rather than an "isreg32" boolean.  Turn the
>>> 	SP/ZR control parameters from negative "reject" to positive
>>> 	"accept".  Make them and *ISREGZERO bfd_booleans rather than ints.
>>> 	(parse_shifter_operand): Update accordingly.
>>> 	(parse_address_main): Likewise.
>>> 	(po_int_reg_or_fail): Likewise.  Make the same reject->accept
>>> 	change to the macro parameters.
>>> 	(parse_operands): Update after the above changes, replacing
>>> 	the "isreg32" local variable with one called "qualifier".
>>
>> I'm not a big fan of parameters that simply take 'true' or 'false',
>> especially when there is more than one such parameter: it's too easy to
>> get the order mixed up.
>>
>> Furthermore, I'm not sure these two parameters are really independent.
>> Are there any cases where both can be true?
>>
>> Given the above concerns I wonder whether a single enum with the
>> permitted states might be better.  It certainly makes the code clearer
>> at the caller as to which register types are acceptable.
> 
> In the end it seemed easier to remove the parameters entirely,
> return a reg_entry, and get the caller to do the checking.
> This leads to slightly better error messages in some cases.
> 

I like this much better...

> This does create a corner case where:
> 
> 	.equ	sp, 1
> 	ldr	w0, [x0, sp]
> 
> was previously an acceptable way of writing "ldr w0, [x0, #1]",
> but I don't think it's important to continue supporting that.
> We already rejected things like:
> 
> 	.equ	sp, 1
> 	add	x0, x1, sp

I'm not sure it was ever intended that the ldr form should work, so this
sounds like a useful clean-up.

> 
> To ensure these new error messages "win" when matching against
> several candidate instruction entries, we need to use the same
> address-parsing code for all addresses, including ADDR_SIMPLE
> and SIMD_ADDR_SIMPLE.  The next patch also relies on this.
> 
> Finally, aarch64_check_reg_type was written in a pretty
> conservative way.  It should always be equivalent to a single
> bit test.
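
(A minimal sketch of that single-bit-test idea, with made-up names
rather than actual binutils code: each multi-register class is a
bitmask of basic register types, so membership is one AND plus a
compare.)

static int
reg_in_class_sketch (unsigned basic_type, unsigned class_mask)
{
  /* CLASS_MASK has one bit set per basic register type the class
     accepts, in the same spirit as reg_type_masks[] in tc-aarch64.c.  */
  return (class_mask & (1u << basic_type)) != 0;
}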

I notice that the st2 diagnostics are slightly misleading:

-[^:]*:59: Error: writeback value should be an immediate constant at operand 2 -- `st2 {v0.4s,v1.4s},\[sp\],sp'
+[^:]*:59: Error: integer 64-bit register expected at operand 2 -- `st2 {v0.4s,v1.4s},\[sp\],sp'
 [^:]*:60: Error: writeback value should be an immediate constant at operand 2 -- `st2 {v0.4s,v1.4s},\[sp\],zr'

If we're going to say what is acceptable, then we should list all
permitted candidates rather than just one.  However, that's an existing
issue; can you file something in Bugzilla, please?

> 
> Tested on aarch64-linux-gnu.  OK to install?
> 

OK.

> Thanks,
> Richard
> 
> 
> gas/
> 	* config/tc-aarch64.c (REG_TYPE_R_Z, REG_TYPE_R_SP): New register
> 	types.
> 	(get_reg_expected_msg): Handle them and REG_TYPE_R64_SP.
> 	(aarch64_check_reg_type): Simplify.
> 	(aarch64_reg_parse_32_64): Return the reg_entry instead of the
> 	register number.  Return the type as a qualifier rather than an
> 	"isreg32" boolean.  Remove reject_sp, reject_rz and isregzero
> 	parameters.
> 	(parse_shifter_operand): Update call to aarch64_reg_parse_32_64.
> 	Use get_reg_expected_msg.
> 	(parse_address_main): Likewise.  Use aarch64_check_reg_type.
> 	(po_int_reg_or_fail): Replace reject_sp and reject_rz parameters
> 	with a reg_type parameter.  Update call to aarch64_reg_parse_32_64.
> 	Use aarch64_check_reg_type to test the result.
> 	(parse_operands): Update after the above changes.  Parse ADDR_SIMPLE
> 	addresses normally before enforcing the syntax restrictions.
> 	* testsuite/gas/aarch64/diagnostic.s: Add tests for a post-index
> 	zero register and for a stack pointer index.
> 	* testsuite/gas/aarch64/diagnostic.l: Update accordingly.
> 	Also update existing diagnostic messages after the above changes.
> 	* testsuite/gas/aarch64/illegal-lse.l: Update the error message
> 	for 32-bit register bases.
> 
> diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
> index 2489d5b..7b5be8b 100644
> --- a/gas/config/tc-aarch64.c
> +++ b/gas/config/tc-aarch64.c
> @@ -265,16 +265,22 @@ struct reloc_entry
>    BASIC_REG_TYPE(FP_Q)	/* q[0-31] */	\
>    BASIC_REG_TYPE(CN)	/* c[0-7]  */	\
>    BASIC_REG_TYPE(VN)	/* v[0-31] */	\
> -  /* Typecheck: any 64-bit int reg         (inc SP exc XZR) */		\
> +  /* Typecheck: any 64-bit int reg         (inc SP exc XZR).  */	\
>    MULTI_REG_TYPE(R64_SP, REG_TYPE(R_64) | REG_TYPE(SP_64))		\
> -  /* Typecheck: any int                    (inc {W}SP inc [WX]ZR) */	\
> +  /* Typecheck: x[0-30], w[0-30] or [xw]zr.  */				\
> +  MULTI_REG_TYPE(R_Z, REG_TYPE(R_32) | REG_TYPE(R_64)			\
> +		 | REG_TYPE(Z_32) | REG_TYPE(Z_64))			\
> +  /* Typecheck: x[0-30], w[0-30] or {w}sp.  */				\
> +  MULTI_REG_TYPE(R_SP, REG_TYPE(R_32) | REG_TYPE(R_64)			\
> +		 | REG_TYPE(SP_32) | REG_TYPE(SP_64))			\
> +  /* Typecheck: any int                    (inc {W}SP inc [WX]ZR).  */	\
>    MULTI_REG_TYPE(R_Z_SP, REG_TYPE(R_32) | REG_TYPE(R_64)		\
>  		 | REG_TYPE(SP_32) | REG_TYPE(SP_64)			\
>  		 | REG_TYPE(Z_32) | REG_TYPE(Z_64)) 			\
>    /* Typecheck: any [BHSDQ]P FP.  */					\
>    MULTI_REG_TYPE(BHSDQ, REG_TYPE(FP_B) | REG_TYPE(FP_H)			\
>  		 | REG_TYPE(FP_S) | REG_TYPE(FP_D) | REG_TYPE(FP_Q))	\
> -  /* Typecheck: any int or [BHSDQ]P FP or V reg (exc SP inc [WX]ZR)  */	\
> +  /* Typecheck: any int or [BHSDQ]P FP or V reg (exc SP inc [WX]ZR).  */ \
>    MULTI_REG_TYPE(R_Z_BHSDQ_V, REG_TYPE(R_32) | REG_TYPE(R_64)		\
>  		 | REG_TYPE(Z_32) | REG_TYPE(Z_64) | REG_TYPE(VN)	\
>  		 | REG_TYPE(FP_B) | REG_TYPE(FP_H)			\
> @@ -344,6 +350,15 @@ get_reg_expected_msg (aarch64_reg_type reg_type)
>      case REG_TYPE_R_N:
>        msg = N_("integer register expected");
>        break;
> +    case REG_TYPE_R64_SP:
> +      msg = N_("64-bit integer or SP register expected");
> +      break;
> +    case REG_TYPE_R_Z:
> +      msg = N_("integer or zero register expected");
> +      break;
> +    case REG_TYPE_R_SP:
> +      msg = N_("integer or SP register expected");
> +      break;
>      case REG_TYPE_R_Z_SP:
>        msg = N_("integer, zero or SP register expected");
>        break;
> @@ -390,9 +405,6 @@ get_reg_expected_msg (aarch64_reg_type reg_type)
>  /* Instructions take 4 bytes in the object file.  */
>  #define INSN_SIZE	4
>  
> -/* Define some common error messages.  */
> -#define BAD_SP          _("SP not allowed here")
> -
>  static struct hash_control *aarch64_ops_hsh;
>  static struct hash_control *aarch64_cond_hsh;
>  static struct hash_control *aarch64_shift_hsh;
> @@ -671,72 +683,45 @@ parse_reg (char **ccp)
>  static bfd_boolean
>  aarch64_check_reg_type (const reg_entry *reg, aarch64_reg_type type)
>  {
> -  if (reg->type == type)
> -    return TRUE;
> -
> -  switch (type)
> -    {
> -    case REG_TYPE_R64_SP:	/* 64-bit integer reg (inc SP exc XZR).  */
> -    case REG_TYPE_R_Z_SP:	/* Integer reg (inc {X}SP inc [WX]ZR).  */
> -    case REG_TYPE_R_Z_BHSDQ_V:	/* Any register apart from Cn.  */
> -    case REG_TYPE_BHSDQ:	/* Any [BHSDQ]P FP or SIMD scalar register.  */
> -    case REG_TYPE_VN:		/* Vector register.  */
> -      gas_assert (reg->type < REG_TYPE_MAX && type < REG_TYPE_MAX);
> -      return ((reg_type_masks[reg->type] & reg_type_masks[type])
> -	      == reg_type_masks[reg->type]);
> -    default:
> -      as_fatal ("unhandled type %d", type);
> -      abort ();
> -    }
> +  return (reg_type_masks[type] & (1 << reg->type)) != 0;
>  }
>  
> -/* Parse a register and return PARSE_FAIL if the register is not of type R_Z_SP.
> -   Return the register number otherwise.  *ISREG32 is set to one if the
> -   register is 32-bit wide; *ISREGZERO is set to one if the register is
> -   of type Z_32 or Z_64.
> +/* Try to parse a base or offset register.  Return the register entry
> +   on success, setting *QUALIFIER to the register qualifier.  Return null
> +   otherwise.
> +
>     Note that this function does not issue any diagnostics.  */
>  
> -static int
> -aarch64_reg_parse_32_64 (char **ccp, int reject_sp, int reject_rz,
> -			 int *isreg32, int *isregzero)
> +static const reg_entry *
> +aarch64_reg_parse_32_64 (char **ccp, aarch64_opnd_qualifier_t *qualifier)
>  {
>    char *str = *ccp;
>    const reg_entry *reg = parse_reg (&str);
>  
>    if (reg == NULL)
> -    return PARSE_FAIL;
> -
> -  if (! aarch64_check_reg_type (reg, REG_TYPE_R_Z_SP))
> -    return PARSE_FAIL;
> +    return NULL;
>  
>    switch (reg->type)
>      {
> +    case REG_TYPE_R_32:
>      case REG_TYPE_SP_32:
> -    case REG_TYPE_SP_64:
> -      if (reject_sp)
> -	return PARSE_FAIL;
> -      *isreg32 = reg->type == REG_TYPE_SP_32;
> -      *isregzero = 0;
> +    case REG_TYPE_Z_32:
> +      *qualifier = AARCH64_OPND_QLF_W;
>        break;
> -    case REG_TYPE_R_32:
> +
>      case REG_TYPE_R_64:
> -      *isreg32 = reg->type == REG_TYPE_R_32;
> -      *isregzero = 0;
> -      break;
> -    case REG_TYPE_Z_32:
> +    case REG_TYPE_SP_64:
>      case REG_TYPE_Z_64:
> -      if (reject_rz)
> -	return PARSE_FAIL;
> -      *isreg32 = reg->type == REG_TYPE_Z_32;
> -      *isregzero = 1;
> +      *qualifier = AARCH64_OPND_QLF_X;
>        break;
> +
>      default:
> -      return PARSE_FAIL;
> +      return NULL;
>      }
>  
>    *ccp = str;
>  
> -  return reg->number;
> +  return reg;
>  }
>  
>  /* Parse the qualifier of a SIMD vector register or a SIMD vector element.
> @@ -3032,13 +3017,13 @@ static bfd_boolean
>  parse_shifter_operand (char **str, aarch64_opnd_info *operand,
>  		       enum parse_shift_mode mode)
>  {
> -  int reg;
> -  int isreg32, isregzero;
> +  const reg_entry *reg;
> +  aarch64_opnd_qualifier_t qualifier;
>    enum aarch64_operand_class opd_class
>      = aarch64_get_operand_class (operand->type);
>  
> -  if ((reg =
> -       aarch64_reg_parse_32_64 (str, 0, 0, &isreg32, &isregzero)) != PARSE_FAIL)
> +  reg = aarch64_reg_parse_32_64 (str, &qualifier);
> +  if (reg)
>      {
>        if (opd_class == AARCH64_OPND_CLASS_IMMEDIATE)
>  	{
> @@ -3046,14 +3031,14 @@ parse_shifter_operand (char **str, aarch64_opnd_info *operand,
>  	  return FALSE;
>  	}
>  
> -      if (!isregzero && reg == REG_SP)
> +      if (!aarch64_check_reg_type (reg, REG_TYPE_R_Z))
>  	{
> -	  set_syntax_error (BAD_SP);
> +	  set_syntax_error (_(get_reg_expected_msg (REG_TYPE_R_Z)));
>  	  return FALSE;
>  	}
>  
> -      operand->reg.regno = reg;
> -      operand->qualifier = isreg32 ? AARCH64_OPND_QLF_W : AARCH64_OPND_QLF_X;
> +      operand->reg.regno = reg->number;
> +      operand->qualifier = qualifier;
>  
>        /* Accept optional shift operation on register.  */
>        if (! skip_past_comma (str))
> @@ -3192,8 +3177,9 @@ parse_address_main (char **str, aarch64_opnd_info *operand, int reloc,
>  		    int accept_reg_post_index)
>  {
>    char *p = *str;
> -  int reg;
> -  int isreg32, isregzero;
> +  const reg_entry *reg;
> +  aarch64_opnd_qualifier_t base_qualifier;
> +  aarch64_opnd_qualifier_t offset_qualifier;
>    expressionS *exp = &inst.reloc.exp;
>  
>    if (! skip_past_char (&p, '['))
> @@ -3270,14 +3256,13 @@ parse_address_main (char **str, aarch64_opnd_info *operand, int reloc,
>  
>    /* [ */
>  
> -  /* Accept SP and reject ZR */
> -  reg = aarch64_reg_parse_32_64 (&p, 0, 1, &isreg32, &isregzero);
> -  if (reg == PARSE_FAIL || isreg32)
> +  reg = aarch64_reg_parse_32_64 (&p, &base_qualifier);
> +  if (!reg || !aarch64_check_reg_type (reg, REG_TYPE_R64_SP))
>      {
> -      set_syntax_error (_(get_reg_expected_msg (REG_TYPE_R_64)));
> +      set_syntax_error (_(get_reg_expected_msg (REG_TYPE_R64_SP)));
>        return FALSE;
>      }
> -  operand->addr.base_regno = reg;
> +  operand->addr.base_regno = reg->number;
>  
>    /* [Xn */
>    if (skip_past_comma (&p))
> @@ -3285,12 +3270,17 @@ parse_address_main (char **str, aarch64_opnd_info *operand, int reloc,
>        /* [Xn, */
>        operand->addr.preind = 1;
>  
> -      /* Reject SP and accept ZR */
> -      reg = aarch64_reg_parse_32_64 (&p, 1, 0, &isreg32, &isregzero);
> -      if (reg != PARSE_FAIL)
> +      reg = aarch64_reg_parse_32_64 (&p, &offset_qualifier);
> +      if (reg)
>  	{
> +	  if (!aarch64_check_reg_type (reg, REG_TYPE_R_Z))
> +	    {
> +	      set_syntax_error (_(get_reg_expected_msg (REG_TYPE_R_Z)));
> +	      return FALSE;
> +	    }
> +
>  	  /* [Xn,Rm  */
> -	  operand->addr.offset.regno = reg;
> +	  operand->addr.offset.regno = reg->number;
>  	  operand->addr.offset.is_reg = 1;
>  	  /* Shifted index.  */
>  	  if (skip_past_comma (&p))
> @@ -3309,13 +3299,13 @@ parse_address_main (char **str, aarch64_opnd_info *operand, int reloc,
>  	      || operand->shifter.kind == AARCH64_MOD_LSL
>  	      || operand->shifter.kind == AARCH64_MOD_SXTX)
>  	    {
> -	      if (isreg32)
> +	      if (offset_qualifier == AARCH64_OPND_QLF_W)
>  		{
>  		  set_syntax_error (_("invalid use of 32-bit register offset"));
>  		  return FALSE;
>  		}
>  	    }
> -	  else if (!isreg32)
> +	  else if (offset_qualifier == AARCH64_OPND_QLF_X)
>  	    {
>  	      set_syntax_error (_("invalid use of 64-bit register offset"));
>  	      return FALSE;
> @@ -3399,16 +3389,16 @@ parse_address_main (char **str, aarch64_opnd_info *operand, int reloc,
>  	}
>  
>        if (accept_reg_post_index
> -	  && (reg = aarch64_reg_parse_32_64 (&p, 1, 1, &isreg32,
> -					     &isregzero)) != PARSE_FAIL)
> +	  && (reg = aarch64_reg_parse_32_64 (&p, &offset_qualifier)))
>  	{
>  	  /* [Xn],Xm */
> -	  if (isreg32)
> +	  if (!aarch64_check_reg_type (reg, REG_TYPE_R_64))
>  	    {
> -	      set_syntax_error (_("invalid 32-bit register offset"));
> +	      set_syntax_error (_(get_reg_expected_msg (REG_TYPE_R_64)));
>  	      return FALSE;
>  	    }
> -	  operand->addr.offset.regno = reg;
> +
> +	  operand->addr.offset.regno = reg->number;
>  	  operand->addr.offset.is_reg = 1;
>  	}
>        else if (! my_get_expression (exp, &p, GE_OPT_PREFIX, 1))
> @@ -3723,19 +3713,15 @@ parse_sys_ins_reg (char **str, struct hash_control *sys_ins_regs)
>        }								\
>    } while (0)
>  
> -#define po_int_reg_or_fail(reject_sp, reject_rz) do {		\
> -    val = aarch64_reg_parse_32_64 (&str, reject_sp, reject_rz,	\
> -                                   &isreg32, &isregzero);	\
> -    if (val == PARSE_FAIL)					\
> +#define po_int_reg_or_fail(reg_type) do {			\
> +    reg = aarch64_reg_parse_32_64 (&str, &qualifier);		\
> +    if (!reg || !aarch64_check_reg_type (reg, reg_type))	\
>        {								\
>  	set_default_error ();					\
>  	goto failure;						\
>        }								\
> -    info->reg.regno = val;					\
> -    if (isreg32)						\
> -      info->qualifier = AARCH64_OPND_QLF_W;			\
> -    else							\
> -      info->qualifier = AARCH64_OPND_QLF_X;			\
> +    info->reg.regno = reg->number;				\
> +    info->qualifier = qualifier;				\
>    } while (0)
>  
>  #define po_imm_nc_or_fail() do {				\
> @@ -4993,10 +4979,11 @@ parse_operands (char *str, const aarch64_opcode *opcode)
>    for (i = 0; operands[i] != AARCH64_OPND_NIL; i++)
>      {
>        int64_t val;
> -      int isreg32, isregzero;
> +      const reg_entry *reg;
>        int comma_skipped_p = 0;
>        aarch64_reg_type rtype;
>        struct vector_type_el vectype;
> +      aarch64_opnd_qualifier_t qualifier;
>        aarch64_opnd_info *info = &inst.base.operands[i];
>  
>        DEBUG_TRACE ("parse operand %d", i);
> @@ -5032,12 +5019,12 @@ parse_operands (char *str, const aarch64_opcode *opcode)
>  	case AARCH64_OPND_Ra:
>  	case AARCH64_OPND_Rt_SYS:
>  	case AARCH64_OPND_PAIRREG:
> -	  po_int_reg_or_fail (1, 0);
> +	  po_int_reg_or_fail (REG_TYPE_R_Z);
>  	  break;
>  
>  	case AARCH64_OPND_Rd_SP:
>  	case AARCH64_OPND_Rn_SP:
> -	  po_int_reg_or_fail (0, 1);
> +	  po_int_reg_or_fail (REG_TYPE_R_SP);
>  	  break;
>  
>  	case AARCH64_OPND_Rm_EXT:
> @@ -5498,24 +5485,39 @@ parse_operands (char *str, const aarch64_opcode *opcode)
>  
>  	case AARCH64_OPND_ADDR_SIMPLE:
>  	case AARCH64_OPND_SIMD_ADDR_SIMPLE:
> -	  /* [<Xn|SP>{, #<simm>}]  */
> -	  po_char_or_fail ('[');
> -	  po_reg_or_fail (REG_TYPE_R64_SP);
> -	  /* Accept optional ", #0".  */
> -	  if (operands[i] == AARCH64_OPND_ADDR_SIMPLE
> -	      && skip_past_char (&str, ','))
> -	    {
> -	      skip_past_char (&str, '#');
> -	      if (! skip_past_char (&str, '0'))
> -		{
> -		  set_fatal_syntax_error
> -		    (_("the optional immediate offset can only be 0"));
> -		  goto failure;
> -		}
> -	    }
> -	  po_char_or_fail (']');
> -	  info->addr.base_regno = val;
> -	  break;
> +	  {
> +	    /* [<Xn|SP>{, #<simm>}]  */
> +	    char *start = str;
> +	    /* First use the normal address-parsing routines, to get
> +	       the usual syntax errors.  */
> +	    po_misc_or_fail (parse_address (&str, info, 0));
> +	    if (info->addr.pcrel || info->addr.offset.is_reg
> +		|| !info->addr.preind || info->addr.postind
> +		|| info->addr.writeback)
> +	      {
> +		set_syntax_error (_("invalid addressing mode"));
> +		goto failure;
> +	      }
> +
> +	    /* Then retry, matching the specific syntax of these addresses.  */
> +	    str = start;
> +	    po_char_or_fail ('[');
> +	    po_reg_or_fail (REG_TYPE_R64_SP);
> +	    /* Accept optional ", #0".  */
> +	    if (operands[i] == AARCH64_OPND_ADDR_SIMPLE
> +		&& skip_past_char (&str, ','))
> +	      {
> +		skip_past_char (&str, '#');
> +		if (! skip_past_char (&str, '0'))
> +		  {
> +		    set_fatal_syntax_error
> +		      (_("the optional immediate offset can only be 0"));
> +		    goto failure;
> +		  }
> +	      }
> +	    po_char_or_fail (']');
> +	    break;
> +	  }
>  
>  	case AARCH64_OPND_ADDR_REGOFF:
>  	  /* [<Xn|SP>, <R><m>{, <extend> {<amount>}}]  */
> diff --git a/gas/testsuite/gas/aarch64/diagnostic.l b/gas/testsuite/gas/aarch64/diagnostic.l
> index 67ef484..ef23577 100644
> --- a/gas/testsuite/gas/aarch64/diagnostic.l
> +++ b/gas/testsuite/gas/aarch64/diagnostic.l
> @@ -54,7 +54,7 @@
>  [^:]*:56: Error: operand 2 should be a floating-point register -- `fcmp d0,x0'
>  [^:]*:57: Error: immediate zero expected at operand 3 -- `cmgt v0.4s,v2.4s,#1'
>  [^:]*:58: Error: unexpected characters following instruction at operand 2 -- `fmov d3,1.00,lsl#3'
> -[^:]*:59: Error: writeback value should be an immediate constant at operand 2 -- `st2 {v0.4s,v1.4s},\[sp\],sp'
> +[^:]*:59: Error: integer 64-bit register expected at operand 2 -- `st2 {v0.4s,v1.4s},\[sp\],sp'
>  [^:]*:60: Error: writeback value should be an immediate constant at operand 2 -- `st2 {v0.4s,v1.4s},\[sp\],zr'
>  [^:]*:61: Error: invalid shift for the register offset addressing mode at operand 2 -- `ldr q0,\[x0,w0,lsr#4\]'
>  [^:]*:62: Error: only 'LSL' shift is permitted at operand 3 -- `adds x1,sp,2134,uxtw#12'
> @@ -116,10 +116,10 @@
>  [^:]*:125: Warning: unpredictable transfer with writeback -- `ldp x0,x1,\[x1\],#16'
>  [^:]*:126: Error: this relocation modifier is not allowed on this instruction at operand 2 -- `adr x2,:got:s1'
>  [^:]*:127: Error: this relocation modifier is not allowed on this instruction at operand 2 -- `ldr x0,\[x0,:got:s1\]'
> -[^:]*:130: Error: integer 64-bit register expected at operand 2 -- `ldr x1,\[wsp,#8\]!'
> -[^:]*:131: Error: integer 64-bit register expected at operand 3 -- `ldp x6,x29,\[w7,#8\]!'
> -[^:]*:132: Error: integer 64-bit register expected at operand 2 -- `str x30,\[w11,#8\]!'
> -[^:]*:133: Error: integer 64-bit register expected at operand 3 -- `stp x8,x27,\[wsp,#8\]!'
> +[^:]*:130: Error: 64-bit integer or SP register expected at operand 2 -- `ldr x1,\[wsp,#8\]!'
> +[^:]*:131: Error: 64-bit integer or SP register expected at operand 3 -- `ldp x6,x29,\[w7,#8\]!'
> +[^:]*:132: Error: 64-bit integer or SP register expected at operand 2 -- `str x30,\[w11,#8\]!'
> +[^:]*:133: Error: 64-bit integer or SP register expected at operand 3 -- `stp x8,x27,\[wsp,#8\]!'
>  [^:]*:213: Error: register element index out of range 0 to 1 at operand 2 -- `dup v0\.2d,v1\.2d\[-1\]'
>  [^:]*:216: Error: register element index out of range 0 to 1 at operand 2 -- `dup v0\.2d,v1\.2d\[2\]'
>  [^:]*:217: Error: register element index out of range 0 to 1 at operand 2 -- `dup v0\.2d,v1\.2d\[64\]'
> @@ -148,3 +148,5 @@
>  [^:]*:262: Error: invalid floating-point constant at operand 2 -- `fmov d0,#-2'
>  [^:]*:263: Error: invalid floating-point constant at operand 2 -- `fmov s0,2'
>  [^:]*:264: Error: invalid floating-point constant at operand 2 -- `fmov s0,-2'
> +[^:]*:266: Error: integer 64-bit register expected at operand 2 -- `st2 {v0.4s,v1.4s},\[sp\],xzr'
> +[^:]*:267: Error: integer or zero register expected at operand 2 -- `str x1,\[x2,sp\]'
> diff --git a/gas/testsuite/gas/aarch64/diagnostic.s b/gas/testsuite/gas/aarch64/diagnostic.s
> index 3092b9b..8dbb542 100644
> --- a/gas/testsuite/gas/aarch64/diagnostic.s
> +++ b/gas/testsuite/gas/aarch64/diagnostic.s
> @@ -262,3 +262,6 @@
>  	fmov	d0, #-2
>  	fmov	s0, 2
>  	fmov	s0, -2
> +
> +	st2	{v0.4s, v1.4s}, [sp], xzr
> +	str	x1, [x2, sp]
> diff --git a/gas/testsuite/gas/aarch64/illegal-lse.l b/gas/testsuite/gas/aarch64/illegal-lse.l
> index ed70065..dd57f99 100644
> --- a/gas/testsuite/gas/aarch64/illegal-lse.l
> +++ b/gas/testsuite/gas/aarch64/illegal-lse.l
> @@ -1,433 +1,433 @@
>  [^:]*: Assembler messages:
>  [^:]*:68: Error: operand mismatch -- `cas w0,x1,\[x2\]'
> -[^:]*:68: Error: operand 3 should be an address with base register \(no offset\) -- `cas w2,w3,\[w4\]'
> +[^:]*:68: Error: 64-bit integer or SP register expected at operand 3 -- `cas w2,w3,\[w4\]'
>  [^:]*:68: Error: operand mismatch -- `casa w0,x1,\[x2\]'
> -[^:]*:68: Error: operand 3 should be an address with base register \(no offset\) -- `casa w2,w3,\[w4\]'
> +[^:]*:68: Error: 64-bit integer or SP register expected at operand 3 -- `casa w2,w3,\[w4\]'
>  [^:]*:68: Error: operand mismatch -- `casl w0,x1,\[x2\]'
> -[^:]*:68: Error: operand 3 should be an address with base register \(no offset\) -- `casl w2,w3,\[w4\]'
> +[^:]*:68: Error: 64-bit integer or SP register expected at operand 3 -- `casl w2,w3,\[w4\]'
>  [^:]*:68: Error: operand mismatch -- `casal w0,x1,\[x2\]'
> -[^:]*:68: Error: operand 3 should be an address with base register \(no offset\) -- `casal w2,w3,\[w4\]'
> +[^:]*:68: Error: 64-bit integer or SP register expected at operand 3 -- `casal w2,w3,\[w4\]'
>  [^:]*:68: Error: operand mismatch -- `casb w0,x1,\[x2\]'
> -[^:]*:68: Error: operand 3 should be an address with base register \(no offset\) -- `casb w2,w3,\[w4\]'
> +[^:]*:68: Error: 64-bit integer or SP register expected at operand 3 -- `casb w2,w3,\[w4\]'
>  [^:]*:68: Error: operand mismatch -- `cash w0,x1,\[x2\]'
> -[^:]*:68: Error: operand 3 should be an address with base register \(no offset\) -- `cash w2,w3,\[w4\]'
> +[^:]*:68: Error: 64-bit integer or SP register expected at operand 3 -- `cash w2,w3,\[w4\]'
>  [^:]*:68: Error: operand mismatch -- `casab w0,x1,\[x2\]'
> -[^:]*:68: Error: operand 3 should be an address with base register \(no offset\) -- `casab w2,w3,\[w4\]'
> +[^:]*:68: Error: 64-bit integer or SP register expected at operand 3 -- `casab w2,w3,\[w4\]'
>  [^:]*:68: Error: operand mismatch -- `caslb w0,x1,\[x2\]'
> -[^:]*:68: Error: operand 3 should be an address with base register \(no offset\) -- `caslb w2,w3,\[w4\]'
> +[^:]*:68: Error: 64-bit integer or SP register expected at operand 3 -- `caslb w2,w3,\[w4\]'
>  [^:]*:68: Error: operand mismatch -- `casalb w0,x1,\[x2\]'
> -[^:]*:68: Error: operand 3 should be an address with base register \(no offset\) -- `casalb w2,w3,\[w4\]'
> +[^:]*:68: Error: 64-bit integer or SP register expected at operand 3 -- `casalb w2,w3,\[w4\]'
>  [^:]*:68: Error: operand mismatch -- `casah w0,x1,\[x2\]'
> -[^:]*:68: Error: operand 3 should be an address with base register \(no offset\) -- `casah w2,w3,\[w4\]'
> +[^:]*:68: Error: 64-bit integer or SP register expected at operand 3 -- `casah w2,w3,\[w4\]'
>  [^:]*:68: Error: operand mismatch -- `caslh w0,x1,\[x2\]'
> -[^:]*:68: Error: operand 3 should be an address with base register \(no offset\) -- `caslh w2,w3,\[w4\]'
> +[^:]*:68: Error: 64-bit integer or SP register expected at operand 3 -- `caslh w2,w3,\[w4\]'
>  [^:]*:68: Error: operand mismatch -- `casalh w0,x1,\[x2\]'
> -[^:]*:68: Error: operand 3 should be an address with base register \(no offset\) -- `casalh w2,w3,\[w4\]'
> +[^:]*:68: Error: 64-bit integer or SP register expected at operand 3 -- `casalh w2,w3,\[w4\]'
>  [^:]*:68: Error: operand mismatch -- `cas w0,x1,\[x2\]'
> -[^:]*:68: Error: operand 3 should be an address with base register \(no offset\) -- `cas x2,x3,\[w4\]'
> +[^:]*:68: Error: 64-bit integer or SP register expected at operand 3 -- `cas x2,x3,\[w4\]'
>  [^:]*:68: Error: operand mismatch -- `casa w0,x1,\[x2\]'
> -[^:]*:68: Error: operand 3 should be an address with base register \(no offset\) -- `casa x2,x3,\[w4\]'
> +[^:]*:68: Error: 64-bit integer or SP register expected at operand 3 -- `casa x2,x3,\[w4\]'
>  [^:]*:68: Error: operand mismatch -- `casl w0,x1,\[x2\]'
> -[^:]*:68: Error: operand 3 should be an address with base register \(no offset\) -- `casl x2,x3,\[w4\]'
> +[^:]*:68: Error: 64-bit integer or SP register expected at operand 3 -- `casl x2,x3,\[w4\]'
>  [^:]*:68: Error: operand mismatch -- `casal w0,x1,\[x2\]'
> -[^:]*:68: Error: operand 3 should be an address with base register \(no offset\) -- `casal x2,x3,\[w4\]'
> +[^:]*:68: Error: 64-bit integer or SP register expected at operand 3 -- `casal x2,x3,\[w4\]'
>  [^:]*:69: Error: operand mismatch -- `swp w0,x1,\[x2\]'
> -[^:]*:69: Error: operand 3 should be an address with base register \(no offset\) -- `swp w2,w3,\[w4\]'
> +[^:]*:69: Error: 64-bit integer or SP register expected at operand 3 -- `swp w2,w3,\[w4\]'
>  [^:]*:69: Error: operand mismatch -- `swpa w0,x1,\[x2\]'
> -[^:]*:69: Error: operand 3 should be an address with base register \(no offset\) -- `swpa w2,w3,\[w4\]'
> +[^:]*:69: Error: 64-bit integer or SP register expected at operand 3 -- `swpa w2,w3,\[w4\]'
>  [^:]*:69: Error: operand mismatch -- `swpl w0,x1,\[x2\]'
> -[^:]*:69: Error: operand 3 should be an address with base register \(no offset\) -- `swpl w2,w3,\[w4\]'
> +[^:]*:69: Error: 64-bit integer or SP register expected at operand 3 -- `swpl w2,w3,\[w4\]'
>  [^:]*:69: Error: operand mismatch -- `swpal w0,x1,\[x2\]'
> -[^:]*:69: Error: operand 3 should be an address with base register \(no offset\) -- `swpal w2,w3,\[w4\]'
> +[^:]*:69: Error: 64-bit integer or SP register expected at operand 3 -- `swpal w2,w3,\[w4\]'
>  [^:]*:69: Error: operand mismatch -- `swpb w0,x1,\[x2\]'
> -[^:]*:69: Error: operand 3 should be an address with base register \(no offset\) -- `swpb w2,w3,\[w4\]'
> +[^:]*:69: Error: 64-bit integer or SP register expected at operand 3 -- `swpb w2,w3,\[w4\]'
>  [^:]*:69: Error: operand mismatch -- `swph w0,x1,\[x2\]'
> -[^:]*:69: Error: operand 3 should be an address with base register \(no offset\) -- `swph w2,w3,\[w4\]'
> +[^:]*:69: Error: 64-bit integer or SP register expected at operand 3 -- `swph w2,w3,\[w4\]'
>  [^:]*:69: Error: operand mismatch -- `swpab w0,x1,\[x2\]'
> -[^:]*:69: Error: operand 3 should be an address with base register \(no offset\) -- `swpab w2,w3,\[w4\]'
> +[^:]*:69: Error: 64-bit integer or SP register expected at operand 3 -- `swpab w2,w3,\[w4\]'
>  [^:]*:69: Error: operand mismatch -- `swplb w0,x1,\[x2\]'
> -[^:]*:69: Error: operand 3 should be an address with base register \(no offset\) -- `swplb w2,w3,\[w4\]'
> +[^:]*:69: Error: 64-bit integer or SP register expected at operand 3 -- `swplb w2,w3,\[w4\]'
>  [^:]*:69: Error: operand mismatch -- `swpalb w0,x1,\[x2\]'
> -[^:]*:69: Error: operand 3 should be an address with base register \(no offset\) -- `swpalb w2,w3,\[w4\]'
> +[^:]*:69: Error: 64-bit integer or SP register expected at operand 3 -- `swpalb w2,w3,\[w4\]'
>  [^:]*:69: Error: operand mismatch -- `swpah w0,x1,\[x2\]'
> -[^:]*:69: Error: operand 3 should be an address with base register \(no offset\) -- `swpah w2,w3,\[w4\]'
> +[^:]*:69: Error: 64-bit integer or SP register expected at operand 3 -- `swpah w2,w3,\[w4\]'
>  [^:]*:69: Error: operand mismatch -- `swplh w0,x1,\[x2\]'
> -[^:]*:69: Error: operand 3 should be an address with base register \(no offset\) -- `swplh w2,w3,\[w4\]'
> +[^:]*:69: Error: 64-bit integer or SP register expected at operand 3 -- `swplh w2,w3,\[w4\]'
>  [^:]*:69: Error: operand mismatch -- `swpalh w0,x1,\[x2\]'
> -[^:]*:69: Error: operand 3 should be an address with base register \(no offset\) -- `swpalh w2,w3,\[w4\]'
> +[^:]*:69: Error: 64-bit integer or SP register expected at operand 3 -- `swpalh w2,w3,\[w4\]'
>  [^:]*:69: Error: operand mismatch -- `swp w0,x1,\[x2\]'
> -[^:]*:69: Error: operand 3 should be an address with base register \(no offset\) -- `swp x2,x3,\[w4\]'
> +[^:]*:69: Error: 64-bit integer or SP register expected at operand 3 -- `swp x2,x3,\[w4\]'
>  [^:]*:69: Error: operand mismatch -- `swpa w0,x1,\[x2\]'
> -[^:]*:69: Error: operand 3 should be an address with base register \(no offset\) -- `swpa x2,x3,\[w4\]'
> +[^:]*:69: Error: 64-bit integer or SP register expected at operand 3 -- `swpa x2,x3,\[w4\]'
>  [^:]*:69: Error: operand mismatch -- `swpl w0,x1,\[x2\]'
> -[^:]*:69: Error: operand 3 should be an address with base register \(no offset\) -- `swpl x2,x3,\[w4\]'
> +[^:]*:69: Error: 64-bit integer or SP register expected at operand 3 -- `swpl x2,x3,\[w4\]'
>  [^:]*:69: Error: operand mismatch -- `swpal w0,x1,\[x2\]'
> -[^:]*:69: Error: operand 3 should be an address with base register \(no offset\) -- `swpal x2,x3,\[w4\]'
> +[^:]*:69: Error: 64-bit integer or SP register expected at operand 3 -- `swpal x2,x3,\[w4\]'
>  [^:]*:70: Error: reg pair must start from even reg at operand 1 -- `casp w1,w1,w2,w3,\[x5\]'
>  [^:]*:70: Error: reg pair must be contiguous at operand 2 -- `casp w4,w4,w6,w7,\[sp\]'
>  [^:]*:70: Error: operand mismatch -- `casp w0,x1,x2,x3,\[x2\]'
> -[^:]*:70: Error: operand 5 should be an address with base register \(no offset\) -- `casp x4,x5,x6,x7,\[w8\]'
> +[^:]*:70: Error: 64-bit integer or SP register expected at operand 5 -- `casp x4,x5,x6,x7,\[w8\]'
>  [^:]*:70: Error: reg pair must start from even reg at operand 1 -- `caspa w1,w1,w2,w3,\[x5\]'
>  [^:]*:70: Error: reg pair must be contiguous at operand 2 -- `caspa w4,w4,w6,w7,\[sp\]'
>  [^:]*:70: Error: operand mismatch -- `caspa w0,x1,x2,x3,\[x2\]'
> -[^:]*:70: Error: operand 5 should be an address with base register \(no offset\) -- `caspa x4,x5,x6,x7,\[w8\]'
> +[^:]*:70: Error: 64-bit integer or SP register expected at operand 5 -- `caspa x4,x5,x6,x7,\[w8\]'
>  [^:]*:70: Error: reg pair must start from even reg at operand 1 -- `caspl w1,w1,w2,w3,\[x5\]'
>  [^:]*:70: Error: reg pair must be contiguous at operand 2 -- `caspl w4,w4,w6,w7,\[sp\]'
>  [^:]*:70: Error: operand mismatch -- `caspl w0,x1,x2,x3,\[x2\]'
> -[^:]*:70: Error: operand 5 should be an address with base register \(no offset\) -- `caspl x4,x5,x6,x7,\[w8\]'
> +[^:]*:70: Error: 64-bit integer or SP register expected at operand 5 -- `caspl x4,x5,x6,x7,\[w8\]'
>  [^:]*:70: Error: reg pair must start from even reg at operand 1 -- `caspal w1,w1,w2,w3,\[x5\]'
>  [^:]*:70: Error: reg pair must be contiguous at operand 2 -- `caspal w4,w4,w6,w7,\[sp\]'
>  [^:]*:70: Error: operand mismatch -- `caspal w0,x1,x2,x3,\[x2\]'
> -[^:]*:70: Error: operand 5 should be an address with base register \(no offset\) -- `caspal x4,x5,x6,x7,\[w8\]'
> +[^:]*:70: Error: 64-bit integer or SP register expected at operand 5 -- `caspal x4,x5,x6,x7,\[w8\]'
>  [^:]*:71: Error: operand mismatch -- `ldadd w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldadd w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldadd w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldadda w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldadda w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldadda w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldaddl w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldaddl w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldaddl w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldaddal w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldaddal w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldaddal w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldaddb w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldaddb w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldaddb w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldaddh w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldaddh w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldaddh w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldaddab w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldaddab w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldaddab w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldaddlb w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldaddlb w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldaddlb w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldaddalb w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldaddalb w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldaddalb w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldaddah w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldaddah w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldaddah w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldaddlh w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldaddlh w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldaddlh w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldaddalh w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldaddalh w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldaddalh w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldadd w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldadd x2,x3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldadd x2,x3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldadda w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldadda x2,x3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldadda x2,x3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldaddl w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldaddl x2,x3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldaddl x2,x3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldaddal w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldaddal x2,x3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldaddal x2,x3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldclr w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldclr w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldclr w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldclra w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldclra w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldclra w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldclrl w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldclrl w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldclrl w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldclral w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldclral w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldclral w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldclrb w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldclrb w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldclrb w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldclrh w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldclrh w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldclrh w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldclrab w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldclrab w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldclrab w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldclrlb w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldclrlb w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldclrlb w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldclralb w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldclralb w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldclralb w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldclrah w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldclrah w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldclrah w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldclrlh w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldclrlh w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldclrlh w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldclralh w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldclralh w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldclralh w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldclr w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldclr x2,x3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldclr x2,x3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldclra w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldclra x2,x3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldclra x2,x3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldclrl w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldclrl x2,x3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldclrl x2,x3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldclral w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldclral x2,x3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldclral x2,x3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldeor w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldeor w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldeor w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldeora w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldeora w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldeora w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldeorl w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldeorl w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldeorl w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldeoral w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldeoral w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldeoral w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldeorb w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldeorb w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldeorb w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldeorh w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldeorh w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldeorh w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldeorab w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldeorab w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldeorab w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldeorlb w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldeorlb w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldeorlb w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldeoralb w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldeoralb w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldeoralb w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldeorah w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldeorah w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldeorah w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldeorlh w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldeorlh w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldeorlh w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldeoralh w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldeoralh w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldeoralh w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldeor w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldeor x2,x3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldeor x2,x3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldeora w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldeora x2,x3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldeora x2,x3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldeorl w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldeorl x2,x3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldeorl x2,x3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldeoral w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldeoral x2,x3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldeoral x2,x3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldset w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldset w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldset w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldseta w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldseta w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldseta w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldsetl w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsetl w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsetl w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldsetal w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsetal w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsetal w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldsetb w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsetb w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsetb w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldseth w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldseth w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldseth w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldsetab w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsetab w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsetab w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldsetlb w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsetlb w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsetlb w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldsetalb w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsetalb w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsetalb w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldsetah w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsetah w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsetah w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldsetlh w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsetlh w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsetlh w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldsetalh w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsetalh w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsetalh w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldset w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldset x2,x3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldset x2,x3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldseta w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldseta x2,x3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldseta x2,x3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldsetl w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsetl x2,x3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsetl x2,x3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldsetal w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsetal x2,x3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsetal x2,x3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldsmax w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsmax w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsmax w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldsmaxa w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsmaxa w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsmaxa w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldsmaxl w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsmaxl w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsmaxl w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldsmaxal w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsmaxal w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsmaxal w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldsmaxb w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsmaxb w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsmaxb w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldsmaxh w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsmaxh w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsmaxh w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldsmaxab w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsmaxab w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsmaxab w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldsmaxlb w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsmaxlb w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsmaxlb w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldsmaxalb w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsmaxalb w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsmaxalb w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldsmaxah w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsmaxah w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsmaxah w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldsmaxlh w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsmaxlh w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsmaxlh w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldsmaxalh w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsmaxalh w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsmaxalh w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldsmax w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsmax x2,x3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsmax x2,x3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldsmaxa w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsmaxa x2,x3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsmaxa x2,x3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldsmaxl w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsmaxl x2,x3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsmaxl x2,x3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldsmaxal w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsmaxal x2,x3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsmaxal x2,x3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldsmin w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsmin w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsmin w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldsmina w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsmina w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsmina w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldsminl w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsminl w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsminl w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldsminal w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsminal w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsminal w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldsminb w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsminb w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsminb w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldsminh w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsminh w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsminh w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldsminab w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsminab w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsminab w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldsminlb w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsminlb w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsminlb w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldsminalb w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsminalb w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsminalb w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldsminah w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsminah w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsminah w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldsminlh w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsminlh w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsminlh w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldsminalh w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsminalh w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsminalh w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldsmin w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsmin x2,x3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsmin x2,x3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldsmina w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsmina x2,x3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsmina x2,x3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldsminl w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsminl x2,x3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsminl x2,x3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldsminal w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldsminal x2,x3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldsminal x2,x3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldumax w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldumax w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldumax w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldumaxa w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldumaxa w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldumaxa w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldumaxl w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldumaxl w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldumaxl w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldumaxal w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldumaxal w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldumaxal w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldumaxb w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldumaxb w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldumaxb w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldumaxh w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldumaxh w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldumaxh w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldumaxab w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldumaxab w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldumaxab w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldumaxlb w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldumaxlb w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldumaxlb w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldumaxalb w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldumaxalb w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldumaxalb w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldumaxah w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldumaxah w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldumaxah w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldumaxlh w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldumaxlh w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldumaxlh w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldumaxalh w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldumaxalh w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldumaxalh w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldumax w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldumax x2,x3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldumax x2,x3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldumaxa w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldumaxa x2,x3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldumaxa x2,x3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldumaxl w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldumaxl x2,x3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldumaxl x2,x3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldumaxal w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldumaxal x2,x3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldumaxal x2,x3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldumin w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldumin w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldumin w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldumina w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldumina w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldumina w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `lduminl w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `lduminl w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `lduminl w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `lduminal w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `lduminal w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `lduminal w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `lduminb w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `lduminb w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `lduminb w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `lduminh w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `lduminh w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `lduminh w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `lduminab w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `lduminab w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `lduminab w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `lduminlb w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `lduminlb w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `lduminlb w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `lduminalb w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `lduminalb w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `lduminalb w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `lduminah w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `lduminah w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `lduminah w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `lduminlh w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `lduminlh w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `lduminlh w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `lduminalh w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `lduminalh w2,w3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `lduminalh w2,w3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldumin w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldumin x2,x3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldumin x2,x3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `ldumina w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `ldumina x2,x3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `ldumina x2,x3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `lduminl w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `lduminl x2,x3,\[w4\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `lduminl x2,x3,\[w4\]'
>  [^:]*:71: Error: operand mismatch -- `lduminal w0,x1,\[x2\]'
> -[^:]*:71: Error: operand 3 should be an address with base register \(no offset\) -- `lduminal x2,x3,\[w4\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stadd w2,\[w3\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `staddl w2,\[w3\]'
> +[^:]*:71: Error: 64-bit integer or SP register expected at operand 3 -- `lduminal x2,x3,\[w4\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stadd w2,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `staddl w2,\[w3\]'
>  [^:]*:72: Error: operand mismatch -- `staddb x0,\[x2\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `staddb w2,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `staddb w2,\[w3\]'
>  [^:]*:72: Error: operand mismatch -- `staddh x0,\[x2\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `staddh w2,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `staddh w2,\[w3\]'
>  [^:]*:72: Error: operand mismatch -- `staddlb x0,\[x2\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `staddlb w2,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `staddlb w2,\[w3\]'
>  [^:]*:72: Error: operand mismatch -- `staddlh x0,\[x2\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `staddlh w2,\[w3\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stadd x0,\[w3\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `staddl x0,\[w3\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stclr w2,\[w3\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stclrl w2,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `staddlh w2,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stadd x0,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `staddl x0,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stclr w2,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stclrl w2,\[w3\]'
>  [^:]*:72: Error: operand mismatch -- `stclrb x0,\[x2\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stclrb w2,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stclrb w2,\[w3\]'
>  [^:]*:72: Error: operand mismatch -- `stclrh x0,\[x2\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stclrh w2,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stclrh w2,\[w3\]'
>  [^:]*:72: Error: operand mismatch -- `stclrlb x0,\[x2\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stclrlb w2,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stclrlb w2,\[w3\]'
>  [^:]*:72: Error: operand mismatch -- `stclrlh x0,\[x2\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stclrlh w2,\[w3\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stclr x0,\[w3\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stclrl x0,\[w3\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `steor w2,\[w3\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `steorl w2,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stclrlh w2,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stclr x0,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stclrl x0,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `steor w2,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `steorl w2,\[w3\]'
>  [^:]*:72: Error: operand mismatch -- `steorb x0,\[x2\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `steorb w2,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `steorb w2,\[w3\]'
>  [^:]*:72: Error: operand mismatch -- `steorh x0,\[x2\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `steorh w2,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `steorh w2,\[w3\]'
>  [^:]*:72: Error: operand mismatch -- `steorlb x0,\[x2\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `steorlb w2,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `steorlb w2,\[w3\]'
>  [^:]*:72: Error: operand mismatch -- `steorlh x0,\[x2\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `steorlh w2,\[w3\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `steor x0,\[w3\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `steorl x0,\[w3\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stset w2,\[w3\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stsetl w2,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `steorlh w2,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `steor x0,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `steorl x0,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stset w2,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stsetl w2,\[w3\]'
>  [^:]*:72: Error: operand mismatch -- `stsetb x0,\[x2\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stsetb w2,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stsetb w2,\[w3\]'
>  [^:]*:72: Error: operand mismatch -- `stseth x0,\[x2\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stseth w2,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stseth w2,\[w3\]'
>  [^:]*:72: Error: operand mismatch -- `stsetlb x0,\[x2\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stsetlb w2,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stsetlb w2,\[w3\]'
>  [^:]*:72: Error: operand mismatch -- `stsetlh x0,\[x2\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stsetlh w2,\[w3\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stset x0,\[w3\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stsetl x0,\[w3\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stsmax w2,\[w3\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stsmaxl w2,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stsetlh w2,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stset x0,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stsetl x0,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stsmax w2,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stsmaxl w2,\[w3\]'
>  [^:]*:72: Error: operand mismatch -- `stsmaxb x0,\[x2\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stsmaxb w2,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stsmaxb w2,\[w3\]'
>  [^:]*:72: Error: operand mismatch -- `stsmaxh x0,\[x2\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stsmaxh w2,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stsmaxh w2,\[w3\]'
>  [^:]*:72: Error: operand mismatch -- `stsmaxlb x0,\[x2\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stsmaxlb w2,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stsmaxlb w2,\[w3\]'
>  [^:]*:72: Error: operand mismatch -- `stsmaxlh x0,\[x2\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stsmaxlh w2,\[w3\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stsmax x0,\[w3\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stsmaxl x0,\[w3\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stsmin w2,\[w3\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stsminl w2,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stsmaxlh w2,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stsmax x0,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stsmaxl x0,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stsmin w2,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stsminl w2,\[w3\]'
>  [^:]*:72: Error: operand mismatch -- `stsminb x0,\[x2\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stsminb w2,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stsminb w2,\[w3\]'
>  [^:]*:72: Error: operand mismatch -- `stsminh x0,\[x2\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stsminh w2,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stsminh w2,\[w3\]'
>  [^:]*:72: Error: operand mismatch -- `stsminlb x0,\[x2\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stsminlb w2,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stsminlb w2,\[w3\]'
>  [^:]*:72: Error: operand mismatch -- `stsminlh x0,\[x2\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stsminlh w2,\[w3\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stsmin x0,\[w3\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stsminl x0,\[w3\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stumax w2,\[w3\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stumaxl w2,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stsminlh w2,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stsmin x0,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stsminl x0,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stumax w2,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stumaxl w2,\[w3\]'
>  [^:]*:72: Error: operand mismatch -- `stumaxb x0,\[x2\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stumaxb w2,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stumaxb w2,\[w3\]'
>  [^:]*:72: Error: operand mismatch -- `stumaxh x0,\[x2\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stumaxh w2,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stumaxh w2,\[w3\]'
>  [^:]*:72: Error: operand mismatch -- `stumaxlb x0,\[x2\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stumaxlb w2,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stumaxlb w2,\[w3\]'
>  [^:]*:72: Error: operand mismatch -- `stumaxlh x0,\[x2\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stumaxlh w2,\[w3\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stumax x0,\[w3\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stumaxl x0,\[w3\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stumin w2,\[w3\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stuminl w2,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stumaxlh w2,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stumax x0,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stumaxl x0,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stumin w2,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stuminl w2,\[w3\]'
>  [^:]*:72: Error: operand mismatch -- `stuminb x0,\[x2\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stuminb w2,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stuminb w2,\[w3\]'
>  [^:]*:72: Error: operand mismatch -- `stuminh x0,\[x2\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stuminh w2,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stuminh w2,\[w3\]'
>  [^:]*:72: Error: operand mismatch -- `stuminlb x0,\[x2\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stuminlb w2,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stuminlb w2,\[w3\]'
>  [^:]*:72: Error: operand mismatch -- `stuminlh x0,\[x2\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stuminlh w2,\[w3\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stumin x0,\[w3\]'
> -[^:]*:72: Error: operand 2 should be an address with base register \(no offset\) -- `stuminl x0,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stuminlh w2,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stumin x0,\[w3\]'
> +[^:]*:72: Error: 64-bit integer or SP register expected at operand 2 -- `stuminl x0,\[w3\]'
> 

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [AArch64][SVE 12/32] Make more use of bfd_boolean
  2016-09-16 11:56     ` Richard Sandiford
@ 2016-09-20 12:39       ` Richard Earnshaw (lists)
  0 siblings, 0 replies; 76+ messages in thread
From: Richard Earnshaw (lists) @ 2016-09-20 12:39 UTC (permalink / raw)
  To: binutils, richard.sandiford

On 16/09/16 12:56, Richard Sandiford wrote:
> "Richard Earnshaw (lists)" <Richard.Earnshaw@arm.com> writes:
>> On 23/08/16 10:13, Richard Sandiford wrote:
>>> Following on from the previous patch, which converted the
>>> aarch64_reg_parse_32_64 parameters to bfd_booleans, this one
>>> does the same for parse_address_main and parse_address.
>>> It also documents the parameters.
>>>
>>> This isn't an attempt to convert the whole file to use bfd_booleans
>>> more often.  It's simply trying to avoid inconsistencies with new
>>> SVE parameters.
>>>
>>> OK to install?
>>>
>>> Thanks,
>>> Richard
>>>
>>>
>>> gas/
>>> 	* config/tc-aarch64.c (parse_address_main): Turn reloc and
>>> 	accept_reg_post_index into bfd_booleans.  Add commentary.
>>> 	(parse_address_reloc): Update accordingly.  Add commentary.
>>> 	(parse_address): Likewise.  Also change accept_reg_post_index
>>> 	into a bfd_boolean here.
>>> 	(parse_operands): Update calls accordingly.
>>
>> My comment on the previous patch applies somewhat here too, although the
>> two bools are not as closely related here.  In particular statements
>> such as
>>
>>   return parse_address_main (str, operand, TRUE, FALSE);
>>
>> are not intuitively obvious to the reader of the code.
> 
> Yeah...
> 
> I think here too we can just get rid of the parameters and leave the
> callers to check the addressing modes.  As it happens, the handling of
> ADDR_SIMM9{,_2} already did this for relocation operators (i.e. it used
> parse_address_reloc and then rejected relocations).
> 
> The callers are already set up to reject invalid register post-indexed
> addressing, so we can simply remove the accept_reg_post_index parameter
> without adding any more checks.  This again creates a corner case where:
> 
> 	.equ	x2, 1
> 	ldr	w0, [x1], x2
> 
> was previously an acceptable way of writing "ldr w0, [x1], #1" but
> is now rejected.

IMO that's a good thing.  I presume users can still write

	ldr	w0, [x1], #x2

in this instance to make it explicit that x2 is a constant.
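
As a minimal illustration (the first form is what the reworked parser
now rejects; the second is the '#'-prefixed spelling presumed above to
keep working):

	.equ	x2, 1
	ldr	w0, [x1], x2	// now rejected: x2 is taken as a register post-index
	ldr	w0, [x1], #x2	// presumed OK: '#' marks x2 as the constant 1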


> 
> Removing the "reloc" parameter means that two cases need to check
> explicitly for relocation operators.
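> For example, the ADDR_SIMM7 case in the patch below now performs the
> check directly:
> 
>   if (inst.reloc.type != BFD_RELOC_UNUSED)
>     {
>       set_syntax_error (_("relocation not allowed"));
>       goto failure;
>     }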
> 
> ADDR_SIMM9_2 appears to be unused.  I'll send a separate patch
> to remove it.
> 
> This patch makes parse_address temporarily equivalent to
> parse_address_main, but later patches in the series will need
> to keep the distinction.
> 
> Tested on aarch64-linux-gnu.  OK to install?
> 

OK.

R.

> Thanks,
> Richard
> 
> 
> gas/
> 	* config/tc-aarch64.c (parse_address_main): Remove reloc and
> 	accept_reg_post_index parameters.  Parse relocations and register
> 	post indexes unconditionally.
> 	(parse_address): Remove accept_reg_post_index parameter.
> 	Update call to parse_address_main.
> 	(parse_address_reloc): Delete.
> 	(parse_operands): Call parse_address instead of parse_address_main.
> 	Update existing callers of parse_address and make them check
> 	inst.reloc.type where appropriate.
> 	* testsuite/gas/aarch64/diagnostic.s: Add tests for relocations
> 	in ADDR_SIMPLE, SIMD_ADDR_SIMPLE, ADDR_SIMM7 and ADDR_SIMM9 addresses.
> 	Also test for invalid uses of post-index register addressing.
> 	* testsuite/gas/aarch64/diagnostic.l: Update accordingly.
> 
> diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
> index 7b5be8b..f82fdb9 100644
> --- a/gas/config/tc-aarch64.c
> +++ b/gas/config/tc-aarch64.c
> @@ -3173,8 +3173,7 @@ parse_shifter_operand_reloc (char **str, aarch64_opnd_info *operand,
>     supported by the instruction, and to set inst.reloc.type.  */
>  
>  static bfd_boolean
> -parse_address_main (char **str, aarch64_opnd_info *operand, int reloc,
> -		    int accept_reg_post_index)
> +parse_address_main (char **str, aarch64_opnd_info *operand)
>  {
>    char *p = *str;
>    const reg_entry *reg;
> @@ -3190,7 +3189,7 @@ parse_address_main (char **str, aarch64_opnd_info *operand, int reloc,
>  
>        /* #:<reloc_op>:<symbol>  */
>        skip_past_char (&p, '#');
> -      if (reloc && skip_past_char (&p, ':'))
> +      if (skip_past_char (&p, ':'))
>  	{
>  	  bfd_reloc_code_real_type ty;
>  	  struct reloc_table_entry *entry;
> @@ -3315,7 +3314,7 @@ parse_address_main (char **str, aarch64_opnd_info *operand, int reloc,
>  	{
>  	  /* [Xn,#:<reloc_op>:<symbol>  */
>  	  skip_past_char (&p, '#');
> -	  if (reloc && skip_past_char (&p, ':'))
> +	  if (skip_past_char (&p, ':'))
>  	    {
>  	      struct reloc_table_entry *entry;
>  
> @@ -3388,8 +3387,8 @@ parse_address_main (char **str, aarch64_opnd_info *operand, int reloc,
>  	  return FALSE;
>  	}
>  
> -      if (accept_reg_post_index
> -	  && (reg = aarch64_reg_parse_32_64 (&p, &offset_qualifier)))
> +      reg = aarch64_reg_parse_32_64 (&p, &offset_qualifier);
> +      if (reg)
>  	{
>  	  /* [Xn],Xm */
>  	  if (!aarch64_check_reg_type (reg, REG_TYPE_R_64))
> @@ -3428,19 +3427,12 @@ parse_address_main (char **str, aarch64_opnd_info *operand, int reloc,
>    return TRUE;
>  }
>  
> -/* Return TRUE on success; otherwise return FALSE.  */
> -static bfd_boolean
> -parse_address (char **str, aarch64_opnd_info *operand,
> -	       int accept_reg_post_index)
> -{
> -  return parse_address_main (str, operand, 0, accept_reg_post_index);
> -}
> -
> -/* Return TRUE on success; otherwise return FALSE.  */
> +/* Parse a base AArch64 address (as opposed to an SVE one).  Return TRUE
> +   on success.  */
>  static bfd_boolean
> -parse_address_reloc (char **str, aarch64_opnd_info *operand)
> +parse_address (char **str, aarch64_opnd_info *operand)
>  {
> -  return parse_address_main (str, operand, 1, 0);
> +  return parse_address_main (str, operand);
>  }
>  
>  /* Parse an operand for a MOVZ, MOVN or MOVK instruction.
> @@ -5419,7 +5411,7 @@ parse_operands (char *str, const aarch64_opcode *opcode)
>  	case AARCH64_OPND_ADDR_PCREL19:
>  	case AARCH64_OPND_ADDR_PCREL21:
>  	case AARCH64_OPND_ADDR_PCREL26:
> -	  po_misc_or_fail (parse_address_reloc (&str, info));
> +	  po_misc_or_fail (parse_address (&str, info));
>  	  if (!info->addr.pcrel)
>  	    {
>  	      set_syntax_error (_("invalid pc-relative address"));
> @@ -5490,7 +5482,7 @@ parse_operands (char *str, const aarch64_opcode *opcode)
>  	    char *start = str;
>  	    /* First use the normal address-parsing routines, to get
>  	       the usual syntax errors.  */
> -	    po_misc_or_fail (parse_address (&str, info, 0));
> +	    po_misc_or_fail (parse_address (&str, info));
>  	    if (info->addr.pcrel || info->addr.offset.is_reg
>  		|| !info->addr.preind || info->addr.postind
>  		|| info->addr.writeback)
> @@ -5521,7 +5513,7 @@ parse_operands (char *str, const aarch64_opcode *opcode)
>  
>  	case AARCH64_OPND_ADDR_REGOFF:
>  	  /* [<Xn|SP>, <R><m>{, <extend> {<amount>}}]  */
> -	  po_misc_or_fail (parse_address (&str, info, 0));
> +	  po_misc_or_fail (parse_address (&str, info));
>  	  if (info->addr.pcrel || !info->addr.offset.is_reg
>  	      || !info->addr.preind || info->addr.postind
>  	      || info->addr.writeback)
> @@ -5540,13 +5532,18 @@ parse_operands (char *str, const aarch64_opcode *opcode)
>  	  break;
>  
>  	case AARCH64_OPND_ADDR_SIMM7:
> -	  po_misc_or_fail (parse_address (&str, info, 0));
> +	  po_misc_or_fail (parse_address (&str, info));
>  	  if (info->addr.pcrel || info->addr.offset.is_reg
>  	      || (!info->addr.preind && !info->addr.postind))
>  	    {
>  	      set_syntax_error (_("invalid addressing mode"));
>  	      goto failure;
>  	    }
> +	  if (inst.reloc.type != BFD_RELOC_UNUSED)
> +	    {
> +	      set_syntax_error (_("relocation not allowed"));
> +	      goto failure;
> +	    }
>  	  assign_imm_if_const_or_fixup_later (&inst.reloc, info,
>  					      /* addr_off_p */ 1,
>  					      /* need_libopcodes_p */ 1,
> @@ -5555,7 +5552,7 @@ parse_operands (char *str, const aarch64_opcode *opcode)
>  
>  	case AARCH64_OPND_ADDR_SIMM9:
>  	case AARCH64_OPND_ADDR_SIMM9_2:
> -	  po_misc_or_fail (parse_address_reloc (&str, info));
> +	  po_misc_or_fail (parse_address (&str, info));
>  	  if (info->addr.pcrel || info->addr.offset.is_reg
>  	      || (!info->addr.preind && !info->addr.postind)
>  	      || (operands[i] == AARCH64_OPND_ADDR_SIMM9_2
> @@ -5576,7 +5573,7 @@ parse_operands (char *str, const aarch64_opcode *opcode)
>  	  break;
>  
>  	case AARCH64_OPND_ADDR_UIMM12:
> -	  po_misc_or_fail (parse_address_reloc (&str, info));
> +	  po_misc_or_fail (parse_address (&str, info));
>  	  if (info->addr.pcrel || info->addr.offset.is_reg
>  	      || !info->addr.preind || info->addr.writeback)
>  	    {
> @@ -5596,7 +5593,7 @@ parse_operands (char *str, const aarch64_opcode *opcode)
>  
>  	case AARCH64_OPND_SIMD_ADDR_POST:
>  	  /* [<Xn|SP>], <Xm|#<amount>>  */
> -	  po_misc_or_fail (parse_address (&str, info, 1));
> +	  po_misc_or_fail (parse_address (&str, info));
>  	  if (!info->addr.postind || !info->addr.writeback)
>  	    {
>  	      set_syntax_error (_("invalid addressing mode"));
> diff --git a/gas/testsuite/gas/aarch64/diagnostic.l b/gas/testsuite/gas/aarch64/diagnostic.l
> index ef23577..0fb4db9 100644
> --- a/gas/testsuite/gas/aarch64/diagnostic.l
> +++ b/gas/testsuite/gas/aarch64/diagnostic.l
> @@ -150,3 +150,11 @@
>  [^:]*:264: Error: invalid floating-point constant at operand 2 -- `fmov s0,-2'
>  [^:]*:266: Error: integer 64-bit register expected at operand 2 -- `st2 {v0.4s,v1.4s},\[sp\],xzr'
>  [^:]*:267: Error: integer or zero register expected at operand 2 -- `str x1,\[x2,sp\]'
> +[^:]*:270: Error: relocation not allowed at operand 3 -- `ldnp x1,x2,\[x3,#:lo12:foo\]'
> +[^:]*:271: Error: invalid addressing mode at operand 2 -- `ld1 {v0\.4s},\[x3,#:lo12:foo\]'
> +[^:]*:272: Error: the optional immediate offset can only be 0 at operand 2 -- `stuminl x0,\[x3,#:lo12:foo\]'
> +[^:]*:273: Error: relocation not allowed at operand 2 -- `prfum pldl1keep,\[x3,#:lo12:foo\]'
> +[^:]*:275: Error: invalid addressing mode at operand 2 -- `ldr x0,\[x3\],x4'
> +[^:]*:276: Error: invalid addressing mode at operand 3 -- `ldnp x1,x2,\[x3\],x4'
> +[^:]*:278: Error: invalid addressing mode at operand 2 -- `stuminl x0,\[x3\],x4'
> +[^:]*:279: Error: invalid addressing mode at operand 2 -- `prfum pldl1keep,\[x3\],x4'
> diff --git a/gas/testsuite/gas/aarch64/diagnostic.s b/gas/testsuite/gas/aarch64/diagnostic.s
> index 8dbb542..a9cd124 100644
> --- a/gas/testsuite/gas/aarch64/diagnostic.s
> +++ b/gas/testsuite/gas/aarch64/diagnostic.s
> @@ -265,3 +265,15 @@
>  
>  	st2	{v0.4s, v1.4s}, [sp], xzr
>  	str	x1, [x2, sp]
> +
> +	ldr	x0, [x1, #:lo12:foo] // OK
> +	ldnp	x1, x2, [x3, #:lo12:foo]
> +	ld1	{v0.4s}, [x3, #:lo12:foo]
> +	stuminl x0, [x3, #:lo12:foo]
> +	prfum	pldl1keep, [x3, #:lo12:foo]
> +
> +	ldr	x0, [x3], x4
> +	ldnp	x1, x2, [x3], x4
> +	ld1	{v0.4s}, [x3], x4 // OK
> +	stuminl x0, [x3], x4
> +	prfum	pldl1keep, [x3], x4
> 

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [AArch64][SVE 25/32] Add support for SVE addressing modes
  2016-09-16 12:06     ` Richard Sandiford
@ 2016-09-20 13:40       ` Richard Earnshaw (lists)
  0 siblings, 0 replies; 76+ messages in thread
From: Richard Earnshaw (lists) @ 2016-09-20 13:40 UTC (permalink / raw)
  To: binutils, richard.sandiford

On 16/09/16 13:06, Richard Sandiford wrote:
> "Richard Earnshaw (lists)" <Richard.Earnshaw@arm.com> writes:
>> On 23/08/16 10:21, Richard Sandiford wrote:
>>> This patch adds most of the new SVE addressing modes and associated
>>> operands.  A follow-on patch adds MUL VL, since handling it separately
>>> makes the changes easier to read.
>>>
>>> The patch also introduces a new "operand-dependent data" field to the
>>> operand flags, based closely on the existing one for opcode flags.
>>> For SVE this new field needs only 2 bits, but it could be widened
>>> in future if necessary.
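>>>
>>> For example, the u6 immediate inserter in the patch uses the 2-bit
>>> value as a scale shift:
>>>
>>>   int factor = 1 << get_operand_specific_data (self);
>>>   insert_field (FLD_SVE_imm6, code, info->addr.offset.imm / factor, 0);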
>>>
>>> OK to install?
>>>
>>> Thanks,
>>> Richard
>>>
>>>
>>> include/opcode/
>>> 	* aarch64.h (AARCH64_OPND_SVE_ADDR_RI_U6): New aarch64_opnd.
>>> 	(AARCH64_OPND_SVE_ADDR_RI_U6x2, AARCH64_OPND_SVE_ADDR_RI_U6x4)
>>> 	(AARCH64_OPND_SVE_ADDR_RI_U6x8, AARCH64_OPND_SVE_ADDR_RR)
>>> 	(AARCH64_OPND_SVE_ADDR_RR_LSL1, AARCH64_OPND_SVE_ADDR_RR_LSL2)
>>> 	(AARCH64_OPND_SVE_ADDR_RR_LSL3, AARCH64_OPND_SVE_ADDR_RX)
>>> 	(AARCH64_OPND_SVE_ADDR_RX_LSL1, AARCH64_OPND_SVE_ADDR_RX_LSL2)
>>> 	(AARCH64_OPND_SVE_ADDR_RX_LSL3, AARCH64_OPND_SVE_ADDR_RZ)
>>> 	(AARCH64_OPND_SVE_ADDR_RZ_LSL1, AARCH64_OPND_SVE_ADDR_RZ_LSL2)
>>> 	(AARCH64_OPND_SVE_ADDR_RZ_LSL3, AARCH64_OPND_SVE_ADDR_RZ_XTW_14)
>>> 	(AARCH64_OPND_SVE_ADDR_RZ_XTW_22, AARCH64_OPND_SVE_ADDR_RZ_XTW1_14)
>>> 	(AARCH64_OPND_SVE_ADDR_RZ_XTW1_22, AARCH64_OPND_SVE_ADDR_RZ_XTW2_14)
>>> 	(AARCH64_OPND_SVE_ADDR_RZ_XTW2_22, AARCH64_OPND_SVE_ADDR_RZ_XTW3_14)
>>> 	(AARCH64_OPND_SVE_ADDR_RZ_XTW3_22, AARCH64_OPND_SVE_ADDR_ZI_U5)
>>> 	(AARCH64_OPND_SVE_ADDR_ZI_U5x2, AARCH64_OPND_SVE_ADDR_ZI_U5x4)
>>> 	(AARCH64_OPND_SVE_ADDR_ZI_U5x8, AARCH64_OPND_SVE_ADDR_ZZ_LSL)
>>> 	(AARCH64_OPND_SVE_ADDR_ZZ_SXTW, AARCH64_OPND_SVE_ADDR_ZZ_UXTW):
>>> 	Likewise.
>>>
>>> opcodes/
>>> 	* aarch64-tbl.h (AARCH64_OPERANDS): Add entries for the new SVE
>>> 	address operands.
>>> 	* aarch64-opc.h (FLD_SVE_imm6, FLD_SVE_msz, FLD_SVE_xs_14)
>>> 	(FLD_SVE_xs_22): New aarch64_field_kinds.
>>> 	(OPD_F_OD_MASK, OPD_F_OD_LSB, OPD_F_NO_ZR): New flags.
>>> 	(get_operand_specific_data): New function.
>>> 	* aarch64-opc.c (fields): Add entries for FLD_SVE_imm6, FLD_SVE_msz,
>>> 	FLD_SVE_xs_14 and FLD_SVE_xs_22.
>>> 	(operand_general_constraint_met_p): Handle the new SVE address
>>> 	operands.
>>> 	(sve_reg): New array.
>>> 	(get_addr_sve_reg_name): New function.
>>> 	(aarch64_print_operand): Handle the new SVE address operands.
>>> 	* aarch64-opc-2.c: Regenerate.
>>> 	* aarch64-asm.h (ins_sve_addr_ri_u6, ins_sve_addr_rr_lsl)
>>> 	(ins_sve_addr_rz_xtw, ins_sve_addr_zi_u5, ins_sve_addr_zz_lsl)
>>> 	(ins_sve_addr_zz_sxtw, ins_sve_addr_zz_uxtw): New inserters.
>>> 	* aarch64-asm.c (aarch64_ins_sve_addr_ri_u6): New function.
>>> 	(aarch64_ins_sve_addr_rr_lsl): Likewise.
>>> 	(aarch64_ins_sve_addr_rz_xtw): Likewise.
>>> 	(aarch64_ins_sve_addr_zi_u5): Likewise.
>>> 	(aarch64_ins_sve_addr_zz): Likewise.
>>> 	(aarch64_ins_sve_addr_zz_lsl): Likewise.
>>> 	(aarch64_ins_sve_addr_zz_sxtw): Likewise.
>>> 	(aarch64_ins_sve_addr_zz_uxtw): Likewise.
>>> 	* aarch64-asm-2.c: Regenerate.
>>> 	* aarch64-dis.h (ext_sve_addr_ri_u6, ext_sve_addr_rr_lsl)
>>> 	(ext_sve_addr_rz_xtw, ext_sve_addr_zi_u5, ext_sve_addr_zz_lsl)
>>> 	(ext_sve_addr_zz_sxtw, ext_sve_addr_zz_uxtw): New extractors.
>>> 	* aarch64-dis.c (aarch64_ext_sve_add_reg_imm): New function.
>>> 	(aarch64_ext_sve_addr_ri_u6): Likewise.
>>> 	(aarch64_ext_sve_addr_rr_lsl): Likewise.
>>> 	(aarch64_ext_sve_addr_rz_xtw): Likewise.
>>> 	(aarch64_ext_sve_addr_zi_u5): Likewise.
>>> 	(aarch64_ext_sve_addr_zz): Likewise.
>>> 	(aarch64_ext_sve_addr_zz_lsl): Likewise.
>>> 	(aarch64_ext_sve_addr_zz_sxtw): Likewise.
>>> 	(aarch64_ext_sve_addr_zz_uxtw): Likewise.
>>> 	* aarch64-dis-2.c: Regenerate.
>>>
>>> gas/
>>> 	* config/tc-aarch64.c (aarch64_addr_reg_parse): New function,
>>> 	split out from aarch64_reg_parse_32_64.  Handle Z registers too.
>>> 	(aarch64_reg_parse_32_64): Call it.
>>> 	(parse_address_main): Add base_qualifier, offset_qualifier
>>> 	and accept_sve parameters.  Handle SVE base and offset registers.
>>
>> Ug!  Another bool parameter.
> 
> Here's an updated version, based on the new versions of patches
> 11 and 12.  It adds register type enums for the registers that
> can be used as bases and offsets in SVE instructions (which is
> the normal set plus Zn.D and Zn.S).  We can then use register
> types instead of boolean parameters to say which registers
> are acceptable.
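> 
> Concretely, the new types are just wider register-type masks, so the
> existing aarch64_check_reg_type test accepts a Z register only when
> the caller asks for an SVE base or offset.  Sketching the relevant
> lines from the patch below:
> 
>   /* REG_TYPE_SVE_BASE covers 64-bit X registers, SP and the SVE Z
>      registers.  */
>   MULTI_REG_TYPE(SVE_BASE, REG_TYPE(R_64) | REG_TYPE(SP_64)
>                  | REG_TYPE(ZN))
> 
>   /* aarch64_check_reg_type is already a bitmask membership test.  */
>   return (reg_type_masks[type] & (1 << reg->type)) != 0;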
> 
> Tested on aarch64-linux-gnu.  OK to install?
> 

OK.

R.

> Thanks,
> Richard
> 
> 
> include/opcode/
> 	* aarch64.h (AARCH64_OPND_SVE_ADDR_RI_U6): New aarch64_opnd.
> 	(AARCH64_OPND_SVE_ADDR_RI_U6x2, AARCH64_OPND_SVE_ADDR_RI_U6x4)
> 	(AARCH64_OPND_SVE_ADDR_RI_U6x8, AARCH64_OPND_SVE_ADDR_RR)
> 	(AARCH64_OPND_SVE_ADDR_RR_LSL1, AARCH64_OPND_SVE_ADDR_RR_LSL2)
> 	(AARCH64_OPND_SVE_ADDR_RR_LSL3, AARCH64_OPND_SVE_ADDR_RX)
> 	(AARCH64_OPND_SVE_ADDR_RX_LSL1, AARCH64_OPND_SVE_ADDR_RX_LSL2)
> 	(AARCH64_OPND_SVE_ADDR_RX_LSL3, AARCH64_OPND_SVE_ADDR_RZ)
> 	(AARCH64_OPND_SVE_ADDR_RZ_LSL1, AARCH64_OPND_SVE_ADDR_RZ_LSL2)
> 	(AARCH64_OPND_SVE_ADDR_RZ_LSL3, AARCH64_OPND_SVE_ADDR_RZ_XTW_14)
> 	(AARCH64_OPND_SVE_ADDR_RZ_XTW_22, AARCH64_OPND_SVE_ADDR_RZ_XTW1_14)
> 	(AARCH64_OPND_SVE_ADDR_RZ_XTW1_22, AARCH64_OPND_SVE_ADDR_RZ_XTW2_14)
> 	(AARCH64_OPND_SVE_ADDR_RZ_XTW2_22, AARCH64_OPND_SVE_ADDR_RZ_XTW3_14)
> 	(AARCH64_OPND_SVE_ADDR_RZ_XTW3_22, AARCH64_OPND_SVE_ADDR_ZI_U5)
> 	(AARCH64_OPND_SVE_ADDR_ZI_U5x2, AARCH64_OPND_SVE_ADDR_ZI_U5x4)
> 	(AARCH64_OPND_SVE_ADDR_ZI_U5x8, AARCH64_OPND_SVE_ADDR_ZZ_LSL)
> 	(AARCH64_OPND_SVE_ADDR_ZZ_SXTW, AARCH64_OPND_SVE_ADDR_ZZ_UXTW):
> 	Likewise.
> 
> opcodes/
> 	* aarch64-tbl.h (AARCH64_OPERANDS): Add entries for the new SVE
> 	address operands.
> 	* aarch64-opc.h (FLD_SVE_imm6, FLD_SVE_msz, FLD_SVE_xs_14)
> 	(FLD_SVE_xs_22): New aarch64_field_kinds.
> 	(OPD_F_OD_MASK, OPD_F_OD_LSB, OPD_F_NO_ZR): New flags.
> 	(get_operand_specific_data): New function.
> 	* aarch64-opc.c (fields): Add entries for FLD_SVE_imm6, FLD_SVE_msz,
> 	FLD_SVE_xs_14 and FLD_SVE_xs_22.
> 	(operand_general_constraint_met_p): Handle the new SVE address
> 	operands.
> 	(sve_reg): New array.
> 	(get_addr_sve_reg_name): New function.
> 	(aarch64_print_operand): Handle the new SVE address operands.
> 	* aarch64-opc-2.c: Regenerate.
> 	* aarch64-asm.h (ins_sve_addr_ri_u6, ins_sve_addr_rr_lsl)
> 	(ins_sve_addr_rz_xtw, ins_sve_addr_zi_u5, ins_sve_addr_zz_lsl)
> 	(ins_sve_addr_zz_sxtw, ins_sve_addr_zz_uxtw): New inserters.
> 	* aarch64-asm.c (aarch64_ins_sve_addr_ri_u6): New function.
> 	(aarch64_ins_sve_addr_rr_lsl): Likewise.
> 	(aarch64_ins_sve_addr_rz_xtw): Likewise.
> 	(aarch64_ins_sve_addr_zi_u5): Likewise.
> 	(aarch64_ins_sve_addr_zz): Likewise.
> 	(aarch64_ins_sve_addr_zz_lsl): Likewise.
> 	(aarch64_ins_sve_addr_zz_sxtw): Likewise.
> 	(aarch64_ins_sve_addr_zz_uxtw): Likewise.
> 	* aarch64-asm-2.c: Regenerate.
> 	* aarch64-dis.h (ext_sve_addr_ri_u6, ext_sve_addr_rr_lsl)
> 	(ext_sve_addr_rz_xtw, ext_sve_addr_zi_u5, ext_sve_addr_zz_lsl)
> 	(ext_sve_addr_zz_sxtw, ext_sve_addr_zz_uxtw): New extractors.
> 	* aarch64-dis.c (aarch64_ext_sve_add_reg_imm): New function.
> 	(aarch64_ext_sve_addr_ri_u6): Likewise.
> 	(aarch64_ext_sve_addr_rr_lsl): Likewise.
> 	(aarch64_ext_sve_addr_rz_xtw): Likewise.
> 	(aarch64_ext_sve_addr_zi_u5): Likewise.
> 	(aarch64_ext_sve_addr_zz): Likewise.
> 	(aarch64_ext_sve_addr_zz_lsl): Likewise.
> 	(aarch64_ext_sve_addr_zz_sxtw): Likewise.
> 	(aarch64_ext_sve_addr_zz_uxtw): Likewise.
> 	* aarch64-dis-2.c: Regenerate.
> 
> gas/
> 	* config/tc-aarch64.c (REG_TYPE_SVE_BASE, REG_TYPE_SVE_OFFSET): New
> 	register types.
> 	(get_reg_expected_msg): Handle them.
> 	(aarch64_addr_reg_parse): New function, split out from
> 	aarch64_reg_parse_32_64.  Handle Z registers too.
> 	(aarch64_reg_parse_32_64): Call it.
> 	(parse_address_main): Add base_qualifier, offset_qualifier,
> 	base_type and offset_type parameters.  Handle SVE base and offset
> 	registers.
> 	(parse_address): Update call to parse_address_main.
> 	(parse_sve_address): New function.
> 	(parse_operands): Parse the new SVE address operands.
> 
> diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
> index 79ee054..e59333f 100644
> --- a/gas/config/tc-aarch64.c
> +++ b/gas/config/tc-aarch64.c
> @@ -272,9 +272,16 @@ struct reloc_entry
>    BASIC_REG_TYPE(PN)	/* p[0-15] */	\
>    /* Typecheck: any 64-bit int reg         (inc SP exc XZR).  */	\
>    MULTI_REG_TYPE(R64_SP, REG_TYPE(R_64) | REG_TYPE(SP_64))		\
> +  /* Typecheck: same, plus SVE registers.  */				\
> +  MULTI_REG_TYPE(SVE_BASE, REG_TYPE(R_64) | REG_TYPE(SP_64)		\
> +		 | REG_TYPE(ZN))					\
>    /* Typecheck: x[0-30], w[0-30] or [xw]zr.  */				\
>    MULTI_REG_TYPE(R_Z, REG_TYPE(R_32) | REG_TYPE(R_64)			\
>  		 | REG_TYPE(Z_32) | REG_TYPE(Z_64))			\
> +  /* Typecheck: same, plus SVE registers.  */				\
> +  MULTI_REG_TYPE(SVE_OFFSET, REG_TYPE(R_32) | REG_TYPE(R_64)		\
> +		 | REG_TYPE(Z_32) | REG_TYPE(Z_64)			\
> +		 | REG_TYPE(ZN))					\
>    /* Typecheck: x[0-30], w[0-30] or {w}sp.  */				\
>    MULTI_REG_TYPE(R_SP, REG_TYPE(R_32) | REG_TYPE(R_64)			\
>  		 | REG_TYPE(SP_32) | REG_TYPE(SP_64))			\
> @@ -358,9 +365,15 @@ get_reg_expected_msg (aarch64_reg_type reg_type)
>      case REG_TYPE_R64_SP:
>        msg = N_("64-bit integer or SP register expected");
>        break;
> +    case REG_TYPE_SVE_BASE:
> +      msg = N_("base register expected");
> +      break;
>      case REG_TYPE_R_Z:
>        msg = N_("integer or zero register expected");
>        break;
> +    case REG_TYPE_SVE_OFFSET:
> +      msg = N_("offset register expected");
> +      break;
>      case REG_TYPE_R_SP:
>        msg = N_("integer or SP register expected");
>        break;
> @@ -697,14 +710,16 @@ aarch64_check_reg_type (const reg_entry *reg, aarch64_reg_type type)
>    return (reg_type_masks[type] & (1 << reg->type)) != 0;
>  }
>  
> -/* Try to parse a base or offset register.  Return the register entry
> -   on success, setting *QUALIFIER to the register qualifier.  Return null
> -   otherwise.
> +/* Try to parse a base or offset register.  Allow SVE base and offset
> +   registers if REG_TYPE includes SVE registers.  Return the register
> +   entry on success, setting *QUALIFIER to the register qualifier.
> +   Return null otherwise.
>  
>     Note that this function does not issue any diagnostics.  */
>  
>  static const reg_entry *
> -aarch64_reg_parse_32_64 (char **ccp, aarch64_opnd_qualifier_t *qualifier)
> +aarch64_addr_reg_parse (char **ccp, aarch64_reg_type reg_type,
> +			aarch64_opnd_qualifier_t *qualifier)
>  {
>    char *str = *ccp;
>    const reg_entry *reg = parse_reg (&str);
> @@ -726,6 +741,24 @@ aarch64_reg_parse_32_64 (char **ccp, aarch64_opnd_qualifier_t *qualifier)
>        *qualifier = AARCH64_OPND_QLF_X;
>        break;
>  
> +    case REG_TYPE_ZN:
> +      if ((reg_type_masks[reg_type] & (1 << REG_TYPE_ZN)) == 0
> +	  || str[0] != '.')
> +	return NULL;
> +      switch (TOLOWER (str[1]))
> +	{
> +	case 's':
> +	  *qualifier = AARCH64_OPND_QLF_S_S;
> +	  break;
> +	case 'd':
> +	  *qualifier = AARCH64_OPND_QLF_S_D;
> +	  break;
> +	default:
> +	  return NULL;
> +	}
> +      str += 2;
> +      break;
> +
>      default:
>        return NULL;
>      }
> @@ -735,6 +768,18 @@ aarch64_reg_parse_32_64 (char **ccp, aarch64_opnd_qualifier_t *qualifier)
>    return reg;
>  }
>  
> +/* Try to parse a base or offset register.  Return the register entry
> +   on success, setting *QUALIFIER to the register qualifier.  Return null
> +   otherwise.
> +
> +   Note that this function does not issue any diagnostics.  */
> +
> +static const reg_entry *
> +aarch64_reg_parse_32_64 (char **ccp, aarch64_opnd_qualifier_t *qualifier)
> +{
> +  return aarch64_addr_reg_parse (ccp, REG_TYPE_R_Z_SP, qualifier);
> +}
> +
>  /* Parse the qualifier of a vector register or vector element of type
>     REG_TYPE.  Fill in *PARSED_TYPE and return TRUE if the parsing
>     succeeds; otherwise return FALSE.
> @@ -3209,8 +3254,8 @@ parse_shifter_operand_reloc (char **str, aarch64_opnd_info *operand,
>     The A64 instruction set has the following addressing modes:
>  
>     Offset
> -     [base]			// in SIMD ld/st structure
> -     [base{,#0}]		// in ld/st exclusive
> +     [base]			 // in SIMD ld/st structure
> +     [base{,#0}]		 // in ld/st exclusive
>       [base{,#imm}]
>       [base,Xm{,LSL #imm}]
>       [base,Xm,SXTX {#imm}]
> @@ -3219,10 +3264,18 @@ parse_shifter_operand_reloc (char **str, aarch64_opnd_info *operand,
>       [base,#imm]!
>     Post-indexed
>       [base],#imm
> -     [base],Xm			// in SIMD ld/st structure
> +     [base],Xm			 // in SIMD ld/st structure
>     PC-relative (literal)
>       label
> -     =immediate
> +   SVE:
> +     [base,Zm.D{,LSL #imm}]
> +     [base,Zm.S,(S|U)XTW {#imm}]
> +     [base,Zm.D,(S|U)XTW {#imm}] // ignores top 32 bits of Zm.D elements
> +     [Zn.S,#imm]
> +     [Zn.D,#imm]
> +     [Zn.S,Zm.S{,LSL #imm}]      // in ADR
> +     [Zn.D,Zm.D{,LSL #imm}]      // in ADR
> +     [Zn.D,Zm.D,(S|U)XTW {#imm}] // in ADR
>  
>     (As a convenience, the notation "=immediate" is permitted in conjunction
>     with the pc-relative literal load instructions to automatically place an
> @@ -3249,19 +3302,27 @@ parse_shifter_operand_reloc (char **str, aarch64_opnd_info *operand,
>       .pcrel=1; .preind=1; .postind=0; .writeback=0
>  
>     The shift/extension information, if any, will be stored in .shifter.
> +   The base and offset qualifiers will be stored in *BASE_QUALIFIER and
> +   *OFFSET_QUALIFIER respectively, with NIL being used if there's no
> +   corresponding register.
>  
> -   It is the caller's responsibility to check for addressing modes not
> +   BASE_TYPE says which types of base register should be accepted and
> +   OFFSET_TYPE says the same for offset registers.  In all other respects,
> +   it is the caller's responsibility to check for addressing modes not
>     supported by the instruction, and to set inst.reloc.type.  */
>  
>  static bfd_boolean
> -parse_address_main (char **str, aarch64_opnd_info *operand)
> +parse_address_main (char **str, aarch64_opnd_info *operand,
> +		    aarch64_opnd_qualifier_t *base_qualifier,
> +		    aarch64_opnd_qualifier_t *offset_qualifier,
> +		    aarch64_reg_type base_type, aarch64_reg_type offset_type)
>  {
>    char *p = *str;
>    const reg_entry *reg;
> -  aarch64_opnd_qualifier_t base_qualifier;
> -  aarch64_opnd_qualifier_t offset_qualifier;
>    expressionS *exp = &inst.reloc.exp;
>  
> +  *base_qualifier = AARCH64_OPND_QLF_NIL;
> +  *offset_qualifier = AARCH64_OPND_QLF_NIL;
>    if (! skip_past_char (&p, '['))
>      {
>        /* =immediate or label.  */
> @@ -3336,10 +3397,10 @@ parse_address_main (char **str, aarch64_opnd_info *operand)
>  
>    /* [ */
>  
> -  reg = aarch64_reg_parse_32_64 (&p, &base_qualifier);
> -  if (!reg || !aarch64_check_reg_type (reg, REG_TYPE_R64_SP))
> +  reg = aarch64_addr_reg_parse (&p, base_type, base_qualifier);
> +  if (!reg || !aarch64_check_reg_type (reg, base_type))
>      {
> -      set_syntax_error (_(get_reg_expected_msg (REG_TYPE_R64_SP)));
> +      set_syntax_error (_(get_reg_expected_msg (base_type)));
>        return FALSE;
>      }
>    operand->addr.base_regno = reg->number;
> @@ -3350,12 +3411,12 @@ parse_address_main (char **str, aarch64_opnd_info *operand)
>        /* [Xn, */
>        operand->addr.preind = 1;
>  
> -      reg = aarch64_reg_parse_32_64 (&p, &offset_qualifier);
> +      reg = aarch64_addr_reg_parse (&p, offset_type, offset_qualifier);
>        if (reg)
>  	{
> -	  if (!aarch64_check_reg_type (reg, REG_TYPE_R_Z))
> +	  if (!aarch64_check_reg_type (reg, offset_type))
>  	    {
> -	      set_syntax_error (_(get_reg_expected_msg (REG_TYPE_R_Z)));
> +	      set_syntax_error (_(get_reg_expected_msg (offset_type)));
>  	      return FALSE;
>  	    }
>  
> @@ -3379,13 +3440,19 @@ parse_address_main (char **str, aarch64_opnd_info *operand)
>  	      || operand->shifter.kind == AARCH64_MOD_LSL
>  	      || operand->shifter.kind == AARCH64_MOD_SXTX)
>  	    {
> -	      if (offset_qualifier == AARCH64_OPND_QLF_W)
> +	      if (*offset_qualifier == AARCH64_OPND_QLF_W)
>  		{
>  		  set_syntax_error (_("invalid use of 32-bit register offset"));
>  		  return FALSE;
>  		}
> +	      if (aarch64_get_qualifier_esize (*base_qualifier)
> +		  != aarch64_get_qualifier_esize (*offset_qualifier))
> +		{
> +		  set_syntax_error (_("offset has different size from base"));
> +		  return FALSE;
> +		}
>  	    }
> -	  else if (offset_qualifier == AARCH64_OPND_QLF_X)
> +	  else if (*offset_qualifier == AARCH64_OPND_QLF_X)
>  	    {
>  	      set_syntax_error (_("invalid use of 64-bit register offset"));
>  	      return FALSE;
> @@ -3468,7 +3535,7 @@ parse_address_main (char **str, aarch64_opnd_info *operand)
>  	  return FALSE;
>  	}
>  
> -      reg = aarch64_reg_parse_32_64 (&p, &offset_qualifier);
> +      reg = aarch64_reg_parse_32_64 (&p, offset_qualifier);
>        if (reg)
>  	{
>  	  /* [Xn],Xm */
> @@ -3513,7 +3580,21 @@ parse_address_main (char **str, aarch64_opnd_info *operand)
>  static bfd_boolean
>  parse_address (char **str, aarch64_opnd_info *operand)
>  {
> -  return parse_address_main (str, operand);
> +  aarch64_opnd_qualifier_t base_qualifier, offset_qualifier;
> +  return parse_address_main (str, operand, &base_qualifier, &offset_qualifier,
> +			     REG_TYPE_R64_SP, REG_TYPE_R_Z);
> +}
> +
> +/* Parse an address in which SVE vector registers are allowed.
> +   The arguments have the same meaning as for parse_address_main.
> +   Return TRUE on success.  */
> +static bfd_boolean
> +parse_sve_address (char **str, aarch64_opnd_info *operand,
> +		   aarch64_opnd_qualifier_t *base_qualifier,
> +		   aarch64_opnd_qualifier_t *offset_qualifier)
> +{
> +  return parse_address_main (str, operand, base_qualifier, offset_qualifier,
> +			     REG_TYPE_SVE_BASE, REG_TYPE_SVE_OFFSET);
>  }
>  
>  /* Parse an operand for a MOVZ, MOVN or MOVK instruction.
> @@ -5123,7 +5204,7 @@ parse_operands (char *str, const aarch64_opcode *opcode)
>        int comma_skipped_p = 0;
>        aarch64_reg_type rtype;
>        struct vector_type_el vectype;
> -      aarch64_opnd_qualifier_t qualifier;
> +      aarch64_opnd_qualifier_t qualifier, base_qualifier, offset_qualifier;
>        aarch64_opnd_info *info = &inst.base.operands[i];
>        aarch64_reg_type reg_type;
>  
> @@ -5757,6 +5838,7 @@ parse_operands (char *str, const aarch64_opcode *opcode)
>  	case AARCH64_OPND_ADDR_REGOFF:
>  	  /* [<Xn|SP>, <R><m>{, <extend> {<amount>}}]  */
>  	  po_misc_or_fail (parse_address (&str, info));
> +	regoff_addr:
>  	  if (info->addr.pcrel || !info->addr.offset.is_reg
>  	      || !info->addr.preind || info->addr.postind
>  	      || info->addr.writeback)
> @@ -5856,6 +5938,123 @@ parse_operands (char *str, const aarch64_opcode *opcode)
>  	  /* No qualifier.  */
>  	  break;
>  
> +	case AARCH64_OPND_SVE_ADDR_RI_U6:
> +	case AARCH64_OPND_SVE_ADDR_RI_U6x2:
> +	case AARCH64_OPND_SVE_ADDR_RI_U6x4:
> +	case AARCH64_OPND_SVE_ADDR_RI_U6x8:
> +	  /* [X<n>{, #imm}]
> +	     but recognizing SVE registers.  */
> +	  po_misc_or_fail (parse_sve_address (&str, info, &base_qualifier,
> +					      &offset_qualifier));
> +	  if (base_qualifier != AARCH64_OPND_QLF_X)
> +	    {
> +	      set_syntax_error (_("invalid addressing mode"));
> +	      goto failure;
> +	    }
> +	sve_regimm:
> +	  if (info->addr.pcrel || info->addr.offset.is_reg
> +	      || !info->addr.preind || info->addr.writeback)
> +	    {
> +	      set_syntax_error (_("invalid addressing mode"));
> +	      goto failure;
> +	    }
> +	  if (inst.reloc.type != BFD_RELOC_UNUSED
> +	      || inst.reloc.exp.X_op != O_constant)
> +	    {
> +	      /* Make sure this has priority over
> +		 "invalid addressing mode".  */
> +	      set_fatal_syntax_error (_("constant offset required"));
> +	      goto failure;
> +	    }
> +	  info->addr.offset.imm = inst.reloc.exp.X_add_number;
> +	  break;
> +
> +	case AARCH64_OPND_SVE_ADDR_RR:
> +	case AARCH64_OPND_SVE_ADDR_RR_LSL1:
> +	case AARCH64_OPND_SVE_ADDR_RR_LSL2:
> +	case AARCH64_OPND_SVE_ADDR_RR_LSL3:
> +	case AARCH64_OPND_SVE_ADDR_RX:
> +	case AARCH64_OPND_SVE_ADDR_RX_LSL1:
> +	case AARCH64_OPND_SVE_ADDR_RX_LSL2:
> +	case AARCH64_OPND_SVE_ADDR_RX_LSL3:
> +	  /* [<Xn|SP>, <R><m>{, lsl #<amount>}]
> +	     but recognizing SVE registers.  */
> +	  po_misc_or_fail (parse_sve_address (&str, info, &base_qualifier,
> +					      &offset_qualifier));
> +	  if (base_qualifier != AARCH64_OPND_QLF_X
> +	      || offset_qualifier != AARCH64_OPND_QLF_X)
> +	    {
> +	      set_syntax_error (_("invalid addressing mode"));
> +	      goto failure;
> +	    }
> +	  goto regoff_addr;
> +
> +	case AARCH64_OPND_SVE_ADDR_RZ:
> +	case AARCH64_OPND_SVE_ADDR_RZ_LSL1:
> +	case AARCH64_OPND_SVE_ADDR_RZ_LSL2:
> +	case AARCH64_OPND_SVE_ADDR_RZ_LSL3:
> +	case AARCH64_OPND_SVE_ADDR_RZ_XTW_14:
> +	case AARCH64_OPND_SVE_ADDR_RZ_XTW_22:
> +	case AARCH64_OPND_SVE_ADDR_RZ_XTW1_14:
> +	case AARCH64_OPND_SVE_ADDR_RZ_XTW1_22:
> +	case AARCH64_OPND_SVE_ADDR_RZ_XTW2_14:
> +	case AARCH64_OPND_SVE_ADDR_RZ_XTW2_22:
> +	case AARCH64_OPND_SVE_ADDR_RZ_XTW3_14:
> +	case AARCH64_OPND_SVE_ADDR_RZ_XTW3_22:
> +	  /* [<Xn|SP>, Z<m>.D{, LSL #<amount>}]
> +	     [<Xn|SP>, Z<m>.<T>, <extend> {#<amount>}]  */
> +	  po_misc_or_fail (parse_sve_address (&str, info, &base_qualifier,
> +					      &offset_qualifier));
> +	  if (base_qualifier != AARCH64_OPND_QLF_X
> +	      || (offset_qualifier != AARCH64_OPND_QLF_S_S
> +		  && offset_qualifier != AARCH64_OPND_QLF_S_D))
> +	    {
> +	      set_syntax_error (_("invalid addressing mode"));
> +	      goto failure;
> +	    }
> +	  info->qualifier = offset_qualifier;
> +	  goto regoff_addr;
> +
> +	case AARCH64_OPND_SVE_ADDR_ZI_U5:
> +	case AARCH64_OPND_SVE_ADDR_ZI_U5x2:
> +	case AARCH64_OPND_SVE_ADDR_ZI_U5x4:
> +	case AARCH64_OPND_SVE_ADDR_ZI_U5x8:
> +	  /* [Z<n>.<T>{, #imm}]  */
> +	  po_misc_or_fail (parse_sve_address (&str, info, &base_qualifier,
> +					      &offset_qualifier));
> +	  if (base_qualifier != AARCH64_OPND_QLF_S_S
> +	      && base_qualifier != AARCH64_OPND_QLF_S_D)
> +	    {
> +	      set_syntax_error (_("invalid addressing mode"));
> +	      goto failure;
> +	    }
> +	  info->qualifier = base_qualifier;
> +	  goto sve_regimm;
> +
> +	case AARCH64_OPND_SVE_ADDR_ZZ_LSL:
> +	case AARCH64_OPND_SVE_ADDR_ZZ_SXTW:
> +	case AARCH64_OPND_SVE_ADDR_ZZ_UXTW:
> +	  /* [Z<n>.<T>, Z<m>.<T>{, LSL #<amount>}]
> +	     [Z<n>.D, Z<m>.D, <extend> {#<amount>}]
> +
> +	     We don't reject:
> +
> +	     [Z<n>.S, Z<m>.S, <extend> {#<amount>}]
> +
> +	     here since we get better error messages by leaving it to
> +	     the qualifier checking routines.  */
> +	  po_misc_or_fail (parse_sve_address (&str, info, &base_qualifier,
> +					      &offset_qualifier));
> +	  if ((base_qualifier != AARCH64_OPND_QLF_S_S
> +	       && base_qualifier != AARCH64_OPND_QLF_S_D)
> +	      || offset_qualifier != base_qualifier)
> +	    {
> +	      set_syntax_error (_("invalid addressing mode"));
> +	      goto failure;
> +	    }
> +	  info->qualifier = base_qualifier;
> +	  goto regoff_addr;
> +
>  	case AARCH64_OPND_SYSREG:
>  	  if ((val = parse_sys_reg (&str, aarch64_sys_regs_hsh, 1, 0))
>  	      == PARSE_FAIL)
> diff --git a/include/opcode/aarch64.h b/include/opcode/aarch64.h
> index 49b4413..e61ac9c 100644
> --- a/include/opcode/aarch64.h
> +++ b/include/opcode/aarch64.h
> @@ -244,6 +244,45 @@ enum aarch64_opnd
>    AARCH64_OPND_PRFOP,		/* Prefetch operation.  */
>    AARCH64_OPND_BARRIER_PSB,	/* Barrier operand for PSB.  */
>  
> +  AARCH64_OPND_SVE_ADDR_RI_U6,	    /* SVE [<Xn|SP>, #<uimm6>].  */
> +  AARCH64_OPND_SVE_ADDR_RI_U6x2,    /* SVE [<Xn|SP>, #<uimm6>*2].  */
> +  AARCH64_OPND_SVE_ADDR_RI_U6x4,    /* SVE [<Xn|SP>, #<uimm6>*4].  */
> +  AARCH64_OPND_SVE_ADDR_RI_U6x8,    /* SVE [<Xn|SP>, #<uimm6>*8].  */
> +  AARCH64_OPND_SVE_ADDR_RR,	    /* SVE [<Xn|SP>, <Xm|XZR>].  */
> +  AARCH64_OPND_SVE_ADDR_RR_LSL1,    /* SVE [<Xn|SP>, <Xm|XZR>, LSL #1].  */
> +  AARCH64_OPND_SVE_ADDR_RR_LSL2,    /* SVE [<Xn|SP>, <Xm|XZR>, LSL #2].  */
> +  AARCH64_OPND_SVE_ADDR_RR_LSL3,    /* SVE [<Xn|SP>, <Xm|XZR>, LSL #3].  */
> +  AARCH64_OPND_SVE_ADDR_RX,	    /* SVE [<Xn|SP>, <Xm>].  */
> +  AARCH64_OPND_SVE_ADDR_RX_LSL1,    /* SVE [<Xn|SP>, <Xm>, LSL #1].  */
> +  AARCH64_OPND_SVE_ADDR_RX_LSL2,    /* SVE [<Xn|SP>, <Xm>, LSL #2].  */
> +  AARCH64_OPND_SVE_ADDR_RX_LSL3,    /* SVE [<Xn|SP>, <Xm>, LSL #3].  */
> +  AARCH64_OPND_SVE_ADDR_RZ,	    /* SVE [<Xn|SP>, Zm.D].  */
> +  AARCH64_OPND_SVE_ADDR_RZ_LSL1,    /* SVE [<Xn|SP>, Zm.D, LSL #1].  */
> +  AARCH64_OPND_SVE_ADDR_RZ_LSL2,    /* SVE [<Xn|SP>, Zm.D, LSL #2].  */
> +  AARCH64_OPND_SVE_ADDR_RZ_LSL3,    /* SVE [<Xn|SP>, Zm.D, LSL #3].  */
> +  AARCH64_OPND_SVE_ADDR_RZ_XTW_14,  /* SVE [<Xn|SP>, Zm.<T>, (S|U)XTW].
> +				       Bit 14 controls S/U choice.  */
> +  AARCH64_OPND_SVE_ADDR_RZ_XTW_22,  /* SVE [<Xn|SP>, Zm.<T>, (S|U)XTW].
> +				       Bit 22 controls S/U choice.  */
> +  AARCH64_OPND_SVE_ADDR_RZ_XTW1_14, /* SVE [<Xn|SP>, Zm.<T>, (S|U)XTW #1].
> +				       Bit 14 controls S/U choice.  */
> +  AARCH64_OPND_SVE_ADDR_RZ_XTW1_22, /* SVE [<Xn|SP>, Zm.<T>, (S|U)XTW #1].
> +				       Bit 22 controls S/U choice.  */
> +  AARCH64_OPND_SVE_ADDR_RZ_XTW2_14, /* SVE [<Xn|SP>, Zm.<T>, (S|U)XTW #2].
> +				       Bit 14 controls S/U choice.  */
> +  AARCH64_OPND_SVE_ADDR_RZ_XTW2_22, /* SVE [<Xn|SP>, Zm.<T>, (S|U)XTW #2].
> +				       Bit 22 controls S/U choice.  */
> +  AARCH64_OPND_SVE_ADDR_RZ_XTW3_14, /* SVE [<Xn|SP>, Zm.<T>, (S|U)XTW #3].
> +				       Bit 14 controls S/U choice.  */
> +  AARCH64_OPND_SVE_ADDR_RZ_XTW3_22, /* SVE [<Xn|SP>, Zm.<T>, (S|U)XTW #3].
> +				       Bit 22 controls S/U choice.  */
> +  AARCH64_OPND_SVE_ADDR_ZI_U5,	    /* SVE [Zn.<T>, #<uimm5>].  */
> +  AARCH64_OPND_SVE_ADDR_ZI_U5x2,    /* SVE [Zn.<T>, #<uimm5>*2].  */
> +  AARCH64_OPND_SVE_ADDR_ZI_U5x4,    /* SVE [Zn.<T>, #<uimm5>*4].  */
> +  AARCH64_OPND_SVE_ADDR_ZI_U5x8,    /* SVE [Zn.<T>, #<uimm5>*8].  */
> +  AARCH64_OPND_SVE_ADDR_ZZ_LSL,     /* SVE [Zn.<T>, Zm,<T>, LSL #<msz>].  */
> +  AARCH64_OPND_SVE_ADDR_ZZ_SXTW,    /* SVE [Zn.<T>, Zm,<T>, SXTW #<msz>].  */
> +  AARCH64_OPND_SVE_ADDR_ZZ_UXTW,    /* SVE [Zn.<T>, Zm,<T>, UXTW #<msz>].  */
>    AARCH64_OPND_SVE_PATTERN,	/* SVE vector pattern enumeration.  */
>    AARCH64_OPND_SVE_PATTERN_SCALED, /* Likewise, with additional MUL factor.  */
>    AARCH64_OPND_SVE_PRFOP,	/* SVE prefetch operation.  */
> diff --git a/opcodes/aarch64-asm-2.c b/opcodes/aarch64-asm-2.c
> index 039b9be..47a414c 100644
> --- a/opcodes/aarch64-asm-2.c
> +++ b/opcodes/aarch64-asm-2.c
> @@ -480,21 +480,21 @@ aarch64_insert_operand (const aarch64_operand *self,
>      case 27:
>      case 35:
>      case 36:
> -    case 92:
> -    case 93:
> -    case 94:
> -    case 95:
> -    case 96:
> -    case 97:
> -    case 98:
> -    case 99:
> -    case 100:
> -    case 101:
> -    case 102:
> -    case 103:
> -    case 104:
> -    case 105:
> -    case 108:
> +    case 123:
> +    case 124:
> +    case 125:
> +    case 126:
> +    case 127:
> +    case 128:
> +    case 129:
> +    case 130:
> +    case 131:
> +    case 132:
> +    case 133:
> +    case 134:
> +    case 135:
> +    case 136:
> +    case 139:
>        return aarch64_ins_regno (self, info, code, inst);
>      case 12:
>        return aarch64_ins_reg_extended (self, info, code, inst);
> @@ -531,8 +531,8 @@ aarch64_insert_operand (const aarch64_operand *self,
>      case 68:
>      case 69:
>      case 70:
> -    case 89:
> -    case 91:
> +    case 120:
> +    case 122:
>        return aarch64_ins_imm (self, info, code, inst);
>      case 38:
>      case 39:
> @@ -583,12 +583,50 @@ aarch64_insert_operand (const aarch64_operand *self,
>        return aarch64_ins_prfop (self, info, code, inst);
>      case 88:
>        return aarch64_ins_hint (self, info, code, inst);
> +    case 89:
>      case 90:
> -      return aarch64_ins_sve_scale (self, info, code, inst);
> +    case 91:
> +    case 92:
> +      return aarch64_ins_sve_addr_ri_u6 (self, info, code, inst);
> +    case 93:
> +    case 94:
> +    case 95:
> +    case 96:
> +    case 97:
> +    case 98:
> +    case 99:
> +    case 100:
> +    case 101:
> +    case 102:
> +    case 103:
> +    case 104:
> +      return aarch64_ins_sve_addr_rr_lsl (self, info, code, inst);
> +    case 105:
>      case 106:
> -      return aarch64_ins_sve_index (self, info, code, inst);
>      case 107:
> +    case 108:
>      case 109:
> +    case 110:
> +    case 111:
> +    case 112:
> +      return aarch64_ins_sve_addr_rz_xtw (self, info, code, inst);
> +    case 113:
> +    case 114:
> +    case 115:
> +    case 116:
> +      return aarch64_ins_sve_addr_zi_u5 (self, info, code, inst);
> +    case 117:
> +      return aarch64_ins_sve_addr_zz_lsl (self, info, code, inst);
> +    case 118:
> +      return aarch64_ins_sve_addr_zz_sxtw (self, info, code, inst);
> +    case 119:
> +      return aarch64_ins_sve_addr_zz_uxtw (self, info, code, inst);
> +    case 121:
> +      return aarch64_ins_sve_scale (self, info, code, inst);
> +    case 137:
> +      return aarch64_ins_sve_index (self, info, code, inst);
> +    case 138:
> +    case 140:
>        return aarch64_ins_sve_reglist (self, info, code, inst);
>      default: assert (0); abort ();
>      }
> diff --git a/opcodes/aarch64-asm.c b/opcodes/aarch64-asm.c
> index 117a3c6..0d3b2c7 100644
> --- a/opcodes/aarch64-asm.c
> +++ b/opcodes/aarch64-asm.c
> @@ -745,6 +745,114 @@ aarch64_ins_reg_shifted (const aarch64_operand *self ATTRIBUTE_UNUSED,
>    return NULL;
>  }
>  
> +/* Encode an SVE address [X<n>, #<SVE_imm6> << <shift>], where <SVE_imm6>
> +   is a 6-bit unsigned number and where <shift> is SELF's operand-dependent
> +   value.  fields[0] specifies the base register field.  */
> +const char *
> +aarch64_ins_sve_addr_ri_u6 (const aarch64_operand *self,
> +			    const aarch64_opnd_info *info, aarch64_insn *code,
> +			    const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  int factor = 1 << get_operand_specific_data (self);
> +  insert_field (self->fields[0], code, info->addr.base_regno, 0);
> +  insert_field (FLD_SVE_imm6, code, info->addr.offset.imm / factor, 0);
> +  return NULL;
> +}
> +
> +/* Encode an SVE address [X<n>, X<m>{, LSL #<shift>}], where <shift>
> +   is SELF's operand-dependent value.  fields[0] specifies the base
> +   register field and fields[1] specifies the offset register field.  */
> +const char *
> +aarch64_ins_sve_addr_rr_lsl (const aarch64_operand *self,
> +			     const aarch64_opnd_info *info, aarch64_insn *code,
> +			     const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  insert_field (self->fields[0], code, info->addr.base_regno, 0);
> +  insert_field (self->fields[1], code, info->addr.offset.regno, 0);
> +  return NULL;
> +}
> +
> +/* Encode an SVE address [X<n>, Z<m>.<T>, (S|U)XTW {#<shift>}], where
> +   <shift> is SELF's operand-dependent value.  fields[0] specifies the
> +   base register field, fields[1] specifies the offset register field and
> +   fields[2] is a single-bit field that selects SXTW over UXTW.  */
> +const char *
> +aarch64_ins_sve_addr_rz_xtw (const aarch64_operand *self,
> +			     const aarch64_opnd_info *info, aarch64_insn *code,
> +			     const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  insert_field (self->fields[0], code, info->addr.base_regno, 0);
> +  insert_field (self->fields[1], code, info->addr.offset.regno, 0);
> +  if (info->shifter.kind == AARCH64_MOD_UXTW)
> +    insert_field (self->fields[2], code, 0, 0);
> +  else
> +    insert_field (self->fields[2], code, 1, 0);
> +  return NULL;
> +}
> +
> +/* Encode an SVE address [Z<n>.<T>, #<imm5> << <shift>], where <imm5> is a
> +   5-bit unsigned number and where <shift> is SELF's operand-dependent value.
> +   fields[0] specifies the base register field.  */
> +const char *
> +aarch64_ins_sve_addr_zi_u5 (const aarch64_operand *self,
> +			    const aarch64_opnd_info *info, aarch64_insn *code,
> +			    const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  int factor = 1 << get_operand_specific_data (self);
> +  insert_field (self->fields[0], code, info->addr.base_regno, 0);
> +  insert_field (FLD_imm5, code, info->addr.offset.imm / factor, 0);
> +  return NULL;
> +}
> +
> +/* Encode an SVE address [Z<n>.<T>, Z<m>.<T>{, <modifier> {#<msz>}}],
> +   where <modifier> is fixed by the instruction and where <msz> is a
> +   2-bit unsigned number.  fields[0] specifies the base register field
> +   and fields[1] specifies the offset register field.  */
> +static const char *
> +aarch64_ext_sve_addr_zz (const aarch64_operand *self,
> +			 const aarch64_opnd_info *info, aarch64_insn *code)
> +{
> +  insert_field (self->fields[0], code, info->addr.base_regno, 0);
> +  insert_field (self->fields[1], code, info->addr.offset.regno, 0);
> +  insert_field (FLD_SVE_msz, code, info->shifter.amount, 0);
> +  return NULL;
> +}
> +
> +/* Encode an SVE address [Z<n>.<T>, Z<m>.<T>{, LSL #<msz>}], where
> +   <msz> is a 2-bit unsigned number.  fields[0] specifies the base register
> +   field and fields[1] specifies the offset register field.  */
> +const char *
> +aarch64_ins_sve_addr_zz_lsl (const aarch64_operand *self,
> +			     const aarch64_opnd_info *info, aarch64_insn *code,
> +			     const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  return aarch64_ext_sve_addr_zz (self, info, code);
> +}
> +
> +/* Encode an SVE address [Z<n>.<T>, Z<m>.<T>, SXTW {#<msz>}], where
> +   <msz> is a 2-bit unsigned number.  fields[0] specifies the base register
> +   field and fields[1] specifies the offset register field.  */
> +const char *
> +aarch64_ins_sve_addr_zz_sxtw (const aarch64_operand *self,
> +			      const aarch64_opnd_info *info,
> +			      aarch64_insn *code,
> +			      const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  return aarch64_ext_sve_addr_zz (self, info, code);
> +}
> +
> +/* Encode an SVE address [Z<n>.<T>, Z<m>.<T>, UXTW {#<msz>}], where
> +   <msz> is a 2-bit unsigned number.  fields[0] specifies the base register
> +   field and fields[1] specifies the offset register field.  */
> +const char *
> +aarch64_ins_sve_addr_zz_uxtw (const aarch64_operand *self,
> +			      const aarch64_opnd_info *info,
> +			      aarch64_insn *code,
> +			      const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  return aarch64_ext_sve_addr_zz (self, info, code);
> +}
> +
>  /* Encode Zn[MM], where MM has a 7-bit triangular encoding.  The fields
>     array specifies which field to use for Zn.  MM is encoded in the
>     concatenation of imm5 and SVE_tszh, with imm5 being the less
> diff --git a/opcodes/aarch64-asm.h b/opcodes/aarch64-asm.h
> index ac5faeb..b81cfa1 100644
> --- a/opcodes/aarch64-asm.h
> +++ b/opcodes/aarch64-asm.h
> @@ -69,6 +69,13 @@ AARCH64_DECL_OPD_INSERTER (ins_hint);
>  AARCH64_DECL_OPD_INSERTER (ins_prfop);
>  AARCH64_DECL_OPD_INSERTER (ins_reg_extended);
>  AARCH64_DECL_OPD_INSERTER (ins_reg_shifted);
> +AARCH64_DECL_OPD_INSERTER (ins_sve_addr_ri_u6);
> +AARCH64_DECL_OPD_INSERTER (ins_sve_addr_rr_lsl);
> +AARCH64_DECL_OPD_INSERTER (ins_sve_addr_rz_xtw);
> +AARCH64_DECL_OPD_INSERTER (ins_sve_addr_zi_u5);
> +AARCH64_DECL_OPD_INSERTER (ins_sve_addr_zz_lsl);
> +AARCH64_DECL_OPD_INSERTER (ins_sve_addr_zz_sxtw);
> +AARCH64_DECL_OPD_INSERTER (ins_sve_addr_zz_uxtw);
>  AARCH64_DECL_OPD_INSERTER (ins_sve_index);
>  AARCH64_DECL_OPD_INSERTER (ins_sve_reglist);
>  AARCH64_DECL_OPD_INSERTER (ins_sve_scale);
> diff --git a/opcodes/aarch64-dis-2.c b/opcodes/aarch64-dis-2.c
> index 124385d..3dd714f 100644
> --- a/opcodes/aarch64-dis-2.c
> +++ b/opcodes/aarch64-dis-2.c
> @@ -10426,21 +10426,21 @@ aarch64_extract_operand (const aarch64_operand *self,
>      case 27:
>      case 35:
>      case 36:
> -    case 92:
> -    case 93:
> -    case 94:
> -    case 95:
> -    case 96:
> -    case 97:
> -    case 98:
> -    case 99:
> -    case 100:
> -    case 101:
> -    case 102:
> -    case 103:
> -    case 104:
> -    case 105:
> -    case 108:
> +    case 123:
> +    case 124:
> +    case 125:
> +    case 126:
> +    case 127:
> +    case 128:
> +    case 129:
> +    case 130:
> +    case 131:
> +    case 132:
> +    case 133:
> +    case 134:
> +    case 135:
> +    case 136:
> +    case 139:
>        return aarch64_ext_regno (self, info, code, inst);
>      case 8:
>        return aarch64_ext_regrt_sysins (self, info, code, inst);
> @@ -10482,8 +10482,8 @@ aarch64_extract_operand (const aarch64_operand *self,
>      case 68:
>      case 69:
>      case 70:
> -    case 89:
> -    case 91:
> +    case 120:
> +    case 122:
>        return aarch64_ext_imm (self, info, code, inst);
>      case 38:
>      case 39:
> @@ -10536,12 +10536,50 @@ aarch64_extract_operand (const aarch64_operand *self,
>        return aarch64_ext_prfop (self, info, code, inst);
>      case 88:
>        return aarch64_ext_hint (self, info, code, inst);
> +    case 89:
>      case 90:
> -      return aarch64_ext_sve_scale (self, info, code, inst);
> +    case 91:
> +    case 92:
> +      return aarch64_ext_sve_addr_ri_u6 (self, info, code, inst);
> +    case 93:
> +    case 94:
> +    case 95:
> +    case 96:
> +    case 97:
> +    case 98:
> +    case 99:
> +    case 100:
> +    case 101:
> +    case 102:
> +    case 103:
> +    case 104:
> +      return aarch64_ext_sve_addr_rr_lsl (self, info, code, inst);
> +    case 105:
>      case 106:
> -      return aarch64_ext_sve_index (self, info, code, inst);
>      case 107:
> +    case 108:
>      case 109:
> +    case 110:
> +    case 111:
> +    case 112:
> +      return aarch64_ext_sve_addr_rz_xtw (self, info, code, inst);
> +    case 113:
> +    case 114:
> +    case 115:
> +    case 116:
> +      return aarch64_ext_sve_addr_zi_u5 (self, info, code, inst);
> +    case 117:
> +      return aarch64_ext_sve_addr_zz_lsl (self, info, code, inst);
> +    case 118:
> +      return aarch64_ext_sve_addr_zz_sxtw (self, info, code, inst);
> +    case 119:
> +      return aarch64_ext_sve_addr_zz_uxtw (self, info, code, inst);
> +    case 121:
> +      return aarch64_ext_sve_scale (self, info, code, inst);
> +    case 137:
> +      return aarch64_ext_sve_index (self, info, code, inst);
> +    case 138:
> +    case 140:
>        return aarch64_ext_sve_reglist (self, info, code, inst);
>      default: assert (0); abort ();
>      }
> diff --git a/opcodes/aarch64-dis.c b/opcodes/aarch64-dis.c
> index 1d00c0a..ed77b4d 100644
> --- a/opcodes/aarch64-dis.c
> +++ b/opcodes/aarch64-dis.c
> @@ -1186,6 +1186,152 @@ aarch64_ext_reg_shifted (const aarch64_operand *self ATTRIBUTE_UNUSED,
>    return 1;
>  }
>  
> +/* Decode an SVE address [<base>, #<offset> << <shift>], where <offset>
> +   is given by the OFFSET parameter and where <shift> is SELF's operand-
> +   dependent value.  fields[0] specifies the base register field <base>.  */
> +static int
> +aarch64_ext_sve_addr_reg_imm (const aarch64_operand *self,
> +			      aarch64_opnd_info *info, aarch64_insn code,
> +			      int64_t offset)
> +{
> +  info->addr.base_regno = extract_field (self->fields[0], code, 0);
> +  info->addr.offset.imm = offset * (1 << get_operand_specific_data (self));
> +  info->addr.offset.is_reg = FALSE;
> +  info->addr.writeback = FALSE;
> +  info->addr.preind = TRUE;
> +  info->shifter.operator_present = FALSE;
> +  info->shifter.amount_present = FALSE;
> +  return 1;
> +}
> +
> +/* Decode an SVE address [X<n>, #<SVE_imm6> << <shift>], where <SVE_imm6>
> +   is a 6-bit unsigned number and where <shift> is SELF's operand-dependent
> +   value.  fields[0] specifies the base register field.  */
> +int
> +aarch64_ext_sve_addr_ri_u6 (const aarch64_operand *self,
> +			    aarch64_opnd_info *info, aarch64_insn code,
> +			    const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  int offset = extract_field (FLD_SVE_imm6, code, 0);
> +  return aarch64_ext_sve_addr_reg_imm (self, info, code, offset);
> +}
> +
> +/* Decode an SVE address [X<n>, X<m>{, LSL #<shift>}], where <shift>
> +   is SELF's operand-dependent value.  fields[0] specifies the base
> +   register field and fields[1] specifies the offset register field.  */
> +int
> +aarch64_ext_sve_addr_rr_lsl (const aarch64_operand *self,
> +			     aarch64_opnd_info *info, aarch64_insn code,
> +			     const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  int index;
> +
> +  index = extract_field (self->fields[1], code, 0);
> +  if (index == 31 && (self->flags & OPD_F_NO_ZR) != 0)
> +    return 0;
> +
> +  info->addr.base_regno = extract_field (self->fields[0], code, 0);
> +  info->addr.offset.regno = index;
> +  info->addr.offset.is_reg = TRUE;
> +  info->addr.writeback = FALSE;
> +  info->addr.preind = TRUE;
> +  info->shifter.kind = AARCH64_MOD_LSL;
> +  info->shifter.amount = get_operand_specific_data (self);
> +  info->shifter.operator_present = (info->shifter.amount != 0);
> +  info->shifter.amount_present = (info->shifter.amount != 0);
> +  return 1;
> +}
> +
> +/* Decode an SVE address [X<n>, Z<m>.<T>, (S|U)XTW {#<shift>}], where
> +   <shift> is SELF's operand-dependent value.  fields[0] specifies the
> +   base register field, fields[1] specifies the offset register field and
> +   fields[2] is a single-bit field that selects SXTW over UXTW.  */
> +int
> +aarch64_ext_sve_addr_rz_xtw (const aarch64_operand *self,
> +			     aarch64_opnd_info *info, aarch64_insn code,
> +			     const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  info->addr.base_regno = extract_field (self->fields[0], code, 0);
> +  info->addr.offset.regno = extract_field (self->fields[1], code, 0);
> +  info->addr.offset.is_reg = TRUE;
> +  info->addr.writeback = FALSE;
> +  info->addr.preind = TRUE;
> +  if (extract_field (self->fields[2], code, 0))
> +    info->shifter.kind = AARCH64_MOD_SXTW;
> +  else
> +    info->shifter.kind = AARCH64_MOD_UXTW;
> +  info->shifter.amount = get_operand_specific_data (self);
> +  info->shifter.operator_present = TRUE;
> +  info->shifter.amount_present = (info->shifter.amount != 0);
> +  return 1;
> +}
> +
> +/* Decode an SVE address [Z<n>.<T>, #<imm5> << <shift>], where <imm5> is a
> +   5-bit unsigned number and where <shift> is SELF's operand-dependent value.
> +   fields[0] specifies the base register field.  */
> +int
> +aarch64_ext_sve_addr_zi_u5 (const aarch64_operand *self,
> +			    aarch64_opnd_info *info, aarch64_insn code,
> +			    const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  int offset = extract_field (FLD_imm5, code, 0);
> +  return aarch64_ext_sve_addr_reg_imm (self, info, code, offset);
> +}
> +
> +/* Decode an SVE address [Z<n>.<T>, Z<m>.<T>{, <modifier> {#<msz>}}],
> +   where <modifier> is given by KIND and where <msz> is a 2-bit unsigned
> +   number.  fields[0] specifies the base register field and fields[1]
> +   specifies the offset register field.  */
> +static int
> +aarch64_ext_sve_addr_zz (const aarch64_operand *self, aarch64_opnd_info *info,
> +			 aarch64_insn code, enum aarch64_modifier_kind kind)
> +{
> +  info->addr.base_regno = extract_field (self->fields[0], code, 0);
> +  info->addr.offset.regno = extract_field (self->fields[1], code, 0);
> +  info->addr.offset.is_reg = TRUE;
> +  info->addr.writeback = FALSE;
> +  info->addr.preind = TRUE;
> +  info->shifter.kind = kind;
> +  info->shifter.amount = extract_field (FLD_SVE_msz, code, 0);
> +  info->shifter.operator_present = (kind != AARCH64_MOD_LSL
> +				    || info->shifter.amount != 0);
> +  info->shifter.amount_present = (info->shifter.amount != 0);
> +  return 1;
> +}
> +
> +/* Decode an SVE address [Z<n>.<T>, Z<m>.<T>{, LSL #<msz>}], where
> +   <msz> is a 2-bit unsigned number.  fields[0] specifies the base register
> +   field and fields[1] specifies the offset register field.  */
> +int
> +aarch64_ext_sve_addr_zz_lsl (const aarch64_operand *self,
> +			     aarch64_opnd_info *info, aarch64_insn code,
> +			     const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  return aarch64_ext_sve_addr_zz (self, info, code, AARCH64_MOD_LSL);
> +}
> +
> +/* Decode an SVE address [Z<n>.<T>, Z<m>.<T>, SXTW {#<msz>}], where
> +   <msz> is a 2-bit unsigned number.  fields[0] specifies the base register
> +   field and fields[1] specifies the offset register field.  */
> +int
> +aarch64_ext_sve_addr_zz_sxtw (const aarch64_operand *self,
> +			      aarch64_opnd_info *info, aarch64_insn code,
> +			      const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  return aarch64_ext_sve_addr_zz (self, info, code, AARCH64_MOD_SXTW);
> +}
> +
> +/* Decode an SVE address [Z<n>.<T>, Z<m>.<T>, UXTW {#<msz>}], where
> +   <msz> is a 2-bit unsigned number.  fields[0] specifies the base register
> +   field and fields[1] specifies the offset register field.  */
> +int
> +aarch64_ext_sve_addr_zz_uxtw (const aarch64_operand *self,
> +			      aarch64_opnd_info *info, aarch64_insn code,
> +			      const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  return aarch64_ext_sve_addr_zz (self, info, code, AARCH64_MOD_UXTW);
> +}
> +
>  /* Decode Zn[MM], where MM has a 7-bit triangular encoding.  The fields
>     array specifies which field to use for Zn.  MM is encoded in the
>     concatenation of imm5 and SVE_tszh, with imm5 being the less
> diff --git a/opcodes/aarch64-dis.h b/opcodes/aarch64-dis.h
> index 92f5ad4..0ce2d89 100644
> --- a/opcodes/aarch64-dis.h
> +++ b/opcodes/aarch64-dis.h
> @@ -91,6 +91,13 @@ AARCH64_DECL_OPD_EXTRACTOR (ext_hint);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_prfop);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_reg_extended);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_reg_shifted);
> +AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_ri_u6);
> +AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_rr_lsl);
> +AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_rz_xtw);
> +AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_zi_u5);
> +AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_zz_lsl);
> +AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_zz_sxtw);
> +AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_zz_uxtw);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_sve_index);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_sve_reglist);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_sve_scale);
> diff --git a/opcodes/aarch64-opc-2.c b/opcodes/aarch64-opc-2.c
> index 8f221b8..ed2b70b 100644
> --- a/opcodes/aarch64-opc-2.c
> +++ b/opcodes/aarch64-opc-2.c
> @@ -113,6 +113,37 @@ const struct aarch64_operand aarch64_operands[] =
>    {AARCH64_OPND_CLASS_SYSTEM, "BARRIER_ISB", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {}, "the ISB option name SY or an optional 4-bit unsigned immediate"},
>    {AARCH64_OPND_CLASS_SYSTEM, "PRFOP", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {}, "a prefetch operation specifier"},
>    {AARCH64_OPND_CLASS_SYSTEM, "BARRIER_PSB", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {}, "the PSB option name CSYNC"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RI_U6", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn}, "an address with a 6-bit unsigned offset"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RI_U6x2", 1 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn}, "an address with a 6-bit unsigned offset, multiplied by 2"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RI_U6x4", 2 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn}, "an address with a 6-bit unsigned offset, multiplied by 4"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RI_U6x8", 3 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn}, "an address with a 6-bit unsigned offset, multiplied by 8"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RR", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_Rm}, "an address with a scalar register offset"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RR_LSL1", 1 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_Rm}, "an address with a scalar register offset"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RR_LSL2", 2 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_Rm}, "an address with a scalar register offset"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RR_LSL3", 3 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_Rm}, "an address with a scalar register offset"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RX", (0 << OPD_F_OD_LSB) | OPD_F_NO_ZR | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_Rm}, "an address with a scalar register offset"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RX_LSL1", (1 << OPD_F_OD_LSB) | OPD_F_NO_ZR | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_Rm}, "an address with a scalar register offset"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RX_LSL2", (2 << OPD_F_OD_LSB) | OPD_F_NO_ZR | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_Rm}, "an address with a scalar register offset"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RX_LSL3", (3 << OPD_F_OD_LSB) | OPD_F_NO_ZR | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_Rm}, "an address with a scalar register offset"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RZ", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_SVE_Zm_16}, "an address with a vector register offset"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RZ_LSL1", 1 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_SVE_Zm_16}, "an address with a vector register offset"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RZ_LSL2", 2 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_SVE_Zm_16}, "an address with a vector register offset"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RZ_LSL3", 3 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_SVE_Zm_16}, "an address with a vector register offset"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RZ_XTW_14", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_14}, "an address with a vector register offset"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RZ_XTW_22", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_22}, "an address with a vector register offset"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RZ_XTW1_14", 1 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_14}, "an address with a vector register offset"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RZ_XTW1_22", 1 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_22}, "an address with a vector register offset"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RZ_XTW2_14", 2 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_14}, "an address with a vector register offset"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RZ_XTW2_22", 2 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_22}, "an address with a vector register offset"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RZ_XTW3_14", 3 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_14}, "an address with a vector register offset"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RZ_XTW3_22", 3 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_22}, "an address with a vector register offset"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_ZI_U5", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zn}, "an address with a 5-bit unsigned offset"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_ZI_U5x2", 1 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zn}, "an address with a 5-bit unsigned offset, multiplied by 2"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_ZI_U5x4", 2 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zn}, "an address with a 5-bit unsigned offset, multiplied by 4"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_ZI_U5x8", 3 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zn}, "an address with a 5-bit unsigned offset, multiplied by 8"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_ZZ_LSL", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zn,FLD_SVE_Zm_16}, "an address with a vector register offset"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_ZZ_SXTW", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zn,FLD_SVE_Zm_16}, "an address with a vector register offset"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_ZZ_UXTW", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zn,FLD_SVE_Zm_16}, "an address with a vector register offset"},
>    {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_PATTERN", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_pattern}, "an enumeration value such as POW2"},
>    {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_PATTERN_SCALED", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_pattern}, "an enumeration value such as POW2"},
>    {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_PRFOP", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_prfop}, "an enumeration value such as PLDL1KEEP"},
> diff --git a/opcodes/aarch64-opc.c b/opcodes/aarch64-opc.c
> index 326b94e..6617e28 100644
> --- a/opcodes/aarch64-opc.c
> +++ b/opcodes/aarch64-opc.c
> @@ -280,9 +280,13 @@ const aarch64_field fields[] =
>      {  5,  5 }, /* SVE_Zn: SVE vector register, bits [9,5].  */
>      {  0,  5 }, /* SVE_Zt: SVE vector register, bits [4,0].  */
>      { 16,  4 }, /* SVE_imm4: 4-bit immediate field.  */
> +    { 16,  6 }, /* SVE_imm6: 6-bit immediate field.  */
> +    { 10,  2 }, /* SVE_msz: 2-bit shift amount for ADR.  */
>      {  5,  5 }, /* SVE_pattern: vector pattern enumeration.  */
>      {  0,  4 }, /* SVE_prfop: prefetch operation for SVE PRF[BHWD].  */
>      { 22,  2 }, /* SVE_tszh: triangular size select high, bits [23,22].  */
> +    { 14,  1 }, /* SVE_xs_14: UXTW/SXTW select (bit 14).  */
> +    { 22,  1 }  /* SVE_xs_22: UXTW/SXTW select (bit 22).  */
>  };
>  
>  enum aarch64_operand_class
> @@ -1368,9 +1372,9 @@ operand_general_constraint_met_p (const aarch64_opnd_info *opnds, int idx,
>  				  const aarch64_opcode *opcode,
>  				  aarch64_operand_error *mismatch_detail)
>  {
> -  unsigned num;
> +  unsigned num, modifiers;
>    unsigned char size;
> -  int64_t imm;
> +  int64_t imm, min_value, max_value;
>    const aarch64_opnd_info *opnd = opnds + idx;
>    aarch64_opnd_qualifier_t qualifier = opnd->qualifier;
>  
> @@ -1662,6 +1666,113 @@ operand_general_constraint_met_p (const aarch64_opnd_info *opnds, int idx,
>  	    }
>  	  break;
>  
> +	case AARCH64_OPND_SVE_ADDR_RI_U6:
> +	case AARCH64_OPND_SVE_ADDR_RI_U6x2:
> +	case AARCH64_OPND_SVE_ADDR_RI_U6x4:
> +	case AARCH64_OPND_SVE_ADDR_RI_U6x8:
> +	  min_value = 0;
> +	  max_value = 63;
> +	sve_imm_offset:
> +	  assert (!opnd->addr.offset.is_reg);
> +	  assert (opnd->addr.preind);
> +	  num = 1 << get_operand_specific_data (&aarch64_operands[type]);
> +	  min_value *= num;
> +	  max_value *= num;
> +	  if (opnd->shifter.operator_present
> +	      || opnd->shifter.amount_present)
> +	    {
> +	      set_other_error (mismatch_detail, idx,
> +			       _("invalid addressing mode"));
> +	      return 0;
> +	    }
> +	  if (!value_in_range_p (opnd->addr.offset.imm, min_value, max_value))
> +	    {
> +	      set_offset_out_of_range_error (mismatch_detail, idx,
> +					     min_value, max_value);
> +	      return 0;
> +	    }
> +	  if (!value_aligned_p (opnd->addr.offset.imm, num))
> +	    {
> +	      set_unaligned_error (mismatch_detail, idx, num);
> +	      return 0;
> +	    }
> +	  break;
> +
> +	case AARCH64_OPND_SVE_ADDR_RR:
> +	case AARCH64_OPND_SVE_ADDR_RR_LSL1:
> +	case AARCH64_OPND_SVE_ADDR_RR_LSL2:
> +	case AARCH64_OPND_SVE_ADDR_RR_LSL3:
> +	case AARCH64_OPND_SVE_ADDR_RX:
> +	case AARCH64_OPND_SVE_ADDR_RX_LSL1:
> +	case AARCH64_OPND_SVE_ADDR_RX_LSL2:
> +	case AARCH64_OPND_SVE_ADDR_RX_LSL3:
> +	case AARCH64_OPND_SVE_ADDR_RZ:
> +	case AARCH64_OPND_SVE_ADDR_RZ_LSL1:
> +	case AARCH64_OPND_SVE_ADDR_RZ_LSL2:
> +	case AARCH64_OPND_SVE_ADDR_RZ_LSL3:
> +	  modifiers = 1 << AARCH64_MOD_LSL;
> +	sve_rr_operand:
> +	  assert (opnd->addr.offset.is_reg);
> +	  assert (opnd->addr.preind);
> +	  if ((aarch64_operands[type].flags & OPD_F_NO_ZR) != 0
> +	      && opnd->addr.offset.regno == 31)
> +	    {
> +	      set_other_error (mismatch_detail, idx,
> +			       _("index register xzr is not allowed"));
> +	      return 0;
> +	    }
> +	  if (((1 << opnd->shifter.kind) & modifiers) == 0
> +	      || (opnd->shifter.amount
> +		  != get_operand_specific_data (&aarch64_operands[type])))
> +	    {
> +	      set_other_error (mismatch_detail, idx,
> +			       _("invalid addressing mode"));
> +	      return 0;
> +	    }
> +	  break;
> +
> +	case AARCH64_OPND_SVE_ADDR_RZ_XTW_14:
> +	case AARCH64_OPND_SVE_ADDR_RZ_XTW_22:
> +	case AARCH64_OPND_SVE_ADDR_RZ_XTW1_14:
> +	case AARCH64_OPND_SVE_ADDR_RZ_XTW1_22:
> +	case AARCH64_OPND_SVE_ADDR_RZ_XTW2_14:
> +	case AARCH64_OPND_SVE_ADDR_RZ_XTW2_22:
> +	case AARCH64_OPND_SVE_ADDR_RZ_XTW3_14:
> +	case AARCH64_OPND_SVE_ADDR_RZ_XTW3_22:
> +	  modifiers = (1 << AARCH64_MOD_SXTW) | (1 << AARCH64_MOD_UXTW);
> +	  goto sve_rr_operand;
> +
> +	case AARCH64_OPND_SVE_ADDR_ZI_U5:
> +	case AARCH64_OPND_SVE_ADDR_ZI_U5x2:
> +	case AARCH64_OPND_SVE_ADDR_ZI_U5x4:
> +	case AARCH64_OPND_SVE_ADDR_ZI_U5x8:
> +	  min_value = 0;
> +	  max_value = 31;
> +	  goto sve_imm_offset;
> +
> +	case AARCH64_OPND_SVE_ADDR_ZZ_LSL:
> +	  modifiers = 1 << AARCH64_MOD_LSL;
> +	sve_zz_operand:
> +	  assert (opnd->addr.offset.is_reg);
> +	  assert (opnd->addr.preind);
> +	  if (((1 << opnd->shifter.kind) & modifiers) == 0
> +	      || opnd->shifter.amount < 0
> +	      || opnd->shifter.amount > 3)
> +	    {
> +	      set_other_error (mismatch_detail, idx,
> +			       _("invalid addressing mode"));
> +	      return 0;
> +	    }
> +	  break;
> +
> +	case AARCH64_OPND_SVE_ADDR_ZZ_SXTW:
> +	  modifiers = (1 << AARCH64_MOD_SXTW);
> +	  goto sve_zz_operand;
> +
> +	case AARCH64_OPND_SVE_ADDR_ZZ_UXTW:
> +	  modifiers = 1 << AARCH64_MOD_UXTW;
> +	  goto sve_zz_operand;
> +
>  	default:
>  	  break;
>  	}
> @@ -2330,6 +2441,17 @@ static const char *int_reg[2][2][32] = {
>  #undef R64
>  #undef R32
>  };
> +
> +/* Names of the SVE vector registers, first with .S suffixes,
> +   then with .D suffixes.  */
> +
> +static const char *sve_reg[2][32] = {
> +#define ZS(X) "z" #X ".s"
> +#define ZD(X) "z" #X ".d"
> +  BANK (ZS, ZS (31)), BANK (ZD, ZD (31))
> +#undef ZD
> +#undef ZS
> +};
>  #undef BANK
>  
>  /* Return the integer register name.
> @@ -2373,6 +2495,17 @@ get_offset_int_reg_name (const aarch64_opnd_info *opnd)
>      }
>  }
>  
> +/* Get the name of the SVE vector offset register in OPND, using the operand
> +   qualifier to decide whether the suffix should be .S or .D.  */
> +
> +static inline const char *
> +get_addr_sve_reg_name (int regno, aarch64_opnd_qualifier_t qualifier)
> +{
> +  assert (qualifier == AARCH64_OPND_QLF_S_S
> +	  || qualifier == AARCH64_OPND_QLF_S_D);
> +  return sve_reg[qualifier == AARCH64_OPND_QLF_S_D][regno];
> +}
> +
>  /* Types for expanding an encoded 8-bit value to a floating-point value.  */
>  
>  typedef union
> @@ -2948,18 +3081,65 @@ aarch64_print_operand (char *buf, size_t size, bfd_vma pc,
>        break;
>  
>      case AARCH64_OPND_ADDR_REGOFF:
> +    case AARCH64_OPND_SVE_ADDR_RR:
> +    case AARCH64_OPND_SVE_ADDR_RR_LSL1:
> +    case AARCH64_OPND_SVE_ADDR_RR_LSL2:
> +    case AARCH64_OPND_SVE_ADDR_RR_LSL3:
> +    case AARCH64_OPND_SVE_ADDR_RX:
> +    case AARCH64_OPND_SVE_ADDR_RX_LSL1:
> +    case AARCH64_OPND_SVE_ADDR_RX_LSL2:
> +    case AARCH64_OPND_SVE_ADDR_RX_LSL3:
>        print_register_offset_address
>  	(buf, size, opnd, get_64bit_int_reg_name (opnd->addr.base_regno, 1),
>  	 get_offset_int_reg_name (opnd));
>        break;
>  
> +    case AARCH64_OPND_SVE_ADDR_RZ:
> +    case AARCH64_OPND_SVE_ADDR_RZ_LSL1:
> +    case AARCH64_OPND_SVE_ADDR_RZ_LSL2:
> +    case AARCH64_OPND_SVE_ADDR_RZ_LSL3:
> +    case AARCH64_OPND_SVE_ADDR_RZ_XTW_14:
> +    case AARCH64_OPND_SVE_ADDR_RZ_XTW_22:
> +    case AARCH64_OPND_SVE_ADDR_RZ_XTW1_14:
> +    case AARCH64_OPND_SVE_ADDR_RZ_XTW1_22:
> +    case AARCH64_OPND_SVE_ADDR_RZ_XTW2_14:
> +    case AARCH64_OPND_SVE_ADDR_RZ_XTW2_22:
> +    case AARCH64_OPND_SVE_ADDR_RZ_XTW3_14:
> +    case AARCH64_OPND_SVE_ADDR_RZ_XTW3_22:
> +      print_register_offset_address
> +	(buf, size, opnd, get_64bit_int_reg_name (opnd->addr.base_regno, 1),
> +	 get_addr_sve_reg_name (opnd->addr.offset.regno, opnd->qualifier));
> +      break;
> +
>      case AARCH64_OPND_ADDR_SIMM7:
>      case AARCH64_OPND_ADDR_SIMM9:
>      case AARCH64_OPND_ADDR_SIMM9_2:
> +    case AARCH64_OPND_SVE_ADDR_RI_U6:
> +    case AARCH64_OPND_SVE_ADDR_RI_U6x2:
> +    case AARCH64_OPND_SVE_ADDR_RI_U6x4:
> +    case AARCH64_OPND_SVE_ADDR_RI_U6x8:
>        print_immediate_offset_address
>  	(buf, size, opnd, get_64bit_int_reg_name (opnd->addr.base_regno, 1));
>        break;
>  
> +    case AARCH64_OPND_SVE_ADDR_ZI_U5:
> +    case AARCH64_OPND_SVE_ADDR_ZI_U5x2:
> +    case AARCH64_OPND_SVE_ADDR_ZI_U5x4:
> +    case AARCH64_OPND_SVE_ADDR_ZI_U5x8:
> +      print_immediate_offset_address
> +	(buf, size, opnd,
> +	 get_addr_sve_reg_name (opnd->addr.base_regno, opnd->qualifier));
> +      break;
> +
> +    case AARCH64_OPND_SVE_ADDR_ZZ_LSL:
> +    case AARCH64_OPND_SVE_ADDR_ZZ_SXTW:
> +    case AARCH64_OPND_SVE_ADDR_ZZ_UXTW:
> +      print_register_offset_address
> +	(buf, size, opnd,
> +	 get_addr_sve_reg_name (opnd->addr.base_regno, opnd->qualifier),
> +	 get_addr_sve_reg_name (opnd->addr.offset.regno, opnd->qualifier));
> +      break;
> +
>      case AARCH64_OPND_ADDR_UIMM12:
>        name = get_64bit_int_reg_name (opnd->addr.base_regno, 1);
>        if (opnd->addr.offset.imm)
> diff --git a/opcodes/aarch64-opc.h b/opcodes/aarch64-opc.h
> index 3406f6e..e823146 100644
> --- a/opcodes/aarch64-opc.h
> +++ b/opcodes/aarch64-opc.h
> @@ -107,9 +107,13 @@ enum aarch64_field_kind
>    FLD_SVE_Zn,
>    FLD_SVE_Zt,
>    FLD_SVE_imm4,
> +  FLD_SVE_imm6,
> +  FLD_SVE_msz,
>    FLD_SVE_pattern,
>    FLD_SVE_prfop,
>    FLD_SVE_tszh,
> +  FLD_SVE_xs_14,
> +  FLD_SVE_xs_22,
>  };
>  
>  /* Field description.  */
> @@ -156,6 +160,9 @@ extern const aarch64_operand aarch64_operands[];
>  						   value by 2 to get the value
>  						   of an immediate operand.  */
>  #define OPD_F_MAYBE_SP		0x00000010	/* May potentially be SP.  */
> +#define OPD_F_OD_MASK		0x00000060	/* Operand-dependent data.  */
> +#define OPD_F_OD_LSB		5
> +#define OPD_F_NO_ZR		0x00000080	/* ZR index not allowed.  */
>  
>  static inline bfd_boolean
>  operand_has_inserter (const aarch64_operand *operand)
> @@ -187,6 +194,13 @@ operand_maybe_stack_pointer (const aarch64_operand *operand)
>    return (operand->flags & OPD_F_MAYBE_SP) ? TRUE : FALSE;
>  }
>  
> +/* Return the value of the operand-specific data field (OPD_F_OD_MASK).  */
> +static inline unsigned int
> +get_operand_specific_data (const aarch64_operand *operand)
> +{
> +  return (operand->flags & OPD_F_OD_MASK) >> OPD_F_OD_LSB;
> +}
> +
>  /* Return the total width of the operand *OPERAND.  */
>  static inline unsigned
>  get_operand_fields_width (const aarch64_operand *operand)
> diff --git a/opcodes/aarch64-tbl.h b/opcodes/aarch64-tbl.h
> index 491235f..aba4b2d 100644
> --- a/opcodes/aarch64-tbl.h
> +++ b/opcodes/aarch64-tbl.h
> @@ -2818,8 +2818,95 @@ struct aarch64_opcode aarch64_opcode_table[] =
>        "the ISB option name SY or an optional 4-bit unsigned immediate")	\
>      Y(SYSTEM, prfop, "PRFOP", 0, F(),					\
>        "a prefetch operation specifier")					\
> -    Y (SYSTEM, hint, "BARRIER_PSB", 0, F (),				\
> +    Y(SYSTEM, hint, "BARRIER_PSB", 0, F (),				\
>        "the PSB option name CSYNC")					\
> +    Y(ADDRESS, sve_addr_ri_u6, "SVE_ADDR_RI_U6", 0 << OPD_F_OD_LSB,	\
> +      F(FLD_Rn), "an address with a 6-bit unsigned offset")		\
> +    Y(ADDRESS, sve_addr_ri_u6, "SVE_ADDR_RI_U6x2", 1 << OPD_F_OD_LSB,	\
> +      F(FLD_Rn),							\
> +      "an address with a 6-bit unsigned offset, multiplied by 2")	\
> +    Y(ADDRESS, sve_addr_ri_u6, "SVE_ADDR_RI_U6x4", 2 << OPD_F_OD_LSB,	\
> +      F(FLD_Rn),							\
> +      "an address with a 6-bit unsigned offset, multiplied by 4")	\
> +    Y(ADDRESS, sve_addr_ri_u6, "SVE_ADDR_RI_U6x8", 3 << OPD_F_OD_LSB,	\
> +      F(FLD_Rn),							\
> +      "an address with a 6-bit unsigned offset, multiplied by 8")	\
> +    Y(ADDRESS, sve_addr_rr_lsl, "SVE_ADDR_RR", 0 << OPD_F_OD_LSB,	\
> +      F(FLD_Rn,FLD_Rm), "an address with a scalar register offset")	\
> +    Y(ADDRESS, sve_addr_rr_lsl, "SVE_ADDR_RR_LSL1", 1 << OPD_F_OD_LSB,	\
> +      F(FLD_Rn,FLD_Rm), "an address with a scalar register offset")	\
> +    Y(ADDRESS, sve_addr_rr_lsl, "SVE_ADDR_RR_LSL2", 2 << OPD_F_OD_LSB,	\
> +      F(FLD_Rn,FLD_Rm), "an address with a scalar register offset")	\
> +    Y(ADDRESS, sve_addr_rr_lsl, "SVE_ADDR_RR_LSL3", 3 << OPD_F_OD_LSB,	\
> +      F(FLD_Rn,FLD_Rm), "an address with a scalar register offset")	\
> +    Y(ADDRESS, sve_addr_rr_lsl, "SVE_ADDR_RX",				\
> +      (0 << OPD_F_OD_LSB) | OPD_F_NO_ZR, F(FLD_Rn,FLD_Rm),		\
> +      "an address with a scalar register offset")			\
> +    Y(ADDRESS, sve_addr_rr_lsl, "SVE_ADDR_RX_LSL1",			\
> +      (1 << OPD_F_OD_LSB) | OPD_F_NO_ZR, F(FLD_Rn,FLD_Rm),		\
> +      "an address with a scalar register offset")			\
> +    Y(ADDRESS, sve_addr_rr_lsl, "SVE_ADDR_RX_LSL2",			\
> +      (2 << OPD_F_OD_LSB) | OPD_F_NO_ZR, F(FLD_Rn,FLD_Rm),		\
> +      "an address with a scalar register offset")			\
> +    Y(ADDRESS, sve_addr_rr_lsl, "SVE_ADDR_RX_LSL3",			\
> +      (3 << OPD_F_OD_LSB) | OPD_F_NO_ZR, F(FLD_Rn,FLD_Rm),		\
> +      "an address with a scalar register offset")			\
> +    Y(ADDRESS, sve_addr_rr_lsl, "SVE_ADDR_RZ", 0 << OPD_F_OD_LSB,	\
> +      F(FLD_Rn,FLD_SVE_Zm_16),						\
> +      "an address with a vector register offset")			\
> +    Y(ADDRESS, sve_addr_rr_lsl, "SVE_ADDR_RZ_LSL1", 1 << OPD_F_OD_LSB,	\
> +      F(FLD_Rn,FLD_SVE_Zm_16),						\
> +      "an address with a vector register offset")			\
> +    Y(ADDRESS, sve_addr_rr_lsl, "SVE_ADDR_RZ_LSL2", 2 << OPD_F_OD_LSB,	\
> +      F(FLD_Rn,FLD_SVE_Zm_16),						\
> +      "an address with a vector register offset")			\
> +    Y(ADDRESS, sve_addr_rr_lsl, "SVE_ADDR_RZ_LSL3", 3 << OPD_F_OD_LSB,	\
> +      F(FLD_Rn,FLD_SVE_Zm_16),						\
> +      "an address with a vector register offset")			\
> +    Y(ADDRESS, sve_addr_rz_xtw, "SVE_ADDR_RZ_XTW_14",			\
> +      0 << OPD_F_OD_LSB, F(FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_14),		\
> +      "an address with a vector register offset")			\
> +    Y(ADDRESS, sve_addr_rz_xtw, "SVE_ADDR_RZ_XTW_22",			\
> +      0 << OPD_F_OD_LSB, F(FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_22),		\
> +      "an address with a vector register offset")			\
> +    Y(ADDRESS, sve_addr_rz_xtw, "SVE_ADDR_RZ_XTW1_14",			\
> +      1 << OPD_F_OD_LSB, F(FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_14),		\
> +      "an address with a vector register offset")			\
> +    Y(ADDRESS, sve_addr_rz_xtw, "SVE_ADDR_RZ_XTW1_22",			\
> +      1 << OPD_F_OD_LSB, F(FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_22),		\
> +      "an address with a vector register offset")			\
> +    Y(ADDRESS, sve_addr_rz_xtw, "SVE_ADDR_RZ_XTW2_14",			\
> +      2 << OPD_F_OD_LSB, F(FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_14),		\
> +      "an address with a vector register offset")			\
> +    Y(ADDRESS, sve_addr_rz_xtw, "SVE_ADDR_RZ_XTW2_22",			\
> +      2 << OPD_F_OD_LSB, F(FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_22),		\
> +      "an address with a vector register offset")			\
> +    Y(ADDRESS, sve_addr_rz_xtw, "SVE_ADDR_RZ_XTW3_14",			\
> +      3 << OPD_F_OD_LSB, F(FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_14),		\
> +      "an address with a vector register offset")			\
> +    Y(ADDRESS, sve_addr_rz_xtw, "SVE_ADDR_RZ_XTW3_22",			\
> +      3 << OPD_F_OD_LSB, F(FLD_Rn,FLD_SVE_Zm_16,FLD_SVE_xs_22),		\
> +      "an address with a vector register offset")			\
> +    Y(ADDRESS, sve_addr_zi_u5, "SVE_ADDR_ZI_U5", 0 << OPD_F_OD_LSB,	\
> +      F(FLD_SVE_Zn), "an address with a 5-bit unsigned offset")		\
> +    Y(ADDRESS, sve_addr_zi_u5, "SVE_ADDR_ZI_U5x2", 1 << OPD_F_OD_LSB,	\
> +      F(FLD_SVE_Zn),							\
> +      "an address with a 5-bit unsigned offset, multiplied by 2")	\
> +    Y(ADDRESS, sve_addr_zi_u5, "SVE_ADDR_ZI_U5x4", 2 << OPD_F_OD_LSB,	\
> +      F(FLD_SVE_Zn),							\
> +      "an address with a 5-bit unsigned offset, multiplied by 4")	\
> +    Y(ADDRESS, sve_addr_zi_u5, "SVE_ADDR_ZI_U5x8", 3 << OPD_F_OD_LSB,	\
> +      F(FLD_SVE_Zn),							\
> +      "an address with a 5-bit unsigned offset, multiplied by 8")	\
> +    Y(ADDRESS, sve_addr_zz_lsl, "SVE_ADDR_ZZ_LSL", 0,			\
> +      F(FLD_SVE_Zn,FLD_SVE_Zm_16),					\
> +      "an address with a vector register offset")			\
> +    Y(ADDRESS, sve_addr_zz_sxtw, "SVE_ADDR_ZZ_SXTW", 0,			\
> +      F(FLD_SVE_Zn,FLD_SVE_Zm_16),					\
> +      "an address with a vector register offset")			\
> +    Y(ADDRESS, sve_addr_zz_uxtw, "SVE_ADDR_ZZ_UXTW", 0,			\
> +      F(FLD_SVE_Zn,FLD_SVE_Zm_16),					\
> +      "an address with a vector register offset")			\
>      Y(IMMEDIATE, imm, "SVE_PATTERN", 0, F(FLD_SVE_pattern),		\
>        "an enumeration value such as POW2")				\
>      Y(IMMEDIATE, sve_scale, "SVE_PATTERN_SCALED", 0,			\
> 
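An aside, not part of the quoted patch: the OPD_F_OD_* bits added in the
aarch64-opc.h hunk pack a small operand-dependent value into the operand's
flags word, and the new address decoders use it as a log2 scale factor for
the immediate offset.  A minimal sketch of that interaction, reusing the
values shown in the hunks above (illustrative only):

  #include <assert.h>

  #define OPD_F_OD_MASK 0x00000060   /* operand-dependent data, bits [6:5] */
  #define OPD_F_OD_LSB  5

  /* Mirrors get_operand_specific_data from the aarch64-opc.h hunk.  */
  static unsigned int
  od_value (unsigned int flags)
  {
    return (flags & OPD_F_OD_MASK) >> OPD_F_OD_LSB;
  }

  int
  main (void)
  {
    /* The SVE_ADDR_RI_U6x4 operand entry stores 2 in this field, so a
       decoded 6-bit offset of 3 scales to 3 * (1 << 2) = 12, as in
       aarch64_ext_sve_addr_reg_imm above.  */
    unsigned int u6x4_flags = 2 << OPD_F_OD_LSB;
    assert (od_value (u6x4_flags) == 2);
    assert (3 * (1 << od_value (u6x4_flags)) == 12);
    return 0;
  }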

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [AArch64][SVE 26/32] Add SVE MUL VL addressing modes
  2016-09-16 12:10     ` Richard Sandiford
@ 2016-09-20 13:51       ` Richard Earnshaw (lists)
  0 siblings, 0 replies; 76+ messages in thread
From: Richard Earnshaw (lists) @ 2016-09-20 13:51 UTC (permalink / raw)
  To: binutils, richard.sandiford

On 16/09/16 13:10, Richard Sandiford wrote:
> "Richard Earnshaw (lists)" <Richard.Earnshaw@arm.com> writes:
>> On 23/08/16 10:23, Richard Sandiford wrote:
>>> This patch adds support for addresses of the form:
>>>
>>>    [<base>, #<offset>, MUL VL]
>>>
>>> This involves adding a new AARCH64_MOD_MUL_VL modifier, which is
>>> why I split it out from the other addressing modes.
>>>
>>> For LD2, LD3 and LD4, the offset must be a multiple of the structure
>>> size, so for LD3 the possible values are 0, 3, 6, ....  The patch
>>> therefore extends value_aligned_p to handle non-power-of-2 alignments.
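An aside, not part of the quoted message: a power-of-two alignment can keep
the usual mask test, but a multiple-of-3 offset (the LD3 case above) needs a
plain remainder test.  A minimal sketch of such a generalized check --
illustrative only, not the patch's actual value_aligned_p:

  #include <assert.h>
  #include <stdbool.h>
  #include <stdint.h>

  static bool
  aligned_p (int64_t value, unsigned int align)
  {
    if ((align & (align - 1)) == 0)
      /* Power of two: the usual mask test.  */
      return (value & (int64_t) (align - 1)) == 0;
    /* Non-power-of-two, e.g. align == 3 for LD3.  */
    return value % align == 0;
  }

  int
  main (void)
  {
    assert (aligned_p (6, 3) && !aligned_p (7, 3));
    assert (aligned_p (16, 8) && !aligned_p (12, 8));
    return 0;
  }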
>>>
>>> OK to install?
>>>
>>> Thanks,
>>> Richard
>>>
>>>
>>> include/opcode/
>>> 	* aarch64.h (AARCH64_OPND_SVE_ADDR_RI_S4xVL): New aarch64_opnd.
>>> 	(AARCH64_OPND_SVE_ADDR_RI_S4x2xVL, AARCH64_OPND_SVE_ADDR_RI_S4x3xVL)
>>> 	(AARCH64_OPND_SVE_ADDR_RI_S4x4xVL, AARCH64_OPND_SVE_ADDR_RI_S6xVL)
>>> 	(AARCH64_OPND_SVE_ADDR_RI_S9xVL): Likewise.
>>> 	(AARCH64_MOD_MUL_VL): New aarch64_modifier_kind.
>>>
>>> opcodes/
>>> 	* aarch64-tbl.h (AARCH64_OPERANDS): Add entries for new MUL VL
>>> 	operands.
>>> 	* aarch64-opc.c (aarch64_operand_modifiers): Initialize
>>> 	the AARCH64_MOD_MUL_VL entry.
>>> 	(value_aligned_p): Cope with non-power-of-two alignments.
>>> 	(operand_general_constraint_met_p): Handle the new MUL VL addresses.
>>> 	(print_immediate_offset_address): Likewise.
>>> 	(aarch64_print_operand): Likewise.
>>> 	* aarch64-opc-2.c: Regenerate.
>>> 	* aarch64-asm.h (ins_sve_addr_ri_s4xvl, ins_sve_addr_ri_s6xvl)
>>> 	(ins_sve_addr_ri_s9xvl): New inserters.
>>> 	* aarch64-asm.c (aarch64_ins_sve_addr_ri_s4xvl): New function.
>>> 	(aarch64_ins_sve_addr_ri_s6xvl): Likewise.
>>> 	(aarch64_ins_sve_addr_ri_s9xvl): Likewise.
>>> 	* aarch64-asm-2.c: Regenerate.
>>> 	* aarch64-dis.h (ext_sve_addr_ri_s4xvl, ext_sve_addr_ri_s6xvl)
>>> 	(ext_sve_addr_ri_s9xvl): New extractors.
>>> 	* aarch64-dis.c (aarch64_ext_sve_addr_reg_mul_vl): New function.
>>> 	(aarch64_ext_sve_addr_ri_s4xvl): Likewise.
>>> 	(aarch64_ext_sve_addr_ri_s6xvl): Likewise.
>>> 	(aarch64_ext_sve_addr_ri_s9xvl): Likewise.
>>> 	* aarch64-dis-2.c: Regenerate.
>>>
>>> gas/
>>> 	* config/tc-aarch64.c (SHIFTED_MUL_VL): New parse_shift_mode.
>>> 	(parse_shift): Handle it.
>>> 	(parse_address_main): Handle the new MUL VL addresses.
>>> 	(parse_operands): Likewise.
>>>
>>
>> OK.
> 
> Here's an update that uses a parse_shift_mode rather than a boolean
> parameter to say what kinds of shift are allowed for immediate offsets.
> 
> Tested on aarch64-linux-gnu.  OK to install?

OK.

R.

> 
> Thanks,
> Richard
> 
> 
> include/opcode/
> 	* aarch64.h (AARCH64_OPND_SVE_ADDR_RI_S4xVL): New aarch64_opnd.
> 	(AARCH64_OPND_SVE_ADDR_RI_S4x2xVL, AARCH64_OPND_SVE_ADDR_RI_S4x3xVL)
> 	(AARCH64_OPND_SVE_ADDR_RI_S4x4xVL, AARCH64_OPND_SVE_ADDR_RI_S6xVL)
> 	(AARCH64_OPND_SVE_ADDR_RI_S9xVL): Likewise.
> 	(AARCH64_MOD_MUL_VL): New aarch64_modifier_kind.
> 
> opcodes/
> 	* aarch64-tbl.h (AARCH64_OPERANDS): Add entries for new MUL VL
> 	operands.
> 	* aarch64-opc.c (aarch64_operand_modifiers): Initialize
> 	the AARCH64_MOD_MUL_VL entry.
> 	(value_aligned_p): Cope with non-power-of-two alignments.
> 	(operand_general_constraint_met_p): Handle the new MUL VL addresses.
> 	(print_immediate_offset_address): Likewise.
> 	(aarch64_print_operand): Likewise.
> 	* aarch64-opc-2.c: Regenerate.
> 	* aarch64-asm.h (ins_sve_addr_ri_s4xvl, ins_sve_addr_ri_s6xvl)
> 	(ins_sve_addr_ri_s9xvl): New inserters.
> 	* aarch64-asm.c (aarch64_ins_sve_addr_ri_s4xvl): New function.
> 	(aarch64_ins_sve_addr_ri_s6xvl): Likewise.
> 	(aarch64_ins_sve_addr_ri_s9xvl): Likewise.
> 	* aarch64-asm-2.c: Regenerate.
> 	* aarch64-dis.h (ext_sve_addr_ri_s4xvl, ext_sve_addr_ri_s6xvl)
> 	(ext_sve_addr_ri_s9xvl): New extractors.
> 	* aarch64-dis.c (aarch64_ext_sve_addr_reg_mul_vl): New function.
> 	(aarch64_ext_sve_addr_ri_s4xvl): Likewise.
> 	(aarch64_ext_sve_addr_ri_s6xvl): Likewise.
> 	(aarch64_ext_sve_addr_ri_s9xvl): Likewise.
> 	* aarch64-dis-2.c: Regenerate.
> 
> gas/
> 	* config/tc-aarch64.c (SHIFTED_NONE, SHIFTED_MUL_VL): New
> 	parse_shift_modes.
> 	(parse_shift): Handle SHIFTED_MUL_VL.
> 	(parse_address_main): Add an imm_shift_mode parameter.
> 	(parse_address, parse_sve_address): Update accordingly.
> 	(parse_operands): Handle MUL VL addressing modes.
> 
> diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
> index e59333f..930b07a 100644
> --- a/gas/config/tc-aarch64.c
> +++ b/gas/config/tc-aarch64.c
> @@ -2922,6 +2922,7 @@ find_reloc_table_entry (char **str)
>  /* Mode argument to parse_shift and parser_shifter_operand.  */
>  enum parse_shift_mode
>  {
> +  SHIFTED_NONE,			/* no shifter allowed  */
>    SHIFTED_ARITH_IMM,		/* "rn{,lsl|lsr|asl|asr|uxt|sxt #n}" or
>  				   "#imm{,lsl #n}"  */
>    SHIFTED_LOGIC_IMM,		/* "rn{,lsl|lsr|asl|asr|ror #n}" or
> @@ -2929,6 +2930,7 @@ enum parse_shift_mode
>    SHIFTED_LSL,			/* bare "lsl #n"  */
>    SHIFTED_MUL,			/* bare "mul #n"  */
>    SHIFTED_LSL_MSL,		/* "lsl|msl #n"  */
> +  SHIFTED_MUL_VL,		/* "mul vl"  */
>    SHIFTED_REG_OFFSET		/* [su]xtw|sxtx {#n} or lsl #n  */
>  };
>  
> @@ -2970,7 +2972,8 @@ parse_shift (char **str, aarch64_opnd_info *operand, enum parse_shift_mode mode)
>      }
>  
>    if (kind == AARCH64_MOD_MUL
> -      && mode != SHIFTED_MUL)
> +      && mode != SHIFTED_MUL
> +      && mode != SHIFTED_MUL_VL)
>      {
>        set_syntax_error (_("invalid use of 'MUL'"));
>        return FALSE;
> @@ -3010,6 +3013,22 @@ parse_shift (char **str, aarch64_opnd_info *operand, enum parse_shift_mode mode)
>  	}
>        break;
>  
> +    case SHIFTED_MUL_VL:
> +      /* "MUL VL" consists of two separate tokens.  Require the first
> +	 token to be "MUL" and look for a following "VL".  */
> +      if (kind == AARCH64_MOD_MUL)
> +	{
> +	  skip_whitespace (p);
> +	  if (strncasecmp (p, "vl", 2) == 0 && !ISALPHA (p[2]))
> +	    {
> +	      p += 2;
> +	      kind = AARCH64_MOD_MUL_VL;
> +	      break;
> +	    }
> +	}
> +      set_syntax_error (_("only 'MUL VL' is permitted"));
> +      return FALSE;
> +
>      case SHIFTED_REG_OFFSET:
>        if (kind != AARCH64_MOD_UXTW && kind != AARCH64_MOD_LSL
>  	  && kind != AARCH64_MOD_SXTW && kind != AARCH64_MOD_SXTX)
> @@ -3037,7 +3056,7 @@ parse_shift (char **str, aarch64_opnd_info *operand, enum parse_shift_mode mode)
>  
>    /* Parse shift amount.  */
>    exp_has_prefix = 0;
> -  if (mode == SHIFTED_REG_OFFSET && *p == ']')
> +  if ((mode == SHIFTED_REG_OFFSET && *p == ']') || kind == AARCH64_MOD_MUL_VL)
>      exp.X_op = O_absent;
>    else
>      {
> @@ -3048,7 +3067,11 @@ parse_shift (char **str, aarch64_opnd_info *operand, enum parse_shift_mode mode)
>  	}
>        my_get_expression (&exp, &p, GE_NO_PREFIX, 0);
>      }
> -  if (exp.X_op == O_absent)
> +  if (kind == AARCH64_MOD_MUL_VL)
> +    /* For consistency, give MUL VL the same shift amount as an implicit
> +       MUL #1.  */
> +    operand->shifter.amount = 1;
> +  else if (exp.X_op == O_absent)
>      {
>        if (aarch64_extend_operator_p (kind) == FALSE || exp_has_prefix)
>  	{
> @@ -3268,6 +3291,7 @@ parse_shifter_operand_reloc (char **str, aarch64_opnd_info *operand,
>     PC-relative (literal)
>       label
>     SVE:
> +     [base,#imm,MUL VL]
>       [base,Zm.D{,LSL #imm}]
>       [base,Zm.S,(S|U)XTW {#imm}]
>       [base,Zm.D,(S|U)XTW {#imm}] // ignores top 32 bits of Zm.D elements
> @@ -3307,15 +3331,20 @@ parse_shifter_operand_reloc (char **str, aarch64_opnd_info *operand,
>     corresponding register.
>  
>     BASE_TYPE says which types of base register should be accepted and
> -   OFFSET_TYPE says the same for offset registers.  In all other respects,
> -   it is the caller's responsibility to check for addressing modes not
> -   supported by the instruction, and to set inst.reloc.type.  */
> +   OFFSET_TYPE says the same for offset registers.  IMM_SHIFT_MODE
> +   is the type of shifter that is allowed for immediate offsets,
> +   or SHIFTED_NONE if none.
> +
> +   In all other respects, it is the caller's responsibility to check
> +   for addressing modes not supported by the instruction, and to set
> +   inst.reloc.type.  */
>  
>  static bfd_boolean
>  parse_address_main (char **str, aarch64_opnd_info *operand,
>  		    aarch64_opnd_qualifier_t *base_qualifier,
>  		    aarch64_opnd_qualifier_t *offset_qualifier,
> -		    aarch64_reg_type base_type, aarch64_reg_type offset_type)
> +		    aarch64_reg_type base_type, aarch64_reg_type offset_type,
> +		    enum parse_shift_mode imm_shift_mode)
>  {
>    char *p = *str;
>    const reg_entry *reg;
> @@ -3497,12 +3526,19 @@ parse_address_main (char **str, aarch64_opnd_info *operand,
>  	      inst.reloc.type = entry->ldst_type;
>  	      inst.reloc.pc_rel = entry->pc_rel;
>  	    }
> -	  else if (! my_get_expression (exp, &p, GE_OPT_PREFIX, 1))
> +	  else
>  	    {
> -	      set_syntax_error (_("invalid expression in the address"));
> -	      return FALSE;
> +	      if (! my_get_expression (exp, &p, GE_OPT_PREFIX, 1))
> +		{
> +		  set_syntax_error (_("invalid expression in the address"));
> +		  return FALSE;
> +		}
> +	      /* [Xn,<expr>  */
> +	      if (imm_shift_mode != SHIFTED_NONE && skip_past_comma (&p))
> +		/* [Xn,<expr>,<shifter>  */
> +		if (! parse_shift (&p, operand, imm_shift_mode))
> +		  return FALSE;
>  	    }
> -	  /* [Xn,<expr>  */
>  	}
>      }
>  
> @@ -3582,10 +3618,10 @@ parse_address (char **str, aarch64_opnd_info *operand)
>  {
>    aarch64_opnd_qualifier_t base_qualifier, offset_qualifier;
>    return parse_address_main (str, operand, &base_qualifier, &offset_qualifier,
> -			     REG_TYPE_R64_SP, REG_TYPE_R_Z);
> +			     REG_TYPE_R64_SP, REG_TYPE_R_Z, SHIFTED_NONE);
>  }
>  
> -/* Parse an address in which SVE vector registers are allowed.
> +/* Parse an address in which SVE vector registers and MUL VL are allowed.
>     The arguments have the same meaning as for parse_address_main.
>     Return TRUE on success.  */
>  static bfd_boolean
> @@ -3594,7 +3630,8 @@ parse_sve_address (char **str, aarch64_opnd_info *operand,
>  		   aarch64_opnd_qualifier_t *offset_qualifier)
>  {
>    return parse_address_main (str, operand, base_qualifier, offset_qualifier,
> -			     REG_TYPE_SVE_BASE, REG_TYPE_SVE_OFFSET);
> +			     REG_TYPE_SVE_BASE, REG_TYPE_SVE_OFFSET,
> +			     SHIFTED_MUL_VL);
>  }
>  
>  /* Parse an operand for a MOVZ, MOVN or MOVK instruction.
> @@ -5938,11 +5975,18 @@ parse_operands (char *str, const aarch64_opcode *opcode)
>  	  /* No qualifier.  */
>  	  break;
>  
> +	case AARCH64_OPND_SVE_ADDR_RI_S4xVL:
> +	case AARCH64_OPND_SVE_ADDR_RI_S4x2xVL:
> +	case AARCH64_OPND_SVE_ADDR_RI_S4x3xVL:
> +	case AARCH64_OPND_SVE_ADDR_RI_S4x4xVL:
> +	case AARCH64_OPND_SVE_ADDR_RI_S6xVL:
> +	case AARCH64_OPND_SVE_ADDR_RI_S9xVL:
>  	case AARCH64_OPND_SVE_ADDR_RI_U6:
>  	case AARCH64_OPND_SVE_ADDR_RI_U6x2:
>  	case AARCH64_OPND_SVE_ADDR_RI_U6x4:
>  	case AARCH64_OPND_SVE_ADDR_RI_U6x8:
> -	  /* [X<n>{, #imm}]
> +	  /* [X<n>{, #imm, MUL VL}]
> +	     [X<n>{, #imm}]
>  	     but recognizing SVE registers.  */
>  	  po_misc_or_fail (parse_sve_address (&str, info, &base_qualifier,
>  					      &offset_qualifier));
> diff --git a/include/opcode/aarch64.h b/include/opcode/aarch64.h
> index e61ac9c..837d6bd 100644
> --- a/include/opcode/aarch64.h
> +++ b/include/opcode/aarch64.h
> @@ -244,6 +244,12 @@ enum aarch64_opnd
>    AARCH64_OPND_PRFOP,		/* Prefetch operation.  */
>    AARCH64_OPND_BARRIER_PSB,	/* Barrier operand for PSB.  */
>  
> +  AARCH64_OPND_SVE_ADDR_RI_S4xVL,   /* SVE [<Xn|SP>, #<simm4>, MUL VL].  */
> +  AARCH64_OPND_SVE_ADDR_RI_S4x2xVL, /* SVE [<Xn|SP>, #<simm4>*2, MUL VL].  */
> +  AARCH64_OPND_SVE_ADDR_RI_S4x3xVL, /* SVE [<Xn|SP>, #<simm4>*3, MUL VL].  */
> +  AARCH64_OPND_SVE_ADDR_RI_S4x4xVL, /* SVE [<Xn|SP>, #<simm4>*4, MUL VL].  */
> +  AARCH64_OPND_SVE_ADDR_RI_S6xVL,   /* SVE [<Xn|SP>, #<simm6>, MUL VL].  */
> +  AARCH64_OPND_SVE_ADDR_RI_S9xVL,   /* SVE [<Xn|SP>, #<simm9>, MUL VL].  */
>    AARCH64_OPND_SVE_ADDR_RI_U6,	    /* SVE [<Xn|SP>, #<uimm6>].  */
>    AARCH64_OPND_SVE_ADDR_RI_U6x2,    /* SVE [<Xn|SP>, #<uimm6>*2].  */
>    AARCH64_OPND_SVE_ADDR_RI_U6x4,    /* SVE [<Xn|SP>, #<uimm6>*4].  */
> @@ -786,6 +792,7 @@ enum aarch64_modifier_kind
>    AARCH64_MOD_SXTW,
>    AARCH64_MOD_SXTX,
>    AARCH64_MOD_MUL,
> +  AARCH64_MOD_MUL_VL,
>  };
>  
>  bfd_boolean
> diff --git a/opcodes/aarch64-asm-2.c b/opcodes/aarch64-asm-2.c
> index 47a414c..da590ca 100644
> --- a/opcodes/aarch64-asm-2.c
> +++ b/opcodes/aarch64-asm-2.c
> @@ -480,12 +480,6 @@ aarch64_insert_operand (const aarch64_operand *self,
>      case 27:
>      case 35:
>      case 36:
> -    case 123:
> -    case 124:
> -    case 125:
> -    case 126:
> -    case 127:
> -    case 128:
>      case 129:
>      case 130:
>      case 131:
> @@ -494,7 +488,13 @@ aarch64_insert_operand (const aarch64_operand *self,
>      case 134:
>      case 135:
>      case 136:
> +    case 137:
> +    case 138:
>      case 139:
> +    case 140:
> +    case 141:
> +    case 142:
> +    case 145:
>        return aarch64_ins_regno (self, info, code, inst);
>      case 12:
>        return aarch64_ins_reg_extended (self, info, code, inst);
> @@ -531,8 +531,8 @@ aarch64_insert_operand (const aarch64_operand *self,
>      case 68:
>      case 69:
>      case 70:
> -    case 120:
> -    case 122:
> +    case 126:
> +    case 128:
>        return aarch64_ins_imm (self, info, code, inst);
>      case 38:
>      case 39:
> @@ -587,46 +587,55 @@ aarch64_insert_operand (const aarch64_operand *self,
>      case 90:
>      case 91:
>      case 92:
> -      return aarch64_ins_sve_addr_ri_u6 (self, info, code, inst);
> +      return aarch64_ins_sve_addr_ri_s4xvl (self, info, code, inst);
>      case 93:
> +      return aarch64_ins_sve_addr_ri_s6xvl (self, info, code, inst);
>      case 94:
> +      return aarch64_ins_sve_addr_ri_s9xvl (self, info, code, inst);
>      case 95:
>      case 96:
>      case 97:
>      case 98:
> +      return aarch64_ins_sve_addr_ri_u6 (self, info, code, inst);
>      case 99:
>      case 100:
>      case 101:
>      case 102:
>      case 103:
>      case 104:
> -      return aarch64_ins_sve_addr_rr_lsl (self, info, code, inst);
>      case 105:
>      case 106:
>      case 107:
>      case 108:
>      case 109:
>      case 110:
> +      return aarch64_ins_sve_addr_rr_lsl (self, info, code, inst);
>      case 111:
>      case 112:
> -      return aarch64_ins_sve_addr_rz_xtw (self, info, code, inst);
>      case 113:
>      case 114:
>      case 115:
>      case 116:
> -      return aarch64_ins_sve_addr_zi_u5 (self, info, code, inst);
>      case 117:
> -      return aarch64_ins_sve_addr_zz_lsl (self, info, code, inst);
>      case 118:
> -      return aarch64_ins_sve_addr_zz_sxtw (self, info, code, inst);
> +      return aarch64_ins_sve_addr_rz_xtw (self, info, code, inst);
>      case 119:
> -      return aarch64_ins_sve_addr_zz_uxtw (self, info, code, inst);
> +    case 120:
>      case 121:
> +    case 122:
> +      return aarch64_ins_sve_addr_zi_u5 (self, info, code, inst);
> +    case 123:
> +      return aarch64_ins_sve_addr_zz_lsl (self, info, code, inst);
> +    case 124:
> +      return aarch64_ins_sve_addr_zz_sxtw (self, info, code, inst);
> +    case 125:
> +      return aarch64_ins_sve_addr_zz_uxtw (self, info, code, inst);
> +    case 127:
>        return aarch64_ins_sve_scale (self, info, code, inst);
> -    case 137:
> +    case 143:
>        return aarch64_ins_sve_index (self, info, code, inst);
> -    case 138:
> -    case 140:
> +    case 144:
> +    case 146:
>        return aarch64_ins_sve_reglist (self, info, code, inst);
>      default: assert (0); abort ();
>      }
> diff --git a/opcodes/aarch64-asm.c b/opcodes/aarch64-asm.c
> index 0d3b2c7..944a9eb 100644
> --- a/opcodes/aarch64-asm.c
> +++ b/opcodes/aarch64-asm.c
> @@ -745,6 +745,56 @@ aarch64_ins_reg_shifted (const aarch64_operand *self ATTRIBUTE_UNUSED,
>    return NULL;
>  }
>  
> +/* Encode an SVE address [<base>, #<simm4>*<factor>, MUL VL],
> +   where <simm4> is a 4-bit signed value and where <factor> is 1 plus
> +   SELF's operand-dependent value.  fields[0] specifies the field that
> +   holds <base>.  <simm4> is encoded in the SVE_imm4 field.  */
> +const char *
> +aarch64_ins_sve_addr_ri_s4xvl (const aarch64_operand *self,
> +			       const aarch64_opnd_info *info,
> +			       aarch64_insn *code,
> +			       const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  int factor = 1 + get_operand_specific_data (self);
> +  insert_field (self->fields[0], code, info->addr.base_regno, 0);
> +  insert_field (FLD_SVE_imm4, code, info->addr.offset.imm / factor, 0);
> +  return NULL;
> +}
> +
> +/* Encode an SVE address [<base>, #<simm6>*<factor>, MUL VL],
> +   where <simm6> is a 6-bit signed value and where <factor> is 1 plus
> +   SELF's operand-dependent value.  fields[0] specifies the field that
> +   holds <base>.  <simm6> is encoded in the SVE_imm6 field.  */
> +const char *
> +aarch64_ins_sve_addr_ri_s6xvl (const aarch64_operand *self,
> +			       const aarch64_opnd_info *info,
> +			       aarch64_insn *code,
> +			       const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  int factor = 1 + get_operand_specific_data (self);
> +  insert_field (self->fields[0], code, info->addr.base_regno, 0);
> +  insert_field (FLD_SVE_imm6, code, info->addr.offset.imm / factor, 0);
> +  return NULL;
> +}
> +
> +/* Encode an SVE address [<base>, #<simm9>*<factor>, MUL VL],
> +   where <simm9> is a 9-bit signed value and where <factor> is 1 plus
> +   SELF's operand-dependent value.  fields[0] specifies the field that
> +   holds <base>.  <simm9> is encoded in the concatenation of the SVE_imm6
> +   and imm3 fields, with imm3 being the less-significant part.  */
> +const char *
> +aarch64_ins_sve_addr_ri_s9xvl (const aarch64_operand *self,
> +			       const aarch64_opnd_info *info,
> +			       aarch64_insn *code,
> +			       const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  int factor = 1 + get_operand_specific_data (self);
> +  insert_field (self->fields[0], code, info->addr.base_regno, 0);
> +  insert_fields (code, info->addr.offset.imm / factor, 0,
> +		 2, FLD_imm3, FLD_SVE_imm6);
> +  return NULL;
> +}
> +
>  /* Encode an SVE address [X<n>, #<SVE_imm6> << <shift>], where <SVE_imm6>
>     is a 6-bit unsigned number and where <shift> is SELF's operand-dependent
>     value.  fields[0] specifies the base register field.  */
> diff --git a/opcodes/aarch64-asm.h b/opcodes/aarch64-asm.h
> index b81cfa1..5e13de0 100644
> --- a/opcodes/aarch64-asm.h
> +++ b/opcodes/aarch64-asm.h
> @@ -69,6 +69,9 @@ AARCH64_DECL_OPD_INSERTER (ins_hint);
>  AARCH64_DECL_OPD_INSERTER (ins_prfop);
>  AARCH64_DECL_OPD_INSERTER (ins_reg_extended);
>  AARCH64_DECL_OPD_INSERTER (ins_reg_shifted);
> +AARCH64_DECL_OPD_INSERTER (ins_sve_addr_ri_s4xvl);
> +AARCH64_DECL_OPD_INSERTER (ins_sve_addr_ri_s6xvl);
> +AARCH64_DECL_OPD_INSERTER (ins_sve_addr_ri_s9xvl);
>  AARCH64_DECL_OPD_INSERTER (ins_sve_addr_ri_u6);
>  AARCH64_DECL_OPD_INSERTER (ins_sve_addr_rr_lsl);
>  AARCH64_DECL_OPD_INSERTER (ins_sve_addr_rz_xtw);
> diff --git a/opcodes/aarch64-dis-2.c b/opcodes/aarch64-dis-2.c
> index 3dd714f..48d6ce7 100644
> --- a/opcodes/aarch64-dis-2.c
> +++ b/opcodes/aarch64-dis-2.c
> @@ -10426,12 +10426,6 @@ aarch64_extract_operand (const aarch64_operand *self,
>      case 27:
>      case 35:
>      case 36:
> -    case 123:
> -    case 124:
> -    case 125:
> -    case 126:
> -    case 127:
> -    case 128:
>      case 129:
>      case 130:
>      case 131:
> @@ -10440,7 +10434,13 @@ aarch64_extract_operand (const aarch64_operand *self,
>      case 134:
>      case 135:
>      case 136:
> +    case 137:
> +    case 138:
>      case 139:
> +    case 140:
> +    case 141:
> +    case 142:
> +    case 145:
>        return aarch64_ext_regno (self, info, code, inst);
>      case 8:
>        return aarch64_ext_regrt_sysins (self, info, code, inst);
> @@ -10482,8 +10482,8 @@ aarch64_extract_operand (const aarch64_operand *self,
>      case 68:
>      case 69:
>      case 70:
> -    case 120:
> -    case 122:
> +    case 126:
> +    case 128:
>        return aarch64_ext_imm (self, info, code, inst);
>      case 38:
>      case 39:
> @@ -10540,46 +10540,55 @@ aarch64_extract_operand (const aarch64_operand *self,
>      case 90:
>      case 91:
>      case 92:
> -      return aarch64_ext_sve_addr_ri_u6 (self, info, code, inst);
> +      return aarch64_ext_sve_addr_ri_s4xvl (self, info, code, inst);
>      case 93:
> +      return aarch64_ext_sve_addr_ri_s6xvl (self, info, code, inst);
>      case 94:
> +      return aarch64_ext_sve_addr_ri_s9xvl (self, info, code, inst);
>      case 95:
>      case 96:
>      case 97:
>      case 98:
> +      return aarch64_ext_sve_addr_ri_u6 (self, info, code, inst);
>      case 99:
>      case 100:
>      case 101:
>      case 102:
>      case 103:
>      case 104:
> -      return aarch64_ext_sve_addr_rr_lsl (self, info, code, inst);
>      case 105:
>      case 106:
>      case 107:
>      case 108:
>      case 109:
>      case 110:
> +      return aarch64_ext_sve_addr_rr_lsl (self, info, code, inst);
>      case 111:
>      case 112:
> -      return aarch64_ext_sve_addr_rz_xtw (self, info, code, inst);
>      case 113:
>      case 114:
>      case 115:
>      case 116:
> -      return aarch64_ext_sve_addr_zi_u5 (self, info, code, inst);
>      case 117:
> -      return aarch64_ext_sve_addr_zz_lsl (self, info, code, inst);
>      case 118:
> -      return aarch64_ext_sve_addr_zz_sxtw (self, info, code, inst);
> +      return aarch64_ext_sve_addr_rz_xtw (self, info, code, inst);
>      case 119:
> -      return aarch64_ext_sve_addr_zz_uxtw (self, info, code, inst);
> +    case 120:
>      case 121:
> +    case 122:
> +      return aarch64_ext_sve_addr_zi_u5 (self, info, code, inst);
> +    case 123:
> +      return aarch64_ext_sve_addr_zz_lsl (self, info, code, inst);
> +    case 124:
> +      return aarch64_ext_sve_addr_zz_sxtw (self, info, code, inst);
> +    case 125:
> +      return aarch64_ext_sve_addr_zz_uxtw (self, info, code, inst);
> +    case 127:
>        return aarch64_ext_sve_scale (self, info, code, inst);
> -    case 137:
> +    case 143:
>        return aarch64_ext_sve_index (self, info, code, inst);
> -    case 138:
> -    case 140:
> +    case 144:
> +    case 146:
>        return aarch64_ext_sve_reglist (self, info, code, inst);
>      default: assert (0); abort ();
>      }
> diff --git a/opcodes/aarch64-dis.c b/opcodes/aarch64-dis.c
> index ed77b4d..ba6befd 100644
> --- a/opcodes/aarch64-dis.c
> +++ b/opcodes/aarch64-dis.c
> @@ -1186,6 +1186,78 @@ aarch64_ext_reg_shifted (const aarch64_operand *self ATTRIBUTE_UNUSED,
>    return 1;
>  }
>  
> +/* Decode an SVE address [<base>, #<offset>*<factor>, MUL VL],
> +   where <offset> is given by the OFFSET parameter and where <factor> is
> +   1 plus SELF's operand-dependent value.  fields[0] specifies the field
> +   that holds <base>.  */
> +static int
> +aarch64_ext_sve_addr_reg_mul_vl (const aarch64_operand *self,
> +				 aarch64_opnd_info *info, aarch64_insn code,
> +				 int64_t offset)
> +{
> +  info->addr.base_regno = extract_field (self->fields[0], code, 0);
> +  info->addr.offset.imm = offset * (1 + get_operand_specific_data (self));
> +  info->addr.offset.is_reg = FALSE;
> +  info->addr.writeback = FALSE;
> +  info->addr.preind = TRUE;
> +  if (offset != 0)
> +    info->shifter.kind = AARCH64_MOD_MUL_VL;
> +  info->shifter.amount = 1;
> +  info->shifter.operator_present = (info->addr.offset.imm != 0);
> +  info->shifter.amount_present = FALSE;
> +  return 1;
> +}
> +
> +/* Decode an SVE address [<base>, #<simm4>*<factor>, MUL VL],
> +   where <simm4> is a 4-bit signed value and where <factor> is 1 plus
> +   SELF's operand-dependent value.  fields[0] specifies the field that
> +   holds <base>.  <simm4> is encoded in the SVE_imm4 field.  */
> +int
> +aarch64_ext_sve_addr_ri_s4xvl (const aarch64_operand *self,
> +			       aarch64_opnd_info *info, aarch64_insn code,
> +			       const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  int offset;
> +
> +  offset = extract_field (FLD_SVE_imm4, code, 0);
> +  offset = ((offset + 8) & 15) - 8;
> +  return aarch64_ext_sve_addr_reg_mul_vl (self, info, code, offset);
> +}
> +
> +/* Decode an SVE address [<base>, #<simm6>*<factor>, MUL VL],
> +   where <simm6> is a 6-bit signed value and where <factor> is 1 plus
> +   SELF's operand-dependent value.  fields[0] specifies the field that
> +   holds <base>.  <simm6> is encoded in the SVE_imm6 field.  */
> +int
> +aarch64_ext_sve_addr_ri_s6xvl (const aarch64_operand *self,
> +			       aarch64_opnd_info *info, aarch64_insn code,
> +			       const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  int offset;
> +
> +  offset = extract_field (FLD_SVE_imm6, code, 0);
> +  offset = (((offset + 32) & 63) - 32);
> +  return aarch64_ext_sve_addr_reg_mul_vl (self, info, code, offset);
> +}
> +
> +/* Decode an SVE address [<base>, #<simm9>*<factor>, MUL VL],
> +   where <simm9> is a 9-bit signed value and where <factor> is 1 plus
> +   SELF's operand-dependent value.  fields[0] specifies the field that
> +   holds <base>.  <simm9> is encoded in the concatenation of the SVE_imm6
> +   and imm3 fields, with imm3 being the less-significant part.  */
> +int
> +aarch64_ext_sve_addr_ri_s9xvl (const aarch64_operand *self,
> +			       aarch64_opnd_info *info,
> +			       aarch64_insn code,
> +			       const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  int offset;
> +
> +  offset = extract_fields (code, 0, 2, FLD_SVE_imm6, FLD_imm3);
> +  offset = (((offset + 256) & 511) - 256);
> +  return aarch64_ext_sve_addr_reg_mul_vl (self, info, code, offset);
> +}
> +
>  /* Decode an SVE address [<base>, #<offset> << <shift>], where <offset>
>     is given by the OFFSET parameter and where <shift> is SELF's operand-
>     dependent value.  fields[0] specifies the base register field <base>.  */
> diff --git a/opcodes/aarch64-dis.h b/opcodes/aarch64-dis.h
> index 0ce2d89..5619877 100644
> --- a/opcodes/aarch64-dis.h
> +++ b/opcodes/aarch64-dis.h
> @@ -91,6 +91,9 @@ AARCH64_DECL_OPD_EXTRACTOR (ext_hint);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_prfop);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_reg_extended);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_reg_shifted);
> +AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_ri_s4xvl);
> +AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_ri_s6xvl);
> +AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_ri_s9xvl);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_ri_u6);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_rr_lsl);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_rz_xtw);
> diff --git a/opcodes/aarch64-opc-2.c b/opcodes/aarch64-opc-2.c
> index ed2b70b..a72f577 100644
> --- a/opcodes/aarch64-opc-2.c
> +++ b/opcodes/aarch64-opc-2.c
> @@ -113,6 +113,12 @@ const struct aarch64_operand aarch64_operands[] =
>    {AARCH64_OPND_CLASS_SYSTEM, "BARRIER_ISB", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {}, "the ISB option name SY or an optional 4-bit unsigned immediate"},
>    {AARCH64_OPND_CLASS_SYSTEM, "PRFOP", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {}, "a prefetch operation specifier"},
>    {AARCH64_OPND_CLASS_SYSTEM, "BARRIER_PSB", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {}, "the PSB option name CSYNC"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RI_S4xVL", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn}, "an address with a 4-bit signed offset, multiplied by VL"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RI_S4x2xVL", 1 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn}, "an address with a 4-bit signed offset, multiplied by 2*VL"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RI_S4x3xVL", 2 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn}, "an address with a 4-bit signed offset, multiplied by 3*VL"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RI_S4x4xVL", 3 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn}, "an address with a 4-bit signed offset, multiplied by 4*VL"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RI_S6xVL", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn}, "an address with a 6-bit signed offset, multiplied by VL"},
> +  {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RI_S9xVL", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn}, "an address with a 9-bit signed offset, multiplied by VL"},
>    {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RI_U6", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn}, "an address with a 6-bit unsigned offset"},
>    {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RI_U6x2", 1 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn}, "an address with a 6-bit unsigned offset, multiplied by 2"},
>    {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_RI_U6x4", 2 << OPD_F_OD_LSB | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_Rn}, "an address with a 6-bit unsigned offset, multiplied by 4"},
> diff --git a/opcodes/aarch64-opc.c b/opcodes/aarch64-opc.c
> index 6617e28..d0959b5 100644
> --- a/opcodes/aarch64-opc.c
> +++ b/opcodes/aarch64-opc.c
> @@ -365,6 +365,7 @@ const struct aarch64_name_value_pair aarch64_operand_modifiers [] =
>      {"sxtw", 0x6},
>      {"sxtx", 0x7},
>      {"mul", 0x0},
> +    {"mul vl", 0x0},
>      {NULL, 0},
>  };
>  
> @@ -486,10 +487,11 @@ value_in_range_p (int64_t value, int low, int high)
>    return (value >= low && value <= high) ? 1 : 0;
>  }
>  
> +/* Return true if VALUE is a multiple of ALIGN.  */
>  static inline int
>  value_aligned_p (int64_t value, int align)
>  {
> -  return ((value & (align - 1)) == 0) ? 1 : 0;
> +  return (value % align) == 0;
>  }
>  
>  /* A signed value fits in a field.  */
> @@ -1666,6 +1668,49 @@ operand_general_constraint_met_p (const aarch64_opnd_info *opnds, int idx,
>  	    }
>  	  break;
>  
> +	case AARCH64_OPND_SVE_ADDR_RI_S4xVL:
> +	case AARCH64_OPND_SVE_ADDR_RI_S4x2xVL:
> +	case AARCH64_OPND_SVE_ADDR_RI_S4x3xVL:
> +	case AARCH64_OPND_SVE_ADDR_RI_S4x4xVL:
> +	  min_value = -8;
> +	  max_value = 7;
> +	sve_imm_offset_vl:
> +	  assert (!opnd->addr.offset.is_reg);
> +	  assert (opnd->addr.preind);
> +	  num = 1 + get_operand_specific_data (&aarch64_operands[type]);
> +	  min_value *= num;
> +	  max_value *= num;
> +	  if ((opnd->addr.offset.imm != 0 && !opnd->shifter.operator_present)
> +	      || (opnd->shifter.operator_present
> +		  && opnd->shifter.kind != AARCH64_MOD_MUL_VL))
> +	    {
> +	      set_other_error (mismatch_detail, idx,
> +			       _("invalid addressing mode"));
> +	      return 0;
> +	    }
> +	  if (!value_in_range_p (opnd->addr.offset.imm, min_value, max_value))
> +	    {
> +	      set_offset_out_of_range_error (mismatch_detail, idx,
> +					     min_value, max_value);
> +	      return 0;
> +	    }
> +	  if (!value_aligned_p (opnd->addr.offset.imm, num))
> +	    {
> +	      set_unaligned_error (mismatch_detail, idx, num);
> +	      return 0;
> +	    }
> +	  break;
> +
> +	case AARCH64_OPND_SVE_ADDR_RI_S6xVL:
> +	  min_value = -32;
> +	  max_value = 31;
> +	  goto sve_imm_offset_vl;
> +
> +	case AARCH64_OPND_SVE_ADDR_RI_S9xVL:
> +	  min_value = -256;
> +	  max_value = 255;
> +	  goto sve_imm_offset_vl;
> +
>  	case AARCH64_OPND_SVE_ADDR_RI_U6:
>  	case AARCH64_OPND_SVE_ADDR_RI_U6x2:
>  	case AARCH64_OPND_SVE_ADDR_RI_U6x4:
> @@ -2645,7 +2690,13 @@ print_immediate_offset_address (char *buf, size_t size,
>      }
>    else
>      {
> -      if (opnd->addr.offset.imm)
> +      if (opnd->shifter.operator_present)
> +	{
> +	  assert (opnd->shifter.kind == AARCH64_MOD_MUL_VL);
> +	  snprintf (buf, size, "[%s,#%d,mul vl]",
> +		    base, opnd->addr.offset.imm);
> +	}
> +      else if (opnd->addr.offset.imm)
>  	snprintf (buf, size, "[%s,#%d]", base, opnd->addr.offset.imm);
>        else
>  	snprintf (buf, size, "[%s]", base);
> @@ -3114,6 +3165,12 @@ aarch64_print_operand (char *buf, size_t size, bfd_vma pc,
>      case AARCH64_OPND_ADDR_SIMM7:
>      case AARCH64_OPND_ADDR_SIMM9:
>      case AARCH64_OPND_ADDR_SIMM9_2:
> +    case AARCH64_OPND_SVE_ADDR_RI_S4xVL:
> +    case AARCH64_OPND_SVE_ADDR_RI_S4x2xVL:
> +    case AARCH64_OPND_SVE_ADDR_RI_S4x3xVL:
> +    case AARCH64_OPND_SVE_ADDR_RI_S4x4xVL:
> +    case AARCH64_OPND_SVE_ADDR_RI_S6xVL:
> +    case AARCH64_OPND_SVE_ADDR_RI_S9xVL:
>      case AARCH64_OPND_SVE_ADDR_RI_U6:
>      case AARCH64_OPND_SVE_ADDR_RI_U6x2:
>      case AARCH64_OPND_SVE_ADDR_RI_U6x4:
> diff --git a/opcodes/aarch64-tbl.h b/opcodes/aarch64-tbl.h
> index aba4b2d..986cef6 100644
> --- a/opcodes/aarch64-tbl.h
> +++ b/opcodes/aarch64-tbl.h
> @@ -2820,6 +2820,24 @@ struct aarch64_opcode aarch64_opcode_table[] =
>        "a prefetch operation specifier")					\
>      Y(SYSTEM, hint, "BARRIER_PSB", 0, F (),				\
>        "the PSB option name CSYNC")					\
> +    Y(ADDRESS, sve_addr_ri_s4xvl, "SVE_ADDR_RI_S4xVL",			\
> +      0 << OPD_F_OD_LSB, F(FLD_Rn),					\
> +      "an address with a 4-bit signed offset, multiplied by VL")	\
> +    Y(ADDRESS, sve_addr_ri_s4xvl, "SVE_ADDR_RI_S4x2xVL",		\
> +      1 << OPD_F_OD_LSB, F(FLD_Rn),					\
> +      "an address with a 4-bit signed offset, multiplied by 2*VL")	\
> +    Y(ADDRESS, sve_addr_ri_s4xvl, "SVE_ADDR_RI_S4x3xVL",		\
> +      2 << OPD_F_OD_LSB, F(FLD_Rn),					\
> +      "an address with a 4-bit signed offset, multiplied by 3*VL")	\
> +    Y(ADDRESS, sve_addr_ri_s4xvl, "SVE_ADDR_RI_S4x4xVL",		\
> +      3 << OPD_F_OD_LSB, F(FLD_Rn),					\
> +      "an address with a 4-bit signed offset, multiplied by 4*VL")	\
> +    Y(ADDRESS, sve_addr_ri_s6xvl, "SVE_ADDR_RI_S6xVL",			\
> +      0 << OPD_F_OD_LSB, F(FLD_Rn),					\
> +      "an address with a 6-bit signed offset, multiplied by VL")	\
> +    Y(ADDRESS, sve_addr_ri_s9xvl, "SVE_ADDR_RI_S9xVL",			\
> +      0 << OPD_F_OD_LSB, F(FLD_Rn),					\
> +      "an address with a 9-bit signed offset, multiplied by VL")	\
>      Y(ADDRESS, sve_addr_ri_u6, "SVE_ADDR_RI_U6", 0 << OPD_F_OD_LSB,	\
>        F(FLD_Rn), "an address with a 6-bit unsigned offset")		\
>      Y(ADDRESS, sve_addr_ri_u6, "SVE_ADDR_RI_U6x2", 1 << OPD_F_OD_LSB,	\
> 
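For readers following the extractors above: the new s4xvl, s6xvl and s9xvl routines all sign-extend their offset field with the same add/mask/subtract idiom before handing it to aarch64_ext_sve_addr_reg_mul_vl.  A minimal standalone sketch of that idiom (the helper name here is illustrative, not a binutils function):

#include <stdint.h>

/* Sign-extend the low BITS bits of VALUE.  For BITS == 4 this reduces to
   ((value + 8) & 15) - 8, the form used in aarch64_ext_sve_addr_ri_s4xvl;
   BITS == 6 and BITS == 9 give the s6xvl and s9xvl variants.  */
static int64_t
sign_extend_field (uint64_t value, unsigned int bits)
{
  uint64_t half = 1ULL << (bits - 1);

  return (int64_t) ((value + half) & (2 * half - 1)) - (int64_t) half;
}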

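The corresponding assembly-side constraint, handled by the sve_imm_offset_vl case in operand_general_constraint_met_p above, amounts to a range-and-multiple test: the offset must lie in [min*factor, max*factor] and be a multiple of factor, where factor is 1 plus the operand-specific data (so 1, 2, 3 or 4 for the S4 forms).  A factor of 3 is not a power of two, which is presumably why value_aligned_p switches from masking to '%'.  A hedged sketch of the effective test (the function name is hypothetical, not the actual binutils interface):

/* Return nonzero if IMM is a valid offset for an SVE "[<base>, #<imm>, MUL VL]"
   operand whose unscaled range is [MIN_VALUE, MAX_VALUE] and whose scaling
   factor is FACTOR.  This mirrors the sve_imm_offset_vl logic above.  */
static int
sve_mul_vl_offset_ok (int64_t imm, int min_value, int max_value, int factor)
{
  return (imm >= (int64_t) min_value * factor
          && imm <= (int64_t) max_value * factor
          && imm % factor == 0);
}

On the printing side such an operand comes out through the new branch in print_immediate_offset_address, e.g. as "[x0,#-3,mul vl]" (x0 here is just an example base register).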
^ permalink raw reply	[flat|nested] 76+ messages in thread

Thread overview: 76+ messages
2016-08-23  9:05 [AArch64][SVE 00/32] Add support for the ARMv8-A Scalable Vector Extension Richard Sandiford
2016-08-23  9:06 ` [AArch64][SVE 02/32] Avoid hard-coded limit in indented_print Richard Sandiford
2016-08-23 14:35   ` Richard Earnshaw (lists)
2016-08-23  9:06 ` [AArch64][SVE 01/32] Remove parse_neon_operand_type Richard Sandiford
2016-08-23 14:28   ` Richard Earnshaw (lists)
2016-08-23  9:07 ` [AArch64][SVE 04/32] Rename neon_type_el to vector_type_el Richard Sandiford
2016-08-23 14:37   ` Richard Earnshaw (lists)
2016-08-23  9:07 ` [AArch64][SVE 03/32] Rename neon_el_type to vector_el_type Richard Sandiford
2016-08-23 14:36   ` Richard Earnshaw (lists)
2016-08-23  9:08 ` [AArch64][SVE 06/32] Generalise parse_neon_reg_list Richard Sandiford
2016-08-23 14:39   ` Richard Earnshaw (lists)
2016-08-23  9:08 ` [AArch64][SVE 05/32] Rename parse_neon_type_for_operand Richard Sandiford
2016-08-23 14:37   ` Richard Earnshaw (lists)
2016-08-23  9:09 ` [AArch64][SVE 07/32] Replace hard-coded uses of REG_TYPE_R_Z_BHSDQ_V Richard Sandiford
2016-08-25 10:36   ` Richard Earnshaw (lists)
2016-08-23  9:10 ` [AArch64][SVE 08/32] Generalise aarch64_double_precision_fmovable Richard Sandiford
2016-08-25 13:17   ` Richard Earnshaw (lists)
2016-08-23  9:11 ` [AArch64][SVE 09/32] Improve error messages for invalid floats Richard Sandiford
2016-08-25 13:19   ` Richard Earnshaw (lists)
2016-08-23  9:11 ` [AArch64][SVE 10/32] Move range check out of parse_aarch64_imm_float Richard Sandiford
2016-08-25 13:20   ` Richard Earnshaw (lists)
2016-08-23  9:12 ` [AArch64][SVE 11/32] Tweak aarch64_reg_parse_32_64 interface Richard Sandiford
2016-08-25 13:27   ` Richard Earnshaw (lists)
2016-09-16 11:51     ` Richard Sandiford
2016-09-20 10:47       ` Richard Earnshaw (lists)
2016-08-23  9:13 ` [AArch64][SVE 12/32] Make more use of bfd_boolean Richard Sandiford
2016-08-25 13:39   ` Richard Earnshaw (lists)
2016-09-16 11:56     ` Richard Sandiford
2016-09-20 12:39       ` Richard Earnshaw (lists)
2016-08-23  9:14 ` [AArch64][SVE 13/32] Add an F_STRICT flag Richard Sandiford
2016-08-25 13:45   ` Richard Earnshaw (lists)
2016-08-23  9:15 ` [AArch64][SVE 15/32] Add {insert,extract}_all_fields helpers Richard Sandiford
2016-08-25 13:50   ` Richard Earnshaw (lists)
2016-08-23  9:15 ` [AArch64][SVE 14/32] Make aarch64_logical_immediate_p take an element size Richard Sandiford
2016-08-25 13:48   ` Richard Earnshaw (lists)
2016-08-23  9:16 ` [AArch64][SVE 18/32] Tidy definition of aarch64-opc.c:int_reg Richard Sandiford
2016-08-25 13:55   ` Richard Earnshaw (lists)
2016-08-23  9:16 ` [AArch64][SVE 16/32] Use specific insert/extract methods for fpimm Richard Sandiford
2016-08-25 13:52   ` Richard Earnshaw (lists)
2016-08-23  9:16 ` [AArch64][SVE 17/32] Add a prefix parameter to print_register_list Richard Sandiford
2016-08-25 13:53   ` Richard Earnshaw (lists)
2016-08-23  9:17 ` [AArch64][SVE 19/32] Refactor address-printing code Richard Sandiford
2016-08-25 13:57   ` Richard Earnshaw (lists)
2016-08-23  9:18 ` [AArch64][SVE 20/32] Add support for tied operands Richard Sandiford
2016-08-25 13:59   ` Richard Earnshaw (lists)
2016-08-23  9:18 ` [AArch64][SVE 21/32] Add Zn and Pn registers Richard Sandiford
2016-08-25 14:07   ` Richard Earnshaw (lists)
2016-08-23  9:19 ` [AArch64][SVE 22/32] Add qualifiers for merging and zeroing predication Richard Sandiford
2016-08-25 14:08   ` Richard Earnshaw (lists)
2016-08-23  9:20 ` [AArch64][SVE 23/32] Add SVE pattern and prfop operands Richard Sandiford
2016-08-25 14:12   ` Richard Earnshaw (lists)
2016-08-23  9:21 ` [AArch64][SVE 25/32] Add support for SVE addressing modes Richard Sandiford
2016-08-25 14:38   ` Richard Earnshaw (lists)
2016-09-16 12:06     ` Richard Sandiford
2016-09-20 13:40       ` Richard Earnshaw (lists)
2016-08-23  9:21 ` [AArch64][SVE 24/32] Add AARCH64_OPND_SVE_PATTERN_SCALED Richard Sandiford
2016-08-25 14:28   ` Richard Earnshaw (lists)
2016-08-23  9:23 ` [AArch64][SVE 26/32] Add SVE MUL VL addressing modes Richard Sandiford
2016-08-25 14:44   ` Richard Earnshaw (lists)
2016-09-16 12:10     ` Richard Sandiford
2016-09-20 13:51       ` Richard Earnshaw (lists)
2016-08-23  9:24 ` [AArch64][SVE 27/32] Add SVE integer immediate operands Richard Sandiford
2016-08-25 14:51   ` Richard Earnshaw (lists)
2016-08-23  9:25 ` [AArch64][SVE 29/32] Add new SVE core & FP register operands Richard Sandiford
2016-08-25 15:01   ` Richard Earnshaw (lists)
2016-08-23  9:25 ` [AArch64][SVE 28/32] Add SVE FP immediate operands Richard Sandiford
2016-08-25 14:59   ` Richard Earnshaw (lists)
2016-08-23  9:26 ` [AArch64][SVE 30/32] Add SVE instruction classes Richard Sandiford
2016-08-25 15:07   ` Richard Earnshaw (lists)
2016-08-23  9:29 ` [AArch64][SVE 31/32] Add SVE instructions Richard Sandiford
2016-08-25 15:18   ` Richard Earnshaw (lists)
2016-08-23  9:31 ` [AArch64][SVE 32/32] Add SVE tests Richard Sandiford
2016-08-25 15:23   ` Richard Earnshaw (lists)
2016-08-30 21:23     ` Richard Sandiford
2016-08-31  9:47       ` Richard Earnshaw (lists)
2016-08-30 13:04 ` [AArch64][SVE 00/32] Add support for the ARMv8-A Scalable Vector Extension Nick Clifton
