public inbox for gcc-patches@gcc.gnu.org
 help / color / mirror / Atom feed
* [PATCH] RFC: New compact syntax for insn and insn_split in Machine Descriptions
@ 2023-04-18 16:30 Tamar Christina
  2023-04-21 17:18 ` Richard Sandiford
                   ` (2 more replies)
  0 siblings, 3 replies; 21+ messages in thread
From: Tamar Christina @ 2023-04-18 16:30 UTC (permalink / raw)
  To: gcc-patches; +Cc: nd, richard.sandiford, richard.earnshaw

[-- Attachment #1: Type: text/plain, Size: 45776 bytes --]

Hi All,

This patch adds support for a compact syntax for specifying constraints in
instruction patterns. Credit for the idea goes to Richard Earnshaw.

I am sending this RFC to get feedback for its inclusion in GCC 14.
With this new syntax we want a clean break from the current limitations to make
something that is hopefully easier to use and maintain.

The idea behind this compact syntax is that it is often quite hard to
correlate the entries in the constraints list, the attributes and the
instruction list.

One has to count, which is tedious.  Additionally, when changing a single
line in the insn, multiple lines in a diff change, making it harder to see
what's going on.

This new syntax takes into account many of the common things that are done in
MD files.   It's also worth saying that this version is intended to deal with
the common case of string-based alternatives.   For C chunks we have some
ideas, but those are not intended to be addressed here.

It's easiest to explain with an example:

normal syntax:

(define_insn_and_split "*movsi_aarch64"
  [(set (match_operand:SI 0 "nonimmediate_operand" "=r,k,r,r,r,r, r,w, m, m,  r,  r,  r, w,r,w, w")
	(match_operand:SI 1 "aarch64_mov_operand"  " r,r,k,M,n,Usv,m,m,rZ,w,Usw,Usa,Ush,rZ,w,w,Ds"))]
  "(register_operand (operands[0], SImode)
    || aarch64_reg_or_zero (operands[1], SImode))"
  "@
   mov\\t%w0, %w1
   mov\\t%w0, %w1
   mov\\t%w0, %w1
   mov\\t%w0, %1
   #
   * return aarch64_output_sve_cnt_immediate (\"cnt\", \"%x0\", operands[1]);
   ldr\\t%w0, %1
   ldr\\t%s0, %1
   str\\t%w1, %0
   str\\t%s1, %0
   adrp\\t%x0, %A1\;ldr\\t%w0, [%x0, %L1]
   adr\\t%x0, %c1
   adrp\\t%x0, %A1
   fmov\\t%s0, %w1
   fmov\\t%w0, %s1
   fmov\\t%s0, %s1
   * return aarch64_output_scalar_simd_mov_immediate (operands[1], SImode);"
  "CONST_INT_P (operands[1]) && !aarch64_move_imm (INTVAL (operands[1]), SImode)
    && REG_P (operands[0]) && GP_REGNUM_P (REGNO (operands[0]))"
   [(const_int 0)]
   "{
       aarch64_expand_mov_immediate (operands[0], operands[1]);
       DONE;
    }"
  ;; The "mov_imm" type for CNT is just a placeholder.
  [(set_attr "type" "mov_reg,mov_reg,mov_reg,mov_imm,mov_imm,mov_imm,load_4,
		    load_4,store_4,store_4,load_4,adr,adr,f_mcr,f_mrc,fmov,neon_move")
   (set_attr "arch"   "*,*,*,*,*,sve,*,fp,*,fp,*,*,*,fp,fp,fp,simd")
   (set_attr "length" "4,4,4,4,*,  4,4, 4,4, 4,8,4,4, 4, 4, 4,   4")
]
)

New syntax:

(define_insn_and_split "*movsi_aarch64"
  [(set (match_operand:SI 0 "nonimmediate_operand")
	(match_operand:SI 1 "aarch64_mov_operand"))]
  "(register_operand (operands[0], SImode)
    || aarch64_reg_or_zero (operands[1], SImode))"
  "@@ (cons: 0 1; attrs: type arch length)
   [=r, r  ; mov_reg  , *   , 4] mov\t%w0, %w1
   [k , r  ; mov_reg  , *   , 4] ^
   [r , k  ; mov_reg  , *   , 4] ^
   [r , M  ; mov_imm  , *   , 4] mov\t%w0, %1
   [r , n  ; mov_imm  , *   , *] #
   [r , Usv; mov_imm  , sve , 4] << aarch64_output_sve_cnt_immediate ('cnt', '%x0', operands[1]);
   [r , m  ; load_4   , *   , 4] ldr\t%w0, %1
   [w , m  ; load_4   , fp  , 4] ldr\t%s0, %1
   [m , rZ ; store_4  , *   , 4] str\t%w1, %0
   [m , w  ; store_4  , fp  , 4] str\t%s1, %0
   [r , Usw; load_4   , *   , 8] adrp\t%x0, %A1;ldr\t%w0, [%x0, %L1]
   [r , Usa; adr      , *   , 4] adr\t%x0, %c1
   [r , Ush; adr      , *   , 4] adrp\t%x0, %A1
   [w , rZ ; f_mcr    , fp  , 4] fmov\t%s0, %w1
   [r , w  ; f_mrc    , fp  , 4] fmov\t%w0, %s1
   [w , w  ; fmov     , fp  , 4] fmov\t%s0, %s1
   [w , Ds ; neon_move, simd, 4] << aarch64_output_scalar_simd_mov_immediate (operands[1], SImode);"
  "CONST_INT_P (operands[1]) && !aarch64_move_imm (INTVAL (operands[1]), SImode)
    && REG_P (operands[0]) && GP_REGNUM_P (REGNO (operands[0]))"
  [(const_int 0)]
  {
    aarch64_expand_mov_immediate (operands[0], operands[1]);
    DONE;
  }
  ;; The "mov_imm" type for CNT is just a placeholder.
)

The patch contains some more rewritten examples for both Arm and AArch64.  I
have included them as examples in this RFC, but the final version posted for
GCC 14 will have these split out.

The main syntax rules are as follows (see the docs for the full rules):
  - The template must start with "@@" to use the new syntax.
  - "@@" is followed by a layout in parentheses which is "cons:" followed by
    a list of match_operand/match_scratch IDs, then a semicolon, then the
    same for attributes ("attrs:"). Both sections are optional (so you can
    use only cons, or only attrs, or both), and cons must come before attrs
    if present.
  - Each alternative begins with any amount of whitespace.
  - Following the whitespace is a comma-separated list of constraints and/or
    attributes within brackets [], with sections separated by a semicolon.
  - Following the closing ']' is any amount of whitespace, and then the actual
    asm output.
  - Spaces are allowed in the list (they will simply be removed).
  - All alternatives should be specified: a blank list should be
    "[,,]", "[,,;,]" etc., not "[]" or "" (however genattr may segfault if
    you leave certain attributes empty, I have found).
  - The actual constraint string in the match_operand or match_scratch, and
    the attribute string in the set_attr, must be blank or an empty string
    (you can't combine the old and new syntaxes).
  - The common idiom "* return" can be shortened by using "<<".
  - Any unexpanded iterators left during processing will result in a compile
    time error rather than incorrect assembly being generated at runtime.  If
    for some reason the literal <> is needed in the output then it must be
    escaped using \, i.e. \<\>.  This check is not performed inside C blocks
    (lines starting with *).
  - Inside a @@ block '' is treated as "" when there are multiple characters
    inside the single quotes, so one can use 'foo' instead of \"foo\" to
    denote a multicharacter string.  This version does not handle multi-byte
    literals such as characters specified by their numerical encoding (e.g.
    \003), nor does it handle unicode, especially multibyte encodings.  This
    feature may be more trouble than it's worth, so I have not finished it
    off.
  - Instead of copying the previous instruction again in the next pattern, one
    can use ^ to refer to the previous asm string.
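To make the decomposition concrete, here is a rough Python sketch of how a
single alternative line splits under these rules.  This is illustrative only;
the real parsing is done in C++ in gensupport.cc, and all names here are
invented for the example:

```python
def parse_alternative(line, prev_asm=None):
    """Decompose one compact-syntax alternative into (constraints, attrs, asm).

    Illustrative sketch only; not the gensupport.cc implementation.
    """
    line = line.lstrip()                    # alternatives may start with any whitespace
    assert line.startswith('['), "an alternative starts with '['"
    body, _, asm = line[1:].partition(']')  # '[...]' then the asm output
    sections = body.split(';')              # constraint section, then attribute section
    # Spaces inside the list are simply removed.
    cons = [c.replace(' ', '') for c in sections[0].split(',')]
    attrs = ([a.replace(' ', '') for a in sections[1].split(',')]
             if len(sections) > 1 else [])
    asm = asm.strip()
    if asm == '^':                          # '^' repeats the previous asm string
        asm = prev_asm
    elif asm.startswith('<<'):              # '<<' shortens the '* return' idiom
        asm = '* return ' + asm[2:].lstrip()
    return cons, attrs, asm

# The second *movsi_aarch64 alternative: '^' resolves to the previous asm.
cons, attrs, asm = parse_alternative('   [k , r  ; mov_reg  , *   , 4] ^',
                                     prev_asm='mov\\t%w0, %w1')
```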

This patch works by blindly transforming the new syntax into the old syntax,
so it doesn't do extensive checking. However, it does verify that:
	- The correct number of constraints/attributes are specified.
	- You haven't mixed old and new syntax.
	- The specified operand IDs/attribute names actually exist.

If something goes wrong, it may write invalid constraints/attributes/template
back into the rtx. But this shouldn't matter because error_at will cause the
program to fail on exit anyway.
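Expressed as a sketch (hypothetical Python with invented names; the real
checks are implemented in C++ in gensupport.cc), those three checks amount to:

```python
def validate(alternatives, n_operands, attr_names, declared_attrs, old_strings):
    """Sketch of the three sanity checks the conversion performs."""
    for cons, attrs, _asm in alternatives:
        # 1) the correct number of constraints/attributes per alternative
        if len(cons) != n_operands or len(attrs) != len(attr_names):
            raise ValueError("wrong number of constraints or attributes")
    # 2) old and new syntax must not be mixed: the old-style strings stay blank
    if any(s for s in old_strings):
        raise ValueError("can't mix old and new syntax")
    # 3) every referenced operand ID/attribute name must actually exist
    for name in attr_names:
        if name not in declared_attrs:
            raise ValueError("unknown attribute %r" % name)
```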

Because this transformation occurs as early as possible (before patterns are
queued), the rest of the compiler can completely ignore the new syntax and
assume that the old syntax will always be used.
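A minimal sketch (invented names) of the core of that rewrite: the
per-alternative rows are transposed back into the per-operand comma-separated
strings the old syntax expects:

```python
def rows_to_old_syntax(rows):
    """Transpose per-alternative constraint rows into per-operand
    old-syntax strings.  rows[i][j] is operand j's constraint in
    alternative i."""
    return [','.join(col) for col in zip(*rows)]

# First three *movsi_aarch64 alternatives from the example above:
old = rows_to_old_syntax([['=r', 'r'], ['k', 'r'], ['r', 'k']])
# operand 0: '=r,k,r'; operand 1: 'r,r,k'
```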

This doesn't seem to have any measurable effect on the runtime of gen*
programs.

Bootstrapped and regtested on aarch64-none-linux-gnu with no issues.

Any feedback?

Thanks,
Tamar

gcc/ChangeLog:

	* config/aarch64/aarch64.md (arches): Add nosimd.
	(*mov<mode>_aarch64, *movsi_aarch64, *movdi_aarch64): Rewrite to
	compact syntax.
	* config/arm/arm.md (*arm_addsi3): Rewrite to compact syntax.
	* doc/md.texi: Document new syntax.
	* gensupport.cc (class conlist, add_constraints, add_attributes,
	create_missing_attributes, skip_spaces, expect_char,
	preprocess_compact_syntax, parse_section_layout, parse_section,
	convert_syntax): New.
	(process_rtx): Check for conversion.
	* genoutput.cc (process_template): Check for unresolved iterators.
	(class data): Add compact_syntax_p.
	(gen_insn): Use it.
	* gensupport.h (compact_syntax): New.
	(hash-set.h): Include.

Co-Authored-By: Omar Tahir <Omar.Tahir2@arm.com>

--- inline copy of patch -- 
diff --git a/gcc/config/aarch64/aarch64.md b/gcc/config/aarch64/aarch64.md
index 022eef80bc1e93299f329610dcd2321917d5770a..331eb2ff57a0e1ff300f3321f154829a57772679 100644
--- a/gcc/config/aarch64/aarch64.md
+++ b/gcc/config/aarch64/aarch64.md
@@ -375,7 +375,7 @@ (define_constants
 ;; As a convenience, "fp_q" means "fp" + the ability to move between
 ;; Q registers and is equivalent to "simd".
 
-(define_enum "arches" [ any rcpc8_4 fp fp_q simd sve fp16])
+(define_enum "arches" [ any rcpc8_4 fp fp_q simd nosimd sve fp16])
 
 (define_enum_attr "arch" "arches" (const_string "any"))
 
@@ -406,6 +406,9 @@ (define_attr "arch_enabled" "no,yes"
 	(and (eq_attr "arch" "fp_q, simd")
 	     (match_test "TARGET_SIMD"))
 
+	(and (eq_attr "arch" "nosimd")
+	     (match_test "!TARGET_SIMD"))
+
 	(and (eq_attr "arch" "fp16")
 	     (match_test "TARGET_FP_F16INST"))
 
@@ -1215,44 +1218,26 @@ (define_expand "mov<mode>"
 )
 
 (define_insn "*mov<mode>_aarch64"
-  [(set (match_operand:SHORT 0 "nonimmediate_operand" "=r,r,    w,r  ,r,w, m,m,r,w,w")
-	(match_operand:SHORT 1 "aarch64_mov_operand"  " r,M,D<hq>,Usv,m,m,rZ,w,w,rZ,w"))]
+  [(set (match_operand:SHORT 0 "nonimmediate_operand")
+	(match_operand:SHORT 1 "aarch64_mov_operand"))]
   "(register_operand (operands[0], <MODE>mode)
     || aarch64_reg_or_zero (operands[1], <MODE>mode))"
-{
-   switch (which_alternative)
-     {
-     case 0:
-       return "mov\t%w0, %w1";
-     case 1:
-       return "mov\t%w0, %1";
-     case 2:
-       return aarch64_output_scalar_simd_mov_immediate (operands[1],
-							<MODE>mode);
-     case 3:
-       return aarch64_output_sve_cnt_immediate (\"cnt\", \"%x0\", operands[1]);
-     case 4:
-       return "ldr<size>\t%w0, %1";
-     case 5:
-       return "ldr\t%<size>0, %1";
-     case 6:
-       return "str<size>\t%w1, %0";
-     case 7:
-       return "str\t%<size>1, %0";
-     case 8:
-       return TARGET_SIMD ? "umov\t%w0, %1.<v>[0]" : "fmov\t%w0, %s1";
-     case 9:
-       return TARGET_SIMD ? "dup\t%0.<Vallxd>, %w1" : "fmov\t%s0, %w1";
-     case 10:
-       return TARGET_SIMD ? "dup\t%<Vetype>0, %1.<v>[0]" : "fmov\t%s0, %s1";
-     default:
-       gcc_unreachable ();
-     }
-}
+  "@@ (cons: 0 1; attrs: type arch)
+  [=r, r    ; mov_reg        , *     ] mov\t%w0, %w1
+  [r , M    ; mov_imm        , *     ] mov\t%w0, %1
+  [w , D<hq>; neon_move      , simd  ] << aarch64_output_scalar_simd_mov_immediate (operands[1], <MODE>mode);
+  [r , Usv  ; mov_imm        , sve   ] << aarch64_output_sve_cnt_immediate ('cnt', '%x0', operands[1]);
+  [r , m    ; load_4         , *     ] ldr<size>\t%w0, %1
+  [w , m    ; load_4         , *     ] ldr\t%<size>0, %1
+  [m , rZ   ; store_4        , *     ] str<size>\\t%w1, %0
+  [m , w    ; store_4        , *     ] str\t%<size>1, %0
+  [r , w    ; neon_to_gp<q>  , simd  ] umov\t%w0, %1.<v>[0]
+  [r , w    ; neon_to_gp<q>  , nosimd] fmov\t%w0, %s1
+  [w , rZ   ; neon_from_gp<q>, simd  ] dup\t%0.<Vallxd>, %w1
+  [w , rZ   ; neon_from_gp<q>, nosimd] fmov\t%s0, %w1
+  [w , w    ; neon_dup       , simd  ] dup\t%<Vetype>0, %1.<v>[0]
+  [w , w    ; neon_dup       , nosimd] fmov\t%s0, %s1"
   ;; The "mov_imm" type for CNT is just a placeholder.
-  [(set_attr "type" "mov_reg,mov_imm,neon_move,mov_imm,load_4,load_4,store_4,
-		     store_4,neon_to_gp<q>,neon_from_gp<q>,neon_dup")
-   (set_attr "arch" "*,*,simd,sve,*,*,*,*,*,*,*")]
 )
 
 (define_expand "mov<mode>"
@@ -1289,79 +1274,69 @@ (define_expand "mov<mode>"
 )
 
 (define_insn_and_split "*movsi_aarch64"
-  [(set (match_operand:SI 0 "nonimmediate_operand" "=r,k,r,r,r,r, r,w, m, m,  r,  r,  r, w,r,w, w")
-	(match_operand:SI 1 "aarch64_mov_operand"  " r,r,k,M,n,Usv,m,m,rZ,w,Usw,Usa,Ush,rZ,w,w,Ds"))]
+  [(set (match_operand:SI 0 "nonimmediate_operand")
+	(match_operand:SI 1 "aarch64_mov_operand"))]
   "(register_operand (operands[0], SImode)
     || aarch64_reg_or_zero (operands[1], SImode))"
-  "@
-   mov\\t%w0, %w1
-   mov\\t%w0, %w1
-   mov\\t%w0, %w1
-   mov\\t%w0, %1
-   #
-   * return aarch64_output_sve_cnt_immediate (\"cnt\", \"%x0\", operands[1]);
-   ldr\\t%w0, %1
-   ldr\\t%s0, %1
-   str\\t%w1, %0
-   str\\t%s1, %0
-   adrp\\t%x0, %A1\;ldr\\t%w0, [%x0, %L1]
-   adr\\t%x0, %c1
-   adrp\\t%x0, %A1
-   fmov\\t%s0, %w1
-   fmov\\t%w0, %s1
-   fmov\\t%s0, %s1
-   * return aarch64_output_scalar_simd_mov_immediate (operands[1], SImode);"
+  "@@ (cons: 0 1; attrs: type arch length)
+   [=r, r  ; mov_reg  , *   , 4] mov\t%w0, %w1
+   [k , r  ; mov_reg  , *   , 4] ^
+   [r , k  ; mov_reg  , *   , 4] ^
+   [r , M  ; mov_imm  , *   , 4] mov\t%w0, %1
+   [r , n  ; mov_imm  , *   ,16] #
+   [r , Usv; mov_imm  , sve , 4] << aarch64_output_sve_cnt_immediate ('cnt', '%x0', operands[1]);
+   [r , m  ; load_4   , *   , 4] ldr\t%w0, %1
+   [w , m  ; load_4   , fp  , 4] ldr\t%s0, %1
+   [m , rZ ; store_4  , *   , 4] str\t%w1, %0
+   [m , w  ; store_4  , fp  , 4] str\t%s1, %0
+   [r , Usw; load_4   , *   , 8] adrp\t%x0, %A1;ldr\t%w0, [%x0, %L1]
+   [r , Usa; adr      , *   , 4] adr\t%x0, %c1
+   [r , Ush; adr      , *   , 4] adrp\t%x0, %A1
+   [w , rZ ; f_mcr    , fp  , 4] fmov\t%s0, %w1
+   [r , w  ; f_mrc    , fp  , 4] fmov\t%w0, %s1
+   [w , w  ; fmov     , fp  , 4] fmov\t%s0, %s1
+   [w , Ds ; neon_move, simd, 4] << aarch64_output_scalar_simd_mov_immediate (operands[1], SImode);"
   "CONST_INT_P (operands[1]) && !aarch64_move_imm (INTVAL (operands[1]), SImode)
     && REG_P (operands[0]) && GP_REGNUM_P (REGNO (operands[0]))"
-   [(const_int 0)]
-   "{
-       aarch64_expand_mov_immediate (operands[0], operands[1]);
-       DONE;
-    }"
+  [(const_int 0)]
+  {
+    aarch64_expand_mov_immediate (operands[0], operands[1]);
+    DONE;
+  }
   ;; The "mov_imm" type for CNT is just a placeholder.
-  [(set_attr "type" "mov_reg,mov_reg,mov_reg,mov_imm,mov_imm,mov_imm,load_4,
-		    load_4,store_4,store_4,load_4,adr,adr,f_mcr,f_mrc,fmov,neon_move")
-   (set_attr "arch"   "*,*,*,*,*,sve,*,fp,*,fp,*,*,*,fp,fp,fp,simd")
-   (set_attr "length" "4,4,4,4,*,  4,4, 4,4, 4,8,4,4, 4, 4, 4,   4")
-]
 )
 
 (define_insn_and_split "*movdi_aarch64"
-  [(set (match_operand:DI 0 "nonimmediate_operand" "=r,k,r,r,r,r, r,w, m,m,   r,  r,  r, w,r,w, w")
-	(match_operand:DI 1 "aarch64_mov_operand"  " r,r,k,O,n,Usv,m,m,rZ,w,Usw,Usa,Ush,rZ,w,w,Dd"))]
+  [(set (match_operand:DI 0 "nonimmediate_operand")
+	(match_operand:DI 1 "aarch64_mov_operand"))]
   "(register_operand (operands[0], DImode)
     || aarch64_reg_or_zero (operands[1], DImode))"
-  "@
-   mov\\t%x0, %x1
-   mov\\t%0, %x1
-   mov\\t%x0, %1
-   * return aarch64_is_mov_xn_imm (INTVAL (operands[1])) ? \"mov\\t%x0, %1\" : \"mov\\t%w0, %1\";
-   #
-   * return aarch64_output_sve_cnt_immediate (\"cnt\", \"%x0\", operands[1]);
-   ldr\\t%x0, %1
-   ldr\\t%d0, %1
-   str\\t%x1, %0
-   str\\t%d1, %0
-   * return TARGET_ILP32 ? \"adrp\\t%0, %A1\;ldr\\t%w0, [%0, %L1]\" : \"adrp\\t%0, %A1\;ldr\\t%0, [%0, %L1]\";
-   adr\\t%x0, %c1
-   adrp\\t%x0, %A1
-   fmov\\t%d0, %x1
-   fmov\\t%x0, %d1
-   fmov\\t%d0, %d1
-   * return aarch64_output_scalar_simd_mov_immediate (operands[1], DImode);"
-   "CONST_INT_P (operands[1]) && !aarch64_move_imm (INTVAL (operands[1]), DImode)
-    && REG_P (operands[0]) && GP_REGNUM_P (REGNO (operands[0]))"
-   [(const_int 0)]
-   "{
-       aarch64_expand_mov_immediate (operands[0], operands[1]);
-       DONE;
-    }"
+  "@@ (cons: 0 1; attrs: type arch length)
+   [=r, r  ; mov_reg  , *   , 4] mov\t%x0, %x1
+   [k , r  ; mov_reg  , *   , 4] mov\t%0, %x1
+   [r , k  ; mov_reg  , *   , 4] mov\t%x0, %1
+   [r , O  ; mov_imm  , *   , 4] << aarch64_is_mov_xn_imm (INTVAL (operands[1])) ? 'mov\t%x0, %1' : 'mov\t%w0, %1';
+   [r , n  ; mov_imm  , *   ,16] #
+   [r , Usv; mov_imm  , sve , 4] << aarch64_output_sve_cnt_immediate ('cnt', '%x0', operands[1]);
+   [r , m  ; load_8   , *   , 4] ldr\t%x0, %1
+   [w , m  ; load_8   , fp  , 4] ldr\t%d0, %1
+   [m , rZ ; store_8  , *   , 4] str\t%x1, %0
+   [m , w  ; store_8  , fp  , 4] str\t%d1, %0
+   [r , Usw; load_8   , *   , 8] << TARGET_ILP32 ? 'adrp\t%0, %A1;ldr\t%w0, [%0, %L1]' : 'adrp\t%0, %A1;ldr\t%0, [%0, %L1]';
+   [r , Usa; adr      , *   , 4] adr\t%x0, %c1
+   [r , Ush; adr      , *   , 4] adrp\t%x0, %A1
+   [w , rZ ; f_mcr    , fp  , 4] fmov\t%d0, %x1
+   [r , w  ; f_mrc    , fp  , 4] fmov\t%x0, %d1
+   [w , w  ; fmov     , fp  , 4] fmov\t%d0, %d1
+   [w , Dd ; neon_move, simd, 4] << aarch64_output_scalar_simd_mov_immediate (operands[1], DImode);"
+  "CONST_INT_P (operands[1]) && !aarch64_move_imm (INTVAL (operands[1]), DImode)
+   && REG_P (operands[0]) && GP_REGNUM_P (REGNO (operands[0]))"
+  [(const_int 0)]
+  {
+      aarch64_expand_mov_immediate (operands[0], operands[1]);
+      DONE;
+  }
   ;; The "mov_imm" type for CNTD is just a placeholder.
-  [(set_attr "type" "mov_reg,mov_reg,mov_reg,mov_imm,mov_imm,mov_imm,
-		     load_8,load_8,store_8,store_8,load_8,adr,adr,f_mcr,f_mrc,
-		     fmov,neon_move")
-   (set_attr "arch"   "*,*,*,*,*,sve,*,fp,*,fp,*,*,*,fp,fp,fp,simd")
-   (set_attr "length" "4,4,4,4,*,  4,4, 4,4, 4,8,4,4, 4, 4, 4,   4")]
 )
 
 (define_insn "insv_imm<mode>"
diff --git a/gcc/config/arm/arm.md b/gcc/config/arm/arm.md
index cbfc4543531452b0708a38bdf4abf5105b54f8b7..16c50b4a7c414a72b234cef7745a37745e6a41fc 100644
--- a/gcc/config/arm/arm.md
+++ b/gcc/config/arm/arm.md
@@ -924,27 +924,27 @@ (define_peephole2
 ;;  (plus (reg rN) (reg sp)) into (reg rN).  In this case reload will
 ;; put the duplicated register first, and not try the commutative version.
 (define_insn_and_split "*arm_addsi3"
-  [(set (match_operand:SI          0 "s_register_operand" "=rk,l,l ,l ,r ,k ,r,k ,r ,k ,r ,k,k,r ,k ,r")
-	(plus:SI (match_operand:SI 1 "s_register_operand" "%0 ,l,0 ,l ,rk,k ,r,r ,rk,k ,rk,k,r,rk,k ,rk")
-		 (match_operand:SI 2 "reg_or_int_operand" "rk ,l,Py,Pd,rI,rI,k,rI,Pj,Pj,L ,L,L,PJ,PJ,?n")))]
-  "TARGET_32BIT"
-  "@
-   add%?\\t%0, %0, %2
-   add%?\\t%0, %1, %2
-   add%?\\t%0, %1, %2
-   add%?\\t%0, %1, %2
-   add%?\\t%0, %1, %2
-   add%?\\t%0, %1, %2
-   add%?\\t%0, %2, %1
-   add%?\\t%0, %1, %2
-   addw%?\\t%0, %1, %2
-   addw%?\\t%0, %1, %2
-   sub%?\\t%0, %1, #%n2
-   sub%?\\t%0, %1, #%n2
-   sub%?\\t%0, %1, #%n2
-   subw%?\\t%0, %1, #%n2
-   subw%?\\t%0, %1, #%n2
-   #"
+  [(set (match_operand:SI 0 "s_register_operand")
+        (plus:SI (match_operand:SI 1 "s_register_operand")
+                 (match_operand:SI 2 "reg_or_int_operand")))]
+  "TARGET_32BIT"
+  "@@ (cons: 0 1 2; attrs: length predicable_short_it arch)
+   [=rk, %0, rk; 2,  yes, t2] add%?\\t%0, %0, %2
+   [l,   l,  l ; 4,  yes, t2] add%?\\t%0, %1, %2
+   [l,   0,  Py; 4,  yes, t2] add%?\\t%0, %1, %2
+   [l,   l,  Pd; 4,  yes, t2] add%?\\t%0, %1, %2
+   [r,   rk, rI; 4,  no,  * ] add%?\\t%0, %1, %2
+   [k,   k,  rI; 4,  no,  * ] add%?\\t%0, %1, %2
+   [r,   r,  k ; 4,  no,  * ] add%?\\t%0, %2, %1
+   [k,   r,  rI; 4,  no,  a ] add%?\\t%0, %1, %2
+   [r,   rk, Pj; 4,  no,  t2] addw%?\\t%0, %1, %2
+   [k,   k,  Pj; 4,  no,  t2] addw%?\\t%0, %1, %2
+   [r,   rk, L ; 4,  no,  * ] sub%?\\t%0, %1, #%n2
+   [k,   k,  L ; 4,  no,  * ] sub%?\\t%0, %1, #%n2
+   [k,   r,  L ; 4,  no,  a ] sub%?\\t%0, %1, #%n2
+   [r,   rk, PJ; 4,  no,  t2] subw%?\\t%0, %1, #%n2
+   [k,   k,  PJ; 4,  no,  t2] subw%?\\t%0, %1, #%n2
+   [r,   rk, ?n; 16, no,  * ] #"
   "TARGET_32BIT
    && CONST_INT_P (operands[2])
    && !const_ok_for_op (INTVAL (operands[2]), PLUS)
@@ -956,10 +956,10 @@ (define_insn_and_split "*arm_addsi3"
 		      operands[1], 0);
   DONE;
   "
-  [(set_attr "length" "2,4,4,4,4,4,4,4,4,4,4,4,4,4,4,16")
+  [(set_attr "length")
    (set_attr "predicable" "yes")
-   (set_attr "predicable_short_it" "yes,yes,yes,yes,no,no,no,no,no,no,no,no,no,no,no,no")
-   (set_attr "arch" "t2,t2,t2,t2,*,*,*,a,t2,t2,*,*,a,t2,t2,*")
+   (set_attr "predicable_short_it")
+   (set_attr "arch")
    (set (attr "type") (if_then_else (match_operand 2 "const_int_operand" "")
 		      (const_string "alu_imm")
 		      (const_string "alu_sreg")))
diff --git a/gcc/doc/md.texi b/gcc/doc/md.texi
index 07bf8bdebffb2e523f25a41f2b57e43c0276b745..199f2315432dc56cadfdfc03a8ab381fe02a43b3 100644
--- a/gcc/doc/md.texi
+++ b/gcc/doc/md.texi
@@ -27,6 +27,7 @@ See the next chapter for information on the C header file.
                         from such an insn.
 * Output Statement::    For more generality, write C code to output
                         the assembler code.
+* Compact Syntax::      Compact syntax for writing machine descriptions.
 * Predicates::          Controlling what kinds of operands can be used
                         for an insn.
 * Constraints::         Fine-tuning operand selection.
@@ -713,6 +714,211 @@ you can use @samp{*} inside of a @samp{@@} multi-alternative template:
 @end group
 @end smallexample
 
+@node Compact Syntax
+@section Compact Syntax
+@cindex compact syntax
+
+In cases where the number of alternatives in a @code{define_insn} or
+@code{define_insn_and_split} is large, it may be beneficial to use the
+compact syntax when specifying alternatives.
+
+This syntax puts the constraints and attributes on the same horizontal line as
+the instruction assembly template.
+
+As an example,
+
+@smallexample
+@group
+(define_insn_and_split ""
+  [(set (match_operand:SI 0 "nonimmediate_operand" "=r,k,r,r,r,r")
+	(match_operand:SI 1 "aarch64_mov_operand"  " r,r,k,M,n,Usv"))]
+  ""
+  "@
+   mov\\t%w0, %w1
+   mov\\t%w0, %w1
+   mov\\t%w0, %w1
+   mov\\t%w0, %1
+   #
+   * return aarch64_output_sve_cnt_immediate ('cnt', '%x0', operands[1]);"
+  "&& true"
+   [(const_int 0)]
+  @{
+     aarch64_expand_mov_immediate (operands[0], operands[1]);
+     DONE;
+  @}
+  [(set_attr "type" "mov_reg,mov_reg,mov_reg,mov_imm,mov_imm,mov_imm")
+   (set_attr "arch"   "*,*,*,*,*,sve")
+   (set_attr "length" "4,4,4,4,*,  4")
+]
+)
+@end group
+@end smallexample
+
+can be better expressed as:
+
+@smallexample
+@group
+(define_insn_and_split ""
+  [(set (match_operand:SI 0 "nonimmediate_operand")
+	(match_operand:SI 1 "aarch64_mov_operand"))]
+  ""
+  "@@ (cons: 0 1; attrs: type arch length)
+   [=r, r  ; mov_reg  , *   , 4] mov\t%w0, %w1
+   [k , r  ; mov_reg  , *   , 4] ^
+   [r , k  ; mov_reg  , *   , 4] ^
+   [r , M  ; mov_imm  , *   , 4] mov\t%w0, %1
+   [r , n  ; mov_imm  , *   , *] #
+   [r , Usv; mov_imm  , sve , 4] << aarch64_output_sve_cnt_immediate ('cnt', '%x0', operands[1]);"
+  "&& true"
+  [(const_int 0)]
+  @{
+    aarch64_expand_mov_immediate (operands[0], operands[1]);
+    DONE;
+  @}
+)
+@end group
+@end smallexample
+
+The syntax rules are as follows:
+@itemize @bullet
+@item
+Template must start with "@@" to use the new syntax.
+
+@item
+"@@" is followed by a layout in parentheses which is @samp{cons:} followed by
+a list of @code{match_operand}/@code{match_scratch} operand numbers, then a
+semicolon, followed by the same for attributes (@samp{attrs:}).  Both sections
+are optional (so you can use only @samp{cons}, or only @samp{attrs}, or both),
+and @samp{cons} must come before @samp{attrs} if present.
+
+@item
+Each alternative begins with any amount of whitespace.
+
+@item
+Following the whitespace is a comma-separated list of @samp{constraints} and/or
+@samp{attributes} within brackets @code{[]}, with sections separated by a
+semicolon.
+
+@item
+Should you want to copy the previous asm line, the symbol @code{^} can be used.
+This avoids copy-pasting between alternatives and reduces the number of lines
+to update on changes.
+
+@item
+When using C functions for output, the idiom @code{* return <function>;} can be
+replaced with the shorthand @code{<< <function>;}.
+
+@item
+Following the closing ']' is any amount of whitespace, and then the actual asm
+output.
+
+@item
+Spaces are allowed in the list (they will simply be removed).
+
+@item
+All alternatives should be specified: a blank list should be "[,,]", "[,,;,]"
+etc., not "[]" or "".
+
+@item
+Within an @@ block, @code{''} is treated the same as @code{""} in cases where
+the contents would be invalid as a single C character.  This means a
+multicharacter string can be created using @code{''}, which requires less
+escaping.
+
+@item
+Within an @@ block, any unexpanded iterators will result in a compile time
+error rather than the literal @code{<..>} being generated in the output asm.
+If a literal @code{<..>} is required in the output then it must be escaped as
+@code{\<..\>} using @backslashchar{}.
+
+@item
+The actual constraint string in the @code{match_operand} or
+@code{match_scratch}, and the attribute string in the @code{set_attr}, must be
+blank or an empty string (you can't combine the old and new syntaxes).
+
+@item
+@code{set_attr} entries are optional.  An attribute listed in the @samp{attrs}
+section does not require a corresponding @code{set_attr}; if a @code{set_attr}
+is given for such an attribute then its value string must be empty or blank.
+
+@item
+Additional @code{set_attr} entries can be specified beyond the ones in the
+@samp{attrs} list.  These must use the normal syntax and must be defined
+after all the @samp{attrs}-based entries.
+
+In other words, the following are valid:
+@smallexample
+@group
+(define_insn_and_split ""
+  [(set (match_operand:SI 0 "nonimmediate_operand")
+	(match_operand:SI 1 "aarch64_mov_operand"))]
+  ""
+  "@@ (cons: 0 1; attrs: type arch length)"
+  ...
+  [(set_attr "type")]
+  [(set_attr "arch")]
+  [(set_attr "length")]
+  [(set_attr "foo" "mov_imm")]
+)
+@end group
+@end smallexample
+
+and
+
+@smallexample
+@group
+(define_insn_and_split ""
+  [(set (match_operand:SI 0 "nonimmediate_operand")
+	(match_operand:SI 1 "aarch64_mov_operand"))]
+  ""
+  "@@ (cons: 0 1; attrs: type arch length)"
+  ...
+  [(set_attr "foo" "mov_imm")]
+)
+@end group
+@end smallexample
+
+but these are not valid:
+@smallexample
+@group
+(define_insn_and_split ""
+  [(set (match_operand:SI 0 "nonimmediate_operand")
+	(match_operand:SI 1 "aarch64_mov_operand"))]
+  ""
+  "@@ (cons: 0 1; attrs: type arch length)"
+  ...
+  [(set_attr "type")]
+  [(set_attr "arch")]
+  [(set_attr "foo" "mov_imm")]
+)
+@end group
+@end smallexample
+
+and
+
+@smallexample
+@group
+(define_insn_and_split ""
+  [(set (match_operand:SI 0 "nonimmediate_operand")
+	(match_operand:SI 1 "aarch64_mov_operand"))]
+  ""
+  "@@ (cons: 0 1; attrs: type arch length)"
+  ...
+  [(set_attr "type")]
+  [(set_attr "foo" "mov_imm")]
+  [(set_attr "arch")]
+  [(set_attr "length")]
+)
+@end group
+@end smallexample
+
+because the order of the entries does not match and new entries must come last.
+@end itemize
+
 @node Predicates
 @section Predicates
 @cindex predicates
diff --git a/gcc/genoutput.cc b/gcc/genoutput.cc
index 163e8dfef4ca2c2c92ce1cf001ee6be40a54ca3e..4e67cd6ca5356c62165382de01da6bbc6f3c5fa2 100644
--- a/gcc/genoutput.cc
+++ b/gcc/genoutput.cc
@@ -91,6 +91,7 @@ along with GCC; see the file COPYING3.  If not see
 #include "errors.h"
 #include "read-md.h"
 #include "gensupport.h"
+#include <string>
 
 /* No instruction can have more operands than this.  Sorry for this
    arbitrary limit, but what machine will have an instruction with
@@ -157,6 +158,7 @@ public:
   int n_alternatives;		/* Number of alternatives in each constraint */
   int operand_number;		/* Operand index in the big array.  */
   int output_format;		/* INSN_OUTPUT_FORMAT_*.  */
+  bool compact_syntax_p;
   struct operand_data operand[MAX_MAX_OPERANDS];
 };
 
@@ -700,12 +702,37 @@ process_template (class data *d, const char *template_code)
 	  if (sp != ep)
 	    message_at (d->loc, "trailing whitespace in output template");
 
-	  while (cp < sp)
+	  /* Check for any unexpanded iterators.  */
+	  std::string buff (cp, sp - cp);
+	  if (bp[0] != '*' && d->compact_syntax_p)
 	    {
-	      putchar (*cp);
-	      cp++;
+	      size_t start = buff.find ('<');
+	      size_t end = buff.find ('>', start + 1);
+	      if (end != std::string::npos || start != std::string::npos)
+		{
+		  if (end == std::string::npos || start == std::string::npos)
+		    fatal_at (d->loc, "unmatched angle brackets, likely an "
+			      "error in iterator syntax in %s", buff.c_str ());
+
+		  if (start != 0
+		      && buff[start-1] == '\\'
+		      && buff[end-1] == '\\')
+		    {
+		      /* Found a valid escape sequence, erase the characters for
+			 output.  */
+		      buff.erase (end-1, 1);
+		      buff.erase (start-1, 1);
+		    }
+		  else
+		    fatal_at (d->loc, "unresolved iterator '%s' in '%s'",
+			      buff.substr(start+1, end - start-1).c_str (),
+			      buff.c_str ());
+		}
 	    }
 
+	  printf ("%s", buff.c_str ());
+	  cp = sp;
+
 	  if (!found_star)
 	    puts ("\",");
 	  else if (*bp != '*')
@@ -881,6 +908,8 @@ gen_insn (md_rtx_info *info)
   else
     d->name = 0;
 
+  d->compact_syntax_p = compact_syntax.contains (insn);
+
   /* Build up the list in the same order as the insns are seen
      in the machine description.  */
   d->next = 0;
diff --git a/gcc/gensupport.h b/gcc/gensupport.h
index a1edfbd71908b6244b40f801c6c01074de56777e..7925e22ed418767576567cad583bddf83c0846b1 100644
--- a/gcc/gensupport.h
+++ b/gcc/gensupport.h
@@ -20,6 +20,7 @@ along with GCC; see the file COPYING3.  If not see
 #ifndef GCC_GENSUPPORT_H
 #define GCC_GENSUPPORT_H
 
+#include "hash-set.h"
 #include "read-md.h"
 
 struct obstack;
@@ -218,6 +219,8 @@ struct pattern_stats
   int num_operand_vars;
 };
 
+extern hash_set<rtx> compact_syntax;
+
 extern void get_pattern_stats (struct pattern_stats *ranges, rtvec vec);
 extern void compute_test_codes (rtx, file_location, char *);
 extern file_location get_file_location (rtx);
diff --git a/gcc/gensupport.cc b/gcc/gensupport.cc
index f9efc6eb7572a44b8bb154b0b22be3815bd0d244..c6a731968d2d6c7c9b01ad00e9dabb2b6d5f173e 100644
--- a/gcc/gensupport.cc
+++ b/gcc/gensupport.cc
@@ -27,12 +27,16 @@
 #include "read-md.h"
 #include "gensupport.h"
 #include "vec.h"
+#include <string>
+#include <vector>
 
 #define MAX_OPERANDS 40
 
 static rtx operand_data[MAX_OPERANDS];
 static rtx match_operand_entries_in_pattern[MAX_OPERANDS];
 static char used_operands_numbers[MAX_OPERANDS];
+/* List of entries which are part of the new syntax.  */
+hash_set<rtx> compact_syntax;
 
 
 /* In case some macros used by files we include need it, define this here.  */
@@ -545,6 +549,532 @@ gen_rewrite_sequence (rtvec vec)
   return new_vec;
 }
 
+/* The following is for handling the compact syntax for constraints and
+   attributes.
+
+   The normal syntax looks like this:
+
+       ...
+       (match_operand: 0 "s_register_operand" "r,I,k")
+       (match_operand: 2 "s_register_operand" "r,k,I")
+       ...
+       "@
+	<asm>
+	<asm>
+	<asm>"
+       ...
+       (set_attr "length" "4,8,8")
+
+   The compact syntax looks like this:
+
+       ...
+       (match_operand: 0 "s_register_operand")
+       (match_operand: 2 "s_register_operand")
+       ...
+       "@@ (cons: 0 2; attrs: length)
+	[r,r; 4] <asm>
+	[I,k; 8] <asm>
+	[k,I; 8] <asm>"
+       ...
+       (set_attr "length")
+
+   This is the only place where this syntax needs to be handled.  Relevant
+   patterns are transformed from compact to the normal syntax before they are
+   queued, so none of the gen* programs need to know about this syntax at all.
+
+   Conversion process (convert_syntax):
+
+   0) Check that pattern actually uses new syntax (check for "@@").
+
+   1) Get the "layout", i.e. the "(cons: 0 2; attrs: length)" from the above
+      example.  cons must come first; both are optional. Set up two vecs,
+      convec and attrvec, for holding the results of the transformation.
+
+   2) For each alternative: parse the list of constraints and/or attributes,
+      and enqueue them in the relevant lists in convec and attrvec.  By the end
+      of this process, convec[N].con and attrvec[N].con should contain regular
+      syntax constraint/attribute lists like "r,I,k".  Copy the asm to a string
+      as we go.
+
+   3) Search the rtx and write the constraint and attribute lists into the
+      correct places. Write the asm back into the template.  */
+
+/* Helper class for shuffling constraints/attributes in convert_syntax and
+   add_constraints/add_attributes.  This includes commas but not whitespace.  */
+
+class conlist {
+private:
+  std::string con;
+
+public:
+  std::string name;
+
+  /* [ns..ns + len) should be a string with the id of the rtx to match
+     i.e. if rtx is the relevant match_operand or match_scratch then
+     [ns..ns + len) should equal itoa (XINT (rtx, 0)), and if set_attr then
+     [ns..ns + len) should equal XSTR (rtx, 0).  */
+  conlist (const char *ns, unsigned int len)
+  {
+    name.assign (ns, len);
+  }
+
+  /* Adds a character to the end of the string.  */
+  void add (char c)
+  {
+    con += c;
+  }
+
+  /* Output the string in the form of a brand-new char *, then clear the
+     internal string.  */
+  char *out ()
+  {
+    /* Final character is always a trailing comma, so strip it out.  */
+    char * q = xstrndup (con.c_str (), con.size () - 1);
+    con.clear ();
+    return q;
+  }
+};
+
+typedef std::vector<conlist> vec_conlist;
+
+/* Add constraints to an rtx. The match_operand/match_scratch that are matched
+   must be in depth-first order i.e. read from top to bottom in the pattern.
+   index is the index of the conlist we are up to so far.
+   This function is similar to remove_constraints.
+   Errors if adding the constraints would overwrite existing constraints.
+   Returns 1 + index of last conlist to be matched.  */
+
+static unsigned int
+add_constraints (rtx part, file_location loc, unsigned int index,
+		 vec_conlist &cons)
+{
+  const char *format_ptr;
+  char id[3];
+
+  if (part == NULL_RTX || index == cons.size ())
+    return index;
+
+  /* If match_op or match_scr, check if we have the right one, and if so, copy
+     over the constraint list.  */
+  if (GET_CODE (part) == MATCH_OPERAND || GET_CODE (part) == MATCH_SCRATCH)
+    {
+      int field = GET_CODE (part) == MATCH_OPERAND ? 2 : 1;
+
+      snprintf (id, 3, "%d", XINT (part, 0));
+      if (cons[index].name.compare (id) == 0)
+	{
+	  if (XSTR (part, field)[0] != '\0')
+	    {
+	      error_at (loc, "can't mix normal and compact constraint syntax");
+	      return cons.size ();
+	    }
+	  XSTR (part, field) = cons[index].out ();
+
+	  ++index;
+	}
+    }
+
+  format_ptr = GET_RTX_FORMAT (GET_CODE (part));
+
+  /* Recursively search the rtx.  */
+  for (int i = 0; i < GET_RTX_LENGTH (GET_CODE (part)); i++)
+    switch (*format_ptr++)
+      {
+      case 'e':
+      case 'u':
+	index = add_constraints (XEXP (part, i), loc, index, cons);
+	break;
+      case 'E':
+	if (XVEC (part, i) != NULL)
+	  for (int j = 0; j < XVECLEN (part, i); j++)
+	    index = add_constraints (XVECEXP (part, i, j), loc, index, cons);
+	break;
+      default:
+	continue;
+      }
+
+  return index;
+}
+
+/* Add attributes to an rtx. The attributes that are matched must be in order
+   i.e. read from top to bottom in the pattern.
+   Errors if adding the attributes would overwrite existing attributes.
+   Returns 1 + index of last conlist to be matched.  */
+
+static unsigned int
+add_attributes (rtx x, file_location loc, vec_conlist &attrs)
+{
+  unsigned int attr_index = GET_CODE (x) == DEFINE_INSN ? 4 : 3;
+  unsigned int index = 0;
+
+  if (XVEC (x, attr_index) == NULL)
+    return index;
+
+  for (int i = 0; i < XVECLEN (x, attr_index); ++i)
+    {
+      rtx part = XVECEXP (x, attr_index, i);
+
+      if (GET_CODE (part) != SET_ATTR)
+	continue;
+
+      if (attrs[index].name.compare (XSTR (part, 0)) == 0)
+	{
+	  if (XSTR (part, 1) && XSTR (part, 1)[0] != '\0')
+	    {
+	      error_at (loc, "can't mix normal and compact attribute syntax");
+	      break;
+	    }
+	  XSTR (part, 1) = attrs[index].out ();
+
+	  ++index;
+	  if (index == attrs.size ())
+	    break;
+	}
+    }
+
+  return index;
+}
+
+/* Modify the attributes list to make space for the implicitly declared
+   attributes in the attrs: list.  */
+
+static void
+create_missing_attributes (rtx x, file_location /* loc */, vec_conlist &attrs)
+{
+  if (attrs.empty ())
+    return;
+
+  unsigned int attr_index = GET_CODE (x) == DEFINE_INSN ? 4 : 3;
+  vec_conlist missing;
+
+  /* This is an O(n*m) loop, but that's fine: both n and m will always be
+     very small.  */
+  for (conlist cl : attrs)
+    {
+      bool found = false;
+      for (int i = 0; XVEC (x, attr_index) && i < XVECLEN (x, attr_index); ++i)
+	{
+	  rtx part = XVECEXP (x, attr_index, i);
+
+	  if (GET_CODE (part) != SET_ATTR
+	      || cl.name.compare (XSTR (part, 0)) == 0)
+	    {
+	      found = true;
+	      break;
+	    }
+	}
+
+      if (!found)
+	missing.push_back (cl);
+    }
+
+  rtvec orig = XVEC (x, attr_index);
+  size_t n_curr = orig ? XVECLEN (x, attr_index) : 0;
+  rtvec copy = rtvec_alloc (n_curr + missing.size ());
+
+  /* Create a shallow copy of the existing entries, if any.  */
+  if (orig)
+    memcpy (&copy->elem[missing.size ()], &orig->elem[0],
+	    sizeof (rtx) * n_curr);
+  XVEC (x, attr_index) = copy;
+
+  /* Create the new elements.  */
+  for (unsigned i = 0; i < missing.size (); i++)
+    {
+      rtx attr = rtx_alloc (SET_ATTR);
+      XSTR (attr, 0) = xstrdup (missing[i].name.c_str ());
+      XSTR (attr, 1) = NULL;
+      XVECEXP (x, attr_index, i) = attr;
+    }
+
+  return;
+}
+
+/* Consumes spaces and tabs.  */
+
+static inline void
+skip_spaces (const char **str)
+{
+  while (**str == ' ' || **str == '\t')
+    (*str)++;
+}
+
+/* Consumes the given character, if it's there.  */
+
+static inline bool
+expect_char (const char **str, char c)
+{
+  if (**str != c)
+    return false;
+  (*str)++;
+  return true;
+}
+
+/* Parses the section layout that follows a "@@" if using new syntax. Builds
+   a vector for a single section. E.g. if we have "attrs: length arch)..."
+   then list will have two elements, the first for "length" and the second
+   for "arch".  */
+
+static void
+parse_section_layout (const char **templ, const char *label,
+		      vec_conlist &list)
+{
+  const char *name_start;
+  size_t label_len = strlen (label);
+  if (strncmp (label, *templ, label_len) == 0)
+    {
+      *templ += label_len;
+
+      /* Gather the names.  */
+      while (**templ != ';' && **templ != ')')
+	{
+	  skip_spaces (templ);
+	  name_start = *templ;
+	  int len = 0;
+	  while ((*templ)[len] != ' ' && (*templ)[len] != '\t'
+		 && (*templ)[len] != ';' && (*templ)[len] != ')')
+	    len++;
+	  *templ += len;
+	  list.push_back (conlist (name_start, len));
+	}
+    }
+}
+
+/* Parse one section of an alternative's bracketed list: a comma-separated
+   list of N_ELEMS values, e.g.
+
+   r,I,k
+
+   Each value (followed by its separating comma) is appended to the
+   corresponding conlist in LIST.  NAME describes the section for
+   diagnostics, e.g. "constraint" or "attribute".  */
+
+static void
+parse_section (const char **templ, unsigned int n_elems, unsigned int alt_no,
+	       vec_conlist &list, file_location loc, const char *name)
+{
+  unsigned int i;
+
+  /* Go through the list, one character at a time, adding said character
+     to the correct string.  */
+  for (i = 0; **templ != ']' && **templ != ';'; (*templ)++)
+    {
+      if (**templ != ' ' && **templ != '\t')
+	{
+	  list[i].add (**templ);
+	  if (**templ == ',')
+	    {
+	      ++i;
+	      if (i == n_elems)
+		fatal_at (loc, "too many %ss in alternative %d: expected %d",
+			  name, alt_no, n_elems);
+	    }
+	}
+    }
+
+  if (i + 1 < n_elems)
+    fatal_at (loc, "too few %ss in alternative %d: expected %d, got %d",
+	      name, alt_no, n_elems, i + 1);
+
+  list[i].add (',');
+}
+
+/* The compact syntax supports a few convenience shorthands.  Post-process
+   the lines to turn them back into something the normal syntax
+   understands.  */
+
+static void
+preprocess_compact_syntax (file_location loc, int alt_no, std::string &line,
+			   std::string &last_line)
+{
+  /* Check if we're copying the last statement.  */
+  if (line.find ("^") == 0 && line.size () == 1)
+    {
+      if (last_line.empty ())
+	fatal_at (loc, "found instruction to copy previous line (^) in "
+		       "alternative %d but no previous line to copy", alt_no);
+      line = last_line;
+      return;
+    }
+
+  std::string result;
+  std::string buffer;
+  /* Check if we have "<<", which means the line is a C statement to
+     return.  */
+  if (line.find ("<<") == 0)
+    {
+      result.append ("* return ");
+      buffer.append (line.substr (2));
+    }
+  else
+    buffer.append (line);
+
+  /* Now perform string expansion: a pair of single quotes enclosing more
+     than one character is rewritten as double quotes.  */
+  bool double_quoted = false;
+  bool quote_open = false;
+  for (unsigned i = 0; i < buffer.length (); i++)
+    {
+      char chr = buffer[i];
+      if (chr == '\'')
+	{
+	  if (quote_open)
+	    {
+	      if (double_quoted)
+		result += '"';
+	      else
+		result += chr;
+	      double_quoted = false;
+	      quote_open = false;
+	    }
+	  else
+	    {
+	      if (i + 2 < buffer.length ()
+		  && buffer[i+1] != '\''
+		  && buffer[i+2] != '\'')
+		{
+		  double_quoted = true;
+		  result += '"';
+		}
+	      else
+		result += chr;
+	      quote_open = true;
+	    }
+	}
+      else
+	result += chr;
+    }
+
+  /* Quotes were mismatched.  Abort.  */
+  if (quote_open)
+    fatal_at (loc, "quote mismatch in instruction template '%s'",
+	      line.c_str ());
+
+  line = result;
+  return;
+}
+
+/* Converts an rtx from compact syntax to normal syntax if possible.  */
+
+static void
+convert_syntax (rtx x, file_location loc)
+{
+  int alt_no;
+  unsigned int index, templ_index;
+  const char *templ;
+  vec_conlist convec, attrvec;
+
+  templ_index = GET_CODE (x) == DEFINE_INSN ? 3 : 2;
+
+  templ = XTMPL (x, templ_index);
+
+  /* Templates using the compact syntax start with "@@".  */
+  if (strncmp ("@@", templ, 2))
+    return;
+
+  /* Get the layout for the template.  */
+  templ += 2;
+  skip_spaces (&templ);
+
+  if (!expect_char (&templ, '('))
+    fatal_at (loc, "expecting `(' to begin section list");
+
+  parse_section_layout (&templ, "cons:", convec);
+
+  if (*templ != ')')
+    {
+      if (*templ == ';')
+	skip_spaces (&(++templ));
+      parse_section_layout (&templ, "attrs:", attrvec);
+      create_missing_attributes (x, loc, attrvec);
+    }
+
+  if (!expect_char (&templ, ')'))
+    {
+      fatal_at (loc, "expecting `)' to end section list - section list "
+		"must have cons first, attrs second");
+    }
+
+  /* We will write the un-constrainified template into new_templ.  */
+  std::string new_templ;
+  new_templ.append ("@\n");
+
+  /* Skip to the first proper line.  */
+  while (*templ++ != '\n');
+  alt_no = 0;
+
+  std::string last_line;
+
+  /* Process the alternatives.  */
+  while (*(templ - 1) != '\0')
+    {
+      /* Copy leading whitespace.  */
+      while (*templ == ' ' || *templ == '\t')
+	new_templ += *templ++;
+
+      if (expect_char (&templ, '['))
+	{
+	  /* Parse the constraint list, then the attribute list.  */
+	  if (convec.size () > 0)
+	    parse_section (&templ, convec.size (), alt_no, convec, loc,
+			   "constraint");
+
+	  if (attrvec.size () > 0)
+	    {
+	      if (convec.size () > 0 && !expect_char (&templ, ';'))
+		fatal_at (loc, "expected `;' to separate constraints "
+			       "and attributes in alternative %d", alt_no);
+
+	      parse_section (&templ, attrvec.size (), alt_no,
+			     attrvec, loc, "attribute");
+	    }
+
+	  if (!expect_char (&templ, ']'))
+	    fatal_at (loc, "expected end of constraint/attribute list but "
+			   "missing an ending `]' in alternative %d", alt_no);
+	}
+      else
+	fatal_at (loc, "expected constraint/attribute list at beginning of "
+		       "alternative %d but missing a starting `['", alt_no);
+
+      /* Skip whitespace between list and asm.  */
+      ++templ;
+      skip_spaces (&templ);
+
+      /* Copy asm to new template.  */
+      std::string line;
+      while (*templ != '\n' && *templ != '\0')
+	line += *templ++;
+
+      /* Apply any pre-processing needed to the line.  */
+      preprocess_compact_syntax (loc, alt_no, line, last_line);
+      new_templ.append (line);
+      last_line = line;
+
+      new_templ += *templ++;
+      ++alt_no;
+    }
+
+  /* Write the constraints and attributes into their proper places.  */
+  if (convec.size () > 0)
+    {
+      index = add_constraints (x, loc, 0, convec);
+      if (index < convec.size ())
+	fatal_at (loc, "could not find match_operand/scratch with id %s",
+		  convec[index].name.c_str ());
+    }
+
+  if (attrvec.size () > 0)
+    {
+      index = add_attributes (x, loc, attrvec);
+      if (index < attrvec.size ())
+	fatal_at (loc, "could not find set_attr for attribute %s",
+		  attrvec[index].name.c_str ());
+    }
+
+  /* Copy over the new un-constrainified template.  */
+  XTMPL (x, templ_index) = xstrdup (new_templ.c_str ());
+
+  /* Register for later checks during iterator expansions.  */
+  compact_syntax.add (x);
+
+#if DEBUG
+  print_rtl_single (stderr, x);
+#endif
+}
+
 /* Process a top level rtx in some way, queuing as appropriate.  */
 
 static void
@@ -553,10 +1083,12 @@ process_rtx (rtx desc, file_location loc)
   switch (GET_CODE (desc))
     {
     case DEFINE_INSN:
+      convert_syntax (desc, loc);
       queue_pattern (desc, &define_insn_tail, loc);
       break;
 
     case DEFINE_COND_EXEC:
+      convert_syntax (desc, loc);
       queue_pattern (desc, &define_cond_exec_tail, loc);
       break;
 
@@ -631,6 +1163,7 @@ process_rtx (rtx desc, file_location loc)
 	attr = XVEC (desc, split_code + 1);
 	PUT_CODE (desc, DEFINE_INSN);
 	XVEC (desc, 4) = attr;
+	convert_syntax (desc, loc);
 
 	/* Queue them.  */
 	insn_elem = queue_pattern (desc, &define_insn_tail, loc);




-- 

[-- Attachment #2: rb17151.patch --]
[-- Type: text/plain, Size: 37703 bytes --]

diff --git a/gcc/config/aarch64/aarch64.md b/gcc/config/aarch64/aarch64.md
index 022eef80bc1e93299f329610dcd2321917d5770a..331eb2ff57a0e1ff300f3321f154829a57772679 100644
--- a/gcc/config/aarch64/aarch64.md
+++ b/gcc/config/aarch64/aarch64.md
@@ -375,7 +375,7 @@ (define_constants
 ;; As a convenience, "fp_q" means "fp" + the ability to move between
 ;; Q registers and is equivalent to "simd".
 
-(define_enum "arches" [ any rcpc8_4 fp fp_q simd sve fp16])
+(define_enum "arches" [ any rcpc8_4 fp fp_q simd nosimd sve fp16])
 
 (define_enum_attr "arch" "arches" (const_string "any"))
 
@@ -406,6 +406,9 @@ (define_attr "arch_enabled" "no,yes"
 	(and (eq_attr "arch" "fp_q, simd")
 	     (match_test "TARGET_SIMD"))
 
+	(and (eq_attr "arch" "nosimd")
+	     (match_test "!TARGET_SIMD"))
+
 	(and (eq_attr "arch" "fp16")
 	     (match_test "TARGET_FP_F16INST"))
 
@@ -1215,44 +1218,26 @@ (define_expand "mov<mode>"
 )
 
 (define_insn "*mov<mode>_aarch64"
-  [(set (match_operand:SHORT 0 "nonimmediate_operand" "=r,r,    w,r  ,r,w, m,m,r,w,w")
-	(match_operand:SHORT 1 "aarch64_mov_operand"  " r,M,D<hq>,Usv,m,m,rZ,w,w,rZ,w"))]
+  [(set (match_operand:SHORT 0 "nonimmediate_operand")
+	(match_operand:SHORT 1 "aarch64_mov_operand"))]
   "(register_operand (operands[0], <MODE>mode)
     || aarch64_reg_or_zero (operands[1], <MODE>mode))"
-{
-   switch (which_alternative)
-     {
-     case 0:
-       return "mov\t%w0, %w1";
-     case 1:
-       return "mov\t%w0, %1";
-     case 2:
-       return aarch64_output_scalar_simd_mov_immediate (operands[1],
-							<MODE>mode);
-     case 3:
-       return aarch64_output_sve_cnt_immediate (\"cnt\", \"%x0\", operands[1]);
-     case 4:
-       return "ldr<size>\t%w0, %1";
-     case 5:
-       return "ldr\t%<size>0, %1";
-     case 6:
-       return "str<size>\t%w1, %0";
-     case 7:
-       return "str\t%<size>1, %0";
-     case 8:
-       return TARGET_SIMD ? "umov\t%w0, %1.<v>[0]" : "fmov\t%w0, %s1";
-     case 9:
-       return TARGET_SIMD ? "dup\t%0.<Vallxd>, %w1" : "fmov\t%s0, %w1";
-     case 10:
-       return TARGET_SIMD ? "dup\t%<Vetype>0, %1.<v>[0]" : "fmov\t%s0, %s1";
-     default:
-       gcc_unreachable ();
-     }
-}
+  "@@ (cons: 0 1; attrs: type arch)
+  [=r, r    ; mov_reg        , *     ] mov\t%w0, %w1
+  [r , M    ; mov_imm        , *     ] mov\t%w0, %1
+  [w , D<hq>; neon_move      , simd  ] << aarch64_output_scalar_simd_mov_immediate (operands[1], <MODE>mode);
+  [r , Usv  ; mov_imm        , sve   ] << aarch64_output_sve_cnt_immediate ('cnt', '%x0', operands[1]);
+  [r , m    ; load_4         , *     ] ldr<size>\t%w0, %1
+  [w , m    ; load_4         , *     ] ldr\t%<size>0, %1
+  [m , rZ   ; store_4        , *     ] str<size>\t%w1, %0
+  [m , w    ; store_4        , *     ] str\t%<size>1, %0
+  [r , w    ; neon_to_gp<q>  , simd  ] umov\t%w0, %1.<v>[0]
+  [r , w    ; neon_to_gp<q>  , nosimd] fmov\t%w0, %s1
+  [w , rZ   ; neon_from_gp<q>, simd  ] dup\t%0.<Vallxd>, %w1
+  [w , rZ   ; neon_from_gp<q>, nosimd] fmov\t%s0, %w1
+  [w , w    ; neon_dup       , simd  ] dup\t%<Vetype>0, %1.<v>[0]
+  [w , w    ; neon_dup       , nosimd] fmov\t%s0, %s1"
   ;; The "mov_imm" type for CNT is just a placeholder.
-  [(set_attr "type" "mov_reg,mov_imm,neon_move,mov_imm,load_4,load_4,store_4,
-		     store_4,neon_to_gp<q>,neon_from_gp<q>,neon_dup")
-   (set_attr "arch" "*,*,simd,sve,*,*,*,*,*,*,*")]
 )
 
 (define_expand "mov<mode>"
@@ -1289,79 +1274,69 @@ (define_expand "mov<mode>"
 )
 
 (define_insn_and_split "*movsi_aarch64"
-  [(set (match_operand:SI 0 "nonimmediate_operand" "=r,k,r,r,r,r, r,w, m, m,  r,  r,  r, w,r,w, w")
-	(match_operand:SI 1 "aarch64_mov_operand"  " r,r,k,M,n,Usv,m,m,rZ,w,Usw,Usa,Ush,rZ,w,w,Ds"))]
+  [(set (match_operand:SI 0 "nonimmediate_operand")
+	(match_operand:SI 1 "aarch64_mov_operand"))]
   "(register_operand (operands[0], SImode)
     || aarch64_reg_or_zero (operands[1], SImode))"
-  "@
-   mov\\t%w0, %w1
-   mov\\t%w0, %w1
-   mov\\t%w0, %w1
-   mov\\t%w0, %1
-   #
-   * return aarch64_output_sve_cnt_immediate (\"cnt\", \"%x0\", operands[1]);
-   ldr\\t%w0, %1
-   ldr\\t%s0, %1
-   str\\t%w1, %0
-   str\\t%s1, %0
-   adrp\\t%x0, %A1\;ldr\\t%w0, [%x0, %L1]
-   adr\\t%x0, %c1
-   adrp\\t%x0, %A1
-   fmov\\t%s0, %w1
-   fmov\\t%w0, %s1
-   fmov\\t%s0, %s1
-   * return aarch64_output_scalar_simd_mov_immediate (operands[1], SImode);"
+  "@@ (cons: 0 1; attrs: type arch length)
+   [=r, r  ; mov_reg  , *   , 4] mov\t%w0, %w1
+   [k , r  ; mov_reg  , *   , 4] ^
+   [r , k  ; mov_reg  , *   , 4] ^
+   [r , M  ; mov_imm  , *   , 4] mov\t%w0, %1
+   [r , n  ; mov_imm  , *   ,16] #
+   [r , Usv; mov_imm  , sve , 4] << aarch64_output_sve_cnt_immediate ('cnt', '%x0', operands[1]);
+   [r , m  ; load_4   , *   , 4] ldr\t%w0, %1
+   [w , m  ; load_4   , fp  , 4] ldr\t%s0, %1
+   [m , rZ ; store_4  , *   , 4] str\t%w1, %0
+   [m , w  ; store_4  , fp  , 4] str\t%s1, %0
+   [r , Usw; load_4   , *   , 8] adrp\t%x0, %A1;ldr\t%w0, [%x0, %L1]
+   [r , Usa; adr      , *   , 4] adr\t%x0, %c1
+   [r , Ush; adr      , *   , 4] adrp\t%x0, %A1
+   [w , rZ ; f_mcr    , fp  , 4] fmov\t%s0, %w1
+   [r , w  ; f_mrc    , fp  , 4] fmov\t%w0, %s1
+   [w , w  ; fmov     , fp  , 4] fmov\t%s0, %s1
+   [w , Ds ; neon_move, simd, 4] << aarch64_output_scalar_simd_mov_immediate (operands[1], SImode);"
   "CONST_INT_P (operands[1]) && !aarch64_move_imm (INTVAL (operands[1]), SImode)
     && REG_P (operands[0]) && GP_REGNUM_P (REGNO (operands[0]))"
-   [(const_int 0)]
-   "{
-       aarch64_expand_mov_immediate (operands[0], operands[1]);
-       DONE;
-    }"
+  [(const_int 0)]
+  {
+    aarch64_expand_mov_immediate (operands[0], operands[1]);
+    DONE;
+  }
   ;; The "mov_imm" type for CNT is just a placeholder.
-  [(set_attr "type" "mov_reg,mov_reg,mov_reg,mov_imm,mov_imm,mov_imm,load_4,
-		    load_4,store_4,store_4,load_4,adr,adr,f_mcr,f_mrc,fmov,neon_move")
-   (set_attr "arch"   "*,*,*,*,*,sve,*,fp,*,fp,*,*,*,fp,fp,fp,simd")
-   (set_attr "length" "4,4,4,4,*,  4,4, 4,4, 4,8,4,4, 4, 4, 4,   4")
-]
 )
 
 (define_insn_and_split "*movdi_aarch64"
-  [(set (match_operand:DI 0 "nonimmediate_operand" "=r,k,r,r,r,r, r,w, m,m,   r,  r,  r, w,r,w, w")
-	(match_operand:DI 1 "aarch64_mov_operand"  " r,r,k,O,n,Usv,m,m,rZ,w,Usw,Usa,Ush,rZ,w,w,Dd"))]
+  [(set (match_operand:DI 0 "nonimmediate_operand")
+	(match_operand:DI 1 "aarch64_mov_operand"))]
   "(register_operand (operands[0], DImode)
     || aarch64_reg_or_zero (operands[1], DImode))"
-  "@
-   mov\\t%x0, %x1
-   mov\\t%0, %x1
-   mov\\t%x0, %1
-   * return aarch64_is_mov_xn_imm (INTVAL (operands[1])) ? \"mov\\t%x0, %1\" : \"mov\\t%w0, %1\";
-   #
-   * return aarch64_output_sve_cnt_immediate (\"cnt\", \"%x0\", operands[1]);
-   ldr\\t%x0, %1
-   ldr\\t%d0, %1
-   str\\t%x1, %0
-   str\\t%d1, %0
-   * return TARGET_ILP32 ? \"adrp\\t%0, %A1\;ldr\\t%w0, [%0, %L1]\" : \"adrp\\t%0, %A1\;ldr\\t%0, [%0, %L1]\";
-   adr\\t%x0, %c1
-   adrp\\t%x0, %A1
-   fmov\\t%d0, %x1
-   fmov\\t%x0, %d1
-   fmov\\t%d0, %d1
-   * return aarch64_output_scalar_simd_mov_immediate (operands[1], DImode);"
-   "CONST_INT_P (operands[1]) && !aarch64_move_imm (INTVAL (operands[1]), DImode)
-    && REG_P (operands[0]) && GP_REGNUM_P (REGNO (operands[0]))"
-   [(const_int 0)]
-   "{
-       aarch64_expand_mov_immediate (operands[0], operands[1]);
-       DONE;
-    }"
+  "@@ (cons: 0 1; attrs: type arch length)
+   [=r, r  ; mov_reg  , *   , 4] mov\t%x0, %x1
+   [k , r  ; mov_reg  , *   , 4] mov\t%0, %x1
+   [r , k  ; mov_reg  , *   , 4] mov\t%x0, %1
+   [r , O  ; mov_imm  , *   , 4] << aarch64_is_mov_xn_imm (INTVAL (operands[1])) ? 'mov\t%x0, %1' : 'mov\t%w0, %1';
+   [r , n  ; mov_imm  , *   ,16] #
+   [r , Usv; mov_imm  , sve , 4] << aarch64_output_sve_cnt_immediate ('cnt', '%x0', operands[1]);
+   [r , m  ; load_8   , *   , 4] ldr\t%x0, %1
+   [w , m  ; load_8   , fp  , 4] ldr\t%d0, %1
+   [m , rZ ; store_8  , *   , 4] str\t%x1, %0
+   [m , w  ; store_8  , fp  , 4] str\t%d1, %0
+   [r , Usw; load_8   , *   , 8] << TARGET_ILP32 ? 'adrp\t%0, %A1;ldr\t%w0, [%0, %L1]' : 'adrp\t%0, %A1;ldr\t%0, [%0, %L1]';
+   [r , Usa; adr      , *   , 4] adr\t%x0, %c1
+   [r , Ush; adr      , *   , 4] adrp\t%x0, %A1
+   [w , rZ ; f_mcr    , fp  , 4] fmov\t%d0, %x1
+   [r , w  ; f_mrc    , fp  , 4] fmov\t%x0, %d1
+   [w , w  ; fmov     , fp  , 4] fmov\t%d0, %d1
+   [w , Dd ; neon_move, simd, 4] << aarch64_output_scalar_simd_mov_immediate (operands[1], DImode);"
+  "CONST_INT_P (operands[1]) && !aarch64_move_imm (INTVAL (operands[1]), DImode)
+   && REG_P (operands[0]) && GP_REGNUM_P (REGNO (operands[0]))"
+  [(const_int 0)]
+  {
+    aarch64_expand_mov_immediate (operands[0], operands[1]);
+    DONE;
+  }
   ;; The "mov_imm" type for CNTD is just a placeholder.
-  [(set_attr "type" "mov_reg,mov_reg,mov_reg,mov_imm,mov_imm,mov_imm,
-		     load_8,load_8,store_8,store_8,load_8,adr,adr,f_mcr,f_mrc,
-		     fmov,neon_move")
-   (set_attr "arch"   "*,*,*,*,*,sve,*,fp,*,fp,*,*,*,fp,fp,fp,simd")
-   (set_attr "length" "4,4,4,4,*,  4,4, 4,4, 4,8,4,4, 4, 4, 4,   4")]
 )
 
 (define_insn "insv_imm<mode>"
diff --git a/gcc/config/arm/arm.md b/gcc/config/arm/arm.md
index cbfc4543531452b0708a38bdf4abf5105b54f8b7..16c50b4a7c414a72b234cef7745a37745e6a41fc 100644
--- a/gcc/config/arm/arm.md
+++ b/gcc/config/arm/arm.md
@@ -924,27 +924,27 @@ (define_peephole2
 ;;  (plus (reg rN) (reg sp)) into (reg rN).  In this case reload will
 ;; put the duplicated register first, and not try the commutative version.
 (define_insn_and_split "*arm_addsi3"
-  [(set (match_operand:SI          0 "s_register_operand" "=rk,l,l ,l ,r ,k ,r,k ,r ,k ,r ,k,k,r ,k ,r")
-	(plus:SI (match_operand:SI 1 "s_register_operand" "%0 ,l,0 ,l ,rk,k ,r,r ,rk,k ,rk,k,r,rk,k ,rk")
-		 (match_operand:SI 2 "reg_or_int_operand" "rk ,l,Py,Pd,rI,rI,k,rI,Pj,Pj,L ,L,L,PJ,PJ,?n")))]
-  "TARGET_32BIT"
-  "@
-   add%?\\t%0, %0, %2
-   add%?\\t%0, %1, %2
-   add%?\\t%0, %1, %2
-   add%?\\t%0, %1, %2
-   add%?\\t%0, %1, %2
-   add%?\\t%0, %1, %2
-   add%?\\t%0, %2, %1
-   add%?\\t%0, %1, %2
-   addw%?\\t%0, %1, %2
-   addw%?\\t%0, %1, %2
-   sub%?\\t%0, %1, #%n2
-   sub%?\\t%0, %1, #%n2
-   sub%?\\t%0, %1, #%n2
-   subw%?\\t%0, %1, #%n2
-   subw%?\\t%0, %1, #%n2
-   #"
+  [(set (match_operand:SI 0 "s_register_operand")
+        (plus:SI (match_operand:SI 1 "s_register_operand")
+                 (match_operand:SI 2 "reg_or_int_operand")))]
+  "TARGET_32BIT"
+  "@@ (cons: 0 1 2; attrs: length predicable_short_it arch)
+   [=rk, %0, rk; 2,  yes, t2] add%?\\t%0, %0, %2
+   [l,   l,  l ; 4,  yes, t2] add%?\\t%0, %1, %2
+   [l,   0,  Py; 4,  yes, t2] add%?\\t%0, %1, %2
+   [l,   l,  Pd; 4,  yes, t2] add%?\\t%0, %1, %2
+   [r,   rk, rI; 4,  no,  * ] add%?\\t%0, %1, %2
+   [k,   k,  rI; 4,  no,  * ] add%?\\t%0, %1, %2
+   [r,   r,  k ; 4,  no,  * ] add%?\\t%0, %2, %1
+   [k,   r,  rI; 4,  no,  a ] add%?\\t%0, %1, %2
+   [r,   rk, Pj; 4,  no,  t2] addw%?\\t%0, %1, %2
+   [k,   k,  Pj; 4,  no,  t2] addw%?\\t%0, %1, %2
+   [r,   rk, L ; 4,  no,  * ] sub%?\\t%0, %1, #%n2
+   [k,   k,  L ; 4,  no,  * ] sub%?\\t%0, %1, #%n2
+   [k,   r,  L ; 4,  no,  a ] sub%?\\t%0, %1, #%n2
+   [r,   rk, PJ; 4,  no,  t2] subw%?\\t%0, %1, #%n2
+   [k,   k,  PJ; 4,  no,  t2] subw%?\\t%0, %1, #%n2
+   [r,   rk, ?n; 16, no,  * ] #"
   "TARGET_32BIT
    && CONST_INT_P (operands[2])
    && !const_ok_for_op (INTVAL (operands[2]), PLUS)
@@ -956,10 +956,10 @@ (define_insn_and_split "*arm_addsi3"
 		      operands[1], 0);
   DONE;
   "
-  [(set_attr "length" "2,4,4,4,4,4,4,4,4,4,4,4,4,4,4,16")
+  [(set_attr "length")
    (set_attr "predicable" "yes")
-   (set_attr "predicable_short_it" "yes,yes,yes,yes,no,no,no,no,no,no,no,no,no,no,no,no")
-   (set_attr "arch" "t2,t2,t2,t2,*,*,*,a,t2,t2,*,*,a,t2,t2,*")
+   (set_attr "predicable_short_it")
+   (set_attr "arch")
    (set (attr "type") (if_then_else (match_operand 2 "const_int_operand" "")
 		      (const_string "alu_imm")
 		      (const_string "alu_sreg")))
diff --git a/gcc/doc/md.texi b/gcc/doc/md.texi
index 07bf8bdebffb2e523f25a41f2b57e43c0276b745..199f2315432dc56cadfdfc03a8ab381fe02a43b3 100644
--- a/gcc/doc/md.texi
+++ b/gcc/doc/md.texi
@@ -27,6 +27,7 @@ See the next chapter for information on the C header file.
                         from such an insn.
 * Output Statement::    For more generality, write C code to output
                         the assembler code.
+* Compact Syntax::      Compact syntax for writing machine descriptions.
 * Predicates::          Controlling what kinds of operands can be used
                         for an insn.
 * Constraints::         Fine-tuning operand selection.
@@ -713,6 +714,211 @@ you can use @samp{*} inside of a @samp{@@} multi-alternative template:
 @end group
 @end smallexample
 
+@node Compact Syntax
+@section Compact Syntax
+@cindex compact syntax
+
+In cases where the number of alternatives in a @code{define_insn} or
+@code{define_insn_and_split} is large, it may be beneficial to use the
+compact syntax for specifying alternatives.
+
+This syntax puts the constraints and attributes on the same horizontal line as
+the instruction assembly template.
+
+As an example
+
+@smallexample
+@group
+(define_insn_and_split ""
+  [(set (match_operand:SI 0 "nonimmediate_operand" "=r,k,r,r,r,r")
+	(match_operand:SI 1 "aarch64_mov_operand"  " r,r,k,M,n,Usv"))]
+  ""
+  "@
+   mov\\t%w0, %w1
+   mov\\t%w0, %w1
+   mov\\t%w0, %w1
+   mov\\t%w0, %1
+   #
+   * return aarch64_output_sve_cnt_immediate ('cnt', '%x0', operands[1]);"
+  "&& true"
+   [(const_int 0)]
+  @{
+     aarch64_expand_mov_immediate (operands[0], operands[1]);
+     DONE;
+  @}
+  [(set_attr "type" "mov_reg,mov_reg,mov_reg,mov_imm,mov_imm,mov_imm")
+   (set_attr "arch"   "*,*,*,*,*,sve")
+   (set_attr "length" "4,4,4,4,*,  4")
+]
+)
+@end group
+@end smallexample
+
+can be better expressed as:
+
+@smallexample
+@group
+(define_insn_and_split ""
+  [(set (match_operand:SI 0 "nonimmediate_operand")
+	(match_operand:SI 1 "aarch64_mov_operand"))]
+  ""
+  "@@ (cons: 0 1; attrs: type arch length)
+   [=r, r  ; mov_reg  , *   , 4] mov\t%w0, %w1
+   [k , r  ; mov_reg  , *   , 4] ^
+   [r , k  ; mov_reg  , *   , 4] ^
+   [r , M  ; mov_imm  , *   , 4] mov\t%w0, %1
+   [r , n  ; mov_imm  , *   , *] #
+   [r , Usv; mov_imm  , sve , 4] << aarch64_output_sve_cnt_immediate ('cnt', '%x0', operands[1]);"
+  "&& true"
+  [(const_int 0)]
+  @{
+    aarch64_expand_mov_immediate (operands[0], operands[1]);
+    DONE;
+  @}
+)
+@end group
+@end smallexample
+
+The syntax rules are as follows:
+@itemize @bullet
+@item
+The template must start with @samp{@@} to use the new syntax.
+
+@item
+@samp{@@} is followed by a layout in parentheses which is @samp{cons:} followed
+by a list of @code{match_operand}/@code{match_scratch} operand numbers, then a
+semicolon, followed by the same for attributes (@samp{attrs:}).  Both sections
+are optional (so you can use only @samp{cons}, or only @samp{attrs}, or both),
+and @samp{cons} must come before @samp{attrs} if present.
+
+@item
+Each alternative begins with any amount of whitespace.
+
+@item
+Following the whitespace is a comma-separated list of constraints and/or
+attributes within brackets @code{[]}, with the two sections separated by a
+semicolon.
+
+@item
+Should you want to copy the previous asm line, the symbol @code{^} can be used.
+This avoids copying and pasting between alternatives and reduces the number of
+lines to update on changes.
+
+@item
+When using C functions for output, the idiom @code{* return <function>;} can be
+replaced with the shorthand @code{<< <function>;}.
+
+@item
+Following the closing @code{]} is any amount of whitespace, and then the actual asm
+output.
+
+@item
+Spaces are allowed in the list (they will simply be removed).
+
+@item
+All alternatives should be specified: a blank list should be @code{[,,]},
+@code{[,,;,]} etc., not @code{[]} or @code{""}.
+
+@item
+Within an @@ block, @code{''} is treated the same as @code{""} in cases where a
+single character would be invalid in C.  This means a multicharacter string can
+be created using @code{''} which allows for less escaping.
+
+@item
+Within an @@ block, any unexpanded iterators result in a compile-time error
+rather than the literal @code{<..>} being emitted in the output asm.  If a
+literal @code{<..>} is required it should be escaped as @code{\<..\>}.
+
+@item
+The actual constraint string in the @code{match_operand} or
+@code{match_scratch}, and the attribute string in the @code{set_attr}, must be
+blank or an empty string (you can't combine the old and new syntaxes).
+
+@item
+@code{set_attr} entries are optional.  If an attribute is listed in the
+@samp{attrs} section then that listing serves as both its declaration and
+definition, and the corresponding @code{set_attr} may be omitted.  If both
+@samp{attrs} and a @code{set_attr} are given for the same entry then the
+attribute string in the @code{set_attr} must be empty or blank.
+
+@item
+Additional @code{set_attr} can be specified other than the ones in the
+@samp{attrs} list.  These must use the @samp{normal} syntax and must be defined
+after all @samp{attrs} specified.
+
+In other words, the following are valid:
+@smallexample
+@group
+(define_insn_and_split ""
+  [(set (match_operand:SI 0 "nonimmediate_operand")
+	(match_operand:SI 1 "aarch64_mov_operand"))]
+  ""
+  "@@ (cons: 0 1; attrs: type arch length)"
+  ...
+  [(set_attr "type")]
+  [(set_attr "arch")]
+  [(set_attr "length")]
+  [(set_attr "foo" "mov_imm")]
+)
+@end group
+@end smallexample
+
+and
+
+@smallexample
+@group
+(define_insn_and_split ""
+  [(set (match_operand:SI 0 "nonimmediate_operand")
+	(match_operand:SI 1 "aarch64_mov_operand"))]
+  ""
+  "@@ (cons: 0 1; attrs: type arch length)"
+  ...
+  [(set_attr "foo" "mov_imm")]
+)
+@end group
+@end smallexample
+
+but these are not valid:
+@smallexample
+@group
+(define_insn_and_split ""
+  [(set (match_operand:SI 0 "nonimmediate_operand")
+	(match_operand:SI 1 "aarch64_mov_operand"))]
+  ""
+  "@@ (cons: 0 1; attrs: type arch length)"
+  ...
+  [(set_attr "type")
+   (set_attr "arch")
+   (set_attr "foo" "mov_imm")]
+)
+@end group
+@end smallexample
+
+and
+
+@smallexample
+@group
+(define_insn_and_split ""
+  [(set (match_operand:SI 0 "nonimmediate_operand")
+	(match_operand:SI 1 "aarch64_mov_operand"))]
+  ""
+  "@@ (cons: 0 1; attrs: type arch length)"
+  ...
+  [(set_attr "type")
+   (set_attr "foo" "mov_imm")
+   (set_attr "arch")
+   (set_attr "length")]
+)
+@end group
+@end smallexample
+
+because the order of the entries doesn't match and new entries must come last.
+@end itemize
+
 @node Predicates
 @section Predicates
 @cindex predicates
diff --git a/gcc/genoutput.cc b/gcc/genoutput.cc
index 163e8dfef4ca2c2c92ce1cf001ee6be40a54ca3e..4e67cd6ca5356c62165382de01da6bbc6f3c5fa2 100644
--- a/gcc/genoutput.cc
+++ b/gcc/genoutput.cc
@@ -91,6 +91,7 @@ along with GCC; see the file COPYING3.  If not see
 #include "errors.h"
 #include "read-md.h"
 #include "gensupport.h"
+#include <string>
 
 /* No instruction can have more operands than this.  Sorry for this
    arbitrary limit, but what machine will have an instruction with
@@ -157,6 +158,7 @@ public:
   int n_alternatives;		/* Number of alternatives in each constraint */
   int operand_number;		/* Operand index in the big array.  */
   int output_format;		/* INSN_OUTPUT_FORMAT_*.  */
+  bool compact_syntax_p;
   struct operand_data operand[MAX_MAX_OPERANDS];
 };
 
@@ -700,12 +702,37 @@ process_template (class data *d, const char *template_code)
 	  if (sp != ep)
 	    message_at (d->loc, "trailing whitespace in output template");
 
-	  while (cp < sp)
+	  /* Check for any unexpanded iterators.  */
+	  std::string buff (cp, sp - cp);
+	  if (bp[0] != '*' && d->compact_syntax_p)
 	    {
-	      putchar (*cp);
-	      cp++;
+	      size_t start = buff.find ('<');
+	      size_t end = buff.find ('>', start + 1);
+	      if (end != std::string::npos || start != std::string::npos)
+		{
+		  if (end == std::string::npos || start == std::string::npos)
+		    fatal_at (d->loc, "unmatched angle brackets, likely an "
+			      "error in iterator syntax in %s", buff.c_str ());
+
+		  if (start != 0
+		      && buff[start-1] == '\\'
+		      && buff[end-1] == '\\')
+		    {
+		      /* Found a valid escape sequence, erase the characters for
+			 output.  */
+		      buff.erase (end-1, 1);
+		      buff.erase (start-1, 1);
+		    }
+		  else
+		    fatal_at (d->loc, "unresolved iterator '%s' in '%s'",
+			      buff.substr(start+1, end - start-1).c_str (),
+			      buff.c_str ());
+		}
 	    }
 
+	  printf ("%s", buff.c_str ());
+	  cp = sp;
+
 	  if (!found_star)
 	    puts ("\",");
 	  else if (*bp != '*')
@@ -881,6 +908,8 @@ gen_insn (md_rtx_info *info)
   else
     d->name = 0;
 
+  d->compact_syntax_p = compact_syntax.contains (insn);
+
   /* Build up the list in the same order as the insns are seen
      in the machine description.  */
   d->next = 0;
diff --git a/gcc/gensupport.h b/gcc/gensupport.h
index a1edfbd71908b6244b40f801c6c01074de56777e..7925e22ed418767576567cad583bddf83c0846b1 100644
--- a/gcc/gensupport.h
+++ b/gcc/gensupport.h
@@ -20,6 +20,7 @@ along with GCC; see the file COPYING3.  If not see
 #ifndef GCC_GENSUPPORT_H
 #define GCC_GENSUPPORT_H
 
+#include "hash-set.h"
 #include "read-md.h"
 
 struct obstack;
@@ -218,6 +219,8 @@ struct pattern_stats
   int num_operand_vars;
 };
 
+extern hash_set<rtx> compact_syntax;
+
 extern void get_pattern_stats (struct pattern_stats *ranges, rtvec vec);
 extern void compute_test_codes (rtx, file_location, char *);
 extern file_location get_file_location (rtx);
diff --git a/gcc/gensupport.cc b/gcc/gensupport.cc
index f9efc6eb7572a44b8bb154b0b22be3815bd0d244..c6a731968d2d6c7c9b01ad00e9dabb2b6d5f173e 100644
--- a/gcc/gensupport.cc
+++ b/gcc/gensupport.cc
@@ -27,12 +27,16 @@
 #include "read-md.h"
 #include "gensupport.h"
 #include "vec.h"
+#include <string>
+#include <vector>
 
 #define MAX_OPERANDS 40
 
 static rtx operand_data[MAX_OPERANDS];
 static rtx match_operand_entries_in_pattern[MAX_OPERANDS];
 static char used_operands_numbers[MAX_OPERANDS];
+/* List of entries which are part of the new syntax.  */
+hash_set<rtx> compact_syntax;
 
 
 /* In case some macros used by files we include need it, define this here.  */
@@ -545,6 +549,532 @@ gen_rewrite_sequence (rtvec vec)
   return new_vec;
 }
 
+/* The following is for handling the compact syntax for constraints and
+   attributes.
+
+   The normal syntax looks like this:
+
+       ...
+       (match_operand: 0 "s_register_operand" "r,I,k")
+       (match_operand: 2 "s_register_operand" "r,k,I")
+       ...
+       "@
+	<asm>
+	<asm>
+	<asm>"
+       ...
+       (set_attr "length" "4,8,8")
+
+   The compact syntax looks like this:
+
+       ...
+       (match_operand: 0 "s_register_operand")
+       (match_operand: 2 "s_register_operand")
+       ...
+       "@@ (cons: 0 2; attrs: length)
+	[r,r; 4] <asm>
+	[I,k; 8] <asm>
+	[k,I; 8] <asm>"
+       ...
+       (set_attr "length")
+
+   This is the only place where this syntax needs to be handled.  Relevant
+   patterns are transformed from compact to the normal syntax before they are
+   queued, so none of the gen* programs need to know about this syntax at all.
+
+   Conversion process (convert_syntax):
+
+   0) Check that pattern actually uses new syntax (check for "@@").
+
+   1) Get the "layout", i.e. the "(cons: 0 2; attrs: length)" from the above
+      example.  cons must come first; both are optional. Set up two vecs,
+      convec and attrvec, for holding the results of the transformation.
+
+   2) For each alternative: parse the list of constraints and/or attributes,
+      and enqueue them in the relevant lists in convec and attrvec.  By the end
+      of this process, convec[N].con and attrvec[N].con should contain regular
+      syntax constraint/attribute lists like "r,I,k".  Copy the asm to a string
+      as we go.
+
+   3) Search the rtx and write the constraint and attribute lists into the
+      correct places. Write the asm back into the template.  */
+
+/* Helper class for shuffling constraints/attributes in convert_syntax and
+   add_constraints/add_attributes.  The stored strings include the separating
+   commas but not any whitespace.  */
+
+class conlist {
+private:
+  std::string con;
+
+public:
+  std::string name;
+
+  /* [ns..ns + len) should be a string with the id of the rtx to match
+     i.e. if rtx is the relevant match_operand or match_scratch then
+     [ns..ns + len) should equal itoa (XINT (rtx, 0)), and if set_attr then
+     [ns..ns + len) should equal XSTR (rtx, 0).  */
+  conlist (const char *ns, unsigned int len)
+  {
+    name.assign (ns, len);
+  }
+
+  /* Adds a character to the end of the string.  */
+  void add (char c)
+  {
+    con += c;
+  }
+
+  /* Output the string in the form of a brand-new char *, then clear the
+     internal string.  */
+  char *out ()
+  {
+    /* Final character is always a trailing comma, so strip it out.  */
+    char *q = xstrndup (con.c_str (), con.size () - 1);
+    con.clear ();
+    return q;
+  }
+};
+
+typedef std::vector<conlist> vec_conlist;
+
+/* Add constraints to an rtx. The match_operand/match_scratch that are matched
+   must be in depth-first order i.e. read from top to bottom in the pattern.
+   index is the index of the conlist we are up to so far.
+   This function is similar to remove_constraints.
+   Errors if adding the constraints would overwrite existing constraints.
+   Returns 1 + index of last conlist to be matched.  */
+
+static unsigned int
+add_constraints (rtx part, file_location loc, unsigned int index,
+		 vec_conlist &cons)
+{
+  const char *format_ptr;
+  char id[3];
+
+  if (part == NULL_RTX || index == cons.size ())
+    return index;
+
+  /* If match_op or match_scr, check if we have the right one, and if so, copy
+     over the constraint list.  */
+  if (GET_CODE (part) == MATCH_OPERAND || GET_CODE (part) == MATCH_SCRATCH)
+    {
+      int field = GET_CODE (part) == MATCH_OPERAND ? 2 : 1;
+
+      snprintf (id, 3, "%d", XINT (part, 0));
+      if (cons[index].name.compare (id) == 0)
+	{
+	  if (XSTR (part, field)[0] != '\0')
+	    {
+	      error_at (loc, "can't mix normal and compact constraint syntax");
+	      return cons.size ();
+	    }
+	  XSTR (part, field) = cons[index].out ();
+
+	  ++index;
+	}
+    }
+
+  format_ptr = GET_RTX_FORMAT (GET_CODE (part));
+
+  /* Recursively search the rtx.  */
+  for (int i = 0; i < GET_RTX_LENGTH (GET_CODE (part)); i++)
+    switch (*format_ptr++)
+      {
+      case 'e':
+      case 'u':
+	index = add_constraints (XEXP (part, i), loc, index, cons);
+	break;
+      case 'E':
+	if (XVEC (part, i) != NULL)
+	  for (int j = 0; j < XVECLEN (part, i); j++)
+	    index = add_constraints (XVECEXP (part, i, j), loc, index, cons);
+	break;
+      default:
+	continue;
+      }
+
+  return index;
+}
+
+/* Add attributes to an rtx. The attributes that are matched must be in order
+   i.e. read from top to bottom in the pattern.
+   Errors if adding the attributes would overwrite existing attributes.
+   Returns 1 + index of last conlist to be matched.  */
+
+static unsigned int
+add_attributes (rtx x, file_location loc, vec_conlist &attrs)
+{
+  unsigned int attr_index = GET_CODE (x) == DEFINE_INSN ? 4 : 3;
+  unsigned int index = 0;
+
+  if (XVEC (x, attr_index) == NULL)
+    return index;
+
+  for (int i = 0; i < XVECLEN (x, attr_index); ++i)
+    {
+      rtx part = XVECEXP (x, attr_index, i);
+
+      if (GET_CODE (part) != SET_ATTR)
+	continue;
+
+      if (attrs[index].name.compare (XSTR (part, 0)) == 0)
+	{
+	  if (XSTR (part, 1) && XSTR (part, 1)[0] != '\0')
+	    {
+	      error_at (loc, "can't mix normal and compact attribute syntax");
+	      break;
+	    }
+	  XSTR (part, 1) = attrs[index].out ();
+
+	  ++index;
+	  if (index == attrs.size ())
+	    break;
+	}
+    }
+
+  return index;
+}
+
+/* Modify the attributes list to make space for the implicitly declared
+   attributes in the attrs: list.  */
+
+static void
+create_missing_attributes (rtx x, file_location /* loc */, vec_conlist &attrs)
+{
+  if (attrs.empty ())
+    return;
+
+  unsigned int attr_index = GET_CODE (x) == DEFINE_INSN ? 4 : 3;
+  vec_conlist missing;
+
+  /* This is an O(n*m) loop but it's fine, both n and m will always be very
+     small.  */
+  for (conlist cl : attrs)
+    {
+      bool found = false;
+      for (int i = 0; XVEC (x, attr_index) && i < XVECLEN (x, attr_index); ++i)
+	{
+	  rtx part = XVECEXP (x, attr_index, i);
+
+	  if (GET_CODE (part) != SET_ATTR
+	      || cl.name.compare (XSTR (part, 0)) == 0)
+	    {
+	      found = true;
+	      break;
+	    }
+	}
+
+      if (!found)
+	missing.push_back (cl);
+    }
+
+  rtvec orig = XVEC (x, attr_index);
+  size_t n_curr = orig ? XVECLEN (x, attr_index) : 0;
+  rtvec copy = rtvec_alloc (n_curr + missing.size ());
+
+  /* Create a shallow copy of existing entries.  */
+  if (n_curr > 0)
+    memcpy (&copy->elem[missing.size ()], &orig->elem[0],
+	    sizeof (rtx) * n_curr);
+  XVEC (x, attr_index) = copy;
+
+  /* Create the new elements.  */
+  for (unsigned i = 0; i < missing.size (); i++)
+    {
+      rtx attr = rtx_alloc (SET_ATTR);
+      XSTR (attr, 0) = xstrdup (missing[i].name.c_str ());
+      XSTR (attr, 1) = NULL;
+      XVECEXP (x, attr_index, i) = attr;
+    }
+
+  return;
+}
+
+/* Consumes spaces and tabs.  */
+
+static inline void
+skip_spaces (const char **str)
+{
+  while (**str == ' ' || **str == '\t')
+    (*str)++;
+}
+
+/* Consumes the given character, if it's there.  */
+
+static inline bool
+expect_char (const char **str, char c)
+{
+  if (**str != c)
+    return false;
+  (*str)++;
+  return true;
+}
+
+/* Parses the section layout that follows a "@@" if using new syntax. Builds
+   a vector for a single section. E.g. if we have "attrs: length arch)..."
+   then list will have two elements, the first for "length" and the second
+   for "arch".  */
+
+static void
+parse_section_layout (const char **templ, const char *label,
+		      vec_conlist &list)
+{
+  const char *name_start;
+  size_t label_len = strlen (label);
+  if (strncmp (label, *templ, label_len) == 0)
+    {
+      *templ += label_len;
+
+      /* Gather the names.  */
+      while (**templ != ';' && **templ != ')')
+	{
+	  skip_spaces (templ);
+	  name_start = *templ;
+	  int len = 0;
+	  while ((*templ)[len] != ' ' && (*templ)[len] != '\t'
+		 && (*templ)[len] != ';' && (*templ)[len] != ')')
+	    len++;
+	  *templ += len;
+	  list.push_back (conlist (name_start, len));
+	}
+    }
+}
+
+/* Parse a section.  A section is defined as a named, space-separated list,
+   e.g.
+
+   foo: a b c
+
+   is a section named "foo" with entries a, b and c.  */
+
+static void
+parse_section (const char **templ, unsigned int n_elems, unsigned int alt_no,
+	       vec_conlist &list, file_location loc, const char *name)
+{
+  unsigned int i;
+
+  /* Go through the list, one character at a time, adding said character
+     to the correct string.  */
+  for (i = 0; **templ != ']' && **templ != ';'; (*templ)++)
+    {
+      if (**templ != ' ' && **templ != '\t')
+	{
+	  list[i].add (**templ);
+	  if (**templ == ',')
+	    {
+	      ++i;
+	      if (i == n_elems)
+		fatal_at (loc, "too many %ss in alternative %d: expected %d",
+			  name, alt_no, n_elems);
+	    }
+	}
+    }
+
+  if (i + 1 < n_elems)
+    fatal_at (loc, "too few %ss in alternative %d: expected %d, got %d",
+	      name, alt_no, n_elems, i + 1);
+
+  list[i].add (',');
+}
+
+/* The compact syntax has some additional convenience notations.  We
+   post-process the lines here to turn them back into something the normal
+   syntax understands.  */
+
+static void
+preprocess_compact_syntax (file_location loc, int alt_no, std::string &line,
+			   std::string &last_line)
+{
+  /* Check if we're copying the last statement.  */
+  if (line.find ("^") == 0 && line.size () == 1)
+    {
+      if (last_line.empty ())
+	fatal_at (loc, "found instruction to copy previous line (^) in "
+		       "alternative %d but no previous line to copy", alt_no);
+      line = last_line;
+      return;
+    }
+
+  std::string result;
+  std::string buffer;
+  /* Check for "<<", which marks the line as a C statement to return.  */
+  if (line.find ("<<") == 0)
+    {
+      result.append ("* return ");
+      buffer.append (line.substr (2));
+    }
+  else
+    buffer.append (line);
+
+  /* Now perform string expansion: replace ' with " when the quoted string
+     is more than one character long.  */
+  bool double_quoted = false;
+  bool quote_open = false;
+  for (unsigned i = 0; i < buffer.length (); i++)
+    {
+      char chr = buffer[i];
+      if (chr == '\'')
+	{
+	  if (quote_open)
+	    {
+	      if (double_quoted)
+		result += '"';
+	      else
+		result += chr;
+	      quote_open = false;
+	      double_quoted = false;
+	    }
+	  else
+	    {
+	      if (i + 2 < buffer.length ()
+		  && buffer[i+1] != '\''
+		  && buffer[i+2] != '\'')
+		{
+		  double_quoted = true;
+		  result += '"';
+		}
+	      else
+		result += chr;
+	      quote_open = true;
+	    }
+	}
+      else
+	result += chr;
+    }
+
+  /* Quotes were mismatched.  Abort.  */
+  if (quote_open)
+    fatal_at (loc, "quote mismatch in instruction template '%s'",
+	      line.c_str ());
+
+  line = result;
+  return;
+}
+
+/* Converts an rtx from compact syntax to normal syntax if possible.  */
+
+static void
+convert_syntax (rtx x, file_location loc)
+{
+  int alt_no;
+  unsigned int index, templ_index;
+  const char *templ;
+  vec_conlist convec, attrvec;
+
+  templ_index = GET_CODE (x) == DEFINE_INSN ? 3 : 2;
+
+  templ = XTMPL (x, templ_index);
+
+  /* Templates with constraints start with "@@".  */
+  if (strncmp ("@@", templ, 2))
+    return;
+
+  /* Get the layout for the template.  */
+  templ += 2;
+  skip_spaces (&templ);
+
+  if (!expect_char (&templ, '('))
+    fatal_at (loc, "expecting `(' to begin section list");
+
+  parse_section_layout (&templ, "cons:", convec);
+
+  if (*templ != ')')
+    {
+      if (*templ == ';')
+	skip_spaces (&(++templ));
+      parse_section_layout (&templ, "attrs:", attrvec);
+      create_missing_attributes (x, loc, attrvec);
+    }
+
+  if (!expect_char (&templ, ')'))
+    {
+      fatal_at (loc, "expecting `)' to end section list; section list "
+		"must have cons first, attrs second");
+    }
+
+  /* We will write the un-constrainified template into new_templ.  */
+  std::string new_templ;
+  new_templ.append ("@\n");
+
+  /* Skip to the first proper line.  */
+  while (*templ++ != '\n');
+  alt_no = 0;
+
+  std::string last_line;
+
+  /* Process the alternatives.  */
+  while (*(templ - 1) != '\0')
+    {
+      /* Copy leading whitespace.  */
+      while (*templ == ' ' || *templ == '\t')
+	new_templ += *templ++;
+
+      if (expect_char (&templ, '['))
+	{
+	  /* Parse the constraint list, then the attribute list.  */
+	  if (convec.size () > 0)
+	    parse_section (&templ, convec.size (), alt_no, convec, loc,
+			   "constraint");
+
+	  if (attrvec.size () > 0)
+	    {
+	      if (convec.size () > 0 && !expect_char (&templ, ';'))
+		fatal_at (loc, "expected `;' to separate constraints "
+			       "and attributes in alternative %d", alt_no);
+
+	      parse_section (&templ, attrvec.size (), alt_no,
+			     attrvec, loc, "attribute");
+	    }
+
+	  if (!expect_char (&templ, ']'))
+	    fatal_at (loc, "expected end of constraint/attribute list but "
+			   "missing an ending `]' in alternative %d", alt_no);
+	}
+      else
+	fatal_at (loc, "expected constraint/attribute list at beginning of "
+		       "alternative %d but missing a starting `['", alt_no);
+
+      /* Skip whitespace between list and asm.  */
+      ++templ;
+      skip_spaces (&templ);
+
+      /* Copy asm to new template.  */
+      std::string line;
+      while (*templ != '\n' && *templ != '\0')
+	line += *templ++;
+
+      /* Apply any pre-processing needed to the line.  */
+      preprocess_compact_syntax (loc, alt_no, line, last_line);
+      new_templ.append (line);
+      last_line = line;
+
+      new_templ += *templ++;
+      ++alt_no;
+    }
+
+  /* Write the constraints and attributes into their proper places.  */
+  if (convec.size () > 0)
+    {
+      index = add_constraints (x, loc, 0, convec);
+      if (index < convec.size ())
+	fatal_at (loc, "could not find match_operand/scratch with id %s",
+		  convec[index].name.c_str ());
+    }
+
+  if (attrvec.size () > 0)
+    {
+      index = add_attributes (x, loc, attrvec);
+      if (index < attrvec.size ())
+	fatal_at (loc, "could not find set_attr for attribute %s",
+		  attrvec[index].name.c_str ());
+    }
+
+  /* Copy over the new un-constrainified template.  */
+  XTMPL (x, templ_index) = xstrdup (new_templ.c_str ());
+
+  /* Register for later checks during iterator expansions.  */
+  compact_syntax.add (x);
+
+#if DEBUG
+  print_rtl_single (stderr, x);
+#endif
+}
+
 /* Process a top level rtx in some way, queuing as appropriate.  */
 
 static void
@@ -553,10 +1083,12 @@ process_rtx (rtx desc, file_location loc)
   switch (GET_CODE (desc))
     {
     case DEFINE_INSN:
+      convert_syntax (desc, loc);
       queue_pattern (desc, &define_insn_tail, loc);
       break;
 
     case DEFINE_COND_EXEC:
+      convert_syntax (desc, loc);
       queue_pattern (desc, &define_cond_exec_tail, loc);
       break;
 
@@ -631,6 +1163,7 @@ process_rtx (rtx desc, file_location loc)
 	attr = XVEC (desc, split_code + 1);
 	PUT_CODE (desc, DEFINE_INSN);
 	XVEC (desc, 4) = attr;
+	convert_syntax (desc, loc);
 
 	/* Queue them.  */
 	insn_elem = queue_pattern (desc, &define_insn_tail, loc);



