public inbox for gcc-patches@gcc.gnu.org
From: Kyrylo Tkachov <Kyrylo.Tkachov@arm.com>
To: Richard Sandiford <Richard.Sandiford@arm.com>,
	Evandro Menezes via Gcc-patches <gcc-patches@gcc.gnu.org>
Cc: "evandro+gcc@gcc.gnu.org" <evandro+gcc@gcc.gnu.org>,
	Evandro Menezes <ebahapo@icloud.com>,
	Tamar Christina <Tamar.Christina@arm.com>
Subject: RE: [PATCH] aarch64: Add SVE instruction types
Date: Mon, 15 May 2023 09:49:10 +0000
Message-ID: <PAXPR08MB6926EF7B648D52442993CB9793789@PAXPR08MB6926.eurprd08.prod.outlook.com>
In-Reply-To: <mpt8rdq7yvh.fsf@arm.com>



> -----Original Message-----
> From: Richard Sandiford <richard.sandiford@arm.com>
> Sent: Monday, May 15, 2023 10:01 AM
> To: Evandro Menezes via Gcc-patches <gcc-patches@gcc.gnu.org>
> Cc: evandro+gcc@gcc.gnu.org; Evandro Menezes <ebahapo@icloud.com>;
> Kyrylo Tkachov <Kyrylo.Tkachov@arm.com>; Tamar Christina
> <Tamar.Christina@arm.com>
> Subject: Re: [PATCH] aarch64: Add SVE instruction types
> 
> Evandro Menezes via Gcc-patches <gcc-patches@gcc.gnu.org> writes:
> > This patch adds the attribute `type` to most SVE1 instructions, as in the
> > other instructions.
> 
> Thanks for doing this.
> 
> Could you say what criteria you used for picking the granularity?  Other
> maintainers might disagree, but personally I'd prefer to distinguish two
> instructions only if:
> 
> (a) a scheduling description really needs to distinguish them or
> (b) grouping them together would be very artificial (because they're
>     logically unrelated)
> 
> It's always possible to split types later if new scheduling descriptions
> require it.  Because of that, I don't think we should try to predict ahead
> of time what future scheduling descriptions will need.
> 
> Of course, this depends on having results that show that scheduling
> makes a significant difference on an SVE core.  I think one of the
> problems here is that, when a different scheduling model changes the
> performance of a particular test, it's difficult to tell whether
> the gain/loss is caused by the model being more/less accurate than
> the previous one, or if it's due to important "secondary" effects
> on register live ranges.  Instinctively, I'd have expected these
> secondary effects to dominate on OoO cores.

I agree with Richard on these points. The key here is getting the granularity right without having to maintain too many types that aren't useful in the models.
FWIW I had posted https://gcc.gnu.org/pipermail/gcc-patches/2022-November/607101.html in November. It adds annotations to the SVE2 patterns as well as to the base SVE ones.
Feel free to reuse it if you'd like.
I see you had posted a Neoverse V1 scheduling model. Does it show an improvement on SVE code when combined with these scheduling attributes?
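For context, a scheduling model consumes these attributes through `define_insn_reservation` entries keyed on the `type` attribute, so a type only pays for itself if some reservation distinguishes it. A minimal sketch of how that looks (the tune name, unit names, and latencies below are invented for illustration, not taken from any posted model):

```lisp
;; Hypothetical SVE scheduling sketch.  All names and latencies are
;; invented for illustration only.
(define_automaton "example_sve")
(define_cpu_unit "example_sve_v0,example_sve_ls0" "example_sve")

;; Simple predicated integer arithmetic: short latency on the vector unit.
;; Note that sve_arith and sve_arith_x share one reservation here.
(define_insn_reservation "example_sve_arith" 2
  (and (eq_attr "tune" "example")
       (eq_attr "type" "sve_arith,sve_arith_x"))
  "example_sve_v0")

;; Gather loads: longer latency, occupying the load/store unit.
(define_insn_reservation "example_sve_gather" 9
  (and (eq_attr "tune" "example")
       (eq_attr "type" "sve_load1_gather_s,sve_load1_gather_d"))
  "example_sve_ls0")
```

If no model ever needs to separate, say, `sve_arith` from `sve_arith_x`, the two can simply share a reservation as above, which is why coarser types are usually enough to start with.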
Thanks,
Kyrill

> 
> Richard
> 
> >
> > --
> > Evandro Menezes
> >
> >
> >
> > From be61df66d1a86bc7ec415eb23504002831c67c51 Mon Sep 17 00:00:00 2001
> > From: Evandro Menezes <evandro@gcc.gnu.org>
> > Date: Mon, 8 May 2023 17:39:10 -0500
> > Subject: [PATCH 2/3] aarch64: Add SVE instruction types
> >
> > gcc/ChangeLog:
> >
> > 	* config/aarch64/aarch64-sve.md: Use the instruction types.
> > 	* config/arm/types.md (sve_loop_p, sve_loop_ps, sve_loop_gs,
> > 	  sve_loop_end, sve_logic_p, sve_logic_ps, sve_cnt_p,
> > 	  sve_cnt_pv, sve_cnt_pvx, sve_rev_p, sve_sel_p, sve_set_p,
> > 	  sve_set_ps, sve_trn_p, sve_upk_p, sve_zip_p, sve_arith,
> > 	  sve_arith_r, sve_arith_sat, sve_arith_sat_x, sve_arith_x,
> > 	  sve_logic, sve_logic_r, sve_logic_x, sve_shift, sve_shift_d,
> > 	  sve_shift_dx, sve_shift_x, sve_compare_s, sve_cnt, sve_cnt_x,
> > 	  sve_copy, sve_copy_g, sve_move, sve_move_x, sve_move_g,
> > 	  sve_permute, sve_splat, sve_splat_m, sve_splat_g, sve_cext,
> > 	  sve_cext_x, sve_cext_g, sve_ext, sve_ext_x, sve_sext,
> > 	  sve_sext_x, sve_uext, sve_uext_x, sve_index, sve_index_g,
> > 	  sve_ins, sve_ins_x, sve_ins_g, sve_ins_gx, sve_rev, sve_rev_x,
> > 	  sve_tbl, sve_trn, sve_upk, sve_zip, sve_int_to_fp,
> > 	  sve_int_to_fp_x, sve_fp_to_int, sve_fp_to_int_x, sve_fp_to_fp,
> > 	  sve_fp_to_fp_x, sve_fp_round, sve_fp_round_x, sve_bf_to_fp,
> > 	  sve_bf_to_fp_x, sve_div, sve_div_x, sve_dot, sve_dot_x,
> > 	  sve_mla, sve_mla_x, sve_mmla, sve_mmla_x, sve_mul, sve_mul_x,
> > 	  sve_prfx, sve_fp_arith, sve_fp_arith_a, sve_fp_arith_c,
> > 	  sve_fp_arith_cx, sve_fp_arith_r, sve_fp_arith_x,
> > 	  sve_fp_compare, sve_fp_copy, sve_fp_move, sve_fp_move_x,
> > 	  sve_fp_div_d, sve_fp_div_dx, sve_fp_div_s, sve_fp_div_sx,
> > 	  sve_fp_dot, sve_fp_mla, sve_fp_mla_x, sve_fp_mla_c,
> > 	  sve_fp_mla_cx, sve_fp_mla_t, sve_fp_mla_tx, sve_fp_mmla,
> > 	  sve_fp_mmla_x, sve_fp_mul, sve_fp_mul_x, sve_fp_sqrt_d,
> > 	  sve_fp_sqrt_dx, sve_fp_sqrt_s, sve_fp_sqrt_sx, sve_fp_trig,
> > 	  sve_fp_trig_x, sve_fp_estimate, sve_fp_step, sve_bf_dot,
> > 	  sve_bf_dot_x, sve_bf_mla, sve_bf_mla_x, sve_bf_mmla,
> > 	  sve_bf_mmla_x, sve_ldr, sve_ldr_p, sve_load1,
> > 	  sve_load1_gather_d, sve_load1_gather_dl, sve_load1_gather_du,
> > 	  sve_load1_gather_s, sve_load1_gather_sl, sve_load1_gather_su,
> > 	  sve_load2, sve_load3, sve_load4, sve_str, sve_str_p,
> > 	  sve_store1, sve_store1_scatter, sve_store2, sve_store3,
> > 	  sve_store4, sve_rd_ffr, sve_rd_ffr_p, sve_rd_ffr_ps,
> > 	  sve_wr_ffr): New types.
> >
> > Signed-off-by: Evandro Menezes <evandro@gcc.gnu.org>
> > ---
> >  gcc/config/aarch64/aarch64-sve.md | 632 ++++++++++++++++++++++--------
> >  gcc/config/arm/types.md           | 342 ++++++++++++++++
> >  2 files changed, 819 insertions(+), 155 deletions(-)
> >
> > diff --git a/gcc/config/aarch64/aarch64-sve.md b/gcc/config/aarch64/aarch64-sve.md
> > index 2898b85376b..58c5cb2ddbc 100644
> > --- a/gcc/config/aarch64/aarch64-sve.md
> > +++ b/gcc/config/aarch64/aarch64-sve.md
> > @@ -699,6 +699,7 @@
> >     str\t%1, %0
> >     mov\t%0.d, %1.d
> >     * return aarch64_output_sve_mov_immediate (operands[1]);"
> > +  [(set_attr "type" "sve_ldr, sve_str, sve_move, *")]
> >  )
> >
> >  ;; Unpredicated moves that cannot use LDR and STR, i.e. partial vectors
> > @@ -714,6 +715,7 @@
> >    "@
> >     mov\t%0.d, %1.d
> >     * return aarch64_output_sve_mov_immediate (operands[1]);"
> > +  [(set_attr "type" "sve_move, sve_move_x")]
> >  )
> >
> >  ;; Handle memory reloads for modes that can't use LDR and STR.  We use
> > @@ -758,6 +760,8 @@
> >    "&& register_operand (operands[0], <MODE>mode)
> >     && register_operand (operands[2], <MODE>mode)"
> >    [(set (match_dup 0) (match_dup 2))]
> > +  ""
> > +  [(set_attr "type" "sve_load1, sve_store1, *")]
> >  )
> >
> >  ;; A pattern for optimizing SUBREGs that have a reinterpreting effect
> > @@ -778,6 +782,7 @@
> >     aarch64_split_sve_subreg_move (operands[0], operands[1], operands[2]);
> >      DONE;
> >    }
> > +  [(set_attr "type" "sve_rev")]
> >  )
> >
> >  ;; Reinterpret operand 1 in operand 0's mode, without changing its contents.
> > @@ -959,6 +964,7 @@
> >     str\t%1, %0
> >     ldr\t%0, %1
> >     * return aarch64_output_sve_mov_immediate (operands[1]);"
> > +  [(set_attr "type" "sve_logic_p, sve_str_p, sve_ldr_p, *")]
> >  )
> >
> >  ;; Match PTRUES Pn.B when both the predicate and flags are useful.
> > @@ -984,6 +990,7 @@
> >    {
> >      operands[2] = operands[3] = CONSTM1_RTX (VNx16BImode);
> >    }
> > +  [(set_attr "type" "sve_set_ps")]
> >  )
> >
> >  ;; Match PTRUES Pn.[HSD] when both the predicate and flags are useful.
> > @@ -1011,6 +1018,7 @@
> >      operands[2] = CONSTM1_RTX (VNx16BImode);
> >      operands[3] = CONSTM1_RTX (<MODE>mode);
> >    }
> > +  [(set_attr "type" "sve_set_ps")]
> >  )
> >
> >  ;; Match PTRUES Pn.B when only the flags result is useful (which is
> > @@ -1036,6 +1044,7 @@
> >    {
> >      operands[2] = operands[3] = CONSTM1_RTX (VNx16BImode);
> >    }
> > +  [(set_attr "type" "sve_set_ps")]
> >  )
> >
> >  ;; Match PTRUES Pn.[HSD] when only the flags result is useful (which is
> > @@ -1063,6 +1072,7 @@
> >      operands[2] = CONSTM1_RTX (VNx16BImode);
> >      operands[3] = CONSTM1_RTX (<MODE>mode);
> >    }
> > +  [(set_attr "type" "sve_set_ps")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -1086,6 +1096,7 @@
> >    "@
> >     setffr
> >     wrffr\t%0.b"
> > +  [(set_attr "type" "sve_wr_ffr, sve_wr_ffr")]
> >  )
> >
> >  ;; [L2 in the block comment above about FFR handling]
> > @@ -1125,6 +1136,7 @@
> >  	(reg:VNx16BI FFRT_REGNUM))]
> >    "TARGET_SVE"
> >    "rdffr\t%0.b"
> > +  [(set_attr "type" "sve_rd_ffr")]
> >  )
> >
> >  ;; Likewise with zero predication.
> > @@ -1135,6 +1147,7 @@
> >  	  (match_operand:VNx16BI 1 "register_operand" "Upa")))]
> >    "TARGET_SVE"
> >    "rdffr\t%0.b, %1/z"
> > +  [(set_attr "type" "sve_rd_ffr_p")]
> >  )
> >
> >  ;; Read the FFR to test for a fault, without using the predicate result.
> > @@ -1151,6 +1164,7 @@
> >     (clobber (match_scratch:VNx16BI 0 "=Upa"))]
> >    "TARGET_SVE"
> >    "rdffrs\t%0.b, %1/z"
> > +  [(set_attr "type" "sve_rd_ffr_ps")]
> >  )
> >
> >  ;; Same for unpredicated RDFFR when tested with a known PTRUE.
> > @@ -1165,6 +1179,7 @@
> >     (clobber (match_scratch:VNx16BI 0 "=Upa"))]
> >    "TARGET_SVE"
> >    "rdffrs\t%0.b, %1/z"
> > +  [(set_attr "type" "sve_rd_ffr_ps")]
> >  )
> >
> >  ;; Read the FFR with zero predication and test the result.
> > @@ -1184,6 +1199,7 @@
> >  	  (match_dup 1)))]
> >    "TARGET_SVE"
> >    "rdffrs\t%0.b, %1/z"
> > +  [(set_attr "type" "sve_rd_ffr_ps")]
> >  )
> >
> >  ;; Same for unpredicated RDFFR when tested with a known PTRUE.
> > @@ -1199,6 +1215,7 @@
> >  	(reg:VNx16BI FFRT_REGNUM))]
> >    "TARGET_SVE"
> >    "rdffrs\t%0.b, %1/z"
> > +  [(set_attr "type" "sve_rd_ffr_ps")]
> >  )
> >
> >  ;; [R3 in the block comment above about FFR handling]
> > @@ -1248,6 +1265,7 @@
> >  	  UNSPEC_LD1_SVE))]
> >    "TARGET_SVE"
> >    "ld1<Vesize>\t%0.<Vctype>, %2/z, %1"
> > +  [(set_attr "type" "sve_load1")]
> >  )
> >
> >  ;; Unpredicated LD[234].
> > @@ -1272,6 +1290,7 @@
> >  	  UNSPEC_LDN))]
> >    "TARGET_SVE"
> >    "ld<vector_count><Vesize>\t%0, %2/z, %1"
> > +  [(set_attr "type" "sve_load<vector_count>")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -1303,6 +1322,7 @@
> >    {
> >      operands[3] = CONSTM1_RTX (<SVE_HSDI:VPRED>mode);
> >    }
> > +  [(set_attr "type" "sve_load1")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -1329,6 +1349,7 @@
> >  	  SVE_LDFF1_LDNF1))]
> >    "TARGET_SVE"
> >    "ld<fn>f1<Vesize>\t%0.<Vetype>, %2/z, %1"
> > +  [(set_attr "type" "sve_load1")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -1367,6 +1388,7 @@
> >    {
> >      operands[3] = CONSTM1_RTX (<SVE_HSDI:VPRED>mode);
> >    }
> > +  [(set_attr "type" "sve_load1")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -1388,6 +1410,7 @@
> >  	  UNSPEC_LDNT1_SVE))]
> >    "TARGET_SVE"
> >    "ldnt1<Vesize>\t%0.<Vetype>, %2/z, %1"
> > +  [(set_attr "type" "sve_load1")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -1435,6 +1458,8 @@
> >     ld1<Vesize>\t%0.s, %5/z, [%1, %2.s, uxtw]
> >     ld1<Vesize>\t%0.s, %5/z, [%1, %2.s, sxtw %p4]
> >     ld1<Vesize>\t%0.s, %5/z, [%1, %2.s, uxtw %p4]"
> > +  [(set_attr "type" "sve_load1_gather_s, sve_load1_gather_s, sve_load1_gather_su,
> > +                     sve_load1_gather_su, sve_load1_gather_sl, sve_load1_gather_sl")]
> >  )
> >
> >  ;; Predicated gather loads for 64-bit elements.  The value of operand 3
> > @@ -1455,6 +1480,8 @@
> >     ld1<Vesize>\t%0.d, %5/z, [%2.d, #%1]
> >     ld1<Vesize>\t%0.d, %5/z, [%1, %2.d]
> >     ld1<Vesize>\t%0.d, %5/z, [%1, %2.d, lsl %p4]"
> > +  [(set_attr "type" "sve_load1_gather_d, sve_load1_gather_d,
> > +                     sve_load1_gather_du, sve_load1_gather_dl")]
> >  )
> >
> >  ;; Likewise, but with the offset being extended from 32 bits.
> > @@ -1480,6 +1507,7 @@
> >    {
> >      operands[6] = CONSTM1_RTX (VNx2BImode);
> >    }
> > +  [(set_attr "type" "sve_load1_gather_su, sve_load1_gather_sl")]
> >  )
> >
> >  ;; Likewise, but with the offset being truncated to 32 bits and then
> > @@ -1507,6 +1535,7 @@
> >    {
> >      operands[6] = CONSTM1_RTX (VNx2BImode);
> >    }
> > +  [(set_attr "type" "sve_load1_gather_su, sve_load1_gather_sl")]
> >  )
> >
> >  ;; Likewise, but with the offset being truncated to 32 bits and then
> > @@ -1527,6 +1556,7 @@
> >    "@
> >     ld1<Vesize>\t%0.d, %5/z, [%1, %2.d, uxtw]
> >     ld1<Vesize>\t%0.d, %5/z, [%1, %2.d, uxtw %p4]"
> > +  [(set_attr "type" "sve_load1_gather_su, sve_load1_gather_sl")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -1569,6 +1599,8 @@
> >    {
> >      operands[6] = CONSTM1_RTX (VNx4BImode);
> >    }
> > +  [(set_attr "type" "sve_load1_gather_s, sve_load1_gather_s, sve_load1_gather_su,
> > +                     sve_load1_gather_su, sve_load1_gather_sl, sve_load1_gather_sl")]
> >  )
> >
> >  ;; Predicated extending gather loads for 64-bit elements.  The value of
> > @@ -1597,6 +1629,8 @@
> >    {
> >      operands[6] = CONSTM1_RTX (VNx2BImode);
> >    }
> > +  [(set_attr "type" "sve_load1_gather_d, sve_load1_gather_d,
> > +                     sve_load1_gather_du, sve_load1_gather_dl")]
> >  )
> >
> >  ;; Likewise, but with the offset being extended from 32 bits.
> > @@ -1627,6 +1661,7 @@
> >      operands[6] = CONSTM1_RTX (VNx2BImode);
> >      operands[7] = CONSTM1_RTX (VNx2BImode);
> >    }
> > +  [(set_attr "type" "sve_load1_gather_su, sve_load1_gather_sl")]
> >  )
> >
> >  ;; Likewise, but with the offset being truncated to 32 bits and then
> > @@ -1659,6 +1694,7 @@
> >      operands[6] = CONSTM1_RTX (VNx2BImode);
> >      operands[7] = CONSTM1_RTX (VNx2BImode);
> >    }
> > +  [(set_attr "type" "sve_load1_gather_su, sve_load1_gather_sl")]
> >  )
> >
> >  ;; Likewise, but with the offset being truncated to 32 bits and then
> > @@ -1687,6 +1723,7 @@
> >    {
> >      operands[7] = CONSTM1_RTX (VNx2BImode);
> >    }
> > +  [(set_attr "type" "sve_load1_gather_su, sve_load1_gather_sl")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -1718,6 +1755,8 @@
> >     ldff1w\t%0.s, %5/z, [%1, %2.s, uxtw]
> >     ldff1w\t%0.s, %5/z, [%1, %2.s, sxtw %p4]
> >     ldff1w\t%0.s, %5/z, [%1, %2.s, uxtw %p4]"
> > +  [(set_attr "type" "sve_load1_gather_s, sve_load1_gather_s, sve_load1_gather_su,
> > +                     sve_load1_gather_su, sve_load1_gather_sl, sve_load1_gather_sl")]
> >  )
> >
> >  ;; Predicated first-faulting gather loads for 64-bit elements.  The value
> > @@ -1739,6 +1778,8 @@
> >     ldff1d\t%0.d, %5/z, [%2.d, #%1]
> >     ldff1d\t%0.d, %5/z, [%1, %2.d]
> >     ldff1d\t%0.d, %5/z, [%1, %2.d, lsl %p4]"
> > +  [(set_attr "type" "sve_load1_gather_d, sve_load1_gather_d,
> > +                     sve_load1_gather_du, sve_load1_gather_dl")]
> >  )
> >
> >  ;; Likewise, but with the offset being sign-extended from 32 bits.
> > @@ -1766,6 +1807,7 @@
> >    {
> >      operands[6] = CONSTM1_RTX (VNx2BImode);
> >    }
> > +  [(set_attr "type" "sve_load1_gather_su, sve_load1_gather_sl")]
> >  )
> >
> >  ;; Likewise, but with the offset being zero-extended from 32 bits.
> > @@ -1786,6 +1828,7 @@
> >    "@
> >     ldff1d\t%0.d, %5/z, [%1, %2.d, uxtw]
> >     ldff1d\t%0.d, %5/z, [%1, %2.d, uxtw %p4]"
> > +  [(set_attr "type" "sve_load1_gather_su, sve_load1_gather_sl")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -1829,6 +1872,8 @@
> >    {
> >      operands[6] = CONSTM1_RTX (VNx4BImode);
> >    }
> > +  [(set_attr "type" "sve_load1_gather_s, sve_load1_gather_s, sve_load1_gather_su,
> > +                     sve_load1_gather_su, sve_load1_gather_sl, sve_load1_gather_sl")]
> >  )
> >
> >  ;; Predicated extending first-faulting gather loads for 64-bit elements.
> > @@ -1858,6 +1903,8 @@
> >    {
> >      operands[6] = CONSTM1_RTX (VNx2BImode);
> >    }
> > +  [(set_attr "type" "sve_load1_gather_d, sve_load1_gather_d,
> > +                     sve_load1_gather_du, sve_load1_gather_dl")]
> >  )
> >
> >  ;; Likewise, but with the offset being sign-extended from 32 bits.
> > @@ -1890,6 +1937,7 @@
> >      operands[6] = CONSTM1_RTX (VNx2BImode);
> >      operands[7] = CONSTM1_RTX (VNx2BImode);
> >    }
> > +  [(set_attr "type" "sve_load1_gather_su, sve_load1_gather_sl")]
> >  )
> >
> >  ;; Likewise, but with the offset being zero-extended from 32 bits.
> > @@ -1918,6 +1966,7 @@
> >    {
> >      operands[7] = CONSTM1_RTX (VNx2BImode);
> >    }
> > +  [(set_attr "type" "sve_load1_gather_su, sve_load1_gather_sl")]
> >  )
> >
> >  ;; =========================================================================
> > @@ -1950,6 +1999,7 @@
> >      operands[1] = gen_rtx_MEM (<MODE>mode, operands[1]);
> >     return aarch64_output_sve_prefetch ("prf<Vesize>", operands[2], "%0, %1");
> >    }
> > +  [(set_attr "type" "sve_ldr")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -1998,6 +2048,8 @@
> >      const char *const *parts = insns[which_alternative];
> >      return aarch64_output_sve_prefetch (parts[0], operands[6], parts[1]);
> >    }
> > +  [(set_attr "type" "sve_load1_gather_s, sve_load1_gather_s, sve_load1_gather_su,
> > +                     sve_load1_gather_su, sve_load1_gather_sl, sve_load1_gather_sl")]
> >  )
> >
> >  ;; Predicated gather prefetches for 64-bit elements.  The value of operand 3
> > @@ -2025,6 +2077,8 @@
> >      const char *const *parts = insns[which_alternative];
> >      return aarch64_output_sve_prefetch (parts[0], operands[6], parts[1]);
> >    }
> > +  [(set_attr "type" "sve_load1_gather_d, sve_load1_gather_d,
> > +                     sve_load1_gather_du, sve_load1_gather_dl")]
> >  )
> >
> >  ;; Likewise, but with the offset being sign-extended from 32 bits.
> > @@ -2058,6 +2112,7 @@
> >    {
> >      operands[9] = copy_rtx (operands[0]);
> >    }
> > +  [(set_attr "type" "sve_load1_gather_su, sve_load1_gather_sl")]
> >  )
> >
> >  ;; Likewise, but with the offset being zero-extended from 32 bits.
> > @@ -2084,6 +2139,7 @@
> >      const char *const *parts = insns[which_alternative];
> >      return aarch64_output_sve_prefetch (parts[0], operands[6], parts[1]);
> >    }
> > +  [(set_attr "type" "sve_load1_gather_su, sve_load1_gather_sl")]
> >  )
> >
> >  ;; =========================================================================
> > @@ -2122,6 +2178,7 @@
> >  	  UNSPEC_ST1_SVE))]
> >    "TARGET_SVE"
> >    "st1<Vesize>\t%1.<Vctype>, %2, %0"
> > +  [(set_attr "type" "sve_store1")]
> >  )
> >
> >  ;; Unpredicated ST[234].  This is always a full update, so the dependence
> > @@ -2152,6 +2209,7 @@
> >  	  UNSPEC_STN))]
> >    "TARGET_SVE"
> >    "st<vector_count><Vesize>\t%1, %2, %0"
> > +  [(set_attr "type" "sve_store<vector_count>")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -2174,6 +2232,7 @@
> >  	  UNSPEC_ST1_SVE))]
> >    "TARGET_SVE"
> >    "st1<VNx8_NARROW:Vesize>\t%1.<VNx8_WIDE:Vetype>, %2, %0"
> > +  [(set_attr "type" "sve_store1")]
> >  )
> >
> >  ;; Predicated truncate and store, with 4 elements per 128-bit block.
> > @@ -2187,6 +2246,7 @@
> >  	  UNSPEC_ST1_SVE))]
> >    "TARGET_SVE"
> >    "st1<VNx4_NARROW:Vesize>\t%1.<VNx4_WIDE:Vetype>, %2, %0"
> > +  [(set_attr "type" "sve_store1")]
> >  )
> >
> >  ;; Predicated truncate and store, with 2 elements per 128-bit block.
> > @@ -2200,6 +2260,7 @@
> >  	  UNSPEC_ST1_SVE))]
> >    "TARGET_SVE"
> >    "st1<VNx2_NARROW:Vesize>\t%1.<VNx2_WIDE:Vetype>, %2, %0"
> > +  [(set_attr "type" "sve_store1")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -2221,6 +2282,7 @@
> >  	  UNSPEC_STNT1_SVE))]
> >    "TARGET_SVE"
> >    "stnt1<Vesize>\t%1.<Vetype>, %2, %0"
> > +  [(set_attr "type" "sve_store1")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -2268,6 +2330,8 @@
> >     st1<Vesize>\t%4.s, %5, [%0, %1.s, uxtw]
> >     st1<Vesize>\t%4.s, %5, [%0, %1.s, sxtw %p3]
> >     st1<Vesize>\t%4.s, %5, [%0, %1.s, uxtw %p3]"
> > +  [(set_attr "type" "sve_store1_scatter, sve_store1_scatter, sve_store1_scatter,
> > +                     sve_store1_scatter, sve_store1_scatter, sve_store1_scatter")]
> >  )
> >
> >  ;; Predicated scatter stores for 64-bit elements.  The value of operand 2
> > @@ -2288,6 +2352,8 @@
> >     st1<Vesize>\t%4.d, %5, [%1.d, #%0]
> >     st1<Vesize>\t%4.d, %5, [%0, %1.d]
> >     st1<Vesize>\t%4.d, %5, [%0, %1.d, lsl %p3]"
> > +  [(set_attr "type" "sve_store1_scatter, sve_store1_scatter,
> > +                     sve_store1_scatter, sve_store1_scatter")]
> >  )
> >
> >  ;; Likewise, but with the offset being extended from 32 bits.
> > @@ -2313,6 +2379,7 @@
> >    {
> >      operands[6] = CONSTM1_RTX (<VPRED>mode);
> >    }
> > +  [(set_attr "type" "sve_store1_scatter, sve_store1_scatter")]
> >  )
> >
> >  ;; Likewise, but with the offset being truncated to 32 bits and then
> > @@ -2340,6 +2407,7 @@
> >    {
> >      operands[6] = CONSTM1_RTX (<VPRED>mode);
> >    }
> > +  [(set_attr "type" "sve_store1_scatter, sve_store1_scatter")]
> >  )
> >
> >  ;; Likewise, but with the offset being truncated to 32 bits and then
> > @@ -2360,6 +2428,7 @@
> >    "@
> >     st1<Vesize>\t%4.d, %5, [%0, %1.d, uxtw]
> >     st1<Vesize>\t%4.d, %5, [%0, %1.d, uxtw %p3]"
> > +  [(set_attr "type" "sve_store1_scatter, sve_store1_scatter")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -2392,6 +2461,8 @@
> >     st1<VNx4_NARROW:Vesize>\t%4.s, %5, [%0, %1.s, uxtw]
> >     st1<VNx4_NARROW:Vesize>\t%4.s, %5, [%0, %1.s, sxtw %p3]
> >     st1<VNx4_NARROW:Vesize>\t%4.s, %5, [%0, %1.s, uxtw %p3]"
> > +  [(set_attr "type" "sve_store1_scatter, sve_store1_scatter, sve_store1_scatter,
> > +                     sve_store1_scatter, sve_store1_scatter, sve_store1_scatter")]
> >  )
> >
> >  ;; Predicated truncating scatter stores for 64-bit elements.  The value of
> > @@ -2413,6 +2484,8 @@
> >     st1<VNx2_NARROW:Vesize>\t%4.d, %5, [%1.d, #%0]
> >     st1<VNx2_NARROW:Vesize>\t%4.d, %5, [%0, %1.d]
> >     st1<VNx2_NARROW:Vesize>\t%4.d, %5, [%0, %1.d, lsl %p3]"
> > +  [(set_attr "type" "sve_store1_scatter, sve_store1_scatter,
> > +                     sve_store1_scatter, sve_store1_scatter")]
> >  )
> >
> >  ;; Likewise, but with the offset being sign-extended from 32 bits.
> > @@ -2440,6 +2513,7 @@
> >    {
> >      operands[6] = copy_rtx (operands[5]);
> >    }
> > +  [(set_attr "type" "sve_store1_scatter, sve_store1_scatter")]
> >  )
> >
> >  ;; Likewise, but with the offset being zero-extended from 32 bits.
> > @@ -2460,6 +2534,7 @@
> >    "@
> >     st1<VNx2_NARROW:Vesize>\t%4.d, %5, [%0, %1.d, uxtw]
> >     st1<VNx2_NARROW:Vesize>\t%4.d, %5, [%0, %1.d, uxtw %p3]"
> > +  [(set_attr "type" "sve_store1_scatter, sve_store1_scatter")]
> >  )
> >
> >  ;; =========================================================================
> > @@ -2529,7 +2604,8 @@
> >  				   CONST0_RTX (<MODE>mode)));
> >      DONE;
> >    }
> > -  [(set_attr "length" "4,4,8")]
> > +  [(set_attr "length" "4,4,8")
> > +   (set_attr "type" "sve_move, sve_move, sve_load1")]
> >  )
> >
> >  ;; Duplicate an Advanced SIMD vector to fill an SVE vector (LE version).
> > @@ -2562,6 +2638,7 @@
> >     emit_insn (gen_aarch64_sve_ld1rq<mode> (operands[0], operands[1], gp));
> >      DONE;
> >    }
> > +  [(set_attr "type" "sve_splat, sve_load1")]
> >  )
> >
> >  ;; Duplicate an Advanced SIMD vector to fill an SVE vector (BE version).
> > @@ -2583,6 +2660,7 @@
> >      operands[1] = gen_rtx_REG (<MODE>mode, REGNO (operands[1]));
> >      return "dup\t%0.q, %1.q[0]";
> >    }
> > +  [(set_attr "type" "sve_splat")]
> >  )
> >
> >  ;; This is used for vec_duplicate<mode>s from memory, but can also
> > @@ -2598,6 +2676,7 @@
> >  	  UNSPEC_SEL))]
> >    "TARGET_SVE"
> >    "ld1r<Vesize>\t%0.<Vetype>, %1/z, %2"
> > +  [(set_attr "type" "sve_load1")]
> >  )
> >
> >  ;; Load 128 bits from memory under predicate control and duplicate to
> > @@ -2613,6 +2692,7 @@
> >      operands[1] = gen_rtx_MEM (<VEL>mode, XEXP (operands[1], 0));
> >      return "ld1rq<Vesize>\t%0.<Vetype>, %2/z, %1";
> >    }
> > +  [(set_attr "type" "sve_load1")]
> >  )
> >
> >  (define_insn "@aarch64_sve_ld1ro<mode>"
> > @@ -2627,6 +2707,7 @@
> >      operands[1] = gen_rtx_MEM (<VEL>mode, XEXP (operands[1], 0));
> >      return "ld1ro<Vesize>\t%0.<Vetype>, %2/z, %1";
> >    }
> > +  [(set_attr "type" "sve_load1")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -2659,7 +2740,8 @@
> >     insr\t%0.<Vetype>, %<Vetype>2
> >     movprfx\t%0, %1\;insr\t%0.<Vetype>, %<vwcore>2
> >     movprfx\t%0, %1\;insr\t%0.<Vetype>, %<Vetype>2"
> > -  [(set_attr "movprfx" "*,*,yes,yes")]
> > +  [(set_attr "movprfx" "*,*,yes,yes")
> > +   (set_attr "type" "sve_ins_g, sve_ins, sve_ins_gx, sve_ins_x")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -2679,6 +2761,7 @@
> >     index\t%0.<Vctype>, #%1, %<vccore>2
> >     index\t%0.<Vctype>, %<vccore>1, #%2
> >     index\t%0.<Vctype>, %<vccore>1, %<vccore>2"
> > +  [(set_attr "type" "sve_index_g, sve_index_g, sve_index_g")]
> >  )
> >
> >  ;; Optimize {x, x, x, x, ...} + {0, n, 2*n, 3*n, ...} if n is in range
> > @@ -2694,6 +2777,7 @@
> >     operands[2] = aarch64_check_zero_based_sve_index_immediate (operands[2]);
> >      return "index\t%0.<Vctype>, %<vccore>1, #%2";
> >    }
> > +  [(set_attr "type" "sve_index_g")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -2846,6 +2930,7 @@
> >      operands[0] = gen_rtx_REG (<MODE>mode, REGNO (operands[0]));
> >      return "dup\t%0.<Vetype>, %1.<Vetype>[%2]";
> >    }
> > +  [(set_attr "type" "sve_splat")]
> >  )
> >
> >  ;; Extract an element outside the range of DUP.  This pattern requires the
> > @@ -2863,7 +2948,8 @@
> >  	    ? "ext\t%0.b, %0.b, %0.b, #%2"
> >  	    : "movprfx\t%0, %1\;ext\t%0.b, %0.b, %1.b, #%2");
> >    }
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_ext, sve_ext_x")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -2886,6 +2972,7 @@
> >    "@
> >     last<ab>\t%<vwcore>0, %1, %2.<Vetype>
> >     last<ab>\t%<Vetype>0, %1, %2.<Vetype>"
> > +  [(set_attr "type" "sve_ins_g, sve_ins")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -2955,7 +3042,8 @@
> >    "@
> >     <sve_int_op>\t%0.<Vetype>, %1/m, %2.<Vetype>
> >     movprfx\t%0, %2\;<sve_int_op>\t%0.<Vetype>, %1/m, %2.<Vetype>"
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_arith, sve_arith_x")]
> >  )
> >
> >  ;; Predicated integer unary arithmetic with merging.
> > @@ -2983,7 +3071,8 @@
> >    "@
> >     <sve_int_op>\t%0.<Vetype>, %1/m, %0.<Vetype>
> >     movprfx\t%0, %2\;<sve_int_op>\t%0.<Vetype>, %1/m, %2.<Vetype>"
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_arith, sve_arith_x")]
> >  )
> >
> >  ;; Predicated integer unary arithmetic, merging with an independent value.
> > @@ -3006,7 +3095,8 @@
> >     <sve_int_op>\t%0.<Vetype>, %1/m, %2.<Vetype>
> >     movprfx\t%0.<Vetype>, %1/z, %2.<Vetype>\;<sve_int_op>\t%0.<Vetype>, %1/m, %2.<Vetype>
> >     movprfx\t%0, %3\;<sve_int_op>\t%0.<Vetype>, %1/m, %2.<Vetype>"
> > -  [(set_attr "movprfx" "*,yes,yes")]
> > +  [(set_attr "movprfx" "*,yes,yes")
> > +   (set_attr "type" "sve_arith, sve_arith_x, sve_arith_x")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -3032,7 +3122,8 @@
> >    "@
> >     <sve_int_op>\t%0.<Vetype>, %1/m, %2.<Vetype>
> >     movprfx\t%0, %2\;<sve_int_op>\t%0.<Vetype>, %1/m, %2.<Vetype>"
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_rev, sve_rev_x")]
> >  )
> >
> >  ;; Another way of expressing the REVB, REVH and REVW patterns, with this
> > @@ -3051,7 +3142,8 @@
> >    "@
> >     rev<SVE_ALL:Vcwtype>\t%0.<PRED_HSD:Vetype>, %1/m, %2.<PRED_HSD:Vetype>
> >     movprfx\t%0, %2\;rev<SVE_ALL:Vcwtype>\t%0.<PRED_HSD:Vetype>, %1/m, %2.<PRED_HSD:Vetype>"
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_rev, sve_rev")]
> >  )
> >
> >  ;; Predicated integer unary operations with merging.
> > @@ -3069,7 +3161,8 @@
> >     <sve_int_op>\t%0.<Vetype>, %1/m, %2.<Vetype>
> >     movprfx\t%0.<Vetype>, %1/z, %2.<Vetype>\;<sve_int_op>\t%0.<Vetype>, %1/m, %2.<Vetype>
> >     movprfx\t%0, %3\;<sve_int_op>\t%0.<Vetype>, %1/m, %2.<Vetype>"
> > -  [(set_attr "movprfx" "*,yes,yes")]
> > +  [(set_attr "movprfx" "*,yes,yes")
> > +   (set_attr "type" "sve_rev, sve_rev_x, sve_rev_x")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -3110,7 +3203,8 @@
> >    "@
> >     <su>xt<SVE_PARTIAL_I:Vesize>\t%0.<SVE_HSDI:Vetype>, %1/m, %2.<SVE_HSDI:Vetype>
> >     movprfx\t%0, %2\;<su>xt<SVE_PARTIAL_I:Vesize>\t%0.<SVE_HSDI:Vetype>, %1/m, %2.<SVE_HSDI:Vetype>"
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_<su>ext, sve_<su>ext_x")]
> >  )
> >
> >  ;; Predicated truncate-and-sign-extend operations.
> > @@ -3127,7 +3221,8 @@
> >    "@
> >     sxt<SVE_PARTIAL_I:Vesize>\t%0.<SVE_FULL_HSDI:Vetype>, %1/m, %2.<SVE_FULL_HSDI:Vetype>
> >     movprfx\t%0, %2\;sxt<SVE_PARTIAL_I:Vesize>\t%0.<SVE_FULL_HSDI:Vetype>, %1/m, %2.<SVE_FULL_HSDI:Vetype>"
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_sext, sve_sext_x")]
> >  )
> >
> >  ;; Predicated truncate-and-sign-extend operations with merging.
> > @@ -3146,7 +3241,8 @@
> >     sxt<SVE_PARTIAL_I:Vesize>\t%0.<SVE_FULL_HSDI:Vetype>, %1/m, %2.<SVE_FULL_HSDI:Vetype>
> >     movprfx\t%0.<SVE_FULL_HSDI:Vetype>, %1/z, %2.<SVE_FULL_HSDI:Vetype>\;sxt<SVE_PARTIAL_I:Vesize>\t%0.<SVE_FULL_HSDI:Vetype>, %1/m, %2.<SVE_FULL_HSDI:Vetype>
> >     movprfx\t%0, %3\;sxt<SVE_PARTIAL_I:Vesize>\t%0.<SVE_FULL_HSDI:Vetype>, %1/m, %2.<SVE_FULL_HSDI:Vetype>"
> > -  [(set_attr "movprfx" "*,yes,yes")]
> > +  [(set_attr "movprfx" "*,yes,yes")
> > +   (set_attr "type" "sve_sext, sve_sext_x, sve_sext_x")]
> >  )
> >
> >  ;; Predicated truncate-and-zero-extend operations, merging with the
> > @@ -3167,7 +3263,8 @@
> >    "@
> >     uxt%e3\t%0.<Vetype>, %1/m, %0.<Vetype>
> >     movprfx\t%0, %2\;uxt%e3\t%0.<Vetype>, %1/m, %2.<Vetype>"
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_uext, sve_uext_x")]
> >  )
> >
> >  ;; Predicated truncate-and-zero-extend operations, merging with an
> > @@ -3192,7 +3289,8 @@
> >     uxt%e3\t%0.<Vetype>, %1/m, %2.<Vetype>
> >     movprfx\t%0.<Vetype>, %1/z, %2.<Vetype>\;uxt%e3\t%0.<Vetype>, %1/m, %2.<Vetype>
> >     movprfx\t%0, %4\;uxt%e3\t%0.<Vetype>, %1/m, %2.<Vetype>"
> > -  [(set_attr "movprfx" "*,yes,yes")]
> > +  [(set_attr "movprfx" "*,yes,yes")
> > +   (set_attr "type" "sve_uext, sve_uext_x, sve_uext_x")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -3263,7 +3361,8 @@
> >    "@
> >     cnot\t%0.<Vetype>, %1/m, %2.<Vetype>
> >     movprfx\t%0, %2\;cnot\t%0.<Vetype>, %1/m, %2.<Vetype>"
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_logic, sve_logic_x")]
> >  )
> >
> >  ;; Predicated logical inverse with merging.
> > @@ -3319,7 +3418,8 @@
> >    {
> >      operands[5] = CONSTM1_RTX (<VPRED>mode);
> >    }
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_logic, sve_logic_x")]
> >  )
> >
> >  ;; Predicated logical inverse, merging with an independent value.
> > @@ -3356,7 +3456,8 @@
> >    {
> >      operands[5] = CONSTM1_RTX (<VPRED>mode);
> >    }
> > -  [(set_attr "movprfx" "*,yes,yes")]
> > +  [(set_attr "movprfx" "*,yes,yes")
> > +   (set_attr "type" "sve_logic, sve_logic_x, sve_logic_x")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -3374,6 +3475,7 @@
> >  	  SVE_FP_UNARY_INT))]
> >    "TARGET_SVE"
> >    "<sve_fp_op>\t%0.<Vetype>, %1.<Vetype>"
> > +  [(set_attr "type" "sve_fp_trig")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -3617,6 +3719,7 @@
> >  	  (match_operand:PRED_ALL 1 "register_operand" "Upa")))]
> >    "TARGET_SVE"
> >    "not\t%0.b, %1/z, %2.b"
> > +  [(set_attr "type" "sve_logic_p")]
> >  )
> >
> >  ;; =========================================================================
> > @@ -3820,7 +3923,9 @@
> >     movprfx\t%0, %1\;add\t%0.<Vetype>, %0.<Vetype>, #%D2
> >     movprfx\t%0, %1\;sub\t%0.<Vetype>, %0.<Vetype>, #%N2
> >     add\t%0.<Vetype>, %1.<Vetype>, %2.<Vetype>"
> > -  [(set_attr "movprfx" "*,*,*,yes,yes,*")]
> > +  [(set_attr "movprfx" "*,*,*,yes,yes,*")
> > +   (set_attr "type" "sve_arith, sve_arith, sve_cnt_p,
> > +                     sve_arith_x, sve_arith_x, sve_arith")]
> >  )
> >
> >  ;; Merging forms are handled through SVE_INT_BINARY.
> > @@ -3843,7 +3948,8 @@
> >     sub\t%0.<Vetype>, %1.<Vetype>, %2.<Vetype>
> >     subr\t%0.<Vetype>, %0.<Vetype>, #%D1
> >     movprfx\t%0, %2\;subr\t%0.<Vetype>, %0.<Vetype>, #%D1"
> > -  [(set_attr "movprfx" "*,*,yes")]
> > +  [(set_attr "movprfx" "*,*,yes")
> > +   (set_attr "type" "sve_arith, sve_arith, sve_arith_x")]
> >  )
> >
> >  ;; Merging forms are handled through SVE_INT_BINARY.
> > @@ -3865,6 +3971,7 @@
> >  	  UNSPEC_ADR))]
> >    "TARGET_SVE"
> >    "adr\t%0.<Vetype>, [%1.<Vetype>, %2.<Vetype>]"
> > +  [(set_attr "type" "sve_arith")]
> >  )
> >
> >  ;; Same, but with the offset being sign-extended from the low 32 bits.
> > @@ -3885,6 +3992,7 @@
> >    {
> >      operands[3] = CONSTM1_RTX (VNx2BImode);
> >    }
> > +  [(set_attr "type" "sve_arith")]
> >  )
> >
> >  ;; Same, but with the offset being zero-extended from the low 32 bits.
> > @@ -3898,6 +4006,7 @@
> >  	  UNSPEC_ADR))]
> >    "TARGET_SVE"
> >    "adr\t%0.d, [%1.d, %2.d, uxtw]"
> > +  [(set_attr "type" "sve_arith")]
> >  )
> >
> >  ;; Same, matching as a PLUS rather than unspec.
> > @@ -3910,6 +4019,7 @@
> >  	  (match_operand:VNx2DI 1 "register_operand" "w")))]
> >    "TARGET_SVE"
> >    "adr\t%0.d, [%1.d, %2.d, uxtw]"
> > +  [(set_attr "type" "sve_arith")]
> >  )
> >
> >  ;; ADR with a nonzero shift.
> > @@ -3945,6 +4055,7 @@
> >    {
> >      operands[4] = CONSTM1_RTX (<VPRED>mode);
> >    }
> > +  [(set_attr "type" "sve_arith")]
> >  )
> >
> >  ;; Same, but with the index being sign-extended from the low 32 bits.
> > @@ -3969,6 +4080,7 @@
> >    {
> >      operands[5] = operands[4] = CONSTM1_RTX (VNx2BImode);
> >    }
> > +  [(set_attr "type" "sve_arith")]
> >  )
> >
> >  ;; Same, but with the index being zero-extended from the low 32 bits.
> > @@ -3990,6 +4102,7 @@
> >    {
> >      operands[5] = CONSTM1_RTX (VNx2BImode);
> >    }
> > +  [(set_attr "type" "sve_arith")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -4035,7 +4148,8 @@
> >    "@
> >     <su>abd\t%0.<Vetype>, %1/m, %0.<Vetype>, %3.<Vetype>
> >     movprfx\t%0, %2\;<su>abd\t%0.<Vetype>, %1/m, %0.<Vetype>, %3.<Vetype>"
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_arith, sve_arith_x")]
> >  )
> >
> >  (define_expand "@aarch64_cond_<su>abd<mode>"
> > @@ -4091,7 +4205,8 @@
> >    {
> >      operands[4] = operands[5] = CONSTM1_RTX (<VPRED>mode);
> >    }
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_arith, sve_arith_x")]
> >  )
> >
> >  ;; Predicated integer absolute difference, merging with the second input.
> > @@ -4122,7 +4237,8 @@
> >    {
> >      operands[4] = operands[5] = CONSTM1_RTX (<VPRED>mode);
> >    }
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_arith, sve_arith_x")]
> >  )
> >
> >  ;; Predicated integer absolute difference, merging with an independent value.
> > @@ -4169,7 +4285,8 @@
> >      else
> >        FAIL;
> >    }
> > -  [(set_attr "movprfx" "yes")]
> > +  [(set_attr "movprfx" "yes")
> > +   (set_attr "type" "sve_arith_x")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -4194,7 +4311,8 @@
> >     movprfx\t%0, %1\;<binqops_op>\t%0.<Vetype>, %0.<Vetype>, #%D2
> >     movprfx\t%0, %1\;<binqops_op_rev>\t%0.<Vetype>, %0.<Vetype>, #%N2
> >     <binqops_op>\t%0.<Vetype>, %1.<Vetype>, %2.<Vetype>"
> > -  [(set_attr "movprfx" "*,*,yes,yes,*")]
> > +  [(set_attr "movprfx" "*,*,yes,yes,*")
> > > +   (set_attr "type" "sve_arith_sat, sve_arith_sat, sve_arith_sat_x, sve_arith_sat_x, sve_arith_sat")]
> >  )
> >
> >  ;; Unpredicated saturating unsigned addition and subtraction.
> > @@ -4208,7 +4326,8 @@
> >     <binqops_op>\t%0.<Vetype>, %0.<Vetype>, #%D2
> >     movprfx\t%0, %1\;<binqops_op>\t%0.<Vetype>, %0.<Vetype>, #%D2
> >     <binqops_op>\t%0.<Vetype>, %1.<Vetype>, %2.<Vetype>"
> > -  [(set_attr "movprfx" "*,yes,*")]
> > +  [(set_attr "movprfx" "*,yes,*")
> > +   (set_attr "type" "sve_arith_sat, sve_arith_sat_x, sve_arith_sat")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -4249,7 +4368,8 @@
> >    "@
> >     <su>mulh\t%0.<Vetype>, %1/m, %0.<Vetype>, %3.<Vetype>
> >     movprfx\t%0, %2\;<su>mulh\t%0.<Vetype>, %1/m, %0.<Vetype>, %3.<Vetype>"
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_mul, sve_mul_x")]
> >  )
> >
> >  ;; Predicated highpart multiplications with merging.
> > @@ -4286,7 +4406,9 @@
> >    "@
> >     <sve_int_op>\t%0.<Vetype>, %1/m, %0.<Vetype>, %3.<Vetype>
> >     movprfx\t%0, %2\;<sve_int_op>\t%0.<Vetype>, %1/m, %0.<Vetype>, %3.<Vetype>"
> > -  [(set_attr "movprfx" "*,yes")])
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_mul, sve_mul_x")]
> > +)
> >
> >  ;; Predicated highpart multiplications, merging with zero.
> >  (define_insn "*cond_<optab><mode>_z"
> > @@ -4303,7 +4425,9 @@
> >    "@
> >     movprfx\t%0.<Vetype>, %1/z, %0.<Vetype>\;<sve_int_op>\t%0.<Vetype>, %1/m, %0.<Vetype>, %3.<Vetype>
> >     movprfx\t%0.<Vetype>, %1/z, %2.<Vetype>\;<sve_int_op>\t%0.<Vetype>, %1/m, %0.<Vetype>, %3.<Vetype>"
> > -  [(set_attr "movprfx" "yes")])
> > +  [(set_attr "movprfx" "yes")
> > +   (set_attr "type" "sve_mul_x")]
> > +)
> >
> >  ;; -------------------------------------------------------------------------
> >  ;; ---- [INT] Division
> > @@ -4344,7 +4468,8 @@
> >     <sve_int_op>\t%0.<Vetype>, %1/m, %0.<Vetype>, %3.<Vetype>
> >     <sve_int_op>r\t%0.<Vetype>, %1/m, %0.<Vetype>, %2.<Vetype>
> >     movprfx\t%0, %2\;<sve_int_op>\t%0.<Vetype>, %1/m, %0.<Vetype>, %3.<Vetype>"
> > -  [(set_attr "movprfx" "*,*,yes")]
> > +  [(set_attr "movprfx" "*,*,yes")
> > +   (set_attr "type" "sve_div, sve_div, sve_div_x")]
> >  )
> >
> >  ;; Predicated integer division with merging.
> > @@ -4374,7 +4499,8 @@
> >    "@
> >     <sve_int_op>\t%0.<Vetype>, %1/m, %0.<Vetype>, %3.<Vetype>
> >     movprfx\t%0, %2\;<sve_int_op>\t%0.<Vetype>, %1/m, %0.<Vetype>, %3.<Vetype>"
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_div, sve_div_x")]
> >  )
> >
> >  ;; Predicated integer division, merging with the second input.
> > @@ -4391,7 +4517,8 @@
> >    "@
> >     <sve_int_op_rev>\t%0.<Vetype>, %1/m, %0.<Vetype>, %2.<Vetype>
> >     movprfx\t%0, %3\;<sve_int_op_rev>\t%0.<Vetype>, %1/m, %0.<Vetype>, %2.<Vetype>"
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_div, sve_div_x")]
> >  )
> >
> >  ;; Predicated integer division, merging with an independent value.
> > @@ -4421,7 +4548,8 @@
> >  					     operands[4], operands[1]));
> >      operands[4] = operands[2] = operands[0];
> >    }
> > -  [(set_attr "movprfx" "yes")]
> > +  [(set_attr "movprfx" "yes")
> > +   (set_attr "type" "sve_div_x")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -4444,7 +4572,8 @@
> >     <logical>\t%0.<Vetype>, %0.<Vetype>, #%C2
> >     movprfx\t%0, %1\;<logical>\t%0.<Vetype>, %0.<Vetype>, #%C2
> >     <logical>\t%0.d, %1.d, %2.d"
> > -  [(set_attr "movprfx" "*,yes,*")]
> > +  [(set_attr "movprfx" "*,yes,*")
> > +   (set_attr "type" "sve_logic, sve_logic_x, sve_logic")]
> >  )
> >
> >  ;; Merging forms are handled through SVE_INT_BINARY.
> > @@ -4487,6 +4616,7 @@
> >    {
> >      operands[3] = CONSTM1_RTX (<VPRED>mode);
> >    }
> > +  [(set_attr "type" "sve_logic")]
> >  )
> >
> >  ;; Predicated BIC with merging.
> > @@ -4517,7 +4647,8 @@
> >    "@
> >     bic\t%0.<Vetype>, %1/m, %0.<Vetype>, %3.<Vetype>
> >     movprfx\t%0, %2\;bic\t%0.<Vetype>, %1/m, %0.<Vetype>, %3.<Vetype>"
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_logic, sve_logic_x")]
> >  )
> >
> >  ;; Predicated integer BIC, merging with an independent value.
> > @@ -4545,7 +4676,8 @@
> >  					     operands[4], operands[1]));
> >      operands[4] = operands[2] = operands[0];
> >    }
> > -  [(set_attr "movprfx" "yes")]
> > +  [(set_attr "movprfx" "yes")
> > +   (set_attr "type" "sve_logic_x")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -4623,7 +4755,8 @@
> >     && !register_operand (operands[3], <MODE>mode)"
> >    [(set (match_dup 0) (ASHIFT:SVE_I (match_dup 2) (match_dup 3)))]
> >    ""
> > -  [(set_attr "movprfx" "*,*,*,yes")]
> > +  [(set_attr "movprfx" "*,*,*,yes")
> > +   (set_attr "type" "sve_shift, sve_shift, sve_shift, sve_shift_x")]
> >  )
> >
> >  ;; Unpredicated shift operations by a constant (post-RA only).
> > @@ -4636,6 +4769,7 @@
> >  	  (match_operand:SVE_I 2 "aarch64_simd_<lr>shift_imm")))]
> >    "TARGET_SVE && reload_completed"
> >    "<shift>\t%0.<Vetype>, %1.<Vetype>, #%2"
> > +  [(set_attr "type" "sve_shift")]
> >  )
> >
> >  ;; Predicated integer shift, merging with the first input.
> > @@ -4652,7 +4786,8 @@
> >    "@
> >     <shift>\t%0.<Vetype>, %1/m, %0.<Vetype>, #%3
> >     movprfx\t%0, %2\;<shift>\t%0.<Vetype>, %1/m, %0.<Vetype>, #%3"
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_shift, sve_shift_x")]
> >  )
> >
> >  ;; Predicated integer shift, merging with an independent value.
> > @@ -4678,7 +4813,8 @@
> >  					     operands[4], operands[1]));
> >      operands[4] = operands[2] = operands[0];
> >    }
> > -  [(set_attr "movprfx" "yes")]
> > +  [(set_attr "movprfx" "yes")
> > +   (set_attr "type" "sve_shift_x")]
> >  )
> >
> >  ;; Unpredicated shifts of narrow elements by 64-bit amounts.
> > @@ -4690,6 +4826,7 @@
> >  	  SVE_SHIFT_WIDE))]
> >    "TARGET_SVE"
> >    "<sve_int_op>\t%0.<Vetype>, %1.<Vetype>, %2.d"
> > +  [(set_attr "type" "sve_shift")]
> >  )
> >
> >  ;; Merging predicated shifts of narrow elements by 64-bit amounts.
> > @@ -4722,7 +4859,9 @@
> >    "@
> >     <sve_int_op>\t%0.<Vetype>, %1/m, %0.<Vetype>, %3.d
> >     movprfx\t%0, %2\;<sve_int_op>\t%0.<Vetype>, %1/m, %0.<Vetype>, %3.d"
> > -  [(set_attr "movprfx" "*, yes")])
> > +  [(set_attr "movprfx" "*, yes")
> > +   (set_attr "type" "sve_shift, sve_shift_x")]
> > +)
> >
> >  ;; Predicated shifts of narrow elements by 64-bit amounts, merging with zero.
> >  (define_insn "*cond_<sve_int_op><mode>_z"
> > @@ -4739,7 +4878,9 @@
> >    "@
> >     movprfx\t%0.<Vetype>, %1/z, %0.<Vetype>\;<sve_int_op>\t%0.<Vetype>, %1/m, %0.<Vetype>, %3.d
> >     movprfx\t%0.<Vetype>, %1/z, %2.<Vetype>\;<sve_int_op>\t%0.<Vetype>, %1/m, %0.<Vetype>, %3.d"
> > -  [(set_attr "movprfx" "yes")])
> > +  [(set_attr "movprfx" "yes")
> > +   (set_attr "type" "sve_shift_x")]
> > +)
> >
> >  ;; -------------------------------------------------------------------------
> >  ;; ---- [INT] Shifts (rounding towards 0)
> > @@ -4781,7 +4922,9 @@
> >    "@
> >     asrd\t%0.<Vetype>, %1/m, %0.<Vetype>, #%3
> >     movprfx\t%0, %2\;asrd\t%0.<Vetype>, %1/m, %0.<Vetype>, #%3"
> > -  [(set_attr "movprfx" "*,yes")])
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_shift_d, sve_shift_dx")]
> > +)
> >
> >  ;; Predicated shift with merging.
> >  (define_expand "@cond_<sve_int_op><mode>"
> > @@ -4825,7 +4968,9 @@
> >    {
> >      operands[4] = CONSTM1_RTX (<VPRED>mode);
> >    }
> > -  [(set_attr "movprfx" "*,yes")])
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_shift, sve_shift_x")]
> > +)
> >
> >  ;; Predicated shift, merging with an independent value.
> >  (define_insn_and_rewrite "*cond_<sve_int_op><mode>_any"
> > @@ -4854,7 +4999,8 @@
> >  					     operands[4], operands[1]));
> >      operands[4] = operands[2] = operands[0];
> >    }
> > -  [(set_attr "movprfx" "yes")]
> > +  [(set_attr "movprfx" "yes")
> > +   (set_attr "type" "sve_shift_x")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -4876,6 +5022,7 @@
> >  	  SVE_FP_BINARY_INT))]
> >    "TARGET_SVE"
> >    "<sve_fp_op>\t%0.<Vetype>, %1.<Vetype>, %2.<Vetype>"
> > +  [(set_attr "type" "sve_fp_trig")]
> >  )
> >
> >  ;; Predicated floating-point binary operations that take an integer
> > @@ -4892,7 +5039,8 @@
> >    "@
> >     <sve_fp_op>\t%0.<Vetype>, %1/m, %0.<Vetype>, %3.<Vetype>
> >     movprfx\t%0, %2\;<sve_fp_op>\t%0.<Vetype>, %1/m, %0.<Vetype>, %3.<Vetype>"
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_fp_trig, sve_fp_trig_x")]
> >  )
> >
> >  ;; Predicated floating-point binary operations with merging, taking an
> > @@ -4934,7 +5082,8 @@
> >    {
> >      operands[4] = copy_rtx (operands[1]);
> >    }
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_fp_trig, sve_fp_trig_x")]
> >  )
> >
> >  (define_insn "*cond_<optab><mode>_2_strict"
> > @@ -4953,7 +5102,8 @@
> >    "@
> >     <sve_fp_op>\t%0.<Vetype>, %1/m, %0.<Vetype>, %3.<Vetype>
> >     movprfx\t%0, %2\;<sve_fp_op>\t%0.<Vetype>, %1/m, %0.<Vetype>, %3.<Vetype>"
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_fp_trig, sve_fp_trig_x")]
> >  )
> >
> >  ;; Predicated floating-point binary operations that take an integer as
> > @@ -4992,7 +5142,8 @@
> >      else
> >        FAIL;
> >    }
> > -  [(set_attr "movprfx" "yes")]
> > +  [(set_attr "movprfx" "yes")
> > +   (set_attr "type" "sve_fp_trig_x")]
> >  )
> >
> >  (define_insn_and_rewrite "*cond_<optab><mode>_any_strict"
> > @@ -5021,7 +5172,8 @@
> >  					     operands[4], operands[1]));
> >      operands[4] = operands[2] = operands[0];
> >    }
> > -  [(set_attr "movprfx" "yes")]
> > +  [(set_attr "movprfx" "yes")
> > +   (set_attr "type" "sve_fp_trig_x")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -5042,7 +5194,8 @@
> >  	  (match_operand:SVE_FULL_F 1 "register_operand" "w")
> >  	  (match_operand:SVE_FULL_F 2 "register_operand" "w")))]
> >    "TARGET_SVE && reload_completed"
> > -  "<sve_fp_op>\t%0.<Vetype>, %1.<Vetype>, %2.<Vetype>")
> > +  "<sve_fp_op>\t%0.<Vetype>, %1.<Vetype>, %2.<Vetype>"
> > +)
> >
> >  ;; -------------------------------------------------------------------------
> >  ;; ---- [FP] General binary arithmetic corresponding to unspecs
> > @@ -5421,7 +5574,9 @@
> >     && INTVAL (operands[4]) == SVE_RELAXED_GP"
> >    [(set (match_dup 0) (plus:SVE_FULL_F (match_dup 2) (match_dup 3)))]
> >    ""
> > -  [(set_attr "movprfx" "*,*,*,*,yes,yes,yes")]
> > +  [(set_attr "movprfx" "*,*,*,*,yes,yes,yes")
> > +   (set_attr "type" "sve_fp_arith, sve_fp_arith, sve_fp_arith, sve_fp_arith,
> > +                     sve_fp_arith_x, sve_fp_arith_x, sve_fp_arith_x")]
> >  )
> >
> >  ;; Predicated floating-point addition of a constant, merging with the
> > @@ -5448,7 +5603,8 @@
> >    {
> >      operands[4] = copy_rtx (operands[1]);
> >    }
> > -  [(set_attr "movprfx" "*,*,yes,yes")]
> > +  [(set_attr "movprfx" "*,*,yes,yes")
> > > +   (set_attr "type" "sve_fp_arith, sve_fp_arith, sve_fp_arith_x, sve_fp_arith_x")]
> >  )
> >
> >  (define_insn "*cond_add<mode>_2_const_strict"
> > @@ -5469,7 +5625,8 @@
> >     fsub\t%0.<Vetype>, %1/m, %0.<Vetype>, #%N3
> >     movprfx\t%0, %2\;fadd\t%0.<Vetype>, %1/m, %0.<Vetype>, #%3
> >     movprfx\t%0, %2\;fsub\t%0.<Vetype>, %1/m, %0.<Vetype>, #%N3"
> > -  [(set_attr "movprfx" "*,*,yes,yes")]
> > +  [(set_attr "movprfx" "*,*,yes,yes")
> > > +   (set_attr "type" "sve_fp_arith, sve_fp_arith, sve_fp_arith_x, sve_fp_arith_x")]
> >  )
> >
> >  ;; Predicated floating-point addition of a constant, merging with an
> > @@ -5509,7 +5666,8 @@
> >      else
> >        FAIL;
> >    }
> > -  [(set_attr "movprfx" "yes")]
> > +  [(set_attr "movprfx" "yes")
> > +   (set_attr "type" "sve_fp_arith_x")]
> >  )
> >
> >  (define_insn_and_rewrite "*cond_add<mode>_any_const_strict"
> > @@ -5540,7 +5698,8 @@
> >  					     operands[4], operands[1]));
> >      operands[4] = operands[2] = operands[0];
> >    }
> > -  [(set_attr "movprfx" "yes")]
> > +  [(set_attr "movprfx" "yes")
> > +   (set_attr "type" "sve_fp_arith_x")]
> >  )
> >
> >  ;; Register merging forms are handled through SVE_COND_FP_BINARY.
> > @@ -5565,7 +5724,8 @@
> >    "@
> >     fcadd\t%0.<Vetype>, %1/m, %0.<Vetype>, %3.<Vetype>, #<rot>
> >     movprfx\t%0, %2\;fcadd\t%0.<Vetype>, %1/m, %0.<Vetype>, %3.<Vetype>, #<rot>"
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_fp_arith_c, sve_fp_arith_cx")]
> >  )
> >
> >  ;; Predicated FCADD with merging.
> > @@ -5619,7 +5779,8 @@
> >    {
> >      operands[4] = copy_rtx (operands[1]);
> >    }
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_fp_arith_c, sve_fp_arith_cx")]
> >  )
> >
> >  (define_insn "*cond_<optab><mode>_2_strict"
> > @@ -5638,7 +5799,8 @@
> >    "@
> >     fcadd\t%0.<Vetype>, %1/m, %0.<Vetype>, %3.<Vetype>, #<rot>
> >     movprfx\t%0, %2\;fcadd\t%0.<Vetype>, %1/m, %0.<Vetype>, %3.<Vetype>, #<rot>"
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_fp_arith_c, sve_fp_arith_cx")]
> >  )
> >
> >  ;; Predicated FCADD, merging with an independent value.
> > @@ -5675,7 +5837,8 @@
> >      else
> >        FAIL;
> >    }
> > -  [(set_attr "movprfx" "yes")]
> > +  [(set_attr "movprfx" "yes")
> > +   (set_attr "type" "sve_fp_arith_cx")]
> >  )
> >
> >  (define_insn_and_rewrite "*cond_<optab><mode>_any_strict"
> > @@ -5704,7 +5867,8 @@
> >  					     operands[4], operands[1]));
> >      operands[4] = operands[2] = operands[0];
> >    }
> > -  [(set_attr "movprfx" "yes")]
> > +  [(set_attr "movprfx" "yes")
> > +   (set_attr "type" "sve_fp_arith_cx")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -5739,7 +5903,9 @@
> >     && INTVAL (operands[4]) == SVE_RELAXED_GP"
> >    [(set (match_dup 0) (minus:SVE_FULL_F (match_dup 2) (match_dup 3)))]
> >    ""
> > -  [(set_attr "movprfx" "*,*,*,*,yes,yes")]
> > +  [(set_attr "movprfx" "*,*,*,*,yes,yes")
> > +   (set_attr "type" "sve_fp_arith, sve_fp_arith, sve_fp_arith,
> > +                     sve_fp_arith, sve_fp_arith_x, sve_fp_arith_x")]
> >  )
> >
> >  ;; Predicated floating-point subtraction from a constant, merging with the
> > @@ -5764,7 +5930,8 @@
> >    {
> >      operands[4] = copy_rtx (operands[1]);
> >    }
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_fp_arith, sve_fp_arith_x")]
> >  )
> >
> >  (define_insn "*cond_sub<mode>_3_const_strict"
> > @@ -5783,7 +5950,8 @@
> >    "@
> >     fsubr\t%0.<Vetype>, %1/m, %0.<Vetype>, #%2
> >     movprfx\t%0, %3\;fsubr\t%0.<Vetype>, %1/m, %0.<Vetype>, #%2"
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_fp_arith, sve_fp_arith_x")]
> >  )
> >
> >  ;; Predicated floating-point subtraction from a constant, merging with an
> > @@ -5820,7 +5988,8 @@
> >      else
> >        FAIL;
> >    }
> > -  [(set_attr "movprfx" "yes")]
> > +  [(set_attr "movprfx" "yes")
> > +   (set_attr "type" "sve_fp_arith_x")]
> >  )
> >
> >  (define_insn_and_rewrite "*cond_sub<mode>_const_strict"
> > @@ -5848,7 +6017,8 @@
> >                                               operands[4], operands[1]));
> >      operands[4] = operands[3] = operands[0];
> >    }
> > -  [(set_attr "movprfx" "yes")]
> > +  [(set_attr "movprfx" "yes")
> > +   (set_attr "type" "sve_fp_arith_x")]
> >  )
> >  ;; Register merging forms are handled through SVE_COND_FP_BINARY.
> >
> > @@ -5896,7 +6066,8 @@
> >    {
> >      operands[5] = copy_rtx (operands[1]);
> >    }
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_fp_arith, sve_fp_arith_x")]
> >  )
> >
> >  (define_insn "*aarch64_pred_abd<mode>_strict"
> > @@ -5915,7 +6086,8 @@
> >    "@
> >     fabd\t%0.<Vetype>, %1/m, %0.<Vetype>, %3.<Vetype>
> >     movprfx\t%0, %2\;fabd\t%0.<Vetype>, %1/m, %0.<Vetype>, %3.<Vetype>"
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_fp_arith, sve_fp_arith_x")]
> >  )
> >
> >  (define_expand "@aarch64_cond_abd<mode>"
> > @@ -5968,7 +6140,8 @@
> >      operands[4] = copy_rtx (operands[1]);
> >      operands[5] = copy_rtx (operands[1]);
> >    }
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_fp_arith, sve_fp_arith_x")]
> >  )
> >
> >  (define_insn "*aarch64_cond_abd<mode>_2_strict"
> > @@ -5991,7 +6164,8 @@
> >    "@
> >     fabd\t%0.<Vetype>, %1/m, %0.<Vetype>, %3.<Vetype>
> >     movprfx\t%0, %2\;fabd\t%0.<Vetype>, %1/m, %0.<Vetype>, %3.<Vetype>"
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_fp_arith, sve_fp_arith_x")]
> >  )
> >
> >  ;; Predicated floating-point absolute difference, merging with the second
> > @@ -6022,7 +6196,8 @@
> >      operands[4] = copy_rtx (operands[1]);
> >      operands[5] = copy_rtx (operands[1]);
> >    }
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_fp_arith, sve_fp_arith_x")]
> >  )
> >
> >  (define_insn "*aarch64_cond_abd<mode>_3_strict"
> > @@ -6045,7 +6220,8 @@
> >    "@
> >     fabd\t%0.<Vetype>, %1/m, %0.<Vetype>, %2.<Vetype>
> >     movprfx\t%0, %3\;fabd\t%0.<Vetype>, %1/m, %0.<Vetype>, %2.<Vetype>"
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_fp_arith, sve_fp_arith_x")]
> >  )
> >
> >  ;; Predicated floating-point absolute difference, merging with an
> > @@ -6094,7 +6270,8 @@
> >      else
> >        FAIL;
> >    }
> > -  [(set_attr "movprfx" "yes")]
> > +  [(set_attr "movprfx" "yes")
> > +   (set_attr "type" "sve_fp_arith_x")]
> >  )
> >
> >  (define_insn_and_rewrite "*aarch64_cond_abd<mode>_any_strict"
> > @@ -6130,7 +6307,8 @@
> >  					     operands[4], operands[1]));
> >      operands[4] = operands[3] = operands[0];
> >    }
> > -  [(set_attr "movprfx" "yes")]
> > +  [(set_attr "movprfx" "yes")
> > +   (set_attr "type" "sve_fp_arith_x")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -6163,7 +6341,9 @@
> >     && INTVAL (operands[4]) == SVE_RELAXED_GP"
> >    [(set (match_dup 0) (mult:SVE_FULL_F (match_dup 2) (match_dup 3)))]
> >    ""
> > -  [(set_attr "movprfx" "*,*,*,yes,yes")]
> > +  [(set_attr "movprfx" "*,*,*,yes,yes")
> > +   (set_attr "type" "sve_fp_mul, *, sve_fp_mul,
> > +                     sve_fp_mul_x, sve_fp_mul_x")]
> >  )
> >
> >  ;; Merging forms are handled through SVE_COND_FP_BINARY and
> > @@ -6180,6 +6360,7 @@
> >  	  (match_operand:SVE_FULL_F 1 "register_operand" "w")))]
> >    "TARGET_SVE"
> >    "fmul\t%0.<Vetype>, %1.<Vetype>, %2.<Vetype>[%3]"
> > +  [(set_attr "type" "sve_fp_mul")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -6243,6 +6424,7 @@
> >  	  LOGICALF))]
> >    "TARGET_SVE"
> >    "<logicalf_op>\t%0.d, %1.d, %2.d"
> > +  [(set_attr "type" "sve_logic")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -6363,7 +6545,9 @@
> >     <sve_fp_op>\t%0.<Vetype>, %1/m, %0.<Vetype>, %3.<Vetype>
> >     movprfx\t%0, %2\;<sve_fp_op>\t%0.<Vetype>, %1/m, %0.<Vetype>, #%3
> >     movprfx\t%0, %2\;<sve_fp_op>\t%0.<Vetype>, %1/m, %0.<Vetype>, %3.<Vetype>"
> > -  [(set_attr "movprfx" "*,*,yes,yes")]
> > +  [(set_attr "movprfx" "*,*,yes,yes")
> > +   (set_attr "type" "sve_fp_arith, sve_fp_arith,
> > +                     sve_fp_arith_x, sve_fp_arith_x")]
> >  )
> >
> >  ;; Merging forms are handled through SVE_COND_FP_BINARY and
> > @@ -6390,6 +6574,7 @@
> >  		      (match_operand:PRED_ALL 2 "register_operand" "Upa")))]
> >    "TARGET_SVE"
> >    "and\t%0.b, %1/z, %2.b, %2.b"
> > +  [(set_attr "type" "sve_logic_p")]
> >  )
> >
> >  ;; Unpredicated predicate EOR and ORR.
> > @@ -6416,6 +6601,7 @@
> >  	  (match_operand:PRED_ALL 1 "register_operand" "Upa")))]
> >    "TARGET_SVE"
> >    "<logical>\t%0.b, %1/z, %2.b, %3.b"
> > +  [(set_attr "type" "sve_logic_p")]
> >  )
> >
> >  ;; Perform a logical operation on operands 2 and 3, using operand 1 as
> > @@ -6438,6 +6624,7 @@
> >  		      (match_dup 4)))]
> >    "TARGET_SVE"
> >    "<logical>s\t%0.b, %1/z, %2.b, %3.b"
> > +  [(set_attr "type" "sve_logic_ps")]
> >  )
> >
> >  ;; Same with just the flags result.
> > @@ -6456,6 +6643,7 @@
> >     (clobber (match_scratch:VNx16BI 0 "=Upa"))]
> >    "TARGET_SVE"
> >    "<logical>s\t%0.b, %1/z, %2.b, %3.b"
> > +  [(set_attr "type" "sve_logic_ps")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -6476,6 +6664,7 @@
> >  	  (match_operand:PRED_ALL 1 "register_operand" "Upa")))]
> >    "TARGET_SVE"
> >    "<nlogical>\t%0.b, %1/z, %2.b, %3.b"
> > +  [(set_attr "type" "sve_logic_p")]
> >  )
> >
> >  ;; Same, but set the flags as a side-effect.
> > @@ -6499,6 +6688,7 @@
> >  		      (match_dup 4)))]
> >    "TARGET_SVE"
> >    "<nlogical>s\t%0.b, %1/z, %2.b, %3.b"
> > +  [(set_attr "type" "sve_logic_ps")]
> >  )
> >
> >  ;; Same with just the flags result.
> > @@ -6518,6 +6708,7 @@
> >     (clobber (match_scratch:VNx16BI 0 "=Upa"))]
> >    "TARGET_SVE"
> >    "<nlogical>s\t%0.b, %1/z, %2.b, %3.b"
> > +  [(set_attr "type" "sve_logic_ps")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -6538,6 +6729,7 @@
> >  	  (match_operand:PRED_ALL 1 "register_operand" "Upa")))]
> >    "TARGET_SVE"
> >    "<logical_nn>\t%0.b, %1/z, %2.b, %3.b"
> > +  [(set_attr "type" "sve_logic_p")]
> >  )
> >
> >  ;; Same, but set the flags as a side-effect.
> > @@ -6562,6 +6754,7 @@
> >  		      (match_dup 4)))]
> >    "TARGET_SVE"
> >    "<logical_nn>s\t%0.b, %1/z, %2.b, %3.b"
> > +  [(set_attr "type" "sve_logic_ps")]
> >  )
> >
> >  ;; Same with just the flags result.
> > @@ -6582,6 +6775,7 @@
> >     (clobber (match_scratch:VNx16BI 0 "=Upa"))]
> >    "TARGET_SVE"
> >    "<logical_nn>s\t%0.b, %1/z, %2.b, %3.b"
> > +  [(set_attr "type" "sve_logic_ps")]
> >  )
> >
> >  ;; =========================================================================
> > @@ -6631,7 +6825,8 @@
> >     mad\t%0.<Vetype>, %1/m, %3.<Vetype>, %4.<Vetype>
> >     mla\t%0.<Vetype>, %1/m, %2.<Vetype>, %3.<Vetype>
> >     movprfx\t%0, %4\;mla\t%0.<Vetype>, %1/m, %2.<Vetype>, %3.<Vetype>"
> > -  [(set_attr "movprfx" "*,*,yes")]
> > +  [(set_attr "movprfx" "*,*,yes")
> > +   (set_attr "type" "sve_mla, sve_mla, sve_mla_x")]
> >  )
> >
> >  ;; Predicated integer addition of product with merging.
> > @@ -6673,7 +6868,8 @@
> >    "@
> >     mad\t%0.<Vetype>, %1/m, %3.<Vetype>, %4.<Vetype>
> >     movprfx\t%0, %2\;mad\t%0.<Vetype>, %1/m, %3.<Vetype>, %4.<Vetype>"
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_mla, sve_mla_x")]
> >  )
> >
> >  ;; Predicated integer addition of product, merging with the third input.
> > @@ -6692,7 +6888,8 @@
> >    "@
> >     mla\t%0.<Vetype>, %1/m, %2.<Vetype>, %3.<Vetype>
> >     movprfx\t%0, %4\;mla\t%0.<Vetype>, %1/m, %2.<Vetype>, %3.<Vetype>"
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_mla, sve_mla_x")]
> >  )
> >
> >  ;; Predicated integer addition of product, merging with an independent value.
> > @@ -6726,7 +6923,8 @@
> >  					     operands[5], operands[1]));
> >      operands[5] = operands[4] = operands[0];
> >    }
> > -  [(set_attr "movprfx" "yes")]
> > +  [(set_attr "movprfx" "yes")
> > +   (set_attr "type" "sve_mla_x")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -6772,7 +6970,8 @@
> >     msb\t%0.<Vetype>, %1/m, %3.<Vetype>, %4.<Vetype>
> >     mls\t%0.<Vetype>, %1/m, %2.<Vetype>, %3.<Vetype>
> >     movprfx\t%0, %4\;mls\t%0.<Vetype>, %1/m, %2.<Vetype>, %3.<Vetype>"
> > -  [(set_attr "movprfx" "*,*,yes")]
> > +  [(set_attr "movprfx" "*,*,yes")
> > +   (set_attr "type" "sve_mla, sve_mla, sve_mla_x")]
> >  )
> >
> >  ;; Predicated integer subtraction of product with merging.
> > @@ -6814,7 +7013,8 @@
> >    "@
> >     msb\t%0.<Vetype>, %1/m, %3.<Vetype>, %4.<Vetype>
> >     movprfx\t%0, %2\;msb\t%0.<Vetype>, %1/m, %3.<Vetype>, %4.<Vetype>"
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_mla, sve_mla_x")]
> >  )
> >
> >  ;; Predicated integer subtraction of product, merging with the third input.
> > @@ -6833,7 +7033,8 @@
> >    "@
> >     mls\t%0.<Vetype>, %1/m, %2.<Vetype>, %3.<Vetype>
> >     movprfx\t%0, %4\;mls\t%0.<Vetype>, %1/m, %2.<Vetype>, %3.<Vetype>"
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_mla, sve_mla_x")]
> >  )
> >
> >  ;; Predicated integer subtraction of product, merging with an
> > @@ -6868,7 +7069,8 @@
> >  					     operands[5], operands[1]));
> >      operands[5] = operands[4] = operands[0];
> >    }
> > -  [(set_attr "movprfx" "yes")]
> > +  [(set_attr "movprfx" "yes")
> > +   (set_attr "type" "sve_mla_x")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -6894,7 +7096,8 @@
> >    "@
> >     <sur>dot\\t%0.<Vetype>, %1.<Vetype_fourth>, %2.<Vetype_fourth>
> >     movprfx\t%0, %3\;<sur>dot\\t%0.<Vetype>, %1.<Vetype_fourth>, %2.<Vetype_fourth>"
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_dot, sve_dot_x")]
> >  )
> >
> >  ;; Four-element integer dot-product by selected lanes with accumulation.
> > @@ -6913,7 +7116,8 @@
> >    "@
> >     <sur>dot\\t%0.<Vetype>, %1.<Vetype_fourth>, %2.<Vetype_fourth>[%3]
> >     movprfx\t%0, %4\;<sur>dot\\t%0.<Vetype>, %1.<Vetype_fourth>, %2.<Vetype_fourth>[%3]"
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_dot, sve_dot_x")]
> >  )
> >
> >  (define_insn "@<sur>dot_prod<vsi2qi>"
> > @@ -6928,7 +7132,8 @@
> >    "@
> >     <sur>dot\\t%0.s, %1.b, %2.b
> >     movprfx\t%0, %3\;<sur>dot\\t%0.s, %1.b, %2.b"
> > -   [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_dot, sve_dot_x")]
> >  )
> >
> >  (define_insn "@aarch64_<sur>dot_prod_lane<vsi2qi>"
> > @@ -6946,7 +7151,8 @@
> >    "@
> >     <sur>dot\\t%0.s, %1.b, %2.b[%3]
> >     movprfx\t%0, %4\;<sur>dot\\t%0.s, %1.b, %2.b[%3]"
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_dot, sve_dot_x")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -7000,7 +7206,8 @@
> >    "@
> >     <sur>mmla\\t%0.s, %2.b, %3.b
> >     movprfx\t%0, %1\;<sur>mmla\\t%0.s, %2.b, %3.b"
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_mmla, sve_mmla_x")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -7048,7 +7255,8 @@
> >     <sve_fmla_op>\t%0.<Vetype>, %1/m, %2.<Vetype>, %3.<Vetype>
> >     <sve_fmad_op>\t%0.<Vetype>, %1/m, %3.<Vetype>, %4.<Vetype>
> >     movprfx\t%0, %4\;<sve_fmla_op>\t%0.<Vetype>, %1/m, %2.<Vetype>, %3.<Vetype>"
> > -  [(set_attr "movprfx" "*,*,yes")]
> > +  [(set_attr "movprfx" "*,*,yes")
> > +   (set_attr "type" "sve_fp_mla, sve_fp_mla, sve_fp_mla_x")]
> >  )
> >
> >  ;; Predicated floating-point ternary operations with merging.
> > @@ -7096,7 +7304,8 @@
> >    {
> >      operands[5] = copy_rtx (operands[1]);
> >    }
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_fp_mla, sve_fp_mla_x")]
> >  )
> >
> >  (define_insn "*cond_<optab><mode>_2_strict"
> > @@ -7116,7 +7325,8 @@
> >    "@
> >     <sve_fmad_op>\t%0.<Vetype>, %1/m, %3.<Vetype>, %4.<Vetype>
> >     movprfx\t%0, %2\;<sve_fmad_op>\t%0.<Vetype>, %1/m, %3.<Vetype>, %4.<Vetype>"
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_fp_mla, sve_fp_mla_x")]
> >  )
> >
> >  ;; Predicated floating-point ternary operations, merging with the
> > @@ -7142,7 +7352,8 @@
> >    {
> >      operands[5] = copy_rtx (operands[1]);
> >    }
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_fp_mla, sve_fp_mla_x")]
> >  )
> >
> >  (define_insn "*cond_<optab><mode>_4_strict"
> > @@ -7162,7 +7373,8 @@
> >    "@
> >     <sve_fmla_op>\t%0.<Vetype>, %1/m, %2.<Vetype>, %3.<Vetype>
> >     movprfx\t%0, %4\;<sve_fmla_op>\t%0.<Vetype>, %1/m, %2.<Vetype>, %3.<Vetype>"
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_fp_mla, sve_fp_mla_x")]
> >  )
> >
> >  ;; Predicated floating-point ternary operations, merging with an
> > @@ -7206,7 +7418,8 @@
> >      else
> >        FAIL;
> >    }
> > -  [(set_attr "movprfx" "yes")]
> > +  [(set_attr "movprfx" "yes")
> > +   (set_attr "type" "sve_fp_mla_x")]
> >  )
> >
> >  (define_insn_and_rewrite "*cond_<optab><mode>_any_strict"
> > @@ -7241,7 +7454,8 @@
> >  					     operands[5], operands[1]));
> >      operands[5] = operands[4] = operands[0];
> >    }
> > -  [(set_attr "movprfx" "yes")]
> > +  [(set_attr "movprfx" "yes")
> > +   (set_attr "type" "sve_fp_mla_x")]
> >  )
> >
> >  ;; Unpredicated FMLA and FMLS by selected lanes.  It doesn't seem worth using
> > @@ -7260,7 +7474,8 @@
> >    "@
> >     <sve_fp_op>\t%0.<Vetype>, %1.<Vetype>, %2.<Vetype>[%3]
> >     movprfx\t%0, %4\;<sve_fp_op>\t%0.<Vetype>, %1.<Vetype>, %2.<Vetype>[%3]"
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_fp_mla, sve_fp_mla_x")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -7284,7 +7499,8 @@
> >    "@
> >     fcmla\t%0.<Vetype>, %1/m, %2.<Vetype>, %3.<Vetype>, #<rot>
> >     movprfx\t%0, %4\;fcmla\t%0.<Vetype>, %1/m, %2.<Vetype>, %3.<Vetype>, #<rot>"
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_fp_mla_c, sve_fp_mla_cx")]
> >  )
> >
> >  ;; unpredicated optab pattern for auto-vectorizer
> > @@ -7382,7 +7598,8 @@
> >    {
> >      operands[5] = copy_rtx (operands[1]);
> >    }
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_fp_mla_c, sve_fp_mla_cx")]
> >  )
> >
> >  (define_insn "*cond_<optab><mode>_4_strict"
> > @@ -7402,7 +7619,8 @@
> >    "@
> >     fcmla\t%0.<Vetype>, %1/m, %2.<Vetype>, %3.<Vetype>, #<rot>
> >     movprfx\t%0, %4\;fcmla\t%0.<Vetype>, %1/m, %2.<Vetype>, %3.<Vetype>, #<rot>"
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_fp_mla_c, sve_fp_mla_cx")]
> >  )
> >
> >  ;; Predicated FCMLA, merging with an independent value.
> > @@ -7440,7 +7658,8 @@
> >      else
> >        FAIL;
> >    }
> > -  [(set_attr "movprfx" "yes")]
> > +  [(set_attr "movprfx" "yes")
> > +   (set_attr "type" "sve_fp_mla_cx")]
> >  )
> >
> >  (define_insn_and_rewrite "*cond_<optab><mode>_any_strict"
> > @@ -7470,7 +7689,8 @@
> >  					     operands[5], operands[1]));
> >      operands[5] = operands[4] = operands[0];
> >    }
> > -  [(set_attr "movprfx" "yes")]
> > +  [(set_attr "movprfx" "yes")
> > +   (set_attr "type" "sve_fp_mla_cx")]
> >  )
> >
> >  ;; Unpredicated FCMLA with indexing.
> > @@ -7488,7 +7708,8 @@
> >    "@
> >     fcmla\t%0.<Vetype>, %1.<Vetype>, %2.<Vetype>[%3], #<rot>
> >     movprfx\t%0, %4\;fcmla\t%0.<Vetype>, %1.<Vetype>, %2.<Vetype>[%3], #<rot>"
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_fp_mla_c, sve_fp_mla_cx")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -7509,7 +7730,8 @@
> >    "@
> >     ftmad\t%0.<Vetype>, %0.<Vetype>, %2.<Vetype>, #%3
> >     movprfx\t%0, %1\;ftmad\t%0.<Vetype>, %0.<Vetype>, %2.<Vetype>, #%3"
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_fp_trig, sve_fp_trig_x")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -7571,7 +7793,8 @@
> >    "@
> >     <sve_fp_op>\\t%0.<Vetype>, %2.<Vetype>, %3.<Vetype>
> >     movprfx\t%0, %1\;<sve_fp_op>\\t%0.<Vetype>, %2.<Vetype>, %3.<Vetype>"
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_fp_mmla, sve_fp_mmla_x")]
> >  )
> >
> >  ;; =========================================================================
> > @@ -7639,7 +7862,9 @@
> >     movprfx\t%0.<Vetype>, %3/z, %0.<Vetype>\;fmov\t%0.<Vetype>, %3/m, #%1
> >     movprfx\t%0, %2\;mov\t%0.<Vetype>, %3/m, #%I1
> >     movprfx\t%0, %2\;fmov\t%0.<Vetype>, %3/m, #%1"
> > -  [(set_attr "movprfx" "*,*,*,*,yes,yes,yes")]
> > +  [(set_attr "movprfx" "*,*,*,*,yes,yes,yes")
> > +   (set_attr "type" "sve_move, sve_move, sve_move, sve_fp_move,
> > +                     sve_fp_move_x, sve_move_x, sve_move_x")]
> >  )
> >
> >  ;; Optimize selects between a duplicated scalar variable and another vector,
> > @@ -7662,7 +7887,9 @@
> >     movprfx\t%0.<Vetype>, %3/z, %0.<Vetype>\;mov\t%0.<Vetype>, %3/m, %<Vetype>1
> >     movprfx\t%0, %2\;mov\t%0.<Vetype>, %3/m, %<vwcore>1
> >     movprfx\t%0, %2\;mov\t%0.<Vetype>, %3/m, %<Vetype>1"
> > -  [(set_attr "movprfx" "*,*,yes,yes,yes,yes")]
> > +  [(set_attr "movprfx" "*,*,yes,yes,yes,yes")
> > +   (set_attr "type" "sve_move, sve_move,
> > +                     sve_move_x, sve_move_x, sve_move_x, sve_move_x")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -7813,6 +8040,7 @@
> >    "@
> >     cmp<cmp_op>\t%0.<Vetype>, %1/z, %3.<Vetype>, #%4
> >     cmp<cmp_op>\t%0.<Vetype>, %1/z, %3.<Vetype>, %4.<Vetype>"
> > +  [(set_attr "type" "sve_compare_s, sve_compare_s")]
> >  )
> >
> >  ;; Predicated integer comparisons in which both the flag and predicate
> > @@ -7849,6 +8077,7 @@
> >      operands[6] = copy_rtx (operands[4]);
> >      operands[7] = operands[5];
> >    }
> > +  [(set_attr "type" "sve_compare_s, sve_compare_s")]
> >  )
> >
> >  ;; Predicated integer comparisons in which only the flags result is
> > @@ -7878,6 +8107,7 @@
> >      operands[6] = copy_rtx (operands[4]);
> >      operands[7] = operands[5];
> >    }
> > +  [(set_attr "type" "sve_compare_s, sve_compare_s")]
> >  )
> >
> >  ;; Predicated integer comparisons, formed by combining a PTRUE-predicated
> > @@ -7925,6 +8155,7 @@
> >     (clobber (reg:CC_NZC CC_REGNUM))]
> >    "TARGET_SVE"
> >    "cmp<cmp_op>\t%0.<Vetype>, %1/z, %3.<Vetype>, %4.d"
> > +  [(set_attr "type" "sve_compare_s")]
> >  )
> >
> >  ;; Predicated integer wide comparisons in which both the flag and
> > @@ -7956,6 +8187,7 @@
> >    "TARGET_SVE
> >     && aarch64_sve_same_pred_for_ptest_p (&operands[4], &operands[6])"
> >    "cmp<cmp_op>\t%0.<Vetype>, %1/z, %2.<Vetype>, %3.d"
> > +  [(set_attr "type" "sve_compare_s")]
> >  )
> >
> >  ;; Predicated integer wide comparisons in which only the flags result
> > @@ -7979,6 +8211,7 @@
> >    "TARGET_SVE
> >     && aarch64_sve_same_pred_for_ptest_p (&operands[4], &operands[6])"
> >    "cmp<cmp_op>\t%0.<Vetype>, %1/z, %2.<Vetype>, %3.d"
> > +  [(set_attr "type" "sve_compare_s")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -8007,6 +8240,7 @@
> >     (clobber (reg:CC_NZC CC_REGNUM))]
> >    "TARGET_SVE"
> >    "while<cmp_op>\t%0.<PRED_ALL:Vetype>, %<w>1, %<w>2"
> > +  [(set_attr "type" "sve_loop_gs")]
> >  )
> >
> >  ;; The WHILE instructions set the flags in the same way as a PTEST with
> > @@ -8036,6 +8270,7 @@
> >      operands[3] = CONSTM1_RTX (VNx16BImode);
> >      operands[4] = CONSTM1_RTX (<PRED_ALL:MODE>mode);
> >    }
> > +  [(set_attr "type" "sve_loop_gs")]
> >  )
> >
> >  ;; Same, but handle the case in which only the flags result is useful.
> > @@ -8060,6 +8295,7 @@
> >      operands[3] = CONSTM1_RTX (VNx16BImode);
> >      operands[4] = CONSTM1_RTX (<PRED_ALL:MODE>mode);
> >    }
> > +  [(set_attr "type" "sve_loop_gs")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -8104,6 +8340,7 @@
> >    "@
> >     fcm<cmp_op>\t%0.<Vetype>, %1/z, %3.<Vetype>, #0.0
> >     fcm<cmp_op>\t%0.<Vetype>, %1/z, %3.<Vetype>, %4.<Vetype>"
> > +  [(set_attr "type" "sve_fp_compare, sve_fp_compare")]
> >  )
> >
> >  ;; Same for unordered comparisons.
> > @@ -8117,6 +8354,7 @@
> >  	  UNSPEC_COND_FCMUO))]
> >    "TARGET_SVE"
> >    "fcmuo\t%0.<Vetype>, %1/z, %3.<Vetype>, %4.<Vetype>"
> > +  [(set_attr "type" "sve_fp_compare")]
> >  )
> >
> >  ;; Floating-point comparisons predicated on a PTRUE, with the results ANDed
> > @@ -8204,10 +8442,10 @@
> >  	  (not:<VPRED>
> >  	    (match_dup 5))
> >  	  (match_dup 4)))]
> > -{
> > -  if (can_create_pseudo_p ())
> > -    operands[5] = gen_reg_rtx (<VPRED>mode);
> > -}
> > +  {
> > +    if (can_create_pseudo_p ())
> > +      operands[5] = gen_reg_rtx (<VPRED>mode);
> > +  }
> >  )
> >
> >  ;; Make sure that we expand to a nor when the operand 4 of
> > @@ -8245,10 +8483,10 @@
> >  	    (not:<VPRED>
> >  	      (match_dup 4)))
> >  	  (match_dup 1)))]
> > -{
> > -  if (can_create_pseudo_p ())
> > -    operands[5] = gen_reg_rtx (<VPRED>mode);
> > -}
> > +  {
> > +    if (can_create_pseudo_p ())
> > +      operands[5] = gen_reg_rtx (<VPRED>mode);
> > +  }
> >  )
> >
> >  (define_insn_and_split "*fcmuo<mode>_bic_combine"
> > @@ -8280,10 +8518,10 @@
> >  	  (not:<VPRED>
> >  	    (match_dup 5))
> >  	  (match_dup 4)))]
> > -{
> > -  if (can_create_pseudo_p ())
> > -    operands[5] = gen_reg_rtx (<VPRED>mode);
> > -}
> > +  {
> > +    if (can_create_pseudo_p ())
> > +      operands[5] = gen_reg_rtx (<VPRED>mode);
> > +  }
> >  )
> >
> >  ;; Same for unordered comparisons.
> > @@ -8320,10 +8558,10 @@
> >  	    (not:<VPRED>
> >  	      (match_dup 4)))
> >  	  (match_dup 1)))]
> > -{
> > -  if (can_create_pseudo_p ())
> > -    operands[5] = gen_reg_rtx (<VPRED>mode);
> > -}
> > +  {
> > +    if (can_create_pseudo_p ())
> > +      operands[5] = gen_reg_rtx (<VPRED>mode);
> > +  }
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -8380,6 +8618,7 @@
> >      operands[5] = copy_rtx (operands[1]);
> >      operands[6] = copy_rtx (operands[1]);
> >    }
> > +  [(set_attr "type" "sve_fp_compare")]
> >  )
> >
> >  (define_insn "*aarch64_pred_fac<cmp_op><mode>_strict"
> > @@ -8400,6 +8639,7 @@
> >  	  SVE_COND_FP_ABS_CMP))]
> >    "TARGET_SVE"
> >    "fac<cmp_op>\t%0.<Vetype>, %1/z, %2.<Vetype>, %3.<Vetype>"
> > +  [(set_attr "type" "sve_fp_compare")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -8420,6 +8660,7 @@
> >  	    (match_operand:PRED_ALL 2 "register_operand" "Upa"))))]
> >    "TARGET_SVE"
> >    "sel\t%0.b, %3, %1.b, %2.b"
> > +  [(set_attr "type" "sve_sel_p")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -8468,6 +8709,7 @@
> >  		       UNSPEC_PTEST))]
> >    "TARGET_SVE"
> >    "ptest\t%0, %3.b"
> > +  [(set_attr "type" "sve_set_ps")]
> >  )
> >
> >  ;; =========================================================================
> > @@ -8495,6 +8737,7 @@
> >    "@
> >     clast<ab>\t%<vwcore>0, %2, %<vwcore>0, %3.<Vetype>
> >     clast<ab>\t%<Vetype>0, %2, %<Vetype>0, %3.<Vetype>"
> > +  [(set_attr "type" "sve_cext, sve_cext")]
> >  )
> >
> >  (define_insn "@aarch64_fold_extract_vector_<last_op>_<mode>"
> > @@ -8508,6 +8751,8 @@
> >    "@
> >     clast<ab>\t%0.<Vetype>, %2, %0.<Vetype>, %3.<Vetype>
> >     movprfx\t%0, %1\;clast<ab>\t%0.<Vetype>, %2, %0.<Vetype>, %3.<Vetype>"
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_cext, sve_cext_x")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -8548,6 +8793,7 @@
> >  		   SVE_INT_ADDV))]
> >    "TARGET_SVE && <max_elem_bits> >= <elem_bits>"
> >    "<su>addv\t%d0, %1, %2.<Vetype>"
> > +  [(set_attr "type" "sve_arith_r")]
> >  )
> >
> >  ;; Unpredicated integer reductions.
> > @@ -8570,6 +8816,7 @@
> >  		      SVE_INT_REDUCTION))]
> >    "TARGET_SVE"
> >    "<sve_int_op>\t%<Vetype>0, %1, %2.<Vetype>"
> > +  [(set_attr "type" "sve_arith_r")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -8614,6 +8861,7 @@
> >  		      SVE_FP_REDUCTION))]
> >    "TARGET_SVE"
> >    "<sve_fp_op>\t%<Vetype>0, %1, %2.<Vetype>"
> > +  [(set_attr "type" "sve_fp_arith_r")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -8645,6 +8893,7 @@
> >  		      UNSPEC_FADDA))]
> >    "TARGET_SVE"
> >    "fadda\t%<Vetype>0, %3, %<Vetype>0, %2.<Vetype>"
> > +  [(set_attr "type" "sve_fp_arith_a")]
> >  )
> >
> >  ;; =========================================================================
> > @@ -8679,6 +8928,7 @@
> >  	  UNSPEC_TBL))]
> >    "TARGET_SVE"
> >    "tbl\t%0.<Vetype>, %1.<Vetype>, %2.<Vetype>"
> > +  [(set_attr "type" "sve_tbl")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -8699,6 +8949,7 @@
> >  	  UNSPEC_SVE_COMPACT))]
> >    "TARGET_SVE"
> >    "compact\t%0.<Vetype>, %1, %2.<Vetype>"
> > +  [(set_attr "type" "sve_cext")]
> >  )
> >
> >  ;; Duplicate one element of a vector.
> > @@ -8711,6 +8962,7 @@
> >    "TARGET_SVE
> >     && IN_RANGE (INTVAL (operands[2]) * <container_bits> / 8, 0, 63)"
> >    "dup\t%0.<Vctype>, %1.<Vctype>[%2]"
> > +  [(set_attr "type" "sve_splat")]
> >  )
> >
> >  ;; Use DUP.Q to duplicate a 128-bit segment of a register.
> > @@ -8747,6 +8999,7 @@
> >      operands[2] = gen_int_mode (byte / 16, DImode);
> >      return "dup\t%0.q, %1.q[%2]";
> >    }
> > +  [(set_attr "type" "sve_splat")]
> >  )
> >
> >  ;; Reverse the order of elements within a full vector.
> > @@ -8756,7 +9009,9 @@
> >  	  [(match_operand:SVE_ALL 1 "register_operand" "w")]
> >  	  UNSPEC_REV))]
> >    "TARGET_SVE"
> > -  "rev\t%0.<Vctype>, %1.<Vctype>")
> > +  "rev\t%0.<Vctype>, %1.<Vctype>"
> > +  [(set_attr "type" "sve_rev")]
> > +)
> >
> >  ;; -------------------------------------------------------------------------
> >  ;; ---- [INT,FP] Special-purpose binary permutes
> > @@ -8784,7 +9039,8 @@
> >    "@
> >     splice\t%0.<Vetype>, %1, %0.<Vetype>, %3.<Vetype>
> >     movprfx\t%0, %2\;splice\t%0.<Vetype>, %1, %0.<Vetype>, %3.<Vetype>"
> > -  [(set_attr "movprfx" "*, yes")]
> > +  [(set_attr "movprfx" "*, yes")
> > +   (set_attr "type" "sve_cext, sve_cext_x")]
> >  )
> >
> >  ;; Permutes that take half the elements from one vector and half the
> > @@ -8797,6 +9053,7 @@
> >  	  PERMUTE))]
> >    "TARGET_SVE"
> >    "<perm_insn>\t%0.<Vctype>, %1.<Vctype>, %2.<Vctype>"
> > +  [(set_attr "type" "sve_ext")]
> >  )
> >
> >  ;; Apply PERMUTE to 128-bit sequences.  The behavior of these patterns
> > @@ -8809,6 +9066,7 @@
> >  	  PERMUTEQ))]
> >    "TARGET_SVE_F64MM"
> >    "<perm_insn>\t%0.q, %1.q, %2.q"
> > +  [(set_attr "type" "sve_ext")]
> >  )
> >
> >  ;; Concatenate two vectors and extract a subvector.  Note that the
> > @@ -8828,7 +9086,8 @@
> >  	    ? "ext\\t%0.b, %0.b, %2.b, #%3"
> >  	    : "movprfx\t%0, %1\;ext\\t%0.b, %0.b, %2.b, #%3");
> >    }
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_ext, sve_ext_x")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -8843,7 +9102,9 @@
> >  	(unspec:PRED_ALL [(match_operand:PRED_ALL 1 "register_operand" "Upa")]
> >  			 UNSPEC_REV))]
> >    "TARGET_SVE"
> > -  "rev\t%0.<Vetype>, %1.<Vetype>")
> > +  "rev\t%0.<Vetype>, %1.<Vetype>"
> > +  [(set_attr "type" "sve_rev_p")]
> > +)
> >
> >  ;; -------------------------------------------------------------------------
> >  ;; ---- [PRED] Special-purpose binary permutes
> > @@ -8866,6 +9127,7 @@
> >  			 PERMUTE))]
> >    "TARGET_SVE"
> >    "<perm_insn>\t%0.<Vetype>, %1.<Vetype>, %2.<Vetype>"
> > +  [(set_attr "type" "sve_trn_p")]
> >  )
> >
> >  ;; Special purpose permute used by the predicate generation instructions.
> > @@ -8880,6 +9142,7 @@
> >  			UNSPEC_TRN1_CONV))]
> >    "TARGET_SVE"
> >    "trn1\t%0.<PRED_ALL:Vetype>, %1.<PRED_ALL:Vetype>, %2.<PRED_ALL:Vetype>"
> > +  [(set_attr "type" "sve_trn_p")]
> >  )
> >
> >  ;; =========================================================================
> > @@ -8903,6 +9166,7 @@
> >  	  UNSPEC_PACK))]
> >    "TARGET_SVE"
> >    "uzp1\t%0.<Vetype>, %1.<Vetype>, %2.<Vetype>"
> > +  [(set_attr "type" "sve_zip")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -8939,6 +9203,7 @@
> >  	  UNPACK))]
> >    "TARGET_SVE"
> >    "<su>unpk<perm_hilo>\t%0.<Vewtype>, %1.<Vetype>"
> > +  [(set_attr "type" "sve_upk")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -8976,7 +9241,8 @@
> >    "@
> >     fcvtz<su>\t%0.<SVE_FULL_HSDI:Vetype>, %1/m, %2.<SVE_FULL_F:Vetype>
> >     movprfx\t%0, %2\;fcvtz<su>\t%0.<SVE_FULL_HSDI:Vetype>, %1/m, %2.<SVE_FULL_F:Vetype>"
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_fp_to_int, sve_fp_to_int_x")]
> >  )
> >
> >  ;; Predicated narrowing float-to-integer conversion.
> > @@ -8991,7 +9257,8 @@
> >    "@
> >     fcvtz<su>\t%0.<VNx4SI_ONLY:Vetype>, %1/m, %2.<VNx2DF_ONLY:Vetype>
> >     movprfx\t%0, %2\;fcvtz<su>\t%0.<VNx4SI_ONLY:Vetype>, %1/m, %2.<VNx2DF_ONLY:Vetype>"
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_fp_to_int, sve_fp_to_int_x")]
> >  )
> >
> >  ;; Predicated float-to-integer conversion with merging, either to the same
> > @@ -9035,7 +9302,8 @@
> >    {
> >      operands[4] = copy_rtx (operands[1]);
> >    }
> > -  [(set_attr "movprfx" "*,yes,yes")]
> > +  [(set_attr "movprfx" "*,yes,yes")
> > +   (set_attr "type" "sve_fp_to_int, sve_fp_to_int_x, sve_fp_to_int_x")]
> >  )
> >
> >  (define_insn "*cond_<optab>_nontrunc<SVE_FULL_F:mode><SVE_FULL_HSDI:mode>_strict"
> > @@ -9054,7 +9322,8 @@
> >     fcvtz<su>\t%0.<SVE_FULL_HSDI:Vetype>, %1/m, %2.<SVE_FULL_F:Vetype>
> >     movprfx\t%0.<SVE_FULL_HSDI:Vetype>, %1/z, %2.<SVE_FULL_HSDI:Vetype>\;fcvtz<su>\t%0.<SVE_FULL_HSDI:Vetype>, %1/m, %2.<SVE_FULL_F:Vetype>
> >     movprfx\t%0, %3\;fcvtz<su>\t%0.<SVE_FULL_HSDI:Vetype>, %1/m, %2.<SVE_FULL_F:Vetype>"
> > -  [(set_attr "movprfx" "*,yes,yes")]
> > +  [(set_attr "movprfx" "*,yes,yes")
> > +   (set_attr "type" "sve_fp_to_int, sve_fp_to_int_x, sve_fp_to_int_x")]
> >  )
> >
> >  ;; Predicated narrowing float-to-integer conversion with merging.
> > @@ -9088,7 +9357,8 @@
> >     fcvtz<su>\t%0.<VNx4SI_ONLY:Vetype>, %1/m, %2.<VNx2DF_ONLY:Vetype>
> >     movprfx\t%0.<VNx2DF_ONLY:Vetype>, %1/z, %2.<VNx2DF_ONLY:Vetype>\;fcvtz<su>\t%0.<VNx4SI_ONLY:Vetype>, %1/m, %2.<VNx2DF_ONLY:Vetype>
> >     movprfx\t%0, %3\;fcvtz<su>\t%0.<VNx4SI_ONLY:Vetype>, %1/m, %2.<VNx2DF_ONLY:Vetype>"
> > -  [(set_attr "movprfx" "*,yes,yes")]
> > +  [(set_attr "movprfx" "*,yes,yes")
> > +   (set_attr "type" "sve_fp_to_int, sve_fp_to_int_x, sve_fp_to_int_x")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -9163,7 +9433,8 @@
> >    "@
> >     <su>cvtf\t%0.<SVE_FULL_F:Vetype>, %1/m, %2.<SVE_FULL_HSDI:Vetype>
> >     movprfx\t%0, %2\;<su>cvtf\t%0.<SVE_FULL_F:Vetype>, %1/m, %2.<SVE_FULL_HSDI:Vetype>"
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_int_to_fp, sve_int_to_fp_x")]
> >  )
> >
> >  ;; Predicated widening integer-to-float conversion.
> > @@ -9178,7 +9449,8 @@
> >    "@
> >     <su>cvtf\t%0.<VNx2DF_ONLY:Vetype>, %1/m, %2.<VNx4SI_ONLY:Vetype>
> >     movprfx\t%0, %2\;<su>cvtf\t%0.<VNx2DF_ONLY:Vetype>, %1/m, %2.<VNx4SI_ONLY:Vetype>"
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_int_to_fp, sve_int_to_fp_x")]
> >  )
> >
> >  ;; Predicated integer-to-float conversion with merging, either to the same
> > @@ -9222,7 +9494,8 @@
> >    {
> >      operands[4] = copy_rtx (operands[1]);
> >    }
> > -  [(set_attr "movprfx" "*,yes,yes")]
> > +  [(set_attr "movprfx" "*,yes,yes")
> > +   (set_attr "type" "sve_int_to_fp, sve_int_to_fp_x, sve_int_to_fp_x")]
> >  )
> >
> >  (define_insn "*cond_<optab>_nonextend<SVE_FULL_HSDI:mode><SVE_FULL_F:mode>_strict"
> > @@ -9241,7 +9514,8 @@
> >     <su>cvtf\t%0.<SVE_FULL_F:Vetype>, %1/m, %2.<SVE_FULL_HSDI:Vetype>
> >     movprfx\t%0.<SVE_FULL_HSDI:Vetype>, %1/z, %2.<SVE_FULL_HSDI:Vetype>\;<su>cvtf\t%0.<SVE_FULL_F:Vetype>, %1/m, %2.<SVE_FULL_HSDI:Vetype>
> >     movprfx\t%0, %3\;<su>cvtf\t%0.<SVE_FULL_F:Vetype>, %1/m, %2.<SVE_FULL_HSDI:Vetype>"
> > -  [(set_attr "movprfx" "*,yes,yes")]
> > +  [(set_attr "movprfx" "*,yes,yes")
> > +   (set_attr "type" "sve_int_to_fp, sve_int_to_fp_x, sve_int_to_fp_x")]
> >  )
> >
> >  ;; Predicated widening integer-to-float conversion with merging.
> > @@ -9275,7 +9549,8 @@
> >     <su>cvtf\t%0.<VNx2DF_ONLY:Vetype>, %1/m, %2.<VNx4SI_ONLY:Vetype>
> >     movprfx\t%0.<VNx2DF_ONLY:Vetype>, %1/z, %2.<VNx2DF_ONLY:Vetype>\;<su>cvtf\t%0.<VNx2DF_ONLY:Vetype>, %1/m, %2.<VNx4SI_ONLY:Vetype>
> >     movprfx\t%0, %3\;<su>cvtf\t%0.<VNx2DF_ONLY:Vetype>, %1/m, %2.<VNx4SI_ONLY:Vetype>"
> > -  [(set_attr "movprfx" "*,yes,yes")]
> > +  [(set_attr "movprfx" "*,yes,yes")
> > +   (set_attr "type" "sve_int_to_fp, sve_int_to_fp_x, sve_int_to_fp_x")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -9361,7 +9636,8 @@
> >    "@
> >     fcvt\t%0.<SVE_FULL_HSF:Vetype>, %1/m, %2.<SVE_FULL_SDF:Vetype>
> >     movprfx\t%0, %2\;fcvt\t%0.<SVE_FULL_HSF:Vetype>, %1/m, %2.<SVE_FULL_SDF:Vetype>"
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_fp_to_fp, sve_fp_to_fp_x")]
> >  )
> >
> >  ;; Predicated float-to-float truncation with merging.
> > @@ -9395,7 +9671,8 @@
> >     fcvt\t%0.<SVE_FULL_HSF:Vetype>, %1/m, %2.<SVE_FULL_SDF:Vetype>
> >     movprfx\t%0.<SVE_FULL_SDF:Vetype>, %1/z, %2.<SVE_FULL_SDF:Vetype>\;fcvt\t%0.<SVE_FULL_HSF:Vetype>, %1/m, %2.<SVE_FULL_SDF:Vetype>
> >     movprfx\t%0, %3\;fcvt\t%0.<SVE_FULL_HSF:Vetype>, %1/m, %2.<SVE_FULL_SDF:Vetype>"
> > -  [(set_attr "movprfx" "*,yes,yes")]
> > +  [(set_attr "movprfx" "*,yes,yes")
> > +   (set_attr "type" "sve_fp_to_fp, sve_fp_to_fp_x, sve_fp_to_fp_x")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -9418,7 +9695,8 @@
> >    "@
> >     bfcvt\t%0.h, %1/m, %2.s
> >     movprfx\t%0, %2\;bfcvt\t%0.h, %1/m, %2.s"
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_fp_to_fp, sve_fp_to_fp_x")]
> >  )
> >
> >  ;; Predicated BFCVT with merging.
> > @@ -9452,7 +9730,8 @@
> >     bfcvt\t%0.h, %1/m, %2.s
> >     movprfx\t%0.s, %1/z, %2.s\;bfcvt\t%0.h, %1/m, %2.s
> >     movprfx\t%0, %3\;bfcvt\t%0.h, %1/m, %2.s"
> > -  [(set_attr "movprfx" "*,yes,yes")]
> > +  [(set_attr "movprfx" "*,yes,yes")
> > +   (set_attr "type" "sve_fp_to_fp, sve_fp_to_fp_x, sve_fp_to_fp_x")]
> >  )
> >
> >  ;; Predicated BFCVTNT.  This doesn't give a natural aarch64_pred_*/cond_*
> > @@ -9470,6 +9749,7 @@
> >  	  UNSPEC_COND_FCVTNT))]
> >    "TARGET_SVE_BF16"
> >    "bfcvtnt\t%0.h, %2/m, %3.s"
> > +  [(set_attr "type" "sve_fp_to_fp")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -9518,7 +9798,8 @@
> >    "@
> >     fcvt\t%0.<SVE_FULL_SDF:Vetype>, %1/m, %2.<SVE_FULL_HSF:Vetype>
> >     movprfx\t%0, %2\;fcvt\t%0.<SVE_FULL_SDF:Vetype>, %1/m, %2.<SVE_FULL_HSF:Vetype>"
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_fp_to_fp, sve_fp_to_fp_x")]
> >  )
> >
> >  ;; Predicated float-to-float extension with merging.
> > @@ -9552,7 +9833,8 @@
> >     fcvt\t%0.<SVE_FULL_SDF:Vetype>, %1/m, %2.<SVE_FULL_HSF:Vetype>
> >     movprfx\t%0.<SVE_FULL_SDF:Vetype>, %1/z, %2.<SVE_FULL_SDF:Vetype>\;fcvt\t%0.<SVE_FULL_SDF:Vetype>, %1/m, %2.<SVE_FULL_HSF:Vetype>
> >     movprfx\t%0, %3\;fcvt\t%0.<SVE_FULL_SDF:Vetype>, %1/m, %2.<SVE_FULL_HSF:Vetype>"
> > -  [(set_attr "movprfx" "*,yes,yes")]
> > +  [(set_attr "movprfx" "*,yes,yes")
> > +   (set_attr "type" "sve_fp_to_fp, sve_fp_to_fp_x, sve_fp_to_fp_x")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -9572,6 +9854,7 @@
> >  	  UNSPEC_PACK))]
> >    "TARGET_SVE"
> >    "uzp1\t%0.<Vetype>, %1.<Vetype>, %2.<Vetype>"
> > +  [(set_attr "type" "sve_zip_p")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -9605,6 +9888,7 @@
> >  			UNPACK_UNSIGNED))]
> >    "TARGET_SVE"
> >    "punpk<perm_hilo>\t%0.h, %1.b"
> > +  [(set_attr "type" "sve_upk_p")]
> >  )
> >
> >  ;; =========================================================================
> > @@ -9635,6 +9919,7 @@
> >    "@
> >     brk<brk_op>\t%0.b, %1/z, %2.b
> >     brk<brk_op>\t%0.b, %1/m, %2.b"
> > +  [(set_attr "type" "sve_loop_p, sve_loop_p")]
> >  )
> >
> >  ;; Same, but also producing a flags result.
> > @@ -9658,6 +9943,7 @@
> >  	  SVE_BRK_UNARY))]
> >    "TARGET_SVE"
> >    "brk<brk_op>s\t%0.b, %1/z, %2.b"
> > +  [(set_attr "type" "sve_loop_ps")]
> >  )
> >
> >  ;; Same, but with only the flags result being interesting.
> > @@ -9676,6 +9962,7 @@
> >     (clobber (match_scratch:VNx16BI 0 "=Upa"))]
> >    "TARGET_SVE"
> >    "brk<brk_op>s\t%0.b, %1/z, %2.b"
> > +  [(set_attr "type" "sve_loop_ps")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -9700,6 +9987,7 @@
> >  	  SVE_BRK_BINARY))]
> >    "TARGET_SVE"
> >    "brk<brk_op>\t%0.b, %1/z, %2.b, %<brk_reg_opno>.b"
> > +  [(set_attr "type" "sve_loop_p")]
> >  )
> >
> >  ;; BRKN, producing both a predicate and a flags result.  Unlike other
> > @@ -9730,6 +10018,7 @@
> >      operands[4] = CONST0_RTX (VNx16BImode);
> >      operands[5] = CONST0_RTX (VNx16BImode);
> >    }
> > +  [(set_attr "type" "sve_loop_ps")]
> >  )
> >
> >  ;; Same, but with only the flags result being interesting.
> > @@ -9754,6 +10043,7 @@
> >      operands[4] = CONST0_RTX (VNx16BImode);
> >      operands[5] = CONST0_RTX (VNx16BImode);
> >    }
> > +  [(set_attr "type" "sve_loop_ps")]
> >  )
> >
> >  ;; BRKPA and BRKPB, producing both a predicate and a flags result.
> > @@ -9777,6 +10067,7 @@
> >  	  SVE_BRKP))]
> >    "TARGET_SVE"
> >    "brk<brk_op>s\t%0.b, %1/z, %2.b, %3.b"
> > +  [(set_attr "type" "sve_loop_ps")]
> >  )
> >
> >  ;; Same, but with only the flags result being interesting.
> > @@ -9795,6 +10086,7 @@
> >     (clobber (match_scratch:VNx16BI 0 "=Upa"))]
> >    "TARGET_SVE"
> >    "brk<brk_op>s\t%0.b, %1/z, %2.b, %3.b"
> > +  [(set_attr "type" "sve_loop_ps")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -9815,6 +10107,7 @@
> >     (clobber (reg:CC_NZC CC_REGNUM))]
> >    "TARGET_SVE && <max_elem_bits> >= <elem_bits>"
> >    "<sve_pred_op>\t%0.<Vetype>, %1, %0.<Vetype>"
> > +  [(set_attr "type" "sve_set_ps")]
> >  )
> >
> >  ;; Same, but also producing a flags result.
> > @@ -9845,6 +10138,7 @@
> >      operands[4] = operands[2];
> >      operands[5] = operands[3];
> >    }
> > +  [(set_attr "type" "sve_set_ps")]
> >  )
> >
> >  ;; Same, but with only the flags result being interesting.
> > @@ -9870,6 +10164,7 @@
> >      operands[4] = operands[2];
> >      operands[5] = operands[3];
> >    }
> > +  [(set_attr "type" "sve_set_ps")]
> >  )
> >
> >  ;; =========================================================================
> > @@ -9902,6 +10197,7 @@
> >    {
> >      return aarch64_output_sve_cnt_pat_immediate ("cnt", "%x0", operands + 1);
> >    }
> > +  [(set_attr "type" "sve_cnt")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -9928,6 +10224,7 @@
> >      return aarch64_output_sve_cnt_pat_immediate ("<inc_dec>", "%x0",
> >  						 operands + 2);
> >    }
> > +  [(set_attr "type" "sve_cnt")]
> >  )
> >
> >  ;; Increment an SImode register by the number of elements in an svpattern
> > @@ -9944,6 +10241,7 @@
> >    {
> >      return aarch64_output_sve_cnt_pat_immediate ("inc", "%x0", operands + 2);
> >    }
> > +  [(set_attr "type" "sve_cnt")]
> >  )
> >
> >  ;; Increment an SImode register by the number of elements in an svpattern
> > @@ -9965,6 +10263,7 @@
> >      return aarch64_output_sve_cnt_pat_immediate ("<inc_dec>", registers,
> >  						 operands + 2);
> >    }
> > +  [(set_attr "type" "sve_cnt")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -9995,7 +10294,8 @@
> >      return aarch64_output_sve_cnt_pat_immediate ("<inc_dec>", "%0.<Vetype>",
> >  						 operands + 2);
> >    }
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_cnt, sve_cnt_x")]
> >  )
> >
> >  ;; Increment a vector of SIs by the number of elements in an svpattern.
> > @@ -10016,7 +10316,8 @@
> >      return aarch64_output_sve_cnt_pat_immediate ("<inc_dec>", "%0.<Vetype>",
> >  						 operands + 2);
> >    }
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_cnt, sve_cnt_x")]
> >  )
> >
> >  ;; Increment a vector of HIs by the number of elements in an svpattern.
> > @@ -10051,7 +10352,8 @@
> >      return aarch64_output_sve_cnt_pat_immediate ("<inc_dec>", "%0.<Vetype>",
> >  						 operands + 2);
> >    }
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_cnt, sve_cnt_x")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -10078,6 +10380,7 @@
> >      return aarch64_output_sve_cnt_pat_immediate ("<inc_dec>", "%x0",
> >  						 operands + 2);
> >    }
> > +  [(set_attr "type" "sve_cnt")]
> >  )
> >
> >  ;; Decrement an SImode register by the number of elements in an svpattern
> > @@ -10094,6 +10397,7 @@
> >    {
> >      return aarch64_output_sve_cnt_pat_immediate ("dec", "%x0", operands
> + 2);
> >    }
> > +  [(set_attr "type" "sve_cnt")]
> >  )
> >
> >  ;; Decrement an SImode register by the number of elements in an svpattern
> > @@ -10115,6 +10419,7 @@
> >      return aarch64_output_sve_cnt_pat_immediate ("<inc_dec>", registers,
> >  						 operands + 2);
> >    }
> > +  [(set_attr "type" "sve_cnt")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -10145,7 +10450,8 @@
> >      return aarch64_output_sve_cnt_pat_immediate ("<inc_dec>",
> "%0.<Vetype>",
> >  						 operands + 2);
> >    }
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_cnt, sve_cnt_x")]
> >  )
> >
> >  ;; Decrement a vector of SIs by the number of elements in an svpattern.
> > @@ -10166,7 +10472,8 @@
> >      return aarch64_output_sve_cnt_pat_immediate ("<inc_dec>",
> "%0.<Vetype>",
> >  						 operands + 2);
> >    }
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_cnt, sve_cnt_x")]
> >  )
> >
> >  ;; Decrement a vector of HIs by the number of elements in an svpattern.
> > @@ -10201,7 +10508,8 @@
> >      return aarch64_output_sve_cnt_pat_immediate ("<inc_dec>",
> "%0.<Vetype>",
> >  						 operands + 2);
> >    }
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_cnt, sve_cnt_x")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -10221,7 +10529,9 @@
> >  		      (match_operand:PRED_ALL 3 "register_operand" "Upa")]
> >  		     UNSPEC_CNTP)))]
> >    "TARGET_SVE"
> > -  "cntp\t%x0, %1, %3.<Vetype>")
> > +  "cntp\t%x0, %1, %3.<Vetype>"
> > +  [(set_attr "type" "sve_cnt_p")]
> > +)
> >
> >  ;; -------------------------------------------------------------------------
> >  ;; ---- [INT] Increment by the number of elements in a predicate (scalar)
> > @@ -10264,6 +10574,7 @@
> >    {
> >      operands[3] = CONSTM1_RTX (<PRED_ALL:MODE>mode);
> >    }
> > +  [(set_attr "type" "sve_cnt_p")]
> >  )
> >
> >  ;; Increment an SImode register by the number of set bits in a predicate
> > @@ -10283,6 +10594,7 @@
> >    {
> >      operands[3] = CONSTM1_RTX (<MODE>mode);
> >    }
> > +  [(set_attr "type" "sve_cnt_p")]
> >  )
> >
> >  ;; Increment an SImode register by the number of set bits in a predicate
> > @@ -10324,6 +10636,7 @@
> >    {
> >      operands[3] = CONSTM1_RTX (<PRED_ALL:MODE>mode);
> >    }
> > +  [(set_attr "type" "sve_cnt_p")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -10373,7 +10686,8 @@
> >    {
> >      operands[3] = CONSTM1_RTX (<VPRED>mode);
> >    }
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_cnt_pv, sve_cnt_pvx")]
> >  )
> >
> >  ;; Increment a vector of SIs by the number of set bits in a predicate.
> > @@ -10412,7 +10726,8 @@
> >    {
> >      operands[3] = CONSTM1_RTX (<VPRED>mode);
> >    }
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_cnt_pv, sve_cnt_pvx")]
> >  )
> >
> >  ;; Increment a vector of HIs by the number of set bits in a predicate.
> > @@ -10453,7 +10768,8 @@
> >    {
> >      operands[4] = CONSTM1_RTX (<VPRED>mode);
> >    }
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_cnt_pv, sve_cnt_pvx")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -10497,6 +10813,7 @@
> >    {
> >      operands[3] = CONSTM1_RTX (<PRED_ALL:MODE>mode);
> >    }
> > +  [(set_attr "type" "sve_cnt_p")]
> >  )
> >
> >  ;; Decrement an SImode register by the number of set bits in a predicate
> > @@ -10516,6 +10833,7 @@
> >    {
> >      operands[3] = CONSTM1_RTX (<MODE>mode);
> >    }
> > +  [(set_attr "type" "sve_cnt_p")]
> >  )
> >
> >  ;; Decrement an SImode register by the number of set bits in a predicate
> > @@ -10557,6 +10875,7 @@
> >    {
> >      operands[3] = CONSTM1_RTX (<PRED_ALL:MODE>mode);
> >    }
> > +  [(set_attr "type" "sve_cnt_p")]
> >  )
> >
> >  ;; -------------------------------------------------------------------------
> > @@ -10606,7 +10925,8 @@
> >    {
> >      operands[3] = CONSTM1_RTX (<VPRED>mode);
> >    }
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_cnt_pv, sve_cnt_pvx")]
> >  )
> >
> >  ;; Decrement a vector of SIs by the number of set bits in a predicate.
> > @@ -10645,7 +10965,8 @@
> >    {
> >      operands[3] = CONSTM1_RTX (<VPRED>mode);
> >    }
> > -  [(set_attr "movprfx" "*,yes")]
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_cnt_pv, sve_cnt_pvx")]
> >  )
> >
> >  ;; Decrement a vector of HIs by the number of set bits in a predicate.
> > @@ -10686,5 +11007,6 @@
> >    {
> >      operands[4] = CONSTM1_RTX (<VPRED>mode);
> >    }
> > -  [(set_attr "movprfx" "*,yes")]
> > -)
> > +  [(set_attr "movprfx" "*,yes")
> > +   (set_attr "type" "sve_cnt_pv, sve_cnt_pvx")]
> > +)
> > \ No newline at end of file
> > diff --git a/gcc/config/arm/types.md b/gcc/config/arm/types.md
> > index 83e29563c8e..baccdd02860 100644
> > --- a/gcc/config/arm/types.md
> > +++ b/gcc/config/arm/types.md
> > @@ -562,6 +562,183 @@
> >  ; crypto_sha256_slow
> >  ; crypto_pmull
> >  ;
> > +; The classification below is for SVE instructions.
> > +;
> > +; SVE Suffixes:
> > +; a: accumulated result
> > +; c: complex math
> > +; d: double precision
> > +; g: GPR operand
> > +; l: scaled
> > +; p: predicate operand
> > +; r: reduced result
> > +; s: changes flags
> > +; s: single precision (division and square root only)
> > +; t: trigonometric math
> > +; u: unscaled
> > +; v: vector operand
> > +; x: prefixed
> > +;
> > +; sve_loop_p
> > +; sve_loop_ps
> > +; sve_loop_gs
> > +; sve_loop_end
> > +;
> > +; sve_logic_p
> > +; sve_logic_ps
> > +;
> > +; sve_cnt_p
> > +; sve_cnt_pv
> > +; sve_cnt_pvx
> > +; sve_rev_p
> > +; sve_sel_p
> > +; sve_set_p
> > +; sve_set_ps
> > +; sve_trn_p
> > +; sve_upk_p
> > +; sve_zip_p
> > +;
> > +; sve_arith
> > +; sve_arith_sat
> > +; sve_arith_sat_x
> > +; sve_arith_r
> > +; sve_arith_x
> > +; sve_logic
> > +; sve_logic_r
> > +; sve_logic_x
> > +;
> > +; sve_shift
> > +; sve_shift_d
> > +; sve_shift_dx
> > +; sve_shift_x
> > +;
> > +; sve_compare_s
> > +;
> > +; sve_cnt
> > +; sve_cnt_x
> > +; sve_copy
> > +; sve_copy_g
> > +; sve_move
> > +; sve_move_x
> > +; sve_move_g
> > +; sve_permute
> > +; sve_splat
> > +; sve_splat_m
> > +; sve_splat_g
> > +; sve_cext
> > +; sve_cext_x
> > +; sve_cext_g
> > +; sve_ext
> > +; sve_ext_x
> > +; sve_sext
> > +; sve_sext_x
> > +; sve_uext
> > +; sve_uext_x
> > +; sve_index
> > +; sve_index_g
> > +; sve_ins
> > +; sve_ins_x
> > +; sve_ins_g
> > +; sve_ins_gx
> > +; sve_rev
> > +; sve_rev_x
> > +; sve_tbl
> > +; sve_trn
> > +; sve_upk
> > +; sve_zip
> > +;
> > +; sve_int_to_fp
> > +; sve_int_to_fp_x
> > +; sve_fp_round
> > +; sve_fp_round_x
> > +; sve_fp_to_int
> > +; sve_fp_to_int_x
> > +; sve_fp_to_fp
> > +; sve_fp_to_fp_x
> > +; sve_bf_to_fp
> > +; sve_bf_to_fp_x
> > +;
> > +; sve_div
> > +; sve_div_x
> > +; sve_dot
> > +; sve_dot_x
> > +; sve_mla
> > +; sve_mla_x
> > +; sve_mmla
> > +; sve_mmla_x
> > +; sve_mul
> > +; sve_mul_x
> > +;
> > +; sve_prfx
> > +;
> > +; sve_fp_arith
> > +; sve_fp_arith_a
> > +; sve_fp_arith_c
> > +; sve_fp_arith_cx
> > +; sve_fp_arith_r
> > +; sve_fp_arith_x
> > +;
> > +; sve_fp_compare
> > +; sve_fp_copy
> > +; sve_fp_move
> > +; sve_fp_move_x
> > +;
> > +; sve_fp_div_d
> > +; sve_fp_div_dx
> > +; sve_fp_div_s
> > +; sve_fp_div_sx
> > +; sve_fp_dot
> > +; sve_fp_mla
> > +; sve_fp_mla_x
> > +; sve_fp_mla_c
> > +; sve_fp_mla_cx
> > +; sve_fp_mla_t
> > +; sve_fp_mla_tx
> > +; sve_fp_mmla
> > +; sve_fp_mmla_x
> > +; sve_fp_mul
> > +; sve_fp_mul_x
> > +; sve_fp_sqrt_d
> > +; sve_fp_sqrt_dx
> > +; sve_fp_sqrt_s
> > +; sve_fp_sqrt_sx
> > +; sve_fp_trig
> > +; sve_fp_trig_x
> > +;
> > +; sve_fp_estimate
> > +; sve_fp_estimate_x
> > +; sve_fp_step
> > +; sve_fp_step_x
> > +;
> > +; sve_bf_dot
> > +; sve_bf_dot_x
> > +; sve_bf_mla
> > +; sve_bf_mla_x
> > +; sve_bf_mmla
> > +; sve_bf_mmla_x
> > +;
> > +; sve_ldr
> > +; sve_ldr_p
> > +; sve_load1
> > +; sve_load1_gather_d
> > +; sve_load1_gather_dl
> > +; sve_load1_gather_du
> > +; sve_load1_gather_s
> > +; sve_load1_gather_sl
> > +; sve_load1_gather_su
> > +; sve_load2
> > +; sve_load3
> > +; sve_load4
> > +;
> > +; sve_str
> > +; sve_str_p
> > +; sve_store1
> > +; sve_store1_scatter
> > +; sve_store2
> > +; sve_store3
> > +; sve_store4
> > +;
> > +; sve_rd_ffr
> > +; sve_rd_ffr_p
> > +; sve_rd_ffr_ps
> > +; sve_wr_ffr
> > +;
> >  ; The classification below is for coprocessor instructions
> >  ;
> >  ; coproc
> > @@ -1120,6 +1297,171 @@
> >    crypto_sha3,\
> >    crypto_sm3,\
> >    crypto_sm4,\
> > +\
> > +  sve_loop_p,\
> > +  sve_loop_ps,\
> > +  sve_loop_gs,\
> > +  sve_loop_end,\
> > +\
> > +  sve_logic_p,\
> > +  sve_logic_ps,\
> > +\
> > +  sve_cnt_p,\
> > +  sve_cnt_pv,\
> > +  sve_cnt_pvx,\
> > +  sve_rev_p,\
> > +  sve_sel_p,\
> > +  sve_set_p,\
> > +  sve_set_ps,\
> > +  sve_trn_p,\
> > +  sve_upk_p,\
> > +  sve_zip_p,\
> > +\
> > +  sve_arith,\
> > +  sve_arith_sat,\
> > +  sve_arith_sat_x,\
> > +  sve_arith_r,\
> > +  sve_arith_x,\
> > +  sve_logic,\
> > +  sve_logic_r,\
> > +  sve_logic_x,\
> > +\
> > +  sve_shift,\
> > +  sve_shift_d,\
> > +  sve_shift_dx,\
> > +  sve_shift_x,\
> > +\
> > +  sve_compare_s,\
> > +\
> > +  sve_cnt,\
> > +  sve_cnt_x,\
> > +  sve_copy,\
> > +  sve_copy_g,\
> > +  sve_move,\
> > +  sve_move_x,\
> > +  sve_move_g,\
> > +  sve_permute,\
> > +  sve_splat,\
> > +  sve_splat_m,\
> > +  sve_splat_g,\
> > +  sve_cext,\
> > +  sve_cext_x,\
> > +  sve_cext_g,\
> > +  sve_ext,\
> > +  sve_ext_x,\
> > +  sve_sext,\
> > +  sve_sext_x,\
> > +  sve_uext,\
> > +  sve_uext_x,\
> > +  sve_index,\
> > +  sve_index_g,\
> > +  sve_ins,\
> > +  sve_ins_x,\
> > +  sve_ins_g,\
> > +  sve_ins_gx,\
> > +  sve_rev,\
> > +  sve_rev_x,\
> > +  sve_tbl,\
> > +  sve_trn,\
> > +  sve_upk,\
> > +  sve_zip,\
> > +\
> > +  sve_int_to_fp,\
> > +  sve_int_to_fp_x,\
> > +  sve_fp_round,\
> > +  sve_fp_round_x,\
> > +  sve_fp_to_int,\
> > +  sve_fp_to_int_x,\
> > +  sve_fp_to_fp,\
> > +  sve_fp_to_fp_x,\
> > +  sve_bf_to_fp,\
> > +  sve_bf_to_fp_x,\
> > +\
> > +  sve_div,\
> > +  sve_div_x,\
> > +  sve_dot,\
> > +  sve_dot_x,\
> > +  sve_mla,\
> > +  sve_mla_x,\
> > +  sve_mmla,\
> > +  sve_mmla_x,\
> > +  sve_mul,\
> > +  sve_mul_x,\
> > +\
> > +  sve_prfx,\
> > +\
> > +  sve_fp_arith,\
> > +  sve_fp_arith_a,\
> > +  sve_fp_arith_c,\
> > +  sve_fp_arith_cx,\
> > +  sve_fp_arith_r,\
> > +  sve_fp_arith_x,\
> > +\
> > +  sve_fp_compare,\
> > +  sve_fp_copy,\
> > +  sve_fp_move,\
> > +  sve_fp_move_x,\
> > +\
> > +  sve_fp_div_d,\
> > +  sve_fp_div_dx,\
> > +  sve_fp_div_s,\
> > +  sve_fp_div_sx,\
> > +  sve_fp_dot,\
> > +  sve_fp_mla,\
> > +  sve_fp_mla_x,\
> > +  sve_fp_mla_c,\
> > +  sve_fp_mla_cx,\
> > +  sve_fp_mla_t,\
> > +  sve_fp_mla_tx,\
> > +  sve_fp_mmla,\
> > +  sve_fp_mmla_x,\
> > +  sve_fp_mul,\
> > +  sve_fp_mul_x,\
> > +  sve_fp_sqrt_d,\
> > +  sve_fp_sqrt_dx,\
> > +  sve_fp_sqrt_s,\
> > +  sve_fp_sqrt_sx,\
> > +  sve_fp_trig,\
> > +  sve_fp_trig_x,\
> > +\
> > +  sve_fp_estimate,\
> > +  sve_fp_estimate_x,\
> > +  sve_fp_step,\
> > +  sve_fp_step_x,\
> > +\
> > +  sve_bf_dot,\
> > +  sve_bf_dot_x,\
> > +  sve_bf_mla,\
> > +  sve_bf_mla_x,\
> > +  sve_bf_mmla,\
> > +  sve_bf_mmla_x,\
> > +\
> > +  sve_ldr,\
> > +  sve_ldr_p,\
> > +  sve_load1,\
> > +  sve_load1_gather_d,\
> > +  sve_load1_gather_dl,\
> > +  sve_load1_gather_du,\
> > +  sve_load1_gather_s,\
> > +  sve_load1_gather_sl,\
> > +  sve_load1_gather_su,\
> > +  sve_load2,\
> > +  sve_load3,\
> > +  sve_load4,\
> > +\
> > +  sve_str,\
> > +  sve_str_p,\
> > +  sve_store1,\
> > +  sve_store1_scatter,\
> > +  sve_store2,\
> > +  sve_store3,\
> > +  sve_store4,\
> > +\
> > +  sve_rd_ffr,\
> > +  sve_rd_ffr_p,\
> > +  sve_rd_ffr_ps,\
> > +  sve_wr_ffr,\
> > +\
> >    coproc,\
> >    tme,\
> >    memtag,\
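
For context on how these new `type` values get consumed, a per-core scheduling description keys `define_insn_reservation` patterns on them. The fragment below is an illustrative sketch only, not part of the patch: the automaton, unit, and reservation names (`example_sve`, `ex_sve_unit`, `ex_sve_cnt`) and the 2-cycle latency are hypothetical, and a real model would match on its own `tune` value.

```
;; Illustrative sketch only: automaton, unit, and latency are hypothetical.
(define_automaton "example_sve")
(define_cpu_unit "ex_sve_unit" "example_sve")

;; Route all element/predicate-count forms through one unit with a
;; made-up 2-cycle latency; finer-grained types allow splitting this
;; reservation later without touching the instruction patterns.
(define_insn_reservation "ex_sve_cnt" 2
  (and (eq_attr "tune" "generic")
       (eq_attr "type" "sve_cnt,sve_cnt_x,sve_cnt_p,sve_cnt_pv,sve_cnt_pvx"))
  "ex_sve_unit")
```

This is also why the granularity question in the review matters: each extra type value enlarges the attribute enum for every port, but only pays off once some reservation actually distinguishes it.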

  reply	other threads:[~2023-05-15  9:49 UTC|newest]

Thread overview: 8+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2023-05-12 23:08 Evandro Menezes
2023-05-15  9:00 ` Richard Sandiford
2023-05-15  9:49   ` Kyrylo Tkachov [this message]
2023-05-15 20:13     ` Evandro Menezes
2023-05-16  8:36       ` Kyrylo Tkachov
2023-05-16 20:12         ` Evandro Menezes
2023-09-13  0:54         ` Evandro Menezes
2023-05-15 19:59   ` Evandro Menezes
