* [PATCH 0/2] arm: Add support for MVE Tail-Predicated Low Overhead Loops
@ 2023-12-18 11:53 Andre Vieira
2023-12-18 11:53 ` [PATCH 1/2] arm: Add define_attr to create a mapping between MVE predicated and unpredicated insns Andre Vieira
2023-12-18 11:53 ` [PATCH 2/2] arm: Add support for MVE Tail-Predicated Low Overhead Loops Andre Vieira
0 siblings, 2 replies; 10+ messages in thread
From: Andre Vieira @ 2023-12-18 11:53 UTC (permalink / raw)
To: gcc-patches; +Cc: Richard.Earnshaw, Andre Vieira
[-- Attachment #1: Type: text/plain, Size: 261 bytes --]
Resending series to make use of the Linaro pre-commit CI in patchworks.
Andre Vieira (2):
arm: Add define_attr to create a mapping between MVE predicated and
unpredicated insns
arm: Add support for MVE Tail-Predicated Low Overhead Loops
--
2.17.1
* [PATCH 1/2] arm: Add define_attr to create a mapping between MVE predicated and unpredicated insns
2023-12-18 11:53 [PATCH 0/2] arm: Add support for MVE Tail-Predicated Low Overhead Loops Andre Vieira
@ 2023-12-18 11:53 ` Andre Vieira
2023-12-20 16:54 ` Andre Vieira (lists)
2023-12-18 11:53 ` [PATCH 2/2] arm: Add support for MVE Tail-Predicated Low Overhead Loops Andre Vieira
1 sibling, 1 reply; 10+ messages in thread
From: Andre Vieira @ 2023-12-18 11:53 UTC (permalink / raw)
To: gcc-patches; +Cc: Richard.Earnshaw, Stam Markianos-Wright
[-- Attachment #1: Type: text/plain, Size: 44 bytes --]
[-- Attachment #2: Type: text/plain, Size: 152 bytes --]
Re-sending Stam's first patch, same as:
https://gcc.gnu.org/pipermail/gcc-patches/2023-November/635301.html
Hopefully patchworks can pick this up :)
[-- Attachment #3: 0001-arm-Add-define_attr-to-to-create-a-mapping-between-M.patch --]
[-- Type: text/x-patch; name="0001-arm-Add-define_attr-to-to-create-a-mapping-between-M.patch", Size: 104533 bytes --]
diff --git a/gcc/config/arm/arm.h b/gcc/config/arm/arm.h
index a9c2752c0ea..0b0e8620717 100644
--- a/gcc/config/arm/arm.h
+++ b/gcc/config/arm/arm.h
@@ -2375,6 +2375,21 @@ extern int making_const_table;
else if (TARGET_THUMB1) \
thumb1_final_prescan_insn (INSN)
+/* These defines are useful to refer to the value of the mve_unpredicated_insn
+ insn attribute. Note that, because these use the get_attr_* functions, they
+ will change recog_data if (INSN) isn't current_insn. */
+#define MVE_VPT_PREDICABLE_INSN_P(INSN) \
+ (recog_memoized (INSN) >= 0 \
+ && get_attr_mve_unpredicated_insn (INSN) != 0) \
+
+#define MVE_VPT_PREDICATED_INSN_P(INSN) \
+ (MVE_VPT_PREDICABLE_INSN_P (INSN) \
+ && recog_memoized (INSN) != get_attr_mve_unpredicated_insn (INSN)) \
+
+#define MVE_VPT_UNPREDICATED_INSN_P(INSN) \
+ (MVE_VPT_PREDICABLE_INSN_P (INSN) \
+ && recog_memoized (INSN) == get_attr_mve_unpredicated_insn (INSN)) \
+
#define ARM_SIGN_EXTEND(x) ((HOST_WIDE_INT) \
(HOST_BITS_PER_WIDE_INT <= 32 ? (unsigned HOST_WIDE_INT) (x) \
: ((((unsigned HOST_WIDE_INT)(x)) & (unsigned HOST_WIDE_INT) 0xffffffff) |\
diff --git a/gcc/config/arm/arm.md b/gcc/config/arm/arm.md
index 07eaf06cdea..8efdebecc3c 100644
--- a/gcc/config/arm/arm.md
+++ b/gcc/config/arm/arm.md
@@ -124,6 +124,8 @@ (define_attr "fpu" "none,vfp"
; and not all ARM insns do.
(define_attr "predicated" "yes,no" (const_string "no"))
+(define_attr "mve_unpredicated_insn" "" (const_int 0))
+
; LENGTH of an instruction (in bytes)
(define_attr "length" ""
(const_int 4))
diff --git a/gcc/config/arm/iterators.md b/gcc/config/arm/iterators.md
index a9803538101..5ea2d9e8668 100644
--- a/gcc/config/arm/iterators.md
+++ b/gcc/config/arm/iterators.md
@@ -2305,6 +2305,7 @@ (define_int_attr simd32_op [(UNSPEC_QADD8 "qadd8") (UNSPEC_QSUB8 "qsub8")
(define_int_attr mmla_sfx [(UNSPEC_MATMUL_S "s8") (UNSPEC_MATMUL_U "u8")
(UNSPEC_MATMUL_US "s8")])
+
;;MVE int attribute.
(define_int_attr supf [(VCVTQ_TO_F_S "s") (VCVTQ_TO_F_U "u") (VREV16Q_S "s")
(VREV16Q_U "u") (VMVNQ_N_S "s") (VMVNQ_N_U "u")
diff --git a/gcc/config/arm/mve.md b/gcc/config/arm/mve.md
index b0d3443da9c..62df022ef19 100644
--- a/gcc/config/arm/mve.md
+++ b/gcc/config/arm/mve.md
@@ -17,7 +17,7 @@
;; along with GCC; see the file COPYING3. If not see
;; <http://www.gnu.org/licenses/>.
-(define_insn "*mve_mov<mode>"
+(define_insn "mve_mov<mode>"
[(set (match_operand:MVE_types 0 "nonimmediate_operand" "=w,w,r,w , w, r,Ux,w")
(match_operand:MVE_types 1 "general_operand" " w,r,w,DnDm,UxUi,r,w, Ul"))]
"TARGET_HAVE_MVE || TARGET_HAVE_MVE_FLOAT"
@@ -81,18 +81,27 @@ (define_insn "*mve_mov<mode>"
return "";
}
}
- [(set_attr "type" "mve_move,mve_move,mve_move,mve_move,mve_load,multiple,mve_store,mve_load")
+ [(set_attr_alternative "mve_unpredicated_insn" [(symbol_ref "CODE_FOR_mve_mov<mode>")
+ (symbol_ref "CODE_FOR_nothing")
+ (symbol_ref "CODE_FOR_nothing")
+ (symbol_ref "CODE_FOR_mve_mov<mode>")
+ (symbol_ref "CODE_FOR_mve_mov<mode>")
+ (symbol_ref "CODE_FOR_nothing")
+ (symbol_ref "CODE_FOR_mve_mov<mode>")
+ (symbol_ref "CODE_FOR_nothing")])
+ (set_attr "type" "mve_move,mve_move,mve_move,mve_move,mve_load,multiple,mve_store,mve_load")
(set_attr "length" "4,8,8,4,4,8,4,8")
(set_attr "thumb2_pool_range" "*,*,*,*,1018,*,*,*")
(set_attr "neg_pool_range" "*,*,*,*,996,*,*,*")])
-(define_insn "*mve_vdup<mode>"
+(define_insn "mve_vdup<mode>"
[(set (match_operand:MVE_vecs 0 "s_register_operand" "=w")
(vec_duplicate:MVE_vecs
(match_operand:<V_elem> 1 "s_register_operand" "r")))]
"TARGET_HAVE_MVE || TARGET_HAVE_MVE_FLOAT"
"vdup.<V_sz_elem>\t%q0, %1"
- [(set_attr "length" "4")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vdup<mode>"))
+ (set_attr "length" "4")
(set_attr "type" "mve_move")])
;;
@@ -145,7 +154,8 @@ (define_insn "@mve_<mve_insn>q_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_mnemo>.f%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -159,7 +169,8 @@ (define_insn "@mve_<mve_insn>q_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -173,7 +184,8 @@ (define_insn "mve_v<absneg_str>q_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"v<absneg_str>.f%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_v<absneg_str>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -187,7 +199,8 @@ (define_insn "@mve_<mve_insn>q_n_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.%#<V_sz_elem>\t%q0, %1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -201,7 +214,8 @@ (define_insn "@mve_<mve_insn>q_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
;; [vcvttq_f32_f16])
@@ -214,7 +228,8 @@ (define_insn "mve_vcvttq_f32_f16v4sf"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvtt.f32.f16\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvttq_f32_f16v4sf"))
+ (set_attr "type" "mve_move")
])
;;
@@ -228,7 +243,8 @@ (define_insn "mve_vcvtbq_f32_f16v4sf"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvtb.f32.f16\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtbq_f32_f16v4sf"))
+ (set_attr "type" "mve_move")
])
;;
@@ -242,7 +258,8 @@ (define_insn "mve_vcvtq_to_f_<supf><mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvt.f%#<V_sz_elem>.<supf>%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtq_to_f_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -256,7 +273,8 @@ (define_insn "@mve_<mve_insn>q_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -270,7 +288,8 @@ (define_insn "mve_vcvtq_from_f_<supf><mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtq_from_f_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -284,7 +303,8 @@ (define_insn "mve_v<absneg_str>q_s<mode>"
]
"TARGET_HAVE_MVE"
"v<absneg_str>.s%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_v<absneg_str>q_s<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -297,7 +317,8 @@ (define_insn "mve_vmvnq_u<mode>"
]
"TARGET_HAVE_MVE"
"vmvn\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmvnq_u<mode>"))
+ (set_attr "type" "mve_move")
])
(define_expand "mve_vmvnq_s<mode>"
[
@@ -318,7 +339,8 @@ (define_insn "@mve_<mve_insn>q_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.%#<V_sz_elem>\t%q0, %1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -331,7 +353,8 @@ (define_insn "@mve_vclzq_s<mode>"
]
"TARGET_HAVE_MVE"
"vclz.i%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vclzq_s<mode>"))
+ (set_attr "type" "mve_move")
])
(define_expand "mve_vclzq_u<mode>"
[
@@ -354,7 +377,8 @@ (define_insn "@mve_<mve_insn>q_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -368,7 +392,8 @@ (define_insn "@mve_<mve_insn>q_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -382,7 +407,8 @@ (define_insn "@mve_<mve_insn>q_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -397,7 +423,8 @@ (define_insn "@mve_<mve_insn>q_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -411,7 +438,8 @@ (define_insn "mve_vcvtpq_<supf><mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvtp.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtpq_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -425,7 +453,8 @@ (define_insn "mve_vcvtnq_<supf><mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvtn.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtnq_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -439,7 +468,8 @@ (define_insn "mve_vcvtmq_<supf><mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvtm.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtmq_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -453,7 +483,8 @@ (define_insn "mve_vcvtaq_<supf><mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvta.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtaq_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -467,7 +498,8 @@ (define_insn "@mve_<mve_insn>q_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.i%#<V_sz_elem>\t%q0, %1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -481,7 +513,8 @@ (define_insn "@mve_<mve_insn>q_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -495,7 +528,8 @@ (define_insn "@mve_<mve_insn>q_<supf>v4si"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>32\t%Q0, %R0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf>v4si"))
+ (set_attr "type" "mve_move")
])
;;
@@ -509,7 +543,8 @@ (define_insn "mve_vctp<MVE_vctp>q<MVE_vpred>"
]
"TARGET_HAVE_MVE"
"vctp.<MVE_vctp>\t%1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vctp<MVE_vctp>q<MVE_vpred>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -523,7 +558,8 @@ (define_insn "mve_vpnotv16bi"
]
"TARGET_HAVE_MVE"
"vpnot"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vpnotv16bi"))
+ (set_attr "type" "mve_move")
])
;;
@@ -538,7 +574,8 @@ (define_insn "@mve_<mve_insn>q_n_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.<V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -553,7 +590,8 @@ (define_insn "mve_vcvtq_n_to_f_<supf><mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvt.f<V_sz_elem>.<supf><V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtq_n_to_f_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;; [vcreateq_f])
@@ -599,7 +637,8 @@ (define_insn "@mve_<mve_insn>q_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf><V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;; Versions that take constant vectors as operand 2 (with all elements
@@ -617,7 +656,8 @@ (define_insn "mve_vshrq_n_s<mode>_imm"
VALID_NEON_QREG_MODE (<MODE>mode),
true);
}
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vshrq_n_s<mode>_imm"))
+ (set_attr "type" "mve_move")
])
(define_insn "mve_vshrq_n_u<mode>_imm"
[
@@ -632,7 +672,8 @@ (define_insn "mve_vshrq_n_u<mode>_imm"
VALID_NEON_QREG_MODE (<MODE>mode),
true);
}
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vshrq_n_u<mode>_imm"))
+ (set_attr "type" "mve_move")
])
;;
@@ -647,7 +688,8 @@ (define_insn "mve_vcvtq_n_from_f_<supf><mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvt.<supf><V_sz_elem>.f<V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtq_n_from_f_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -662,8 +704,9 @@ (define_insn "@mve_<mve_insn>q_p_<supf>v4si"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>32\t%Q0, %R0, %q1"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf>v4si"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")])
;;
;; [vcmpneq_, vcmpcsq_, vcmpeqq_, vcmpgeq_, vcmpgtq_, vcmphiq_, vcmpleq_, vcmpltq_])
@@ -676,7 +719,8 @@ (define_insn "@mve_vcmp<mve_cmp_op>q_<mode>"
]
"TARGET_HAVE_MVE"
"vcmp.<mve_cmp_type>%#<V_sz_elem>\t<mve_cmp_op>, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmp<mve_cmp_op>q_<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -691,7 +735,8 @@ (define_insn "@mve_vcmp<mve_cmp_op>q_n_<mode>"
]
"TARGET_HAVE_MVE"
"vcmp.<mve_cmp_type>%#<V_sz_elem> <mve_cmp_op>, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmp<mve_cmp_op>q_n_<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -722,7 +767,8 @@ (define_insn "@mve_<mve_insn>q_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -739,7 +785,8 @@ (define_insn "@mve_<mve_insn>q_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.i%#<V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -754,7 +801,8 @@ (define_insn "@mve_<mve_insn>q_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -769,7 +817,8 @@ (define_insn "@mve_<mve_insn>q_p_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -789,8 +838,11 @@ (define_insn "mve_vandq_u<mode>"
"@
vand\t%q0, %q1, %q2
* return neon_output_logic_immediate (\"vand\", &operands[2], <MODE>mode, 1, VALID_NEON_QREG_MODE (<MODE>mode));"
- [(set_attr "type" "mve_move")
+ [(set_attr_alternative "mve_unpredicated_insn" [(symbol_ref "CODE_FOR_mve_vandq_u<mode>")
+ (symbol_ref "CODE_FOR_nothing")])
+ (set_attr "type" "mve_move")
])
+
(define_expand "mve_vandq_s<mode>"
[
(set (match_operand:MVE_2 0 "s_register_operand")
@@ -811,7 +863,8 @@ (define_insn "mve_vbicq_u<mode>"
]
"TARGET_HAVE_MVE"
"vbic\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vbicq_u<mode>"))
+ (set_attr "type" "mve_move")
])
(define_expand "mve_vbicq_s<mode>"
@@ -835,7 +888,8 @@ (define_insn "@mve_<mve_insn>q_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.%#<V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -853,7 +907,8 @@ (define_insn "@mve_<mve_insn>q<mve_rot>_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<isu>%#<V_sz_elem>\t%q0, %q1, %q2, #<rot>"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q<mve_rot>_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;; Auto vectorizer pattern for int vcadd
@@ -876,7 +931,8 @@ (define_insn "mve_veorq_u<mode>"
]
"TARGET_HAVE_MVE"
"veor\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_veorq_u<mode>"))
+ (set_attr "type" "mve_move")
])
(define_expand "mve_veorq_s<mode>"
[
@@ -904,7 +960,8 @@ (define_insn "@mve_<mve_insn>q_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -920,7 +977,8 @@ (define_insn "@mve_<mve_insn>q_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.s%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -935,7 +993,8 @@ (define_insn "mve_<max_min_su_str>q_<max_min_supf><mode>"
]
"TARGET_HAVE_MVE"
"<max_min_su_str>.<max_min_supf>%#<V_sz_elem>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<max_min_su_str>q_<max_min_supf><mode>"))
+ (set_attr "type" "mve_move")
])
@@ -954,7 +1013,8 @@ (define_insn "@mve_<mve_insn>q_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -972,7 +1032,8 @@ (define_insn "@mve_<mve_insn>q_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -988,7 +1049,8 @@ (define_insn "@mve_<mve_insn>q_int_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<isu>%#<V_sz_elem>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_int_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1004,7 +1066,8 @@ (define_insn "mve_<mve_addsubmul>q<mode>"
]
"TARGET_HAVE_MVE"
"<mve_addsubmul>.i%#<V_sz_elem>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_addsubmul>q<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1018,7 +1081,8 @@ (define_insn "mve_vornq_s<mode>"
]
"TARGET_HAVE_MVE"
"vorn\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vornq_s<mode>"))
+ (set_attr "type" "mve_move")
])
(define_expand "mve_vornq_u<mode>"
@@ -1047,7 +1111,8 @@ (define_insn "mve_vorrq_s<mode>"
"@
vorr\t%q0, %q1, %q2
* return neon_output_logic_immediate (\"vorr\", &operands[2], <MODE>mode, 0, VALID_NEON_QREG_MODE (<MODE>mode));"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vorrq_s<mode>"))
+ (set_attr "type" "mve_move")
])
(define_expand "mve_vorrq_u<mode>"
[
@@ -1071,7 +1136,8 @@ (define_insn "@mve_<mve_insn>q_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1087,7 +1153,8 @@ (define_insn "@mve_<mve_insn>q_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1103,7 +1170,8 @@ (define_insn "@mve_<mve_insn>q_r_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_r_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1118,7 +1186,8 @@ (define_insn "@mve_<mve_insn>q_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1133,7 +1202,8 @@ (define_insn "@mve_<mve_insn>q_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.f%#<V_sz_elem>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1148,7 +1218,8 @@ (define_insn "@mve_<mve_insn>q_<supf>v4si"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>32\t%Q0, %R0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf>v4si"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1165,7 +1236,8 @@ (define_insn "@mve_<mve_insn>q_n_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.f%#<V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1179,7 +1251,8 @@ (define_insn "mve_vandq_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vand\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vandq_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1193,7 +1266,8 @@ (define_insn "mve_vbicq_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vbic\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vbicq_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1209,7 +1283,8 @@ (define_insn "@mve_<mve_insn>q<mve_rot>_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.f%#<V_sz_elem>\t%q0, %q1, %q2, #<rot>"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q<mve_rot>_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1223,7 +1298,8 @@ (define_insn "@mve_vcmp<mve_cmp_op>q_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcmp.f%#<V_sz_elem> <mve_cmp_op>, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmp<mve_cmp_op>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1238,7 +1314,8 @@ (define_insn "@mve_vcmp<mve_cmp_op>q_n_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcmp.f%#<V_sz_elem> <mve_cmp_op>, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmp<mve_cmp_op>q_n_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1253,8 +1330,10 @@ (define_insn "mve_vctp<MVE_vctp>q_m<MVE_vpred>"
]
"TARGET_HAVE_MVE"
"vpst\;vctpt.<MVE_vctp>\t%1"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vctp<MVE_vctp>q<MVE_vpred>"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")
+])
;;
;; [vcvtbq_f16_f32])
@@ -1268,7 +1347,8 @@ (define_insn "mve_vcvtbq_f16_f32v8hf"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvtb.f16.f32\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtbq_f16_f32v8hf"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1283,7 +1363,8 @@ (define_insn "mve_vcvttq_f16_f32v8hf"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvtt.f16.f32\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvttq_f16_f32v8hf"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1297,7 +1378,8 @@ (define_insn "mve_veorq_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"veor\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_veorq_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1313,7 +1395,8 @@ (define_insn "@mve_<mve_insn>q_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.f%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1331,7 +1414,8 @@ (define_insn "@mve_<mve_insn>q_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.f%#<V_sz_elem>\t%0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1346,7 +1430,8 @@ (define_insn "@mve_<max_min_f_str>q_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<max_min_f_str>.f%#<V_sz_elem> %q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<max_min_f_str>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1364,7 +1449,8 @@ (define_insn "@mve_<mve_insn>q_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%Q0, %R0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1384,7 +1470,8 @@ (define_insn "@mve_<mve_insn>q_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<isu>%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1400,7 +1487,8 @@ (define_insn "mve_<mve_addsubmul>q_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_addsubmul>.f%#<V_sz_elem>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_addsubmul>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1414,7 +1502,8 @@ (define_insn "mve_vornq_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vorn\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vornq_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1428,7 +1517,8 @@ (define_insn "mve_vorrq_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vorr\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vorrq_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1444,7 +1534,8 @@ (define_insn "@mve_<mve_insn>q_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.i%#<V_sz_elem> %q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1460,7 +1551,8 @@ (define_insn "@mve_<mve_insn>q_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.s%#<V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1476,7 +1568,8 @@ (define_insn "@mve_<mve_insn>q_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.s%#<V_sz_elem>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1494,7 +1587,8 @@ (define_insn "@mve_<mve_insn>q_<supf>v4si"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>32\t%Q0, %R0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf>v4si"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1510,7 +1604,8 @@ (define_insn "@mve_<mve_insn>q_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1526,7 +1621,8 @@ (define_insn "@mve_<mve_insn>q_poly_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_poly_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1547,8 +1643,9 @@ (define_insn "@mve_vcmp<mve_cmp_op1>q_m_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmpt.f%#<V_sz_elem>\t<mve_cmp_op1>, %q1, %q2"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmp<mve_cmp_op1>q_f<mode>"))
+ (set_attr "length""8")])
+
;;
;; [vcvtaq_m_u, vcvtaq_m_s])
;;
@@ -1562,8 +1659,10 @@ (define_insn "mve_vcvtaq_m_<supf><mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtat.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtaq_<supf><mode>"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")])
+
;;
;; [vcvtq_m_to_f_s, vcvtq_m_to_f_u])
;;
@@ -1577,8 +1676,9 @@ (define_insn "mve_vcvtq_m_to_f_<supf><mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtt.f%#<V_sz_elem>.<supf>%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtq_to_f_<supf><mode>"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")])
;;
;; [vqrshrnbq_n_u, vqrshrnbq_n_s]
@@ -1604,7 +1704,8 @@ (define_insn "@mve_<mve_insn>q_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<isu>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1623,7 +1724,8 @@ (define_insn "@mve_<mve_insn>q_<supf>v4si"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>32\t%Q0, %R0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf>v4si"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1639,7 +1741,8 @@ (define_insn "@mve_<mve_insn>q_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1685,7 +1788,10 @@ (define_insn "mve_vshlcq_<supf><mode>"
(match_dup 4)]
VSHLCQ))]
"TARGET_HAVE_MVE"
- "vshlc\t%q0, %1, %4")
+ "vshlc\t%q0, %1, %4"
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vshlcq_<supf><mode>"))
+ (set_attr "type" "mve_move")
+])
;;
;; [vabsq_m_s]
@@ -1705,7 +1811,8 @@ (define_insn "@mve_<mve_insn>q_m_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<isu>%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1721,7 +1828,8 @@ (define_insn "@mve_<mve_insn>q_p_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1744,7 +1852,8 @@ (define_insn "@mve_vcmp<mve_cmp_op1>q_m_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;vcmpt.<isu>%#<V_sz_elem>\t<mve_cmp_op1>, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmp<mve_cmp_op1>q_n_<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1767,7 +1876,8 @@ (define_insn "@mve_vcmp<mve_cmp_op1>q_m_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;vcmpt.<isu>%#<V_sz_elem>\t<mve_cmp_op1>, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmp<mve_cmp_op1>q_<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1783,7 +1893,8 @@ (define_insn "@mve_<mve_insn>q_m_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.%#<V_sz_elem>\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1800,7 +1911,8 @@ (define_insn "@mve_<mve_insn>q_m_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.s%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1819,7 +1931,8 @@ (define_insn "@mve_<mve_insn>q_p_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1838,7 +1951,8 @@ (define_insn "@mve_<mve_insn>q_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1857,7 +1971,8 @@ (define_insn "@mve_<mve_insn>q_p_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1878,7 +1993,8 @@ (define_insn "@mve_<mve_insn>q_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1894,7 +2010,8 @@ (define_insn "@mve_<mve_insn>q_m_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1910,7 +2027,8 @@ (define_insn "@mve_<mve_insn>q_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1933,7 +2051,8 @@ (define_insn "@mve_<mve_insn>q_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.s%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1950,7 +2069,8 @@ (define_insn "@mve_<mve_insn>q_m_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1967,7 +2087,8 @@ (define_insn "@mve_<mve_insn>q_m_r_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_r_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1983,7 +2104,8 @@ (define_insn "@mve_<mve_insn>q_m_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1999,7 +2121,8 @@ (define_insn "@mve_<mve_insn>q_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -2015,7 +2138,8 @@ (define_insn "@mve_<mve_insn>q_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -2038,7 +2162,8 @@ (define_insn "@mve_<mve_insn>q_m_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_mnemo>t.f%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2054,7 +2179,8 @@ (define_insn "@mve_<mve_insn>q_p_<supf>v4si"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>32\t%Q0, %R0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
;; [vcmlaq, vcmlaq_rot90, vcmlaq_rot180, vcmlaq_rot270])
@@ -2072,7 +2198,9 @@ (define_insn "@mve_<mve_insn>q<mve_rot>_f<mode>"
"@
vcmul.f%#<V_sz_elem> %q0, %q2, %q3, #<rot>
vcmla.f%#<V_sz_elem> %q0, %q2, %q3, #<rot>"
- [(set_attr "type" "mve_move")
+ [(set_attr_alternative "mve_unpredicated_insn" [(symbol_ref "CODE_FOR_mve_<mve_insn>q<mve_rot>_f<mode>")
+ (symbol_ref "CODE_FOR_mve_<mve_insn>q<mve_rot>_f<mode>")])
+ (set_attr "type" "mve_move")
])
;;
@@ -2093,7 +2221,8 @@ (define_insn "@mve_vcmp<mve_cmp_op1>q_m_n_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmpt.f%#<V_sz_elem>\t<mve_cmp_op1>, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmp<mve_cmp_op1>q_n_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2109,7 +2238,8 @@ (define_insn "mve_vcvtbq_m_f16_f32v8hf"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtbt.f16.f32\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtbq_f16_f32v8hf"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2125,7 +2255,8 @@ (define_insn "mve_vcvtbq_m_f32_f16v4sf"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtbt.f32.f16\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtbq_f32_f16v4sf"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2141,7 +2272,8 @@ (define_insn "mve_vcvttq_m_f16_f32v8hf"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvttt.f16.f32\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvttq_f16_f32v8hf"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2157,8 +2289,9 @@ (define_insn "mve_vcvttq_m_f32_f16v4sf"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvttt.f32.f16\t%q0, %q2"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvttq_f32_f16v4sf"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")])
;;
;; [vdupq_m_n_f])
@@ -2173,7 +2306,8 @@ (define_insn "@mve_<mve_insn>q_m_n_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t.%#<V_sz_elem>\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2190,7 +2324,8 @@ (define_insn "@mve_<mve_insn>q_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.f%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -2207,7 +2342,8 @@ (define_insn "@mve_<mve_insn>q_n_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.f%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -2224,7 +2360,8 @@ (define_insn "@mve_<mve_insn>q_m_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t.f%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2243,7 +2380,8 @@ (define_insn "@mve_<mve_insn>q_p_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t.f%#<V_sz_elem>\t%0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2262,7 +2400,8 @@ (define_insn "@mve_<mve_insn>q_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%Q0, %R0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -2281,7 +2420,8 @@ (define_insn "@mve_<mve_insn>q_p_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%Q0, %R0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2298,7 +2438,8 @@ (define_insn "@mve_<mve_insn>q_m_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2319,7 +2460,8 @@ (define_insn "@mve_<mve_insn>q_m_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<isu>%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2335,7 +2477,8 @@ (define_insn "@mve_<mve_insn>q_m_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.i%#<V_sz_elem>\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2352,7 +2495,8 @@ (define_insn "@mve_<mve_insn>q_m_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.i%#<V_sz_elem>\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2368,7 +2512,8 @@ (define_insn "@mve_<mve_insn>q_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -2384,7 +2529,8 @@ (define_insn "@mve_<mve_insn>q_m_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t.<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2400,7 +2546,8 @@ (define_insn "@mve_<mve_insn>q_m_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2416,7 +2563,8 @@ (define_insn "@mve_<mve_insn>q_m_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t.%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2435,7 +2583,8 @@ (define_insn "@mve_<mve_insn>q_p_<supf>v4si"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>32\t%Q0, %R0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2451,7 +2600,8 @@ (define_insn "mve_vcvtmq_m_<supf><mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtmt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtmq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2467,7 +2617,8 @@ (define_insn "mve_vcvtpq_m_<supf><mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtpt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtpq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2483,7 +2634,8 @@ (define_insn "mve_vcvtnq_m_<supf><mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtnt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtnq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2500,7 +2652,8 @@ (define_insn "mve_vcvtq_m_n_from_f_<supf><mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtq_n_from_f_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2516,7 +2669,8 @@ (define_insn "@mve_<mve_insn>q_m_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2532,8 +2686,9 @@ (define_insn "mve_vcvtq_m_from_f_<supf><mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtq_from_f_<supf><mode>"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")])
;;
;; [vabavq_p_s, vabavq_p_u])
@@ -2549,7 +2704,8 @@ (define_insn "@mve_<mve_insn>q_p_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length" "8")])
;;
@@ -2566,8 +2722,9 @@ (define_insn "@mve_<mve_insn>q_m_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\n\t<mve_insn>t.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
- (set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
+ (set_attr "length" "8")])
;;
;; [vsriq_m_n_s, vsriq_m_n_u])
@@ -2583,8 +2740,9 @@ (define_insn "@mve_<mve_insn>q_m_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
- (set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
+ (set_attr "length" "8")])
;;
;; [vcvtq_m_n_to_f_u, vcvtq_m_n_to_f_s])
@@ -2600,7 +2758,8 @@ (define_insn "mve_vcvtq_m_n_to_f_<supf><mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtt.f%#<V_sz_elem>.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtq_n_to_f_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2640,7 +2799,8 @@ (define_insn "@mve_<mve_insn>q_m_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2659,8 +2819,9 @@ (define_insn "@mve_<mve_insn>q_m_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.i%#<V_sz_elem> %q0, %q2, %3"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")])
;;
;; [vaddq_m_u, vaddq_m_s]
@@ -2678,7 +2839,8 @@ (define_insn "@mve_<mve_insn>q_m_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.i%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2698,7 +2860,8 @@ (define_insn "@mve_<mve_insn>q_m_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2715,8 +2878,9 @@ (define_insn "@mve_<mve_insn>q_m_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")])
;;
;; [vcaddq_rot90_m_u, vcaddq_rot90_m_s]
@@ -2735,7 +2899,8 @@ (define_insn "@mve_<mve_insn>q<mve_rot>_m_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<isu>%#<V_sz_elem>\t%q0, %q2, %q3, #<rot>"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q<mve_rot>_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2763,7 +2928,8 @@ (define_insn "@mve_<mve_insn>q_m_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2784,7 +2950,8 @@ (define_insn "@mve_<mve_insn>q_p_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2802,7 +2969,8 @@ (define_insn "@mve_<mve_insn>q_int_m_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_int_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2819,7 +2987,8 @@ (define_insn "mve_vornq_m_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;vornt\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vornq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2837,7 +3006,8 @@ (define_insn "@mve_<mve_insn>q_m_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2855,7 +3025,8 @@ (define_insn "@mve_<mve_insn>q_m_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2872,7 +3043,8 @@ (define_insn "@mve_<mve_insn>q_m_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2892,7 +3064,8 @@ (define_insn "@mve_<mve_insn>q_p_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%Q0, %R0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2920,7 +3093,8 @@ (define_insn "@mve_<mve_insn>q_m_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<isu>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2940,7 +3114,8 @@ (define_insn "@mve_<mve_insn>q_p_<supf>v4si"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>32\t%Q0, %R0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2958,7 +3133,8 @@ (define_insn "@mve_<mve_insn>q_m_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2976,7 +3152,8 @@ (define_insn "@mve_<mve_insn>q_poly_m_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_poly_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2994,7 +3171,8 @@ (define_insn "@mve_<mve_insn>q_m_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.s%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3012,7 +3190,8 @@ (define_insn "@mve_<mve_insn>q_m_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.s%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3036,7 +3215,8 @@ (define_insn "@mve_<mve_insn>q_m_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t.f%#<V_sz_elem> %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3057,7 +3237,8 @@ (define_insn "@mve_<mve_insn>q_m_n_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t.f%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3077,7 +3258,8 @@ (define_insn "@mve_<mve_insn>q_m_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3094,7 +3276,8 @@ (define_insn "@mve_<mve_insn>q_m_n_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t.%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3116,7 +3299,8 @@ (define_insn "@mve_<mve_insn>q<mve_rot>_m_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t.f%#<V_sz_elem>\t%q0, %q2, %q3, #<rot>"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q<mve_rot>_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3136,7 +3320,8 @@ (define_insn "@mve_<mve_insn>q<mve_rot>_m_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t.f%#<V_sz_elem>\t%q0, %q2, %q3, #<rot>"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q<mve_rot>_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3153,7 +3338,8 @@ (define_insn "mve_vornq_m_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vornt\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vornq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3173,7 +3359,8 @@ (define_insn "mve_vstrbq_<supf><mode>"
output_asm_insn("vstrb.<V_sz_elem>\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrbq_<supf><mode>"))
+ (set_attr "length" "4")])
;;
;; [vstrbq_scatter_offset_s vstrbq_scatter_offset_u]
@@ -3201,7 +3388,8 @@ (define_insn "mve_vstrbq_scatter_offset_<supf><mode>_insn"
VSTRBSOQ))]
"TARGET_HAVE_MVE"
"vstrb.<V_sz_elem>\t%q2, [%0, %q1]"
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrbq_scatter_offset_<supf><mode>_insn"))
+ (set_attr "length" "4")])
;;
;; [vstrwq_scatter_base_s vstrwq_scatter_base_u]
@@ -3223,7 +3411,8 @@ (define_insn "mve_vstrwq_scatter_base_<supf>v4si"
output_asm_insn("vstrw.u32\t%q2, [%q0, %1]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_<supf>v4si"))
+ (set_attr "length" "4")])
;;
;; [vldrbq_gather_offset_s vldrbq_gather_offset_u]
@@ -3246,7 +3435,8 @@ (define_insn "mve_vldrbq_gather_offset_<supf><mode>"
output_asm_insn ("vldrb.<supf><V_sz_elem>\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrbq_gather_offset_<supf><mode>"))
+ (set_attr "length" "4")])
;;
;; [vldrbq_s vldrbq_u]
@@ -3268,7 +3458,8 @@ (define_insn "mve_vldrbq_<supf><mode>"
output_asm_insn ("vldrb.<supf><V_sz_elem>\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrbq_<supf><mode>"))
+ (set_attr "length" "4")])
;;
;; [vldrwq_gather_base_s vldrwq_gather_base_u]
@@ -3288,7 +3479,8 @@ (define_insn "mve_vldrwq_gather_base_<supf>v4si"
output_asm_insn ("vldrw.u32\t%q0, [%q1, %2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_<supf>v4si"))
+ (set_attr "length" "4")])
;;
;; [vstrbq_scatter_offset_p_s vstrbq_scatter_offset_p_u]
@@ -3320,7 +3512,8 @@ (define_insn "mve_vstrbq_scatter_offset_p_<supf><mode>_insn"
VSTRBSOQ))]
"TARGET_HAVE_MVE"
"vpst\;vstrbt.<V_sz_elem>\t%q2, [%0, %q1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrbq_scatter_offset_<supf><mode>_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_base_p_s vstrwq_scatter_base_p_u]
@@ -3343,7 +3536,8 @@ (define_insn "mve_vstrwq_scatter_base_p_<supf>v4si"
output_asm_insn ("vpst\n\tvstrwt.u32\t%q2, [%q0, %1]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_<supf>v4si"))
+ (set_attr "length" "8")])
(define_insn "mve_vstrbq_p_<supf><mode>"
[(set (match_operand:<MVE_B_ELEM> 0 "mve_memory_operand" "=Ux")
@@ -3361,7 +3555,8 @@ (define_insn "mve_vstrbq_p_<supf><mode>"
output_asm_insn ("vpst\;vstrbt.<V_sz_elem>\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrbq_<supf><mode>"))
+ (set_attr "length" "8")])
;;
;; [vldrbq_gather_offset_z_s vldrbq_gather_offset_z_u]
@@ -3386,7 +3581,8 @@ (define_insn "mve_vldrbq_gather_offset_z_<supf><mode>"
output_asm_insn ("vpst\n\tvldrbt.<supf><V_sz_elem>\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrbq_gather_offset_<supf><mode>"))
+ (set_attr "length" "8")])
;;
;; [vldrbq_z_s vldrbq_z_u]
@@ -3409,7 +3605,8 @@ (define_insn "mve_vldrbq_z_<supf><mode>"
output_asm_insn ("vpst\;vldrbt.<supf><V_sz_elem>\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrbq_<supf><mode>"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_gather_base_z_s vldrwq_gather_base_z_u]
@@ -3430,7 +3627,8 @@ (define_insn "mve_vldrwq_gather_base_z_<supf>v4si"
output_asm_insn ("vpst\n\tvldrwt.u32\t%q0, [%q1, %2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_<supf>v4si"))
+ (set_attr "length" "8")])
;;
;; [vldrhq_f]
@@ -3449,7 +3647,8 @@ (define_insn "mve_vldrhq_fv8hf"
output_asm_insn ("vldrh.16\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_fv8hf"))
+ (set_attr "length" "4")])
;;
;; [vldrhq_gather_offset_s vldrhq_gather_offset_u]
@@ -3472,7 +3671,8 @@ (define_insn "mve_vldrhq_gather_offset_<supf><mode>"
output_asm_insn ("vldrh.<supf><V_sz_elem>\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_offset_<supf><mode>"))
+ (set_attr "length" "4")])
;;
;; [vldrhq_gather_offset_z_s vldrhq_gather_offset_z_u]
@@ -3497,7 +3697,8 @@ (define_insn "mve_vldrhq_gather_offset_z_<supf><mode>"
output_asm_insn ("vpst\n\tvldrht.<supf><V_sz_elem>\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_offset_<supf><mode>"))
+ (set_attr "length" "8")])
;;
;; [vldrhq_gather_shifted_offset_s vldrhq_gather_shifted_offset_u]
@@ -3520,7 +3721,8 @@ (define_insn "mve_vldrhq_gather_shifted_offset_<supf><mode>"
output_asm_insn ("vldrh.<supf><V_sz_elem>\t%q0, [%m1, %q2, uxtw #1]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_shifted_offset_<supf><mode>"))
+ (set_attr "length" "4")])
;;
;; [vldrhq_gather_shifted_offset_z_s vldrhq_gather_shited_offset_z_u]
@@ -3545,7 +3747,8 @@ (define_insn "mve_vldrhq_gather_shifted_offset_z_<supf><mode>"
output_asm_insn ("vpst\n\tvldrht.<supf><V_sz_elem>\t%q0, [%m1, %q2, uxtw #1]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_shifted_offset_<supf><mode>"))
+ (set_attr "length" "8")])
;;
;; [vldrhq_s, vldrhq_u]
@@ -3567,7 +3770,8 @@ (define_insn "mve_vldrhq_<supf><mode>"
output_asm_insn ("vldrh.<supf><V_sz_elem>\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_<supf><mode>"))
+ (set_attr "length" "4")])
;;
;; [vldrhq_z_f]
@@ -3587,7 +3791,8 @@ (define_insn "mve_vldrhq_z_fv8hf"
output_asm_insn ("vpst\;vldrht.16\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_fv8hf"))
+ (set_attr "length" "8")])
;;
;; [vldrhq_z_s vldrhq_z_u]
@@ -3610,7 +3815,8 @@ (define_insn "mve_vldrhq_z_<supf><mode>"
output_asm_insn ("vpst\;vldrht.<supf><V_sz_elem>\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_<supf><mode>"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_f]
@@ -3629,7 +3835,8 @@ (define_insn "mve_vldrwq_fv4sf"
output_asm_insn ("vldrw.32\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_fv4sf"))
+ (set_attr "length" "4")])
;;
;; [vldrwq_s vldrwq_u]
@@ -3648,7 +3855,8 @@ (define_insn "mve_vldrwq_<supf>v4si"
output_asm_insn ("vldrw.32\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_<supf>v4si"))
+ (set_attr "length" "4")])
;;
;; [vldrwq_z_f]
@@ -3668,7 +3876,8 @@ (define_insn "mve_vldrwq_z_fv4sf"
output_asm_insn ("vpst\;vldrwt.32\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_fv4sf"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_z_s vldrwq_z_u]
@@ -3688,7 +3897,8 @@ (define_insn "mve_vldrwq_z_<supf>v4si"
output_asm_insn ("vpst\;vldrwt.32\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_<supf>v4si"))
+ (set_attr "length" "8")])
(define_expand "@mve_vld1q_f<mode>"
[(match_operand:MVE_0 0 "s_register_operand")
@@ -3728,7 +3938,8 @@ (define_insn "mve_vldrdq_gather_base_<supf>v2di"
output_asm_insn ("vldrd.64\t%q0, [%q1, %2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_base_<supf>v2di"))
+ (set_attr "length" "4")])
;;
;; [vldrdq_gather_base_z_s vldrdq_gather_base_z_u]
@@ -3749,7 +3960,8 @@ (define_insn "mve_vldrdq_gather_base_z_<supf>v2di"
output_asm_insn ("vpst\n\tvldrdt.u64\t%q0, [%q1, %2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_base_<supf>v2di"))
+ (set_attr "length" "8")])
;;
;; [vldrdq_gather_offset_s vldrdq_gather_offset_u]
@@ -3769,7 +3981,8 @@ (define_insn "mve_vldrdq_gather_offset_<supf>v2di"
output_asm_insn ("vldrd.u64\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_offset_<supf>v2di"))
+ (set_attr "length" "4")])
;;
;; [vldrdq_gather_offset_z_s vldrdq_gather_offset_z_u]
@@ -3790,7 +4003,8 @@ (define_insn "mve_vldrdq_gather_offset_z_<supf>v2di"
output_asm_insn ("vpst\n\tvldrdt.u64\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_offset_<supf>v2di"))
+ (set_attr "length" "8")])
;;
;; [vldrdq_gather_shifted_offset_s vldrdq_gather_shifted_offset_u]
@@ -3810,7 +4024,8 @@ (define_insn "mve_vldrdq_gather_shifted_offset_<supf>v2di"
output_asm_insn ("vldrd.u64\t%q0, [%m1, %q2, uxtw #3]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_shifted_offset_<supf>v2di"))
+ (set_attr "length" "4")])
;;
;; [vldrdq_gather_shifted_offset_z_s vldrdq_gather_shifted_offset_z_u]
@@ -3831,7 +4046,8 @@ (define_insn "mve_vldrdq_gather_shifted_offset_z_<supf>v2di"
output_asm_insn ("vpst\n\tvldrdt.u64\t%q0, [%m1, %q2, uxtw #3]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_shifted_offset_<supf>v2di"))
+ (set_attr "length" "8")])
;;
;; [vldrhq_gather_offset_f]
@@ -3851,7 +4067,8 @@ (define_insn "mve_vldrhq_gather_offset_fv8hf"
output_asm_insn ("vldrh.f16\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_offset_fv8hf"))
+ (set_attr "length" "4")])
;;
;; [vldrhq_gather_offset_z_f]
@@ -3873,7 +4090,8 @@ (define_insn "mve_vldrhq_gather_offset_z_fv8hf"
output_asm_insn ("vpst\n\tvldrht.f16\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_offset_fv8hf"))
+ (set_attr "length" "8")])
;;
;; [vldrhq_gather_shifted_offset_f]
@@ -3893,7 +4111,8 @@ (define_insn "mve_vldrhq_gather_shifted_offset_fv8hf"
output_asm_insn ("vldrh.f16\t%q0, [%m1, %q2, uxtw #1]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_shifted_offset_fv8hf"))
+ (set_attr "length" "4")])
;;
;; [vldrhq_gather_shifted_offset_z_f]
@@ -3915,7 +4134,8 @@ (define_insn "mve_vldrhq_gather_shifted_offset_z_fv8hf"
output_asm_insn ("vpst\n\tvldrht.f16\t%q0, [%m1, %q2, uxtw #1]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_shifted_offset_fv8hf"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_gather_base_f]
@@ -3935,7 +4155,8 @@ (define_insn "mve_vldrwq_gather_base_fv4sf"
output_asm_insn ("vldrw.u32\t%q0, [%q1, %2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_fv4sf"))
+ (set_attr "length" "4")])
;;
;; [vldrwq_gather_base_z_f]
@@ -3956,7 +4177,8 @@ (define_insn "mve_vldrwq_gather_base_z_fv4sf"
output_asm_insn ("vpst\n\tvldrwt.u32\t%q0, [%q1, %2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_fv4sf"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_gather_offset_f]
@@ -3976,7 +4198,8 @@ (define_insn "mve_vldrwq_gather_offset_fv4sf"
output_asm_insn ("vldrw.u32\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_offset_fv4sf"))
+ (set_attr "length" "4")])
;;
;; [vldrwq_gather_offset_s vldrwq_gather_offset_u]
@@ -3996,7 +4219,8 @@ (define_insn "mve_vldrwq_gather_offset_<supf>v4si"
output_asm_insn ("vldrw.u32\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_offset_<supf>v4si"))
+ (set_attr "length" "4")])
;;
;; [vldrwq_gather_offset_z_f]
@@ -4018,7 +4242,8 @@ (define_insn "mve_vldrwq_gather_offset_z_fv4sf"
output_asm_insn ("vpst\n\tvldrwt.u32\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_offset_fv4sf"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_gather_offset_z_s vldrwq_gather_offset_z_u]
@@ -4040,7 +4265,8 @@ (define_insn "mve_vldrwq_gather_offset_z_<supf>v4si"
output_asm_insn ("vpst\n\tvldrwt.u32\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_offset_<supf>v4si"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_gather_shifted_offset_f]
@@ -4060,7 +4286,8 @@ (define_insn "mve_vldrwq_gather_shifted_offset_fv4sf"
output_asm_insn ("vldrw.u32\t%q0, [%m1, %q2, uxtw #2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_shifted_offset_fv4sf"))
+ (set_attr "length" "4")])
;;
;; [vldrwq_gather_shifted_offset_s vldrwq_gather_shifted_offset_u]
@@ -4080,7 +4307,8 @@ (define_insn "mve_vldrwq_gather_shifted_offset_<supf>v4si"
output_asm_insn ("vldrw.u32\t%q0, [%m1, %q2, uxtw #2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_shifted_offset_<supf>v4si"))
+ (set_attr "length" "4")])
;;
;; [vldrwq_gather_shifted_offset_z_f]
@@ -4102,7 +4330,8 @@ (define_insn "mve_vldrwq_gather_shifted_offset_z_fv4sf"
output_asm_insn ("vpst\n\tvldrwt.u32\t%q0, [%m1, %q2, uxtw #2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_shifted_offset_fv4sf"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_gather_shifted_offset_z_s vldrwq_gather_shifted_offset_z_u]
@@ -4124,7 +4353,8 @@ (define_insn "mve_vldrwq_gather_shifted_offset_z_<supf>v4si"
output_asm_insn ("vpst\n\tvldrwt.u32\t%q0, [%m1, %q2, uxtw #2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_shifted_offset_<supf>v4si"))
+ (set_attr "length" "8")])
;;
;; [vstrhq_f]
@@ -4143,7 +4373,8 @@ (define_insn "mve_vstrhq_fv8hf"
output_asm_insn ("vstrh.16\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_fv8hf"))
+ (set_attr "length" "4")])
;;
;; [vstrhq_p_f]
@@ -4164,7 +4395,8 @@ (define_insn "mve_vstrhq_p_fv8hf"
output_asm_insn ("vpst\;vstrht.16\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_fv8hf"))
+ (set_attr "length" "8")])
;;
;; [vstrhq_p_s vstrhq_p_u]
@@ -4186,7 +4418,8 @@ (define_insn "mve_vstrhq_p_<supf><mode>"
output_asm_insn ("vpst\;vstrht.<V_sz_elem>\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_<supf><mode>"))
+ (set_attr "length" "8")])
;;
;; [vstrhq_scatter_offset_p_s vstrhq_scatter_offset_p_u]
@@ -4218,7 +4451,8 @@ (define_insn "mve_vstrhq_scatter_offset_p_<supf><mode>_insn"
VSTRHSOQ))]
"TARGET_HAVE_MVE"
"vpst\;vstrht.<V_sz_elem>\t%q2, [%0, %q1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_offset_<supf><mode>_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrhq_scatter_offset_s vstrhq_scatter_offset_u]
@@ -4246,7 +4480,8 @@ (define_insn "mve_vstrhq_scatter_offset_<supf><mode>_insn"
VSTRHSOQ))]
"TARGET_HAVE_MVE"
"vstrh.<V_sz_elem>\t%q2, [%0, %q1]"
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_offset_<supf><mode>_insn"))
+ (set_attr "length" "4")])
;;
;; [vstrhq_scatter_shifted_offset_p_s vstrhq_scatter_shifted_offset_p_u]
@@ -4278,7 +4513,8 @@ (define_insn "mve_vstrhq_scatter_shifted_offset_p_<supf><mode>_insn"
VSTRHSSOQ))]
"TARGET_HAVE_MVE"
"vpst\;vstrht.<V_sz_elem>\t%q2, [%0, %q1, uxtw #1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_shifted_offset_<supf><mode>_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrhq_scatter_shifted_offset_s vstrhq_scatter_shifted_offset_u]
@@ -4307,7 +4543,8 @@ (define_insn "mve_vstrhq_scatter_shifted_offset_<supf><mode>_insn"
VSTRHSSOQ))]
"TARGET_HAVE_MVE"
"vstrh.<V_sz_elem>\t%q2, [%0, %q1, uxtw #1]"
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_shifted_offset_<supf><mode>_insn"))
+ (set_attr "length" "4")])
;;
;; [vstrhq_s, vstrhq_u]
@@ -4326,7 +4563,8 @@ (define_insn "mve_vstrhq_<supf><mode>"
output_asm_insn ("vstrh.<V_sz_elem>\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_<supf><mode>"))
+ (set_attr "length" "4")])
;;
;; [vstrwq_f]
@@ -4345,7 +4583,8 @@ (define_insn "mve_vstrwq_fv4sf"
output_asm_insn ("vstrw.32\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_fv4sf"))
+ (set_attr "length" "4")])
;;
;; [vstrwq_p_f]
@@ -4366,7 +4605,8 @@ (define_insn "mve_vstrwq_p_fv4sf"
output_asm_insn ("vpst\;vstrwt.32\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_fv4sf"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_p_s vstrwq_p_u]
@@ -4387,7 +4627,8 @@ (define_insn "mve_vstrwq_p_<supf>v4si"
output_asm_insn ("vpst\;vstrwt.32\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_<supf>v4si"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_s vstrwq_u]
@@ -4406,7 +4647,8 @@ (define_insn "mve_vstrwq_<supf>v4si"
output_asm_insn ("vstrw.32\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_<supf>v4si"))
+ (set_attr "length" "4")])
(define_expand "@mve_vst1q_f<mode>"
[(match_operand:<MVE_CNVT> 0 "mve_memory_operand")
@@ -4449,7 +4691,8 @@ (define_insn "mve_vstrdq_scatter_base_p_<supf>v2di"
output_asm_insn ("vpst\;\tvstrdt.u64\t%q2, [%q0, %1]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_base_<supf>v2di"))
+ (set_attr "length" "8")])
;;
;; [vstrdq_scatter_base_s vstrdq_scatter_base_u]
@@ -4471,7 +4714,8 @@ (define_insn "mve_vstrdq_scatter_base_<supf>v2di"
output_asm_insn ("vstrd.u64\t%q2, [%q0, %1]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_base_<supf>v2di"))
+ (set_attr "length" "4")])
;;
;; [vstrdq_scatter_offset_p_s vstrdq_scatter_offset_p_u]
@@ -4502,7 +4746,8 @@ (define_insn "mve_vstrdq_scatter_offset_p_<supf>v2di_insn"
VSTRDSOQ))]
"TARGET_HAVE_MVE"
"vpst\;vstrdt.64\t%q2, [%0, %q1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_offset_<supf>v2di_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrdq_scatter_offset_s vstrdq_scatter_offset_u]
@@ -4530,7 +4775,8 @@ (define_insn "mve_vstrdq_scatter_offset_<supf>v2di_insn"
VSTRDSOQ))]
"TARGET_HAVE_MVE"
"vstrd.64\t%q2, [%0, %q1]"
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_offset_<supf>v2di_insn"))
+ (set_attr "length" "4")])
;;
;; [vstrdq_scatter_shifted_offset_p_s vstrdq_scatter_shifted_offset_p_u]
@@ -4562,7 +4808,8 @@ (define_insn "mve_vstrdq_scatter_shifted_offset_p_<supf>v2di_insn"
VSTRDSSOQ))]
"TARGET_HAVE_MVE"
"vpst\;vstrdt.64\t%q2, [%0, %q1, uxtw #3]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_shifted_offset_<supf>v2di_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrdq_scatter_shifted_offset_s vstrdq_scatter_shifted_offset_u]
@@ -4591,7 +4838,8 @@ (define_insn "mve_vstrdq_scatter_shifted_offset_<supf>v2di_insn"
VSTRDSSOQ))]
"TARGET_HAVE_MVE"
"vstrd.64\t%q2, [%0, %q1, uxtw #3]"
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_shifted_offset_<supf>v2di_insn"))
+ (set_attr "length" "4")])
;;
;; [vstrhq_scatter_offset_f]
@@ -4619,7 +4867,8 @@ (define_insn "mve_vstrhq_scatter_offset_fv8hf_insn"
VSTRHQSO_F))]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vstrh.16\t%q2, [%0, %q1]"
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_offset_fv8hf_insn"))
+ (set_attr "length" "4")])
;;
;; [vstrhq_scatter_offset_p_f]
@@ -4650,7 +4899,8 @@ (define_insn "mve_vstrhq_scatter_offset_p_fv8hf_insn"
VSTRHQSO_F))]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vstrht.16\t%q2, [%0, %q1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_offset_fv8hf_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrhq_scatter_shifted_offset_f]
@@ -4678,7 +4928,8 @@ (define_insn "mve_vstrhq_scatter_shifted_offset_fv8hf_insn"
VSTRHQSSO_F))]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vstrh.16\t%q2, [%0, %q1, uxtw #1]"
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_shifted_offset_fv8hf_insn"))
+ (set_attr "length" "4")])
;;
;; [vstrhq_scatter_shifted_offset_p_f]
@@ -4710,7 +4961,8 @@ (define_insn "mve_vstrhq_scatter_shifted_offset_p_fv8hf_insn"
VSTRHQSSO_F))]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vstrht.16\t%q2, [%0, %q1, uxtw #1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_shifted_offset_fv8hf_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_base_f]
@@ -4732,7 +4984,8 @@ (define_insn "mve_vstrwq_scatter_base_fv4sf"
output_asm_insn ("vstrw.u32\t%q2, [%q0, %1]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_fv4sf"))
+ (set_attr "length" "4")])
;;
;; [vstrwq_scatter_base_p_f]
@@ -4755,7 +5008,8 @@ (define_insn "mve_vstrwq_scatter_base_p_fv4sf"
output_asm_insn ("vpst\n\tvstrwt.u32\t%q2, [%q0, %1]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_fv4sf"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_offset_f]
@@ -4783,7 +5037,8 @@ (define_insn "mve_vstrwq_scatter_offset_fv4sf_insn"
VSTRWQSO_F))]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vstrw.32\t%q2, [%0, %q1]"
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_offset_fv4sf_insn"))
+ (set_attr "length" "4")])
;;
;; [vstrwq_scatter_offset_p_f]
@@ -4814,7 +5069,8 @@ (define_insn "mve_vstrwq_scatter_offset_p_fv4sf_insn"
VSTRWQSO_F))]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vstrwt.32\t%q2, [%0, %q1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_offset_fv4sf_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_offset_s vstrwq_scatter_offset_u]
@@ -4845,7 +5101,8 @@ (define_insn "mve_vstrwq_scatter_offset_p_<supf>v4si_insn"
VSTRWSOQ))]
"TARGET_HAVE_MVE"
"vpst\;vstrwt.32\t%q2, [%0, %q1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_offset_<supf>v4si_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_offset_s vstrwq_scatter_offset_u]
@@ -4873,7 +5130,8 @@ (define_insn "mve_vstrwq_scatter_offset_<supf>v4si_insn"
VSTRWSOQ))]
"TARGET_HAVE_MVE"
"vstrw.32\t%q2, [%0, %q1]"
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_offset_<supf>v4si_insn"))
+ (set_attr "length" "4")])
;;
;; [vstrwq_scatter_shifted_offset_f]
@@ -4901,7 +5159,8 @@ (define_insn "mve_vstrwq_scatter_shifted_offset_fv4sf_insn"
VSTRWQSSO_F))]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vstrw.32\t%q2, [%0, %q1, uxtw #2]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_shifted_offset_fv4sf_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_shifted_offset_p_f]
@@ -4933,7 +5192,8 @@ (define_insn "mve_vstrwq_scatter_shifted_offset_p_fv4sf_insn"
VSTRWQSSO_F))]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vstrwt.32\t%q2, [%0, %q1, uxtw #2]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_shifted_offset_fv4sf_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_shifted_offset_p_s vstrwq_scatter_shifted_offset_p_u]
@@ -4965,7 +5225,8 @@ (define_insn "mve_vstrwq_scatter_shifted_offset_p_<supf>v4si_insn"
VSTRWSSOQ))]
"TARGET_HAVE_MVE"
"vpst\;vstrwt.32\t%q2, [%0, %q1, uxtw #2]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_shifted_offset_<supf>v4si_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_shifted_offset_s vstrwq_scatter_shifted_offset_u]
@@ -4994,7 +5255,8 @@ (define_insn "mve_vstrwq_scatter_shifted_offset_<supf>v4si_insn"
VSTRWSSOQ))]
"TARGET_HAVE_MVE"
"vstrw.32\t%q2, [%0, %q1, uxtw #2]"
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_shifted_offset_<supf>v4si_insn"))
+ (set_attr "length" "4")])
;;
;; [vidupq_n_u])
@@ -5062,7 +5324,8 @@ (define_insn "mve_vidupq_m_wb_u<mode>_insn"
(match_operand:SI 6 "immediate_operand" "i")))]
"TARGET_HAVE_MVE"
"vpst\;\tvidupt.u%#<V_sz_elem>\t%q0, %2, %4"
- [(set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vidupq_u<mode>_insn"))
+ (set_attr "length""8")])
;;
;; [vddupq_n_u])
@@ -5130,7 +5393,8 @@ (define_insn "mve_vddupq_m_wb_u<mode>_insn"
(match_operand:SI 6 "immediate_operand" "i")))]
"TARGET_HAVE_MVE"
"vpst\;vddupt.u%#<V_sz_elem>\t%q0, %2, %4"
- [(set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vddupq_u<mode>_insn"))
+ (set_attr "length""8")])
;;
;; [vdwdupq_n_u])
@@ -5246,8 +5510,9 @@ (define_insn "mve_vdwdupq_m_wb_u<mode>_insn"
]
"TARGET_HAVE_MVE"
"vpst\;vdwdupt.u%#<V_sz_elem>\t%q2, %3, %R4, %5"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vdwdupq_wb_u<mode>_insn"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")])
;;
;; [viwdupq_n_u])
@@ -5363,7 +5628,8 @@ (define_insn "mve_viwdupq_m_wb_u<mode>_insn"
]
"TARGET_HAVE_MVE"
"vpst\;\tviwdupt.u%#<V_sz_elem>\t%q2, %3, %R4, %5"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_viwdupq_wb_u<mode>_insn"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5389,7 +5655,8 @@ (define_insn "mve_vstrwq_scatter_base_wb_<supf>v4si"
output_asm_insn ("vstrw.u32\t%q2, [%q0, %1]!",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_wb_<supf>v4si"))
+ (set_attr "length" "4")])
;;
;; [vstrwq_scatter_base_wb_p_s vstrwq_scatter_base_wb_p_u]
@@ -5415,7 +5682,8 @@ (define_insn "mve_vstrwq_scatter_base_wb_p_<supf>v4si"
output_asm_insn ("vpst\;\tvstrwt.u32\t%q2, [%q0, %1]!",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_wb_<supf>v4si"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_base_wb_f]
@@ -5440,7 +5708,8 @@ (define_insn "mve_vstrwq_scatter_base_wb_fv4sf"
output_asm_insn ("vstrw.u32\t%q2, [%q0, %1]!",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_wb_fv4sf"))
+ (set_attr "length" "4")])
;;
;; [vstrwq_scatter_base_wb_p_f]
@@ -5466,7 +5735,8 @@ (define_insn "mve_vstrwq_scatter_base_wb_p_fv4sf"
output_asm_insn ("vpst\;vstrwt.u32\t%q2, [%q0, %1]!",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_wb_fv4sf"))
+ (set_attr "length" "8")])
;;
;; [vstrdq_scatter_base_wb_s vstrdq_scatter_base_wb_u]
@@ -5491,7 +5761,8 @@ (define_insn "mve_vstrdq_scatter_base_wb_<supf>v2di"
output_asm_insn ("vstrd.u64\t%q2, [%q0, %1]!",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_base_wb_<supf>v2di"))
+ (set_attr "length" "4")])
;;
;; [vstrdq_scatter_base_wb_p_s vstrdq_scatter_base_wb_p_u]
@@ -5517,7 +5788,8 @@ (define_insn "mve_vstrdq_scatter_base_wb_p_<supf>v2di"
output_asm_insn ("vpst\;vstrdt.u64\t%q2, [%q0, %1]!",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_base_wb_<supf>v2di"))
+ (set_attr "length" "8")])
(define_expand "mve_vldrwq_gather_base_wb_<supf>v4si"
[(match_operand:V4SI 0 "s_register_operand")
@@ -5569,7 +5841,8 @@ (define_insn "mve_vldrwq_gather_base_wb_<supf>v4si_insn"
output_asm_insn ("vldrw.u32\t%q0, [%q1, %2]!",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_wb_<supf>v4si_insn"))
+ (set_attr "length" "4")])
(define_expand "mve_vldrwq_gather_base_wb_z_<supf>v4si"
[(match_operand:V4SI 0 "s_register_operand")
@@ -5625,7 +5898,8 @@ (define_insn "mve_vldrwq_gather_base_wb_z_<supf>v4si_insn"
output_asm_insn ("vpst\;vldrwt.u32\t%q0, [%q1, %2]!",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_wb_<supf>v4si_insn"))
+ (set_attr "length" "8")])
(define_expand "mve_vldrwq_gather_base_wb_fv4sf"
[(match_operand:V4SI 0 "s_register_operand")
@@ -5677,7 +5951,8 @@ (define_insn "mve_vldrwq_gather_base_wb_fv4sf_insn"
output_asm_insn ("vldrw.u32\t%q0, [%q1, %2]!",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_wb_fv4sf_insn"))
+ (set_attr "length" "4")])
(define_expand "mve_vldrwq_gather_base_wb_z_fv4sf"
[(match_operand:V4SI 0 "s_register_operand")
@@ -5734,7 +6009,8 @@ (define_insn "mve_vldrwq_gather_base_wb_z_fv4sf_insn"
output_asm_insn ("vpst\;vldrwt.u32\t%q0, [%q1, %2]!",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_wb_fv4sf_insn"))
+ (set_attr "length" "8")])
(define_expand "mve_vldrdq_gather_base_wb_<supf>v2di"
[(match_operand:V2DI 0 "s_register_operand")
@@ -5787,7 +6063,8 @@ (define_insn "mve_vldrdq_gather_base_wb_<supf>v2di_insn"
output_asm_insn ("vldrd.64\t%q0, [%q1, %2]!",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_base_wb_<supf>v2di_insn"))
+ (set_attr "length" "4")])
(define_expand "mve_vldrdq_gather_base_wb_z_<supf>v2di"
[(match_operand:V2DI 0 "s_register_operand")
@@ -5826,7 +6103,7 @@ (define_insn "get_fpscr_nzcvqc"
(unspec_volatile:SI [(reg:SI VFPCC_REGNUM)] UNSPEC_GET_FPSCR_NZCVQC))]
"TARGET_HAVE_MVE"
"vmrs\\t%0, FPSCR_nzcvqc"
- [(set_attr "type" "mve_move")])
+ [(set_attr "type" "mve_move")])
(define_insn "set_fpscr_nzcvqc"
[(set (reg:SI VFPCC_REGNUM)
@@ -5834,7 +6111,7 @@ (define_insn "set_fpscr_nzcvqc"
VUNSPEC_SET_FPSCR_NZCVQC))]
"TARGET_HAVE_MVE"
"vmsr\\tFPSCR_nzcvqc, %0"
- [(set_attr "type" "mve_move")])
+ [(set_attr "type" "mve_move")])
;;
;; [vldrdq_gather_base_wb_z_s vldrdq_gather_base_wb_z_u]
@@ -5859,7 +6136,8 @@ (define_insn "mve_vldrdq_gather_base_wb_z_<supf>v2di_insn"
output_asm_insn ("vpst\;vldrdt.u64\t%q0, [%q1, %2]!",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_base_wb_<supf>v2di_insn"))
+ (set_attr "length" "8")])
;;
;; [vadciq_m_s, vadciq_m_u])
;;
@@ -5876,7 +6154,8 @@ (define_insn "mve_vadciq_m_<supf>v4si"
]
"TARGET_HAVE_MVE"
"vpst\;vadcit.i32\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vadciq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length" "8")])
;;
@@ -5893,7 +6172,8 @@ (define_insn "mve_vadciq_<supf>v4si"
]
"TARGET_HAVE_MVE"
"vadci.i32\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vadciq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length" "4")])
;;
@@ -5912,7 +6192,8 @@ (define_insn "mve_vadcq_m_<supf>v4si"
]
"TARGET_HAVE_MVE"
"vpst\;vadct.i32\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vadcq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length" "8")])
;;
@@ -5929,7 +6210,8 @@ (define_insn "mve_vadcq_<supf>v4si"
]
"TARGET_HAVE_MVE"
"vadc.i32\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vadcq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length" "4")
(set_attr "conds" "set")])
@@ -5949,7 +6231,8 @@ (define_insn "mve_vsbciq_m_<supf>v4si"
]
"TARGET_HAVE_MVE"
"vpst\;vsbcit.i32\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vsbciq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length" "8")])
;;
@@ -5966,7 +6249,8 @@ (define_insn "mve_vsbciq_<supf>v4si"
]
"TARGET_HAVE_MVE"
"vsbci.i32\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vsbciq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length" "4")])
;;
@@ -5985,7 +6269,8 @@ (define_insn "mve_vsbcq_m_<supf>v4si"
]
"TARGET_HAVE_MVE"
"vpst\;vsbct.i32\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vsbcq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length" "8")])
;;
@@ -6002,7 +6287,8 @@ (define_insn "mve_vsbcq_<supf>v4si"
]
"TARGET_HAVE_MVE"
"vsbc.i32\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vsbcq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length" "4")])
;;
@@ -6031,7 +6317,7 @@ (define_insn "mve_vst2q<mode>"
"vst21.<V_sz_elem>\t{%q0, %q1}, %3", ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set_attr "length" "8")])
;;
;; [vld2q])
@@ -6059,7 +6345,7 @@ (define_insn "mve_vld2q<mode>"
"vld21.<V_sz_elem>\t{%q0, %q1}, %3", ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set_attr "length" "8")])
;;
;; [vld4q])
@@ -6402,7 +6688,8 @@ (define_insn "mve_vshlcq_m_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;vshlct\t%q0, %1, %4"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vshlcq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length" "8")])
;; CDE instructions on MVE registers.
@@ -6414,7 +6701,8 @@ (define_insn "arm_vcx1qv16qi"
UNSPEC_VCDE))]
"TARGET_CDE && TARGET_HAVE_MVE"
"vcx1\\tp%c1, %q0, #%c2"
- [(set_attr "type" "coproc")]
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx1qv16qi"))
+ (set_attr "type" "coproc")]
)
(define_insn "arm_vcx1qav16qi"
@@ -6425,7 +6713,8 @@ (define_insn "arm_vcx1qav16qi"
UNSPEC_VCDEA))]
"TARGET_CDE && TARGET_HAVE_MVE"
"vcx1a\\tp%c1, %q0, #%c3"
- [(set_attr "type" "coproc")]
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx1qav16qi"))
+ (set_attr "type" "coproc")]
)
(define_insn "arm_vcx2qv16qi"
@@ -6436,7 +6725,8 @@ (define_insn "arm_vcx2qv16qi"
UNSPEC_VCDE))]
"TARGET_CDE && TARGET_HAVE_MVE"
"vcx2\\tp%c1, %q0, %q2, #%c3"
- [(set_attr "type" "coproc")]
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx2qv16qi"))
+ (set_attr "type" "coproc")]
)
(define_insn "arm_vcx2qav16qi"
@@ -6448,7 +6738,8 @@ (define_insn "arm_vcx2qav16qi"
UNSPEC_VCDEA))]
"TARGET_CDE && TARGET_HAVE_MVE"
"vcx2a\\tp%c1, %q0, %q3, #%c4"
- [(set_attr "type" "coproc")]
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx2qav16qi"))
+ (set_attr "type" "coproc")]
)
(define_insn "arm_vcx3qv16qi"
@@ -6460,7 +6751,8 @@ (define_insn "arm_vcx3qv16qi"
UNSPEC_VCDE))]
"TARGET_CDE && TARGET_HAVE_MVE"
"vcx3\\tp%c1, %q0, %q2, %q3, #%c4"
- [(set_attr "type" "coproc")]
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx3qv16qi"))
+ (set_attr "type" "coproc")]
)
(define_insn "arm_vcx3qav16qi"
@@ -6473,7 +6765,8 @@ (define_insn "arm_vcx3qav16qi"
UNSPEC_VCDEA))]
"TARGET_CDE && TARGET_HAVE_MVE"
"vcx3a\\tp%c1, %q0, %q3, %q4, #%c5"
- [(set_attr "type" "coproc")]
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx3qav16qi"))
+ (set_attr "type" "coproc")]
)
(define_insn "arm_vcx1q<a>_p_v16qi"
@@ -6485,7 +6778,8 @@ (define_insn "arm_vcx1q<a>_p_v16qi"
CDE_VCX))]
"TARGET_CDE && TARGET_HAVE_MVE"
"vpst\;vcx1<a>t\\tp%c1, %q0, #%c3"
- [(set_attr "type" "coproc")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx1q<a>v16qi"))
+ (set_attr "type" "coproc")
(set_attr "length" "8")]
)
@@ -6499,7 +6793,8 @@ (define_insn "arm_vcx2q<a>_p_v16qi"
CDE_VCX))]
"TARGET_CDE && TARGET_HAVE_MVE"
"vpst\;vcx2<a>t\\tp%c1, %q0, %q3, #%c4"
- [(set_attr "type" "coproc")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx2q<a>v16qi"))
+ (set_attr "type" "coproc")
(set_attr "length" "8")]
)
@@ -6514,11 +6809,12 @@ (define_insn "arm_vcx3q<a>_p_v16qi"
CDE_VCX))]
"TARGET_CDE && TARGET_HAVE_MVE"
"vpst\;vcx3<a>t\\tp%c1, %q0, %q3, %q4, #%c5"
- [(set_attr "type" "coproc")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx3q<a>v16qi"))
+ (set_attr "type" "coproc")
(set_attr "length" "8")]
)
-(define_insn "*movmisalign<mode>_mve_store"
+(define_insn "movmisalign<mode>_mve_store"
[(set (match_operand:MVE_VLD_ST 0 "mve_memory_operand" "=Ux")
(unspec:MVE_VLD_ST [(match_operand:MVE_VLD_ST 1 "s_register_operand" " w")]
UNSPEC_MISALIGNED_ACCESS))]
@@ -6526,11 +6822,12 @@ (define_insn "*movmisalign<mode>_mve_store"
|| (TARGET_HAVE_MVE_FLOAT && VALID_MVE_SF_MODE (<MODE>mode)))
&& !BYTES_BIG_ENDIAN && unaligned_access"
"vstr<V_sz_elem1>.<V_sz_elem>\t%q1, %E0"
- [(set_attr "type" "mve_store")]
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_movmisalign<mode>_mve_store"))
+ (set_attr "type" "mve_store")]
)
-(define_insn "*movmisalign<mode>_mve_load"
+(define_insn "movmisalign<mode>_mve_load"
[(set (match_operand:MVE_VLD_ST 0 "s_register_operand" "=w")
(unspec:MVE_VLD_ST [(match_operand:MVE_VLD_ST 1 "mve_memory_operand" " Ux")]
UNSPEC_MISALIGNED_ACCESS))]
@@ -6538,7 +6835,8 @@ (define_insn "*movmisalign<mode>_mve_load"
|| (TARGET_HAVE_MVE_FLOAT && VALID_MVE_SF_MODE (<MODE>mode)))
&& !BYTES_BIG_ENDIAN && unaligned_access"
"vldr<V_sz_elem1>.<V_sz_elem>\t%q0, %E1"
- [(set_attr "type" "mve_load")]
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_movmisalign<mode>_mve_load"))
+ (set_attr "type" "mve_load")]
)
;; Expander for VxBI moves
@@ -6620,3 +6918,40 @@ (define_expand "@arm_mve_reinterpret<mode>"
}
}
)
+
+;; Originally expanded by 'predicated_doloop_end'.
+;; In the rare situation where the branch is too far, we also need to
+;; revert FPSCR.LTPSIZE back to 0x100 after the last iteration.
+(define_insn "*predicated_doloop_end_internal"
+ [(set (pc)
+ (if_then_else
+ (ge (plus:SI (reg:SI LR_REGNUM)
+ (match_operand:SI 0 "const_int_operand" ""))
+ (const_int 0))
+ (label_ref (match_operand 1 "" ""))
+ (pc)))
+ (set (reg:SI LR_REGNUM)
+ (plus:SI (reg:SI LR_REGNUM) (match_dup 0)))
+ (clobber (reg:CC CC_REGNUM))]
+ "TARGET_32BIT && TARGET_HAVE_LOB && TARGET_HAVE_MVE && TARGET_THUMB2"
+ {
+ if (get_attr_length (insn) == 4)
+ return "letp\t%|lr, %l1";
+ else
+ return "subs\t%|lr, #%n0\n\tbgt\t%l1\n\tlctp";
+ }
+ [(set (attr "length")
+ (if_then_else
+ (ltu (minus (pc) (match_dup 1)) (const_int 1024))
+ (const_int 4)
+ (const_int 6)))
+ (set_attr "type" "branch")])
+
+(define_insn "dlstp<mode1>_insn"
+ [
+ (set (reg:SI LR_REGNUM)
+ (unspec:SI [(match_operand:SI 0 "s_register_operand" "r")]
+ DLSTP))
+ ]
+ "TARGET_32BIT && TARGET_HAVE_LOB && TARGET_HAVE_MVE && TARGET_THUMB2"
+ "dlstp.<mode1>\t%|lr, %0")
diff --git a/gcc/config/arm/vec-common.md b/gcc/config/arm/vec-common.md
index 9af8429968d..74871cb984b 100644
--- a/gcc/config/arm/vec-common.md
+++ b/gcc/config/arm/vec-common.md
@@ -366,7 +366,8 @@ (define_insn "@mve_<mve_insn>q_<supf><mode>"
"@
<mve_insn>.<supf>%#<V_sz_elem>\t%<V_reg>0, %<V_reg>1, %<V_reg>2
* return neon_output_shift_immediate (\"vshl\", 'i', &operands[2], <MODE>mode, VALID_NEON_QREG_MODE (<MODE>mode), true);"
- [(set_attr "type" "neon_shift_reg<q>, neon_shift_imm<q>")]
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "neon_shift_reg<q>, neon_shift_imm<q>")]
)
(define_expand "vashl<mode>3"
* Re: [PATCH 1/2] arm: Add define_attr to create a mapping between MVE predicated and unpredicated insns
2023-12-18 11:53 ` [PATCH 1/2] arm: Add define_attr to create a mapping between MVE predicated and unpredicated insns Andre Vieira
@ 2023-12-20 16:54 ` Andre Vieira (lists)
0 siblings, 0 replies; 10+ messages in thread
From: Andre Vieira (lists) @ 2023-12-20 16:54 UTC (permalink / raw)
To: gcc-patches; +Cc: Richard.Earnshaw, Stam Markianos-Wright
[-- Attachment #1: Type: text/plain, Size: 377 bytes --]
Reworked patch after Richard's comments and moved
predicated_doloop_end_internal and dlstp*_insn to the next patch in the
series to make sure this one builds on its own.
On 18/12/2023 11:53, Andre Vieira wrote:
>
> Re-sending Stam's first patch, same as:
> https://gcc.gnu.org/pipermail/gcc-patches/2023-November/635301.html
>
> Hopefully patchworks can pick this up :)
>
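For reference, the idiom this attribute establishes can be sketched as follows (a hypothetical "vfoo" pattern pair, not taken from the patch; the real patterns in the attached diff follow the same shape). The unpredicated insn sets the attribute to its own CODE_FOR_, marking itself as predicable, while the VPT-predicated form sets it to its unpredicated twin's CODE_FOR_; the MVE_VPT_PREDICATED_INSN_P / MVE_VPT_UNPREDICATED_INSN_P macros added to arm.h then distinguish the two by comparing recog_memoized (INSN) against get_attr_mve_unpredicated_insn (INSN):

;; Hypothetical predicable pair, illustrating the idiom only.
;; The unpredicated insn points the attribute at itself ...
(define_insn "mve_vfooq_<supf><mode>"
  [...]
  "TARGET_HAVE_MVE"
  "vfoo.<supf>%#<V_sz_elem>\t%q0, %q1"
  [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vfooq_<supf><mode>"))
   (set_attr "type" "mve_move")])

;; ... while the VPT-predicated form points at the unpredicated one,
;; so MVE_VPT_PREDICATED_INSN_P is true for it and false for the above.
(define_insn "mve_vfooq_m_<supf><mode>"
  [...]
  "TARGET_HAVE_MVE"
  "vpst\;vfoot.<supf>%#<V_sz_elem>\t%q0, %q1"
  [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vfooq_<supf><mode>"))
   (set_attr "type" "mve_move")
   (set_attr "length" "8")])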
[-- Attachment #2: 0001-arm-Add-define_attr-to-to-create-a-mapping-between-M_v2.patch --]
[-- Type: text/plain, Size: 103870 bytes --]
diff --git a/gcc/config/arm/arm.h b/gcc/config/arm/arm.h
index a9c2752c0ea5ecd4597ded254e9426753ac0a098..f0b01b7461f883994a0be137cb6cbf079d54618b 100644
--- a/gcc/config/arm/arm.h
+++ b/gcc/config/arm/arm.h
@@ -2375,6 +2375,21 @@ extern int making_const_table;
else if (TARGET_THUMB1) \
thumb1_final_prescan_insn (INSN)
+/* These defines are useful to refer to the value of the mve_unpredicated_insn
+ insn attribute. Note that, because these use the get_attr_* function, these
+ will change recog_data if (INSN) isn't current_insn. */
+#define MVE_VPT_PREDICABLE_INSN_P(INSN) \
+ (recog_memoized (INSN) >= 0 \
+ && get_attr_mve_unpredicated_insn (INSN) != CODE_FOR_nothing)
+
+#define MVE_VPT_PREDICATED_INSN_P(INSN) \
+ (MVE_VPT_PREDICABLE_INSN_P (INSN) \
+ && recog_memoized (INSN) != get_attr_mve_unpredicated_insn (INSN))
+
+#define MVE_VPT_UNPREDICATED_INSN_P(INSN) \
+ (MVE_VPT_PREDICABLE_INSN_P (INSN) \
+ && recog_memoized (INSN) == get_attr_mve_unpredicated_insn (INSN))
+
#define ARM_SIGN_EXTEND(x) ((HOST_WIDE_INT) \
(HOST_BITS_PER_WIDE_INT <= 32 ? (unsigned HOST_WIDE_INT) (x) \
: ((((unsigned HOST_WIDE_INT)(x)) & (unsigned HOST_WIDE_INT) 0xffffffff) |\
diff --git a/gcc/config/arm/arm.md b/gcc/config/arm/arm.md
index 07eaf06cdeace750fe1c7d399deb833ef5fc2b66..296212be33ffe6397b05491d8854d2a59f7c54df 100644
--- a/gcc/config/arm/arm.md
+++ b/gcc/config/arm/arm.md
@@ -124,6 +124,12 @@ (define_attr "fpu" "none,vfp"
; and not all ARM insns do.
(define_attr "predicated" "yes,no" (const_string "no"))
+; An attribute that encodes the CODE_FOR_<insn> of the MVE VPT unpredicated
+; version of a VPT-predicated instruction. For unpredicated instructions
+; that are predicable, encode the same pattern's CODE_FOR_<insn> as a way to
+; encode that it is a predicable instruction.
+(define_attr "mve_unpredicated_insn" "" (symbol_ref "CODE_FOR_nothing"))
+
; LENGTH of an instruction (in bytes)
(define_attr "length" ""
(const_int 4))
diff --git a/gcc/config/arm/iterators.md b/gcc/config/arm/iterators.md
index a980353810166312d5bdfc8ad58b2825c910d0a0..5ea2d9e866891bdb3dc73fcf6cbd6cdd2f989951 100644
--- a/gcc/config/arm/iterators.md
+++ b/gcc/config/arm/iterators.md
@@ -2305,6 +2305,7 @@ (define_int_attr simd32_op [(UNSPEC_QADD8 "qadd8") (UNSPEC_QSUB8 "qsub8")
(define_int_attr mmla_sfx [(UNSPEC_MATMUL_S "s8") (UNSPEC_MATMUL_U "u8")
(UNSPEC_MATMUL_US "s8")])
+
;;MVE int attribute.
(define_int_attr supf [(VCVTQ_TO_F_S "s") (VCVTQ_TO_F_U "u") (VREV16Q_S "s")
(VREV16Q_U "u") (VMVNQ_N_S "s") (VMVNQ_N_U "u")
diff --git a/gcc/config/arm/mve.md b/gcc/config/arm/mve.md
index b0d3443da9cee991193d390200738290806a1e69..b1862d7977e91605cd971e634105bed3fa6e75cb 100644
--- a/gcc/config/arm/mve.md
+++ b/gcc/config/arm/mve.md
@@ -17,7 +17,7 @@
;; along with GCC; see the file COPYING3. If not see
;; <http://www.gnu.org/licenses/>.
-(define_insn "*mve_mov<mode>"
+(define_insn "mve_mov<mode>"
[(set (match_operand:MVE_types 0 "nonimmediate_operand" "=w,w,r,w , w, r,Ux,w")
(match_operand:MVE_types 1 "general_operand" " w,r,w,DnDm,UxUi,r,w, Ul"))]
"TARGET_HAVE_MVE || TARGET_HAVE_MVE_FLOAT"
@@ -81,18 +81,27 @@ (define_insn "*mve_mov<mode>"
return "";
}
}
- [(set_attr "type" "mve_move,mve_move,mve_move,mve_move,mve_load,multiple,mve_store,mve_load")
+ [(set_attr_alternative "mve_unpredicated_insn" [(symbol_ref "CODE_FOR_mve_mov<mode>")
+ (symbol_ref "CODE_FOR_nothing")
+ (symbol_ref "CODE_FOR_nothing")
+ (symbol_ref "CODE_FOR_mve_mov<mode>")
+ (symbol_ref "CODE_FOR_mve_mov<mode>")
+ (symbol_ref "CODE_FOR_nothing")
+ (symbol_ref "CODE_FOR_mve_mov<mode>")
+ (symbol_ref "CODE_FOR_nothing")])
+ (set_attr "type" "mve_move,mve_move,mve_move,mve_move,mve_load,multiple,mve_store,mve_load")
(set_attr "length" "4,8,8,4,4,8,4,8")
(set_attr "thumb2_pool_range" "*,*,*,*,1018,*,*,*")
(set_attr "neg_pool_range" "*,*,*,*,996,*,*,*")])
-(define_insn "*mve_vdup<mode>"
+(define_insn "mve_vdup<mode>"
[(set (match_operand:MVE_vecs 0 "s_register_operand" "=w")
(vec_duplicate:MVE_vecs
(match_operand:<V_elem> 1 "s_register_operand" "r")))]
"TARGET_HAVE_MVE || TARGET_HAVE_MVE_FLOAT"
"vdup.<V_sz_elem>\t%q0, %1"
- [(set_attr "length" "4")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vdup<mode>"))
+ (set_attr "length" "4")
(set_attr "type" "mve_move")])
;;
@@ -145,7 +154,8 @@ (define_insn "@mve_<mve_insn>q_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_mnemo>.f%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -159,7 +169,8 @@ (define_insn "@mve_<mve_insn>q_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -173,7 +184,8 @@ (define_insn "mve_v<absneg_str>q_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"v<absneg_str>.f%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_v<absneg_str>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -187,7 +199,8 @@ (define_insn "@mve_<mve_insn>q_n_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.%#<V_sz_elem>\t%q0, %1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -201,7 +214,8 @@ (define_insn "@mve_<mve_insn>q_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
;; [vcvttq_f32_f16])
@@ -214,7 +228,8 @@ (define_insn "mve_vcvttq_f32_f16v4sf"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvtt.f32.f16\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvttq_f32_f16v4sf"))
+ (set_attr "type" "mve_move")
])
;;
@@ -228,7 +243,8 @@ (define_insn "mve_vcvtbq_f32_f16v4sf"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvtb.f32.f16\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtbq_f32_f16v4sf"))
+ (set_attr "type" "mve_move")
])
;;
@@ -242,7 +258,8 @@ (define_insn "mve_vcvtq_to_f_<supf><mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvt.f%#<V_sz_elem>.<supf>%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtq_to_f_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -256,7 +273,8 @@ (define_insn "@mve_<mve_insn>q_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -270,7 +288,8 @@ (define_insn "mve_vcvtq_from_f_<supf><mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtq_from_f_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -284,7 +303,8 @@ (define_insn "mve_v<absneg_str>q_s<mode>"
]
"TARGET_HAVE_MVE"
"v<absneg_str>.s%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_v<absneg_str>q_s<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -297,7 +317,8 @@ (define_insn "mve_vmvnq_u<mode>"
]
"TARGET_HAVE_MVE"
"vmvn\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmvnq_u<mode>"))
+ (set_attr "type" "mve_move")
])
(define_expand "mve_vmvnq_s<mode>"
[
@@ -318,7 +339,8 @@ (define_insn "@mve_<mve_insn>q_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.%#<V_sz_elem>\t%q0, %1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -331,7 +353,8 @@ (define_insn "@mve_vclzq_s<mode>"
]
"TARGET_HAVE_MVE"
"vclz.i%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vclzq_s<mode>"))
+ (set_attr "type" "mve_move")
])
(define_expand "mve_vclzq_u<mode>"
[
@@ -354,7 +377,8 @@ (define_insn "@mve_<mve_insn>q_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -368,7 +392,8 @@ (define_insn "@mve_<mve_insn>q_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -382,7 +407,8 @@ (define_insn "@mve_<mve_insn>q_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -397,7 +423,8 @@ (define_insn "@mve_<mve_insn>q_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -411,7 +438,8 @@ (define_insn "mve_vcvtpq_<supf><mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvtp.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtpq_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -425,7 +453,8 @@ (define_insn "mve_vcvtnq_<supf><mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvtn.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtnq_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -439,7 +468,8 @@ (define_insn "mve_vcvtmq_<supf><mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvtm.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtmq_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -453,7 +483,8 @@ (define_insn "mve_vcvtaq_<supf><mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvta.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtaq_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -467,7 +498,8 @@ (define_insn "@mve_<mve_insn>q_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.i%#<V_sz_elem>\t%q0, %1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -481,7 +513,8 @@ (define_insn "@mve_<mve_insn>q_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -495,7 +528,8 @@ (define_insn "@mve_<mve_insn>q_<supf>v4si"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>32\t%Q0, %R0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf>v4si"))
+ (set_attr "type" "mve_move")
])
;;
@@ -509,7 +543,8 @@ (define_insn "mve_vctp<MVE_vctp>q<MVE_vpred>"
]
"TARGET_HAVE_MVE"
"vctp.<MVE_vctp>\t%1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vctp<MVE_vctp>q<MVE_vpred>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -523,7 +558,8 @@ (define_insn "mve_vpnotv16bi"
]
"TARGET_HAVE_MVE"
"vpnot"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vpnotv16bi"))
+ (set_attr "type" "mve_move")
])
;;
@@ -538,7 +574,8 @@ (define_insn "@mve_<mve_insn>q_n_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.<V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -553,7 +590,8 @@ (define_insn "mve_vcvtq_n_to_f_<supf><mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvt.f<V_sz_elem>.<supf><V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtq_n_to_f_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;; [vcreateq_f])
@@ -599,7 +637,8 @@ (define_insn "@mve_<mve_insn>q_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf><V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;; Versions that take constant vectors as operand 2 (with all elements
@@ -617,7 +656,8 @@ (define_insn "mve_vshrq_n_s<mode>_imm"
VALID_NEON_QREG_MODE (<MODE>mode),
true);
}
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vshrq_n_s<mode>_imm"))
+ (set_attr "type" "mve_move")
])
(define_insn "mve_vshrq_n_u<mode>_imm"
[
@@ -632,7 +672,8 @@ (define_insn "mve_vshrq_n_u<mode>_imm"
VALID_NEON_QREG_MODE (<MODE>mode),
true);
}
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vshrq_n_u<mode>_imm"))
+ (set_attr "type" "mve_move")
])
;;
@@ -647,7 +688,8 @@ (define_insn "mve_vcvtq_n_from_f_<supf><mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvt.<supf><V_sz_elem>.f<V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtq_n_from_f_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -662,8 +704,9 @@ (define_insn "@mve_<mve_insn>q_p_<supf>v4si"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>32\t%Q0, %R0, %q1"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf>v4si"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")])
;;
;; [vcmpneq_, vcmpcsq_, vcmpeqq_, vcmpgeq_, vcmpgtq_, vcmphiq_, vcmpleq_, vcmpltq_])
@@ -676,7 +719,8 @@ (define_insn "@mve_vcmp<mve_cmp_op>q_<mode>"
]
"TARGET_HAVE_MVE"
"vcmp.<mve_cmp_type>%#<V_sz_elem>\t<mve_cmp_op>, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmp<mve_cmp_op>q_<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -691,7 +735,8 @@ (define_insn "@mve_vcmp<mve_cmp_op>q_n_<mode>"
]
"TARGET_HAVE_MVE"
"vcmp.<mve_cmp_type>%#<V_sz_elem> <mve_cmp_op>, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmp<mve_cmp_op>q_n_<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -722,7 +767,8 @@ (define_insn "@mve_<mve_insn>q_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -739,7 +785,8 @@ (define_insn "@mve_<mve_insn>q_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.i%#<V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -754,7 +801,8 @@ (define_insn "@mve_<mve_insn>q_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -769,7 +817,8 @@ (define_insn "@mve_<mve_insn>q_p_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -789,8 +838,11 @@ (define_insn "mve_vandq_u<mode>"
"@
vand\t%q0, %q1, %q2
* return neon_output_logic_immediate (\"vand\", &operands[2], <MODE>mode, 1, VALID_NEON_QREG_MODE (<MODE>mode));"
- [(set_attr "type" "mve_move")
+ [(set_attr_alternative "mve_unpredicated_insn" [(symbol_ref "CODE_FOR_mve_vandq_u<mode>")
+ (symbol_ref "CODE_FOR_nothing")])
+ (set_attr "type" "mve_move")
])
+
(define_expand "mve_vandq_s<mode>"
[
(set (match_operand:MVE_2 0 "s_register_operand")
@@ -811,7 +863,8 @@ (define_insn "mve_vbicq_u<mode>"
]
"TARGET_HAVE_MVE"
"vbic\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vbicq_u<mode>"))
+ (set_attr "type" "mve_move")
])
(define_expand "mve_vbicq_s<mode>"
@@ -835,7 +888,8 @@ (define_insn "@mve_<mve_insn>q_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.%#<V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -853,7 +907,8 @@ (define_insn "@mve_<mve_insn>q<mve_rot>_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<isu>%#<V_sz_elem>\t%q0, %q1, %q2, #<rot>"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q<mve_rot>_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;; Auto vectorizer pattern for int vcadd
@@ -876,7 +931,8 @@ (define_insn "mve_veorq_u<mode>"
]
"TARGET_HAVE_MVE"
"veor\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_veorq_u<mode>"))
+ (set_attr "type" "mve_move")
])
(define_expand "mve_veorq_s<mode>"
[
@@ -904,7 +960,8 @@ (define_insn "@mve_<mve_insn>q_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -920,7 +977,8 @@ (define_insn "@mve_<mve_insn>q_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.s%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -935,7 +993,8 @@ (define_insn "mve_<max_min_su_str>q_<max_min_supf><mode>"
]
"TARGET_HAVE_MVE"
"<max_min_su_str>.<max_min_supf>%#<V_sz_elem>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<max_min_su_str>q_<max_min_supf><mode>"))
+ (set_attr "type" "mve_move")
])
@@ -954,7 +1013,8 @@ (define_insn "@mve_<mve_insn>q_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -972,7 +1032,8 @@ (define_insn "@mve_<mve_insn>q_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -988,7 +1049,8 @@ (define_insn "@mve_<mve_insn>q_int_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<isu>%#<V_sz_elem>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_int_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1004,7 +1066,8 @@ (define_insn "mve_<mve_addsubmul>q<mode>"
]
"TARGET_HAVE_MVE"
"<mve_addsubmul>.i%#<V_sz_elem>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_addsubmul>q<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1018,7 +1081,8 @@ (define_insn "mve_vornq_s<mode>"
]
"TARGET_HAVE_MVE"
"vorn\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vornq_s<mode>"))
+ (set_attr "type" "mve_move")
])
(define_expand "mve_vornq_u<mode>"
@@ -1047,7 +1111,8 @@ (define_insn "mve_vorrq_s<mode>"
"@
vorr\t%q0, %q1, %q2
* return neon_output_logic_immediate (\"vorr\", &operands[2], <MODE>mode, 0, VALID_NEON_QREG_MODE (<MODE>mode));"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vorrq_s<mode>"))
+ (set_attr "type" "mve_move")
])
(define_expand "mve_vorrq_u<mode>"
[
@@ -1071,7 +1136,8 @@ (define_insn "@mve_<mve_insn>q_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1087,7 +1153,8 @@ (define_insn "@mve_<mve_insn>q_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1103,7 +1170,8 @@ (define_insn "@mve_<mve_insn>q_r_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_r_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1118,7 +1186,8 @@ (define_insn "@mve_<mve_insn>q_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1133,7 +1202,8 @@ (define_insn "@mve_<mve_insn>q_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.f%#<V_sz_elem>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1148,7 +1218,8 @@ (define_insn "@mve_<mve_insn>q_<supf>v4si"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>32\t%Q0, %R0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf>v4si"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1165,7 +1236,8 @@ (define_insn "@mve_<mve_insn>q_n_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.f%#<V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1179,7 +1251,8 @@ (define_insn "mve_vandq_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vand\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vandq_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1193,7 +1266,8 @@ (define_insn "mve_vbicq_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vbic\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vbicq_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1209,7 +1283,8 @@ (define_insn "@mve_<mve_insn>q<mve_rot>_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.f%#<V_sz_elem>\t%q0, %q1, %q2, #<rot>"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q<mve_rot>_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1223,7 +1298,8 @@ (define_insn "@mve_vcmp<mve_cmp_op>q_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcmp.f%#<V_sz_elem> <mve_cmp_op>, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmp<mve_cmp_op>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1238,7 +1314,8 @@ (define_insn "@mve_vcmp<mve_cmp_op>q_n_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcmp.f%#<V_sz_elem> <mve_cmp_op>, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmp<mve_cmp_op>q_n_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1253,8 +1330,10 @@ (define_insn "mve_vctp<MVE_vctp>q_m<MVE_vpred>"
]
"TARGET_HAVE_MVE"
"vpst\;vctpt.<MVE_vctp>\t%1"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vctp<MVE_vctp>q<MVE_vpred>"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")
+])
;;
;; [vcvtbq_f16_f32])
@@ -1268,7 +1347,8 @@ (define_insn "mve_vcvtbq_f16_f32v8hf"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvtb.f16.f32\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtbq_f16_f32v8hf"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1283,7 +1363,8 @@ (define_insn "mve_vcvttq_f16_f32v8hf"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvtt.f16.f32\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvttq_f16_f32v8hf"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1297,7 +1378,8 @@ (define_insn "mve_veorq_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"veor\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_veorq_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1313,7 +1395,8 @@ (define_insn "@mve_<mve_insn>q_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.f%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1331,7 +1414,8 @@ (define_insn "@mve_<mve_insn>q_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.f%#<V_sz_elem>\t%0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1346,7 +1430,8 @@ (define_insn "@mve_<max_min_f_str>q_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<max_min_f_str>.f%#<V_sz_elem> %q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<max_min_f_str>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1364,7 +1449,8 @@ (define_insn "@mve_<mve_insn>q_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%Q0, %R0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1384,7 +1470,8 @@ (define_insn "@mve_<mve_insn>q_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<isu>%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1400,7 +1487,8 @@ (define_insn "mve_<mve_addsubmul>q_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_addsubmul>.f%#<V_sz_elem>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_addsubmul>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1414,7 +1502,8 @@ (define_insn "mve_vornq_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vorn\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vornq_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1428,7 +1517,8 @@ (define_insn "mve_vorrq_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vorr\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vorrq_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1444,7 +1534,8 @@ (define_insn "@mve_<mve_insn>q_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.i%#<V_sz_elem> %q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1460,7 +1551,8 @@ (define_insn "@mve_<mve_insn>q_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.s%#<V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1476,7 +1568,8 @@ (define_insn "@mve_<mve_insn>q_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.s%#<V_sz_elem>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1494,7 +1587,8 @@ (define_insn "@mve_<mve_insn>q_<supf>v4si"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>32\t%Q0, %R0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf>v4si"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1510,7 +1604,8 @@ (define_insn "@mve_<mve_insn>q_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1526,7 +1621,8 @@ (define_insn "@mve_<mve_insn>q_poly_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_poly_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1547,8 +1643,9 @@ (define_insn "@mve_vcmp<mve_cmp_op1>q_m_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmpt.f%#<V_sz_elem>\t<mve_cmp_op1>, %q1, %q2"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmp<mve_cmp_op1>q_f<mode>"))
+ (set_attr "length""8")])
+
;;
;; [vcvtaq_m_u, vcvtaq_m_s])
;;
@@ -1562,8 +1659,10 @@ (define_insn "mve_vcvtaq_m_<supf><mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtat.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtaq_<supf><mode>"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")])
+
;;
;; [vcvtq_m_to_f_s, vcvtq_m_to_f_u])
;;
@@ -1577,8 +1676,9 @@ (define_insn "mve_vcvtq_m_to_f_<supf><mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtt.f%#<V_sz_elem>.<supf>%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtq_to_f_<supf><mode>"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")])
;;
;; [vqrshrnbq_n_u, vqrshrnbq_n_s]
@@ -1604,7 +1704,8 @@ (define_insn "@mve_<mve_insn>q_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<isu>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1623,7 +1724,8 @@ (define_insn "@mve_<mve_insn>q_<supf>v4si"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>32\t%Q0, %R0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf>v4si"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1639,7 +1741,8 @@ (define_insn "@mve_<mve_insn>q_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1685,7 +1788,10 @@ (define_insn "mve_vshlcq_<supf><mode>"
(match_dup 4)]
VSHLCQ))]
"TARGET_HAVE_MVE"
- "vshlc\t%q0, %1, %4")
+ "vshlc\t%q0, %1, %4"
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vshlcq_<supf><mode>"))
+ (set_attr "type" "mve_move")
+])
;;
;; [vabsq_m_s]
@@ -1705,7 +1811,8 @@ (define_insn "@mve_<mve_insn>q_m_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<isu>%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1721,7 +1828,8 @@ (define_insn "@mve_<mve_insn>q_p_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1744,7 +1852,8 @@ (define_insn "@mve_vcmp<mve_cmp_op1>q_m_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;vcmpt.<isu>%#<V_sz_elem>\t<mve_cmp_op1>, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmp<mve_cmp_op1>q_n_<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1767,7 +1876,8 @@ (define_insn "@mve_vcmp<mve_cmp_op1>q_m_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;vcmpt.<isu>%#<V_sz_elem>\t<mve_cmp_op1>, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmp<mve_cmp_op1>q_<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1783,7 +1893,8 @@ (define_insn "@mve_<mve_insn>q_m_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.%#<V_sz_elem>\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1800,7 +1911,8 @@ (define_insn "@mve_<mve_insn>q_m_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.s%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1819,7 +1931,8 @@ (define_insn "@mve_<mve_insn>q_p_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1838,7 +1951,8 @@ (define_insn "@mve_<mve_insn>q_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1857,7 +1971,8 @@ (define_insn "@mve_<mve_insn>q_p_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1878,7 +1993,8 @@ (define_insn "@mve_<mve_insn>q_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1894,7 +2010,8 @@ (define_insn "@mve_<mve_insn>q_m_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1910,7 +2027,8 @@ (define_insn "@mve_<mve_insn>q_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1933,7 +2051,8 @@ (define_insn "@mve_<mve_insn>q_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.s%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1950,7 +2069,8 @@ (define_insn "@mve_<mve_insn>q_m_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1967,7 +2087,8 @@ (define_insn "@mve_<mve_insn>q_m_r_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_r_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1983,7 +2104,8 @@ (define_insn "@mve_<mve_insn>q_m_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1999,7 +2121,8 @@ (define_insn "@mve_<mve_insn>q_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -2015,7 +2138,8 @@ (define_insn "@mve_<mve_insn>q_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -2038,7 +2162,8 @@ (define_insn "@mve_<mve_insn>q_m_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_mnemo>t.f%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2054,7 +2179,8 @@ (define_insn "@mve_<mve_insn>q_p_<supf>v4si"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>32\t%Q0, %R0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
;; [vcmlaq, vcmlaq_rot90, vcmlaq_rot180, vcmlaq_rot270])
@@ -2072,7 +2198,9 @@ (define_insn "@mve_<mve_insn>q<mve_rot>_f<mode>"
"@
vcmul.f%#<V_sz_elem> %q0, %q2, %q3, #<rot>
vcmla.f%#<V_sz_elem> %q0, %q2, %q3, #<rot>"
- [(set_attr "type" "mve_move")
+ [(set_attr_alternative "mve_unpredicated_insn" [(symbol_ref "CODE_FOR_mve_<mve_insn>q<mve_rot>_f<mode>")
+ (symbol_ref "CODE_FOR_mve_<mve_insn>q<mve_rot>_f<mode>")])
+ (set_attr "type" "mve_move")
])
;;
@@ -2093,7 +2221,8 @@ (define_insn "@mve_vcmp<mve_cmp_op1>q_m_n_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmpt.f%#<V_sz_elem>\t<mve_cmp_op1>, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmp<mve_cmp_op1>q_n_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2109,7 +2238,8 @@ (define_insn "mve_vcvtbq_m_f16_f32v8hf"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtbt.f16.f32\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtbq_f16_f32v8hf"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2125,7 +2255,8 @@ (define_insn "mve_vcvtbq_m_f32_f16v4sf"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtbt.f32.f16\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtbq_f32_f16v4sf"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2141,7 +2272,8 @@ (define_insn "mve_vcvttq_m_f16_f32v8hf"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvttt.f16.f32\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvttq_f16_f32v8hf"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2157,8 +2289,9 @@ (define_insn "mve_vcvttq_m_f32_f16v4sf"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvttt.f32.f16\t%q0, %q2"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvttq_f32_f16v4sf"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")])
;;
;; [vdupq_m_n_f])
@@ -2173,7 +2306,8 @@ (define_insn "@mve_<mve_insn>q_m_n_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t.%#<V_sz_elem>\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2190,7 +2324,8 @@ (define_insn "@mve_<mve_insn>q_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.f%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -2207,7 +2342,8 @@ (define_insn "@mve_<mve_insn>q_n_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.f%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -2224,7 +2360,8 @@ (define_insn "@mve_<mve_insn>q_m_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t.f%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2243,7 +2380,8 @@ (define_insn "@mve_<mve_insn>q_p_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t.f%#<V_sz_elem>\t%0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2262,7 +2400,8 @@ (define_insn "@mve_<mve_insn>q_<supf><mode>"
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%Q0, %R0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -2281,7 +2420,8 @@ (define_insn "@mve_<mve_insn>q_p_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%Q0, %R0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2298,7 +2438,8 @@ (define_insn "@mve_<mve_insn>q_m_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2319,7 +2460,8 @@ (define_insn "@mve_<mve_insn>q_m_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<isu>%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2335,7 +2477,8 @@ (define_insn "@mve_<mve_insn>q_m_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.i%#<V_sz_elem>\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2352,7 +2495,8 @@ (define_insn "@mve_<mve_insn>q_m_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.i%#<V_sz_elem>\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2368,7 +2512,8 @@ (define_insn "@mve_<mve_insn>q_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -2384,7 +2529,8 @@ (define_insn "@mve_<mve_insn>q_m_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t.<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2400,7 +2546,8 @@ (define_insn "@mve_<mve_insn>q_m_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2416,7 +2563,8 @@ (define_insn "@mve_<mve_insn>q_m_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t.%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2435,7 +2583,8 @@ (define_insn "@mve_<mve_insn>q_p_<supf>v4si"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>32\t%Q0, %R0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2451,7 +2600,8 @@ (define_insn "mve_vcvtmq_m_<supf><mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtmt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtmq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2467,7 +2617,8 @@ (define_insn "mve_vcvtpq_m_<supf><mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtpt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtpq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2483,7 +2634,8 @@ (define_insn "mve_vcvtnq_m_<supf><mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtnt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtnq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2500,7 +2652,8 @@ (define_insn "mve_vcvtq_m_n_from_f_<supf><mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtq_n_from_f_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2516,7 +2669,8 @@ (define_insn "@mve_<mve_insn>q_m_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2532,8 +2686,9 @@ (define_insn "mve_vcvtq_m_from_f_<supf><mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtq_from_f_<supf><mode>"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")])
;;
;; [vabavq_p_s, vabavq_p_u])
@@ -2549,7 +2704,8 @@ (define_insn "@mve_<mve_insn>q_p_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length" "8")])
;;
@@ -2566,8 +2722,9 @@ (define_insn "@mve_<mve_insn>q_m_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\n\t<mve_insn>t.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
- (set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
+ (set_attr "length" "8")])
;;
;; [vsriq_m_n_s, vsriq_m_n_u])
@@ -2583,8 +2740,9 @@ (define_insn "@mve_<mve_insn>q_m_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
- (set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
+ (set_attr "length" "8")])
;;
;; [vcvtq_m_n_to_f_u, vcvtq_m_n_to_f_s])
@@ -2600,7 +2758,8 @@ (define_insn "mve_vcvtq_m_n_to_f_<supf><mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtt.f%#<V_sz_elem>.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtq_n_to_f_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2640,7 +2799,8 @@ (define_insn "@mve_<mve_insn>q_m_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2659,8 +2819,9 @@ (define_insn "@mve_<mve_insn>q_m_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.i%#<V_sz_elem> %q0, %q2, %3"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")])
;;
;; [vaddq_m_u, vaddq_m_s]
@@ -2678,7 +2839,8 @@ (define_insn "@mve_<mve_insn>q_m_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.i%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2698,7 +2860,8 @@ (define_insn "@mve_<mve_insn>q_m_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2715,8 +2878,9 @@ (define_insn "@mve_<mve_insn>q_m_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")])
;;
;; [vcaddq_rot90_m_u, vcaddq_rot90_m_s]
@@ -2735,7 +2899,8 @@ (define_insn "@mve_<mve_insn>q<mve_rot>_m_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<isu>%#<V_sz_elem>\t%q0, %q2, %q3, #<rot>"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q<mve_rot>_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2763,7 +2928,8 @@ (define_insn "@mve_<mve_insn>q_m_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2784,7 +2950,8 @@ (define_insn "@mve_<mve_insn>q_p_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2802,7 +2969,8 @@ (define_insn "@mve_<mve_insn>q_int_m_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_int_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2819,7 +2987,8 @@ (define_insn "mve_vornq_m_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;vornt\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vornq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2837,7 +3006,8 @@ (define_insn "@mve_<mve_insn>q_m_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2855,7 +3025,8 @@ (define_insn "@mve_<mve_insn>q_m_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2872,7 +3043,8 @@ (define_insn "@mve_<mve_insn>q_m_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2892,7 +3064,8 @@ (define_insn "@mve_<mve_insn>q_p_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%Q0, %R0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2920,7 +3093,8 @@ (define_insn "@mve_<mve_insn>q_m_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<isu>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2940,7 +3114,8 @@ (define_insn "@mve_<mve_insn>q_p_<supf>v4si"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>32\t%Q0, %R0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2958,7 +3133,8 @@ (define_insn "@mve_<mve_insn>q_m_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2976,7 +3152,8 @@ (define_insn "@mve_<mve_insn>q_poly_m_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_poly_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2994,7 +3171,8 @@ (define_insn "@mve_<mve_insn>q_m_n_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.s%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3012,7 +3190,8 @@ (define_insn "@mve_<mve_insn>q_m_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.s%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3036,7 +3215,8 @@ (define_insn "@mve_<mve_insn>q_m_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t.f%#<V_sz_elem> %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3057,7 +3237,8 @@ (define_insn "@mve_<mve_insn>q_m_n_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t.f%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3077,7 +3258,8 @@ (define_insn "@mve_<mve_insn>q_m_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3094,7 +3276,8 @@ (define_insn "@mve_<mve_insn>q_m_n_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t.%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3116,7 +3299,8 @@ (define_insn "@mve_<mve_insn>q<mve_rot>_m_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t.f%#<V_sz_elem>\t%q0, %q2, %q3, #<rot>"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q<mve_rot>_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3136,7 +3320,8 @@ (define_insn "@mve_<mve_insn>q<mve_rot>_m_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t.f%#<V_sz_elem>\t%q0, %q2, %q3, #<rot>"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q<mve_rot>_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3153,7 +3338,8 @@ (define_insn "mve_vornq_m_f<mode>"
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vornt\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vornq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3173,7 +3359,8 @@ (define_insn "mve_vstrbq_<supf><mode>"
output_asm_insn("vstrb.<V_sz_elem>\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrbq_<supf><mode>"))
+ (set_attr "length" "4")])
;;
;; [vstrbq_scatter_offset_s vstrbq_scatter_offset_u]
@@ -3201,7 +3388,8 @@ (define_insn "mve_vstrbq_scatter_offset_<supf><mode>_insn"
VSTRBSOQ))]
"TARGET_HAVE_MVE"
"vstrb.<V_sz_elem>\t%q2, [%0, %q1]"
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrbq_scatter_offset_<supf><mode>_insn"))
+ (set_attr "length" "4")])
;;
;; [vstrwq_scatter_base_s vstrwq_scatter_base_u]
@@ -3223,7 +3411,8 @@ (define_insn "mve_vstrwq_scatter_base_<supf>v4si"
output_asm_insn("vstrw.u32\t%q2, [%q0, %1]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_<supf>v4si"))
+ (set_attr "length" "4")])
;;
;; [vldrbq_gather_offset_s vldrbq_gather_offset_u]
@@ -3246,7 +3435,8 @@ (define_insn "mve_vldrbq_gather_offset_<supf><mode>"
output_asm_insn ("vldrb.<supf><V_sz_elem>\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrbq_gather_offset_<supf><mode>"))
+ (set_attr "length" "4")])
;;
;; [vldrbq_s vldrbq_u]
@@ -3268,7 +3458,8 @@ (define_insn "mve_vldrbq_<supf><mode>"
output_asm_insn ("vldrb.<supf><V_sz_elem>\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrbq_<supf><mode>"))
+ (set_attr "length" "4")])
;;
;; [vldrwq_gather_base_s vldrwq_gather_base_u]
@@ -3288,7 +3479,8 @@ (define_insn "mve_vldrwq_gather_base_<supf>v4si"
output_asm_insn ("vldrw.u32\t%q0, [%q1, %2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_<supf>v4si"))
+ (set_attr "length" "4")])
;;
;; [vstrbq_scatter_offset_p_s vstrbq_scatter_offset_p_u]
@@ -3320,7 +3512,8 @@ (define_insn "mve_vstrbq_scatter_offset_p_<supf><mode>_insn"
VSTRBSOQ))]
"TARGET_HAVE_MVE"
"vpst\;vstrbt.<V_sz_elem>\t%q2, [%0, %q1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrbq_scatter_offset_<supf><mode>_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_base_p_s vstrwq_scatter_base_p_u]
@@ -3343,7 +3536,8 @@ (define_insn "mve_vstrwq_scatter_base_p_<supf>v4si"
output_asm_insn ("vpst\n\tvstrwt.u32\t%q2, [%q0, %1]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_<supf>v4si"))
+ (set_attr "length" "8")])
(define_insn "mve_vstrbq_p_<supf><mode>"
[(set (match_operand:<MVE_B_ELEM> 0 "mve_memory_operand" "=Ux")
@@ -3361,7 +3555,8 @@ (define_insn "mve_vstrbq_p_<supf><mode>"
output_asm_insn ("vpst\;vstrbt.<V_sz_elem>\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrbq_<supf><mode>"))
+ (set_attr "length" "8")])
;;
;; [vldrbq_gather_offset_z_s vldrbq_gather_offset_z_u]
@@ -3386,7 +3581,8 @@ (define_insn "mve_vldrbq_gather_offset_z_<supf><mode>"
output_asm_insn ("vpst\n\tvldrbt.<supf><V_sz_elem>\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrbq_gather_offset_<supf><mode>"))
+ (set_attr "length" "8")])
;;
;; [vldrbq_z_s vldrbq_z_u]
@@ -3409,7 +3605,8 @@ (define_insn "mve_vldrbq_z_<supf><mode>"
output_asm_insn ("vpst\;vldrbt.<supf><V_sz_elem>\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrbq_<supf><mode>"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_gather_base_z_s vldrwq_gather_base_z_u]
@@ -3430,7 +3627,8 @@ (define_insn "mve_vldrwq_gather_base_z_<supf>v4si"
output_asm_insn ("vpst\n\tvldrwt.u32\t%q0, [%q1, %2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_<supf>v4si"))
+ (set_attr "length" "8")])
;;
;; [vldrhq_f]
@@ -3449,7 +3647,8 @@ (define_insn "mve_vldrhq_fv8hf"
output_asm_insn ("vldrh.16\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_fv8hf"))
+ (set_attr "length" "4")])
;;
;; [vldrhq_gather_offset_s vldrhq_gather_offset_u]
@@ -3472,7 +3671,8 @@ (define_insn "mve_vldrhq_gather_offset_<supf><mode>"
output_asm_insn ("vldrh.<supf><V_sz_elem>\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_offset_<supf><mode>"))
+ (set_attr "length" "4")])
;;
;; [vldrhq_gather_offset_z_s vldrhq_gather_offset_z_u]
@@ -3497,7 +3697,8 @@ (define_insn "mve_vldrhq_gather_offset_z_<supf><mode>"
output_asm_insn ("vpst\n\tvldrht.<supf><V_sz_elem>\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_offset_<supf><mode>"))
+ (set_attr "length" "8")])
;;
;; [vldrhq_gather_shifted_offset_s vldrhq_gather_shifted_offset_u]
@@ -3520,7 +3721,8 @@ (define_insn "mve_vldrhq_gather_shifted_offset_<supf><mode>"
output_asm_insn ("vldrh.<supf><V_sz_elem>\t%q0, [%m1, %q2, uxtw #1]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_shifted_offset_<supf><mode>"))
+ (set_attr "length" "4")])
;;
;; [vldrhq_gather_shifted_offset_z_s vldrhq_gather_shited_offset_z_u]
@@ -3545,7 +3747,8 @@ (define_insn "mve_vldrhq_gather_shifted_offset_z_<supf><mode>"
output_asm_insn ("vpst\n\tvldrht.<supf><V_sz_elem>\t%q0, [%m1, %q2, uxtw #1]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_shifted_offset_<supf><mode>"))
+ (set_attr "length" "8")])
;;
;; [vldrhq_s, vldrhq_u]
@@ -3567,7 +3770,8 @@ (define_insn "mve_vldrhq_<supf><mode>"
output_asm_insn ("vldrh.<supf><V_sz_elem>\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_<supf><mode>"))
+ (set_attr "length" "4")])
;;
;; [vldrhq_z_f]
@@ -3587,7 +3791,8 @@ (define_insn "mve_vldrhq_z_fv8hf"
output_asm_insn ("vpst\;vldrht.16\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_fv8hf"))
+ (set_attr "length" "8")])
;;
;; [vldrhq_z_s vldrhq_z_u]
@@ -3610,7 +3815,8 @@ (define_insn "mve_vldrhq_z_<supf><mode>"
output_asm_insn ("vpst\;vldrht.<supf><V_sz_elem>\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_<supf><mode>"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_f]
@@ -3629,7 +3835,8 @@ (define_insn "mve_vldrwq_fv4sf"
output_asm_insn ("vldrw.32\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_fv4sf"))
+ (set_attr "length" "4")])
;;
;; [vldrwq_s vldrwq_u]
@@ -3648,7 +3855,8 @@ (define_insn "mve_vldrwq_<supf>v4si"
output_asm_insn ("vldrw.32\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_<supf>v4si"))
+ (set_attr "length" "4")])
;;
;; [vldrwq_z_f]
@@ -3668,7 +3876,8 @@ (define_insn "mve_vldrwq_z_fv4sf"
output_asm_insn ("vpst\;vldrwt.32\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_fv4sf"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_z_s vldrwq_z_u]
@@ -3688,7 +3897,8 @@ (define_insn "mve_vldrwq_z_<supf>v4si"
output_asm_insn ("vpst\;vldrwt.32\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_<supf>v4si"))
+ (set_attr "length" "8")])
(define_expand "@mve_vld1q_f<mode>"
[(match_operand:MVE_0 0 "s_register_operand")
@@ -3728,7 +3938,8 @@ (define_insn "mve_vldrdq_gather_base_<supf>v2di"
output_asm_insn ("vldrd.64\t%q0, [%q1, %2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_base_<supf>v2di"))
+ (set_attr "length" "4")])
;;
;; [vldrdq_gather_base_z_s vldrdq_gather_base_z_u]
@@ -3749,7 +3960,8 @@ (define_insn "mve_vldrdq_gather_base_z_<supf>v2di"
output_asm_insn ("vpst\n\tvldrdt.u64\t%q0, [%q1, %2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_base_<supf>v2di"))
+ (set_attr "length" "8")])
;;
;; [vldrdq_gather_offset_s vldrdq_gather_offset_u]
@@ -3769,7 +3981,8 @@ (define_insn "mve_vldrdq_gather_offset_<supf>v2di"
output_asm_insn ("vldrd.u64\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_offset_<supf>v2di"))
+ (set_attr "length" "4")])
;;
;; [vldrdq_gather_offset_z_s vldrdq_gather_offset_z_u]
@@ -3790,7 +4003,8 @@ (define_insn "mve_vldrdq_gather_offset_z_<supf>v2di"
output_asm_insn ("vpst\n\tvldrdt.u64\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_offset_<supf>v2di"))
+ (set_attr "length" "8")])
;;
;; [vldrdq_gather_shifted_offset_s vldrdq_gather_shifted_offset_u]
@@ -3810,7 +4024,8 @@ (define_insn "mve_vldrdq_gather_shifted_offset_<supf>v2di"
output_asm_insn ("vldrd.u64\t%q0, [%m1, %q2, uxtw #3]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_shifted_offset_<supf>v2di"))
+ (set_attr "length" "4")])
;;
;; [vldrdq_gather_shifted_offset_z_s vldrdq_gather_shifted_offset_z_u]
@@ -3831,7 +4046,8 @@ (define_insn "mve_vldrdq_gather_shifted_offset_z_<supf>v2di"
output_asm_insn ("vpst\n\tvldrdt.u64\t%q0, [%m1, %q2, uxtw #3]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_shifted_offset_<supf>v2di"))
+ (set_attr "length" "8")])
;;
;; [vldrhq_gather_offset_f]
@@ -3851,7 +4067,8 @@ (define_insn "mve_vldrhq_gather_offset_fv8hf"
output_asm_insn ("vldrh.f16\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_offset_fv8hf"))
+ (set_attr "length" "4")])
;;
;; [vldrhq_gather_offset_z_f]
@@ -3873,7 +4090,8 @@ (define_insn "mve_vldrhq_gather_offset_z_fv8hf"
output_asm_insn ("vpst\n\tvldrht.f16\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_offset_fv8hf"))
+ (set_attr "length" "8")])
;;
;; [vldrhq_gather_shifted_offset_f]
@@ -3893,7 +4111,8 @@ (define_insn "mve_vldrhq_gather_shifted_offset_fv8hf"
output_asm_insn ("vldrh.f16\t%q0, [%m1, %q2, uxtw #1]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_shifted_offset_fv8hf"))
+ (set_attr "length" "4")])
;;
;; [vldrhq_gather_shifted_offset_z_f]
@@ -3915,7 +4134,8 @@ (define_insn "mve_vldrhq_gather_shifted_offset_z_fv8hf"
output_asm_insn ("vpst\n\tvldrht.f16\t%q0, [%m1, %q2, uxtw #1]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_shifted_offset_fv8hf"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_gather_base_f]
@@ -3935,7 +4155,8 @@ (define_insn "mve_vldrwq_gather_base_fv4sf"
output_asm_insn ("vldrw.u32\t%q0, [%q1, %2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_fv4sf"))
+ (set_attr "length" "4")])
;;
;; [vldrwq_gather_base_z_f]
@@ -3956,7 +4177,8 @@ (define_insn "mve_vldrwq_gather_base_z_fv4sf"
output_asm_insn ("vpst\n\tvldrwt.u32\t%q0, [%q1, %2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_fv4sf"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_gather_offset_f]
@@ -3976,7 +4198,8 @@ (define_insn "mve_vldrwq_gather_offset_fv4sf"
output_asm_insn ("vldrw.u32\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_offset_fv4sf"))
+ (set_attr "length" "4")])
;;
;; [vldrwq_gather_offset_s vldrwq_gather_offset_u]
@@ -3996,7 +4219,8 @@ (define_insn "mve_vldrwq_gather_offset_<supf>v4si"
output_asm_insn ("vldrw.u32\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_offset_<supf>v4si"))
+ (set_attr "length" "4")])
;;
;; [vldrwq_gather_offset_z_f]
@@ -4018,7 +4242,8 @@ (define_insn "mve_vldrwq_gather_offset_z_fv4sf"
output_asm_insn ("vpst\n\tvldrwt.u32\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_offset_fv4sf"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_gather_offset_z_s vldrwq_gather_offset_z_u]
@@ -4040,7 +4265,8 @@ (define_insn "mve_vldrwq_gather_offset_z_<supf>v4si"
output_asm_insn ("vpst\n\tvldrwt.u32\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_offset_<supf>v4si"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_gather_shifted_offset_f]
@@ -4060,7 +4286,8 @@ (define_insn "mve_vldrwq_gather_shifted_offset_fv4sf"
output_asm_insn ("vldrw.u32\t%q0, [%m1, %q2, uxtw #2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_shifted_offset_fv4sf"))
+ (set_attr "length" "4")])
;;
;; [vldrwq_gather_shifted_offset_s vldrwq_gather_shifted_offset_u]
@@ -4080,7 +4307,8 @@ (define_insn "mve_vldrwq_gather_shifted_offset_<supf>v4si"
output_asm_insn ("vldrw.u32\t%q0, [%m1, %q2, uxtw #2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_shifted_offset_<supf>v4si"))
+ (set_attr "length" "4")])
;;
;; [vldrwq_gather_shifted_offset_z_f]
@@ -4102,7 +4330,8 @@ (define_insn "mve_vldrwq_gather_shifted_offset_z_fv4sf"
output_asm_insn ("vpst\n\tvldrwt.u32\t%q0, [%m1, %q2, uxtw #2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_shifted_offset_fv4sf"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_gather_shifted_offset_z_s vldrwq_gather_shifted_offset_z_u]
@@ -4124,7 +4353,8 @@ (define_insn "mve_vldrwq_gather_shifted_offset_z_<supf>v4si"
output_asm_insn ("vpst\n\tvldrwt.u32\t%q0, [%m1, %q2, uxtw #2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_shifted_offset_<supf>v4si"))
+ (set_attr "length" "8")])
;;
;; [vstrhq_f]
@@ -4143,7 +4373,8 @@ (define_insn "mve_vstrhq_fv8hf"
output_asm_insn ("vstrh.16\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_fv8hf"))
+ (set_attr "length" "4")])
;;
;; [vstrhq_p_f]
@@ -4164,7 +4395,8 @@ (define_insn "mve_vstrhq_p_fv8hf"
output_asm_insn ("vpst\;vstrht.16\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_fv8hf"))
+ (set_attr "length" "8")])
;;
;; [vstrhq_p_s vstrhq_p_u]
@@ -4186,7 +4418,8 @@ (define_insn "mve_vstrhq_p_<supf><mode>"
output_asm_insn ("vpst\;vstrht.<V_sz_elem>\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_<supf><mode>"))
+ (set_attr "length" "8")])
;;
;; [vstrhq_scatter_offset_p_s vstrhq_scatter_offset_p_u]
@@ -4218,7 +4451,8 @@ (define_insn "mve_vstrhq_scatter_offset_p_<supf><mode>_insn"
VSTRHSOQ))]
"TARGET_HAVE_MVE"
"vpst\;vstrht.<V_sz_elem>\t%q2, [%0, %q1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_offset_<supf><mode>_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrhq_scatter_offset_s vstrhq_scatter_offset_u]
@@ -4246,7 +4480,8 @@ (define_insn "mve_vstrhq_scatter_offset_<supf><mode>_insn"
VSTRHSOQ))]
"TARGET_HAVE_MVE"
"vstrh.<V_sz_elem>\t%q2, [%0, %q1]"
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_offset_<supf><mode>_insn"))
+ (set_attr "length" "4")])
;;
;; [vstrhq_scatter_shifted_offset_p_s vstrhq_scatter_shifted_offset_p_u]
@@ -4278,7 +4513,8 @@ (define_insn "mve_vstrhq_scatter_shifted_offset_p_<supf><mode>_insn"
VSTRHSSOQ))]
"TARGET_HAVE_MVE"
"vpst\;vstrht.<V_sz_elem>\t%q2, [%0, %q1, uxtw #1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_shifted_offset_<supf><mode>_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrhq_scatter_shifted_offset_s vstrhq_scatter_shifted_offset_u]
@@ -4307,7 +4543,8 @@ (define_insn "mve_vstrhq_scatter_shifted_offset_<supf><mode>_insn"
VSTRHSSOQ))]
"TARGET_HAVE_MVE"
"vstrh.<V_sz_elem>\t%q2, [%0, %q1, uxtw #1]"
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_shifted_offset_<supf><mode>_insn"))
+ (set_attr "length" "4")])
;;
;; [vstrhq_s, vstrhq_u]
@@ -4326,7 +4563,8 @@ (define_insn "mve_vstrhq_<supf><mode>"
output_asm_insn ("vstrh.<V_sz_elem>\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_<supf><mode>"))
+ (set_attr "length" "4")])
;;
;; [vstrwq_f]
@@ -4345,7 +4583,8 @@ (define_insn "mve_vstrwq_fv4sf"
output_asm_insn ("vstrw.32\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_fv4sf"))
+ (set_attr "length" "4")])
;;
;; [vstrwq_p_f]
@@ -4366,7 +4605,8 @@ (define_insn "mve_vstrwq_p_fv4sf"
output_asm_insn ("vpst\;vstrwt.32\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_fv4sf"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_p_s vstrwq_p_u]
@@ -4387,7 +4627,8 @@ (define_insn "mve_vstrwq_p_<supf>v4si"
output_asm_insn ("vpst\;vstrwt.32\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_<supf>v4si"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_s vstrwq_u]
@@ -4406,7 +4647,8 @@ (define_insn "mve_vstrwq_<supf>v4si"
output_asm_insn ("vstrw.32\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_<supf>v4si"))
+ (set_attr "length" "4")])
(define_expand "@mve_vst1q_f<mode>"
[(match_operand:<MVE_CNVT> 0 "mve_memory_operand")
@@ -4449,7 +4691,8 @@ (define_insn "mve_vstrdq_scatter_base_p_<supf>v2di"
output_asm_insn ("vpst\;\tvstrdt.u64\t%q2, [%q0, %1]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_base_<supf>v2di"))
+ (set_attr "length" "8")])
;;
;; [vstrdq_scatter_base_s vstrdq_scatter_base_u]
@@ -4471,7 +4714,8 @@ (define_insn "mve_vstrdq_scatter_base_<supf>v2di"
output_asm_insn ("vstrd.u64\t%q2, [%q0, %1]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_base_<supf>v2di"))
+ (set_attr "length" "4")])
;;
;; [vstrdq_scatter_offset_p_s vstrdq_scatter_offset_p_u]
@@ -4502,7 +4746,8 @@ (define_insn "mve_vstrdq_scatter_offset_p_<supf>v2di_insn"
VSTRDSOQ))]
"TARGET_HAVE_MVE"
"vpst\;vstrdt.64\t%q2, [%0, %q1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_offset_<supf>v2di_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrdq_scatter_offset_s vstrdq_scatter_offset_u]
@@ -4530,7 +4775,8 @@ (define_insn "mve_vstrdq_scatter_offset_<supf>v2di_insn"
VSTRDSOQ))]
"TARGET_HAVE_MVE"
"vstrd.64\t%q2, [%0, %q1]"
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_offset_<supf>v2di_insn"))
+ (set_attr "length" "4")])
;;
;; [vstrdq_scatter_shifted_offset_p_s vstrdq_scatter_shifted_offset_p_u]
@@ -4562,7 +4808,8 @@ (define_insn "mve_vstrdq_scatter_shifted_offset_p_<supf>v2di_insn"
VSTRDSSOQ))]
"TARGET_HAVE_MVE"
"vpst\;vstrdt.64\t%q2, [%0, %q1, uxtw #3]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_shifted_offset_<supf>v2di_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrdq_scatter_shifted_offset_s vstrdq_scatter_shifted_offset_u]
@@ -4591,7 +4838,8 @@ (define_insn "mve_vstrdq_scatter_shifted_offset_<supf>v2di_insn"
VSTRDSSOQ))]
"TARGET_HAVE_MVE"
"vstrd.64\t%q2, [%0, %q1, uxtw #3]"
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_shifted_offset_<supf>v2di_insn"))
+ (set_attr "length" "4")])
;;
;; [vstrhq_scatter_offset_f]
@@ -4619,7 +4867,8 @@ (define_insn "mve_vstrhq_scatter_offset_fv8hf_insn"
VSTRHQSO_F))]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vstrh.16\t%q2, [%0, %q1]"
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_offset_fv8hf_insn"))
+ (set_attr "length" "4")])
;;
;; [vstrhq_scatter_offset_p_f]
@@ -4650,7 +4899,8 @@ (define_insn "mve_vstrhq_scatter_offset_p_fv8hf_insn"
VSTRHQSO_F))]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vstrht.16\t%q2, [%0, %q1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_offset_fv8hf_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrhq_scatter_shifted_offset_f]
@@ -4678,7 +4928,8 @@ (define_insn "mve_vstrhq_scatter_shifted_offset_fv8hf_insn"
VSTRHQSSO_F))]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vstrh.16\t%q2, [%0, %q1, uxtw #1]"
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_shifted_offset_fv8hf_insn"))
+ (set_attr "length" "4")])
;;
;; [vstrhq_scatter_shifted_offset_p_f]
@@ -4710,7 +4961,8 @@ (define_insn "mve_vstrhq_scatter_shifted_offset_p_fv8hf_insn"
VSTRHQSSO_F))]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vstrht.16\t%q2, [%0, %q1, uxtw #1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_shifted_offset_fv8hf_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_base_f]
@@ -4732,7 +4984,8 @@ (define_insn "mve_vstrwq_scatter_base_fv4sf"
output_asm_insn ("vstrw.u32\t%q2, [%q0, %1]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_fv4sf"))
+ (set_attr "length" "4")])
;;
;; [vstrwq_scatter_base_p_f]
@@ -4755,7 +5008,8 @@ (define_insn "mve_vstrwq_scatter_base_p_fv4sf"
output_asm_insn ("vpst\n\tvstrwt.u32\t%q2, [%q0, %1]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_fv4sf"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_offset_f]
@@ -4783,7 +5037,8 @@ (define_insn "mve_vstrwq_scatter_offset_fv4sf_insn"
VSTRWQSO_F))]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vstrw.32\t%q2, [%0, %q1]"
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_offset_fv4sf_insn"))
+ (set_attr "length" "4")])
;;
;; [vstrwq_scatter_offset_p_f]
@@ -4814,7 +5069,8 @@ (define_insn "mve_vstrwq_scatter_offset_p_fv4sf_insn"
VSTRWQSO_F))]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vstrwt.32\t%q2, [%0, %q1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_offset_fv4sf_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_offset_s vstrwq_scatter_offset_u]
@@ -4845,7 +5101,8 @@ (define_insn "mve_vstrwq_scatter_offset_p_<supf>v4si_insn"
VSTRWSOQ))]
"TARGET_HAVE_MVE"
"vpst\;vstrwt.32\t%q2, [%0, %q1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_offset_<supf>v4si_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_offset_s vstrwq_scatter_offset_u]
@@ -4873,7 +5130,8 @@ (define_insn "mve_vstrwq_scatter_offset_<supf>v4si_insn"
VSTRWSOQ))]
"TARGET_HAVE_MVE"
"vstrw.32\t%q2, [%0, %q1]"
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_offset_<supf>v4si_insn"))
+ (set_attr "length" "4")])
;;
;; [vstrwq_scatter_shifted_offset_f]
@@ -4901,7 +5159,8 @@ (define_insn "mve_vstrwq_scatter_shifted_offset_fv4sf_insn"
VSTRWQSSO_F))]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vstrw.32\t%q2, [%0, %q1, uxtw #2]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_shifted_offset_fv4sf_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_shifted_offset_p_f]
@@ -4933,7 +5192,8 @@ (define_insn "mve_vstrwq_scatter_shifted_offset_p_fv4sf_insn"
VSTRWQSSO_F))]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vstrwt.32\t%q2, [%0, %q1, uxtw #2]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_shifted_offset_fv4sf_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_shifted_offset_p_s vstrwq_scatter_shifted_offset_p_u]
@@ -4965,7 +5225,8 @@ (define_insn "mve_vstrwq_scatter_shifted_offset_p_<supf>v4si_insn"
VSTRWSSOQ))]
"TARGET_HAVE_MVE"
"vpst\;vstrwt.32\t%q2, [%0, %q1, uxtw #2]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_shifted_offset_<supf>v4si_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_shifted_offset_s vstrwq_scatter_shifted_offset_u]
@@ -4994,7 +5255,8 @@ (define_insn "mve_vstrwq_scatter_shifted_offset_<supf>v4si_insn"
VSTRWSSOQ))]
"TARGET_HAVE_MVE"
"vstrw.32\t%q2, [%0, %q1, uxtw #2]"
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_shifted_offset_<supf>v4si_insn"))
+ (set_attr "length" "4")])
;;
;; [vidupq_n_u])
@@ -5062,7 +5324,8 @@ (define_insn "mve_vidupq_m_wb_u<mode>_insn"
(match_operand:SI 6 "immediate_operand" "i")))]
"TARGET_HAVE_MVE"
"vpst\;\tvidupt.u%#<V_sz_elem>\t%q0, %2, %4"
- [(set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vidupq_u<mode>_insn"))
+ (set_attr "length""8")])
;;
;; [vddupq_n_u])
@@ -5130,7 +5393,8 @@ (define_insn "mve_vddupq_m_wb_u<mode>_insn"
(match_operand:SI 6 "immediate_operand" "i")))]
"TARGET_HAVE_MVE"
"vpst\;vddupt.u%#<V_sz_elem>\t%q0, %2, %4"
- [(set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vddupq_u<mode>_insn"))
+ (set_attr "length""8")])
;;
;; [vdwdupq_n_u])
@@ -5246,8 +5510,9 @@ (define_insn "mve_vdwdupq_m_wb_u<mode>_insn"
]
"TARGET_HAVE_MVE"
"vpst\;vdwdupt.u%#<V_sz_elem>\t%q2, %3, %R4, %5"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vdwdupq_wb_u<mode>_insn"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")])
;;
;; [viwdupq_n_u])
@@ -5363,7 +5628,8 @@ (define_insn "mve_viwdupq_m_wb_u<mode>_insn"
]
"TARGET_HAVE_MVE"
"vpst\;\tviwdupt.u%#<V_sz_elem>\t%q2, %3, %R4, %5"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_viwdupq_wb_u<mode>_insn"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5389,7 +5655,8 @@ (define_insn "mve_vstrwq_scatter_base_wb_<supf>v4si"
output_asm_insn ("vstrw.u32\t%q2, [%q0, %1]!",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_wb_<supf>v4si"))
+ (set_attr "length" "4")])
;;
;; [vstrwq_scatter_base_wb_p_s vstrwq_scatter_base_wb_p_u]
@@ -5415,7 +5682,8 @@ (define_insn "mve_vstrwq_scatter_base_wb_p_<supf>v4si"
output_asm_insn ("vpst\;\tvstrwt.u32\t%q2, [%q0, %1]!",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_wb_<supf>v4si"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_base_wb_f]
@@ -5440,7 +5708,8 @@ (define_insn "mve_vstrwq_scatter_base_wb_fv4sf"
output_asm_insn ("vstrw.u32\t%q2, [%q0, %1]!",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_wb_fv4sf"))
+ (set_attr "length" "4")])
;;
;; [vstrwq_scatter_base_wb_p_f]
@@ -5466,7 +5735,8 @@ (define_insn "mve_vstrwq_scatter_base_wb_p_fv4sf"
output_asm_insn ("vpst\;vstrwt.u32\t%q2, [%q0, %1]!",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_wb_fv4sf"))
+ (set_attr "length" "8")])
;;
;; [vstrdq_scatter_base_wb_s vstrdq_scatter_base_wb_u]
@@ -5491,7 +5761,8 @@ (define_insn "mve_vstrdq_scatter_base_wb_<supf>v2di"
output_asm_insn ("vstrd.u64\t%q2, [%q0, %1]!",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_base_wb_<supf>v2di"))
+ (set_attr "length" "4")])
;;
;; [vstrdq_scatter_base_wb_p_s vstrdq_scatter_base_wb_p_u]
@@ -5517,7 +5788,8 @@ (define_insn "mve_vstrdq_scatter_base_wb_p_<supf>v2di"
output_asm_insn ("vpst\;vstrdt.u64\t%q2, [%q0, %1]!",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_base_wb_<supf>v2di"))
+ (set_attr "length" "8")])
(define_expand "mve_vldrwq_gather_base_wb_<supf>v4si"
[(match_operand:V4SI 0 "s_register_operand")
@@ -5569,7 +5841,8 @@ (define_insn "mve_vldrwq_gather_base_wb_<supf>v4si_insn"
output_asm_insn ("vldrw.u32\t%q0, [%q1, %2]!",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_wb_<supf>v4si_insn"))
+ (set_attr "length" "4")])
(define_expand "mve_vldrwq_gather_base_wb_z_<supf>v4si"
[(match_operand:V4SI 0 "s_register_operand")
@@ -5625,7 +5898,8 @@ (define_insn "mve_vldrwq_gather_base_wb_z_<supf>v4si_insn"
output_asm_insn ("vpst\;vldrwt.u32\t%q0, [%q1, %2]!",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_wb_<supf>v4si_insn"))
+ (set_attr "length" "8")])
(define_expand "mve_vldrwq_gather_base_wb_fv4sf"
[(match_operand:V4SI 0 "s_register_operand")
@@ -5677,7 +5951,8 @@ (define_insn "mve_vldrwq_gather_base_wb_fv4sf_insn"
output_asm_insn ("vldrw.u32\t%q0, [%q1, %2]!",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_wb_fv4sf_insn"))
+ (set_attr "length" "4")])
(define_expand "mve_vldrwq_gather_base_wb_z_fv4sf"
[(match_operand:V4SI 0 "s_register_operand")
@@ -5734,7 +6009,8 @@ (define_insn "mve_vldrwq_gather_base_wb_z_fv4sf_insn"
output_asm_insn ("vpst\;vldrwt.u32\t%q0, [%q1, %2]!",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_wb_fv4sf_insn"))
+ (set_attr "length" "8")])
(define_expand "mve_vldrdq_gather_base_wb_<supf>v2di"
[(match_operand:V2DI 0 "s_register_operand")
@@ -5787,7 +6063,8 @@ (define_insn "mve_vldrdq_gather_base_wb_<supf>v2di_insn"
output_asm_insn ("vldrd.64\t%q0, [%q1, %2]!",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_base_wb_<supf>v2di_insn"))
+ (set_attr "length" "4")])
(define_expand "mve_vldrdq_gather_base_wb_z_<supf>v2di"
[(match_operand:V2DI 0 "s_register_operand")
@@ -5826,7 +6103,7 @@ (define_insn "get_fpscr_nzcvqc"
(unspec_volatile:SI [(reg:SI VFPCC_REGNUM)] UNSPEC_GET_FPSCR_NZCVQC))]
"TARGET_HAVE_MVE"
"vmrs\\t%0, FPSCR_nzcvqc"
- [(set_attr "type" "mve_move")])
+ [(set_attr "type" "mve_move")])
(define_insn "set_fpscr_nzcvqc"
[(set (reg:SI VFPCC_REGNUM)
@@ -5834,7 +6111,7 @@ (define_insn "set_fpscr_nzcvqc"
VUNSPEC_SET_FPSCR_NZCVQC))]
"TARGET_HAVE_MVE"
"vmsr\\tFPSCR_nzcvqc, %0"
- [(set_attr "type" "mve_move")])
+ [(set_attr "type" "mve_move")])
;;
;; [vldrdq_gather_base_wb_z_s vldrdq_gather_base_wb_z_u]
@@ -5859,7 +6136,8 @@ (define_insn "mve_vldrdq_gather_base_wb_z_<supf>v2di_insn"
output_asm_insn ("vpst\;vldrdt.u64\t%q0, [%q1, %2]!",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_base_wb_<supf>v2di_insn"))
+ (set_attr "length" "8")])
;;
;; [vadciq_m_s, vadciq_m_u])
;;
@@ -5876,7 +6154,8 @@ (define_insn "mve_vadciq_m_<supf>v4si"
]
"TARGET_HAVE_MVE"
"vpst\;vadcit.i32\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vadciq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length" "8")])
;;
@@ -5893,7 +6172,8 @@ (define_insn "mve_vadciq_<supf>v4si"
]
"TARGET_HAVE_MVE"
"vadci.i32\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vadciq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length" "4")])
;;
@@ -5912,7 +6192,8 @@ (define_insn "mve_vadcq_m_<supf>v4si"
]
"TARGET_HAVE_MVE"
"vpst\;vadct.i32\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vadcq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length" "8")])
;;
@@ -5929,7 +6210,8 @@ (define_insn "mve_vadcq_<supf>v4si"
]
"TARGET_HAVE_MVE"
"vadc.i32\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vadcq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length" "4")
(set_attr "conds" "set")])
@@ -5949,7 +6231,8 @@ (define_insn "mve_vsbciq_m_<supf>v4si"
]
"TARGET_HAVE_MVE"
"vpst\;vsbcit.i32\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vsbciq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length" "8")])
;;
@@ -5966,7 +6249,8 @@ (define_insn "mve_vsbciq_<supf>v4si"
]
"TARGET_HAVE_MVE"
"vsbci.i32\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vsbciq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length" "4")])
;;
@@ -5985,7 +6269,8 @@ (define_insn "mve_vsbcq_m_<supf>v4si"
]
"TARGET_HAVE_MVE"
"vpst\;vsbct.i32\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vsbcq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length" "8")])
;;
@@ -6002,7 +6287,8 @@ (define_insn "mve_vsbcq_<supf>v4si"
]
"TARGET_HAVE_MVE"
"vsbc.i32\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vsbcq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length" "4")])
;;
@@ -6031,7 +6317,7 @@ (define_insn "mve_vst2q<mode>"
"vst21.<V_sz_elem>\t{%q0, %q1}, %3", ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set_attr "length" "8")])
;;
;; [vld2q])
@@ -6059,7 +6345,7 @@ (define_insn "mve_vld2q<mode>"
"vld21.<V_sz_elem>\t{%q0, %q1}, %3", ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set_attr "length" "8")])
;;
;; [vld4q])
@@ -6402,7 +6688,8 @@ (define_insn "mve_vshlcq_m_<supf><mode>"
]
"TARGET_HAVE_MVE"
"vpst\;vshlct\t%q0, %1, %4"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vshlcq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length" "8")])
;; CDE instructions on MVE registers.
@@ -6414,7 +6701,8 @@ (define_insn "arm_vcx1qv16qi"
UNSPEC_VCDE))]
"TARGET_CDE && TARGET_HAVE_MVE"
"vcx1\\tp%c1, %q0, #%c2"
- [(set_attr "type" "coproc")]
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx1qv16qi"))
+ (set_attr "type" "coproc")]
)
(define_insn "arm_vcx1qav16qi"
@@ -6425,7 +6713,8 @@ (define_insn "arm_vcx1qav16qi"
UNSPEC_VCDEA))]
"TARGET_CDE && TARGET_HAVE_MVE"
"vcx1a\\tp%c1, %q0, #%c3"
- [(set_attr "type" "coproc")]
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx1qav16qi"))
+ (set_attr "type" "coproc")]
)
(define_insn "arm_vcx2qv16qi"
@@ -6436,7 +6725,8 @@ (define_insn "arm_vcx2qv16qi"
UNSPEC_VCDE))]
"TARGET_CDE && TARGET_HAVE_MVE"
"vcx2\\tp%c1, %q0, %q2, #%c3"
- [(set_attr "type" "coproc")]
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx2qv16qi"))
+ (set_attr "type" "coproc")]
)
(define_insn "arm_vcx2qav16qi"
@@ -6448,7 +6738,8 @@ (define_insn "arm_vcx2qav16qi"
UNSPEC_VCDEA))]
"TARGET_CDE && TARGET_HAVE_MVE"
"vcx2a\\tp%c1, %q0, %q3, #%c4"
- [(set_attr "type" "coproc")]
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx2qav16qi"))
+ (set_attr "type" "coproc")]
)
(define_insn "arm_vcx3qv16qi"
@@ -6460,7 +6751,8 @@ (define_insn "arm_vcx3qv16qi"
UNSPEC_VCDE))]
"TARGET_CDE && TARGET_HAVE_MVE"
"vcx3\\tp%c1, %q0, %q2, %q3, #%c4"
- [(set_attr "type" "coproc")]
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx3qv16qi"))
+ (set_attr "type" "coproc")]
)
(define_insn "arm_vcx3qav16qi"
@@ -6473,7 +6765,8 @@ (define_insn "arm_vcx3qav16qi"
UNSPEC_VCDEA))]
"TARGET_CDE && TARGET_HAVE_MVE"
"vcx3a\\tp%c1, %q0, %q3, %q4, #%c5"
- [(set_attr "type" "coproc")]
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx3qav16qi"))
+ (set_attr "type" "coproc")]
)
(define_insn "arm_vcx1q<a>_p_v16qi"
@@ -6485,7 +6778,8 @@ (define_insn "arm_vcx1q<a>_p_v16qi"
CDE_VCX))]
"TARGET_CDE && TARGET_HAVE_MVE"
"vpst\;vcx1<a>t\\tp%c1, %q0, #%c3"
- [(set_attr "type" "coproc")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx1q<a>v16qi"))
+ (set_attr "type" "coproc")
(set_attr "length" "8")]
)
@@ -6499,7 +6793,8 @@ (define_insn "arm_vcx2q<a>_p_v16qi"
CDE_VCX))]
"TARGET_CDE && TARGET_HAVE_MVE"
"vpst\;vcx2<a>t\\tp%c1, %q0, %q3, #%c4"
- [(set_attr "type" "coproc")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx2q<a>v16qi"))
+ (set_attr "type" "coproc")
(set_attr "length" "8")]
)
@@ -6514,11 +6809,12 @@ (define_insn "arm_vcx3q<a>_p_v16qi"
CDE_VCX))]
"TARGET_CDE && TARGET_HAVE_MVE"
"vpst\;vcx3<a>t\\tp%c1, %q0, %q3, %q4, #%c5"
- [(set_attr "type" "coproc")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx3q<a>v16qi"))
+ (set_attr "type" "coproc")
(set_attr "length" "8")]
)
-(define_insn "*movmisalign<mode>_mve_store"
+(define_insn "movmisalign<mode>_mve_store"
[(set (match_operand:MVE_VLD_ST 0 "mve_memory_operand" "=Ux")
(unspec:MVE_VLD_ST [(match_operand:MVE_VLD_ST 1 "s_register_operand" " w")]
UNSPEC_MISALIGNED_ACCESS))]
@@ -6526,11 +6822,12 @@ (define_insn "*movmisalign<mode>_mve_store"
|| (TARGET_HAVE_MVE_FLOAT && VALID_MVE_SF_MODE (<MODE>mode)))
&& !BYTES_BIG_ENDIAN && unaligned_access"
"vstr<V_sz_elem1>.<V_sz_elem>\t%q1, %E0"
- [(set_attr "type" "mve_store")]
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_movmisalign<mode>_mve_store"))
+ (set_attr "type" "mve_store")]
)
-(define_insn "*movmisalign<mode>_mve_load"
+(define_insn "movmisalign<mode>_mve_load"
[(set (match_operand:MVE_VLD_ST 0 "s_register_operand" "=w")
(unspec:MVE_VLD_ST [(match_operand:MVE_VLD_ST 1 "mve_memory_operand" " Ux")]
UNSPEC_MISALIGNED_ACCESS))]
@@ -6538,7 +6835,8 @@ (define_insn "*movmisalign<mode>_mve_load"
|| (TARGET_HAVE_MVE_FLOAT && VALID_MVE_SF_MODE (<MODE>mode)))
&& !BYTES_BIG_ENDIAN && unaligned_access"
"vldr<V_sz_elem1>.<V_sz_elem>\t%q0, %E1"
- [(set_attr "type" "mve_load")]
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_movmisalign<mode>_mve_load"))
+ (set_attr "type" "mve_load")]
)
;; Expander for VxBI moves
diff --git a/gcc/config/arm/vec-common.md b/gcc/config/arm/vec-common.md
index 9af8429968d8662b3c814306c94f033434378e7d..74871cb984b3fe1fb9571841cdcae39631abf8e2 100644
--- a/gcc/config/arm/vec-common.md
+++ b/gcc/config/arm/vec-common.md
@@ -366,7 +366,8 @@ (define_insn "@mve_<mve_insn>q_<supf><mode>"
"@
<mve_insn>.<supf>%#<V_sz_elem>\t%<V_reg>0, %<V_reg>1, %<V_reg>2
* return neon_output_shift_immediate (\"vshl\", 'i', &operands[2], <MODE>mode, VALID_NEON_QREG_MODE (<MODE>mode), true);"
- [(set_attr "type" "neon_shift_reg<q>, neon_shift_imm<q>")]
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "neon_shift_reg<q>, neon_shift_imm<q>")]
)
(define_expand "vashl<mode>3"
^ permalink raw reply [flat|nested] 10+ messages in thread
* [PATCH 2/2] arm: Add support for MVE Tail-Predicated Low Overhead Loops
2023-12-18 11:53 [PATCH 0/2] arm: Add support for MVE Tail-Predicated Low Overhead Loops Andre Vieira
2023-12-18 11:53 ` [PATCH 1/2] arm: Add define_attr to to create a mapping between MVE predicated and unpredicated insns Andre Vieira
@ 2023-12-18 11:53 ` Andre Vieira
2023-12-20 16:54 ` Andre Vieira (lists)
1 sibling, 1 reply; 10+ messages in thread
From: Andre Vieira @ 2023-12-18 11:53 UTC (permalink / raw)
To: gcc-patches; +Cc: Richard.Earnshaw, Stam Markianos-Wright
[-- Attachment #1: Type: text/plain, Size: 44 bytes --]
This is a multi-part message in MIME format.
[-- Attachment #2: Type: text/plain, Size: 881 bytes --]
Reworked Stam's patch after comments in:
https://gcc.gnu.org/pipermail/gcc-patches/2023-December/640362.html
The original gcc ChangeLog remains unchanged, but I did split up some tests so
here is the testsuite ChangeLog.
gcc/testsuite/ChangeLog:
* gcc.target/arm/lob.h: Update framework.
* gcc.target/arm/lob1.c: Likewise.
* gcc.target/arm/lob6.c: Likewise.
* gcc.target/arm/mve/dlstp-compile-asm.c: New test.
* gcc.target/arm/mve/dlstp-int16x8.c: New test.
* gcc.target/arm/mve/dlstp-int16x8-run.c: New test.
* gcc.target/arm/mve/dlstp-int32x4.c: New test.
* gcc.target/arm/mve/dlstp-int32x4-run.c: New test.
* gcc.target/arm/mve/dlstp-int64x2.c: New test.
* gcc.target/arm/mve/dlstp-int64x2-run.c: New test.
* gcc.target/arm/mve/dlstp-int8x16.c: New test.
* gcc.target/arm/mve/dlstp-int8x16-run.c: New test.
* gcc.target/arm/mve/dlstp-invalid-asm.c: New test.
[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #3: 0002-arm-Add-support-for-MVE-Tail-Predicated-Low-Overhead.patch --]
[-- Type: text/x-patch; name="0002-arm-Add-support-for-MVE-Tail-Predicated-Low-Overhead.patch", Size: 107172 bytes --]
diff --git a/gcc/config/arm/arm-protos.h b/gcc/config/arm/arm-protos.h
index 2f5ca79ed8d..4f164c54740 100644
--- a/gcc/config/arm/arm-protos.h
+++ b/gcc/config/arm/arm-protos.h
@@ -65,8 +65,8 @@ extern void arm_emit_speculation_barrier_function (void);
extern void arm_decompose_di_binop (rtx, rtx, rtx *, rtx *, rtx *, rtx *);
extern bool arm_q_bit_access (void);
extern bool arm_ge_bits_access (void);
-extern bool arm_target_insn_ok_for_lob (rtx);
-
+extern bool arm_target_bb_ok_for_lob (basic_block);
+extern rtx arm_attempt_dlstp_transform (rtx);
#ifdef RTX_CODE
enum reg_class
arm_mode_base_reg_class (machine_mode);
diff --git a/gcc/config/arm/arm.cc b/gcc/config/arm/arm.cc
index 0c0cb14a8a4..1ee72bcb7ec 100644
--- a/gcc/config/arm/arm.cc
+++ b/gcc/config/arm/arm.cc
@@ -668,6 +668,12 @@ static const scoped_attribute_specs *const arm_attribute_table[] =
#undef TARGET_HAVE_CONDITIONAL_EXECUTION
#define TARGET_HAVE_CONDITIONAL_EXECUTION arm_have_conditional_execution
+#undef TARGET_LOOP_UNROLL_ADJUST
+#define TARGET_LOOP_UNROLL_ADJUST arm_loop_unroll_adjust
+
+#undef TARGET_PREDICT_DOLOOP_P
+#define TARGET_PREDICT_DOLOOP_P arm_predict_doloop_p
+
#undef TARGET_LEGITIMATE_CONSTANT_P
#define TARGET_LEGITIMATE_CONSTANT_P arm_legitimate_constant_p
@@ -34483,19 +34489,1147 @@ arm_invalid_within_doloop (const rtx_insn *insn)
}
bool
-arm_target_insn_ok_for_lob (rtx insn)
+arm_target_bb_ok_for_lob (basic_block bb)
{
- basic_block bb = BLOCK_FOR_INSN (insn);
/* Make sure the basic block of the target insn is a simple latch
having as single predecessor and successor the body of the loop
itself. Only simple loops with a single basic block as body are
supported for 'low over head loop' making sure that LE target is
above LE itself in the generated code. */
-
return single_succ_p (bb)
- && single_pred_p (bb)
- && single_succ_edge (bb)->dest == single_pred_edge (bb)->src
- && contains_no_active_insn_p (bb);
+ && single_pred_p (bb)
+ && single_succ_edge (bb)->dest == single_pred_edge (bb)->src;
+}
+
+/* Utility function: Given a VCTP or a VCTP_M insn, return the number of MVE
+ lanes based on the machine mode being used. */
+
+static int
+arm_mve_get_vctp_lanes (rtx_insn *insn)
+{
+ rtx insn_set = single_set (insn);
+ if (insn_set
+ && GET_CODE (SET_SRC (insn_set)) == UNSPEC
+ && (XINT (SET_SRC (insn_set), 1) == VCTP
+ || XINT (SET_SRC (insn_set), 1) == VCTP_M))
+ {
+ machine_mode mode = GET_MODE (SET_SRC (insn_set));
+ return (VECTOR_MODE_P (mode) && VALID_MVE_PRED_MODE (mode))
+ ? GET_MODE_NUNITS (mode) : 0;
+ }
+ return 0;
+}
+
+/* Check if INSN requires the use of the VPR reg, if it does, return the
+ sub-rtx of the VPR reg. The TYPE argument controls whether
+ this function should:
+ * For TYPE == 0, check all operands, including the OUT operands,
+ and return the first occurrence of the VPR reg.
+ * For TYPE == 1, only check the input operands.
+ * For TYPE == 2, only check the output operands.
+ (INOUT operands are considered both as input and output operands)
+*/
+static rtx
+arm_get_required_vpr_reg (rtx_insn *insn, unsigned int type = 0)
+{
+ gcc_assert (type < 3);
+ if (!NONJUMP_INSN_P (insn))
+ return NULL_RTX;
+
+ bool requires_vpr;
+ extract_constrain_insn (insn);
+ int n_operands = recog_data.n_operands;
+ if (recog_data.n_alternatives == 0)
+ return NULL_RTX;
+
+ /* Fill in recog_op_alt with information about the constraints of
+ this insn. */
+ preprocess_constraints (insn);
+
+ for (int op = 0; op < n_operands; op++)
+ {
+ requires_vpr = true;
+ if (type == 1 && recog_data.operand_type[op] == OP_OUT)
+ continue;
+ else if (type == 2 && recog_data.operand_type[op] == OP_IN)
+ continue;
+
+ /* Iterate through alternatives of operand "op" in recog_op_alt and
+ identify if the operand is required to be the VPR. */
+ for (int alt = 0; alt < recog_data.n_alternatives; alt++)
+ {
+ const operand_alternative *op_alt
+ = &recog_op_alt[alt * n_operands];
+ /* Fetch the reg_class for each entry and check it against the
+ VPR_REG reg_class. */
+ if (alternative_class (op_alt, op) != VPR_REG)
+ requires_vpr = false;
+ }
+ /* If all alternatives of the insn require the VPR reg for this operand,
+ it means that either this is a VPR-generating instruction, like a vctp,
+ vcmp, etc., or it is a VPT-predicated instruction. Return the subrtx
+ of the VPR reg operand. */
+ if (requires_vpr)
+ return recog_data.operand[op];
+ }
+ return NULL_RTX;
+}
+
+/* Wrapper function of arm_get_required_vpr_reg with TYPE == 1, so return
+ something only if the VPR reg is an input operand to the insn. */
+
+static rtx
+arm_get_required_vpr_reg_param (rtx_insn *insn)
+{
+ return arm_get_required_vpr_reg (insn, 1);
+}
+
+/* Wrapper function of arm_get_required_vpr_reg with TYPE == 2, so return
+ something only if the VPR reg is the return value, an output of, or is
+ clobbered by the insn. */
+
+static rtx
+arm_get_required_vpr_reg_ret_val (rtx_insn *insn)
+{
+ return arm_get_required_vpr_reg (insn, 2);
+}
+
+/* Scan the basic block of a loop body for a vctp instruction. If there is
+ at least one vctp instruction, return the first rtx_insn *. */
+
+static rtx_insn *
+arm_mve_get_loop_vctp (basic_block bb)
+{
+ rtx_insn *insn = BB_HEAD (bb);
+
+ /* Now scan through all the instruction patterns and pick out the VCTP
+ instruction. We require arm_get_required_vpr_reg_param to be false
+ to make sure we pick up a VCTP, rather than a VCTP_M. */
+ FOR_BB_INSNS (bb, insn)
+ if (NONDEBUG_INSN_P (insn))
+ if (arm_get_required_vpr_reg_ret_val (insn)
+ && (arm_mve_get_vctp_lanes (insn) != 0)
+ && !arm_get_required_vpr_reg_param (insn))
+ return insn;
+ return NULL;
+}
+
+/* Return true if INSN is a MVE instruction that is VPT-predicable, but in
+ its unpredicated form, or if it is predicated, but on a predicate other
+ than VPR_REG. */
+
+static bool
+arm_mve_vec_insn_is_unpredicated_or_uses_other_predicate (rtx_insn *insn,
+ rtx vpr_reg)
+{
+ rtx insn_vpr_reg_operand;
+ if (MVE_VPT_UNPREDICATED_INSN_P (insn)
+ || (MVE_VPT_PREDICATED_INSN_P (insn)
+ && (insn_vpr_reg_operand = arm_get_required_vpr_reg_param (insn))
+ && !rtx_equal_p (vpr_reg, insn_vpr_reg_operand)))
+ return true;
+ else
+ return false;
+}
+
+/* Return true if INSN is a MVE instruction that is VPT-predicable and is
+ predicated on VPR_REG. */
+
+static bool
+arm_mve_vec_insn_is_predicated_with_this_predicate (rtx_insn *insn,
+ rtx vpr_reg)
+{
+ rtx insn_vpr_reg_operand;
+ if (MVE_VPT_PREDICATED_INSN_P (insn)
+ && (insn_vpr_reg_operand = arm_get_required_vpr_reg_param (insn))
+ && rtx_equal_p (vpr_reg, insn_vpr_reg_operand))
+ return true;
+ else
+ return false;
+}
+
+/* Utility function to identify if INSN is an MVE instruction that performs
+ some across-vector operation (and as a result does not align with normal
+ lane predication rules). All such instructions give only one scalar
+ output, except for vshlcq which gives a PARALLEL of a vector and a scalar
+ (one vector result and one carry output). */
+
+static bool
+arm_is_mve_across_vector_insn (rtx_insn* insn)
+{
+ df_ref insn_defs = NULL;
+ if (!MVE_VPT_PREDICABLE_INSN_P (insn))
+ return false;
+
+ bool is_across_vector = false;
+ FOR_EACH_INSN_DEF (insn_defs, insn)
+ if (!VALID_MVE_MODE (GET_MODE (DF_REF_REG (insn_defs)))
+ && !arm_get_required_vpr_reg_ret_val (insn))
+ is_across_vector = true;
+
+ return is_across_vector;
+}
+
+/* Utility function to identify if INSN is an MVE load or store instruction.
+ * For TYPE == 0, check all operands. If the function returns true,
+ INSN is a load or a store insn.
+ * For TYPE == 1, only check the input operands. If the function returns
+ true, INSN is a load insn.
+ * For TYPE == 2, only check the output operands. If the function returns
+ true, INSN is a store insn. */
+
+static bool
+arm_is_mve_load_store_insn (rtx_insn* insn, int type = 0)
+{
+ int n_operands = recog_data.n_operands;
+ extract_insn (insn);
+
+ for (int op = 0; op < n_operands; op++)
+ {
+ if (type == 1 && recog_data.operand_type[op] == OP_OUT)
+ continue;
+ else if (type == 2 && recog_data.operand_type[op] == OP_IN)
+ continue;
+ if (mve_memory_operand (recog_data.operand[op],
+ GET_MODE (recog_data.operand[op])))
+ return true;
+ }
+ return false;
+}
+
+/* When transforming an MVE intrinsic loop into an MVE Tail Predicated Low
+ Overhead Loop, there are a number of instructions that, if in their
+ unpredicated form, act across vector lanes, but are still safe to include
+ within the loop, despite the implicit predication added to the vector lanes.
+ This list has been compiled by carefully analyzing the instruction
+ pseudocode in the Arm-ARM.
+ All other across-vector instructions aren't allowed, because the addition
+ of implicit predication could influence the result of the operation.
+ Any new across-vector instructions added to the MVE ISA will have to be
+ assessed for inclusion in this list. */
+
+static bool
+arm_mve_is_allowed_unpredic_across_vector_insn (rtx_insn* insn)
+{
+ gcc_assert (MVE_VPT_UNPREDICATED_INSN_P (insn)
+ && arm_is_mve_across_vector_insn (insn));
+ rtx insn_set = single_set (insn);
+ if (!insn_set)
+ return false;
+ rtx unspec = SET_SRC (insn_set);
+ if (GET_CODE (unspec) != UNSPEC)
+ return false;
+ switch (XINT (unspec, 1))
+ {
+ case VADDVQ_U:
+ case VADDVQ_S:
+ case VADDVAQ_U:
+ case VADDVAQ_S:
+ case VMLADAVQ_U:
+ case VMLADAVQ_S:
+ case VMLADAVXQ_S:
+ case VMLADAVAQ_U:
+ case VMLADAVAQ_S:
+ case VMLADAVAXQ_S:
+ case VABAVQ_S:
+ case VABAVQ_U:
+ case VADDLVQ_S:
+ case VADDLVQ_U:
+ case VADDLVAQ_S:
+ case VADDLVAQ_U:
+ case VMAXVQ_U:
+ case VMAXAVQ_S:
+ case VMLALDAVQ_U:
+ case VMLALDAVXQ_U:
+ case VMLALDAVXQ_S:
+ case VMLALDAVQ_S:
+ case VMLALDAVAQ_S:
+ case VMLALDAVAQ_U:
+ case VMLALDAVAXQ_S:
+ case VMLALDAVAXQ_U:
+ case VMLSDAVQ_S:
+ case VMLSDAVXQ_S:
+ case VMLSDAVAXQ_S:
+ case VMLSDAVAQ_S:
+ case VMLSLDAVQ_S:
+ case VMLSLDAVXQ_S:
+ case VMLSLDAVAQ_S:
+ case VMLSLDAVAXQ_S:
+ case VRMLALDAVHXQ_S:
+ case VRMLALDAVHQ_U:
+ case VRMLALDAVHQ_S:
+ case VRMLALDAVHAQ_S:
+ case VRMLALDAVHAQ_U:
+ case VRMLALDAVHAXQ_S:
+ case VRMLSLDAVHQ_S:
+ case VRMLSLDAVHXQ_S:
+ case VRMLSLDAVHAQ_S:
+ case VRMLSLDAVHAXQ_S:
+ return true;
+ default:
+ break;
+ }
+ return false;
+}
+
+/* Scan through the DF chain backwards within the basic block and
+ determine if any of the USEs of the original insn (or the USEs of the insns
+ where they were DEF-ed, etc.) were affected by implicit VPT
+ predication of an MVE_VPT_UNPREDICATED_INSN_P in a dlstp/letp loop.
+ This function returns true if the insn is affected by implicit predication
+ and false otherwise.
+ Having such implicit predication on an unpredicated insn wouldn't in itself
+ block tail predication, because the output of that insn might then be used
+ in a correctly predicated store insn, where the disabled lanes will be
+ ignored. To verify this we later call:
+ `arm_mve_check_df_chain_fwd_for_implic_predic_impact`, which will check the
+ DF chains forward to see if any implicitly-predicated operand gets used in
+ an improper way. */
+
+static bool
+arm_mve_check_df_chain_back_for_implic_predic
+ (hash_map <rtx_insn *, bool> *safe_insn_map, rtx_insn *insn_in,
+ rtx vctp_vpr_generated)
+{
+
+ auto_vec<rtx_insn *> worklist;
+ worklist.safe_push (insn_in);
+
+ bool *temp = NULL;
+
+ while (worklist.length () > 0)
+ {
+ rtx_insn *insn = worklist.pop ();
+
+ if ((temp = safe_insn_map->get (insn)))
+ return *temp;
+
+ basic_block body = BLOCK_FOR_INSN (insn);
+
+ /* The circumstances under which an instruction is affected by "implicit
+ predication" are as follows:
+ * It is an UNPREDICATED_INSN_P:
+ * That loads/stores from/to memory.
+ * Where any one of its operands is an MVE vector from outside the
+ loop body bb.
+ Or:
+ * Any of its operands were affected earlier in the insn chain. */
+ if (MVE_VPT_UNPREDICATED_INSN_P (insn)
+ && (arm_is_mve_load_store_insn (insn)
+ || (arm_is_mve_across_vector_insn (insn)
+ && !arm_mve_is_allowed_unpredic_across_vector_insn (insn))))
+ {
+ safe_insn_map->put (insn, true);
+ return true;
+ }
+
+ df_ref insn_uses = NULL;
+ FOR_EACH_INSN_USE (insn_uses, insn)
+ {
+ /* If the operand is in the input reg set of the basic block,
+ (i.e. it has come from outside the loop!), consider it unsafe if:
+ * It's being used in an unpredicated insn.
+ * It is a predicable MVE vector. */
+ if (MVE_VPT_UNPREDICATED_INSN_P (insn)
+ && VALID_MVE_MODE (GET_MODE (DF_REF_REG (insn_uses)))
+ && REGNO_REG_SET_P (DF_LR_IN (body), DF_REF_REGNO (insn_uses)))
+ {
+ safe_insn_map->put (insn, true);
+ return true;
+ }
+
+ /* Scan backwards from the current INSN through the instruction chain
+ until the start of the basic block. */
+ for (rtx_insn *prev_insn = PREV_INSN (insn);
+ prev_insn && prev_insn != PREV_INSN (BB_HEAD (body));
+ prev_insn = PREV_INSN (prev_insn))
+ {
+ /* If a previous insn defines a register that INSN uses, then
+ add to the worklist to check that insn's USEs. If any of these
+ insns return true as MVE_VPT_UNPREDICATED_INSN_Ps, then the
+ whole chain is affected by the change in behaviour from being
+ placed in a dlstp/letp loop. */
+ df_ref prev_insn_defs = NULL;
+ FOR_EACH_INSN_DEF (prev_insn_defs, prev_insn)
+ {
+ if (DF_REF_REGNO (insn_uses) == DF_REF_REGNO (prev_insn_defs)
+ && !arm_mve_vec_insn_is_predicated_with_this_predicate
+ (insn, vctp_vpr_generated))
+ worklist.safe_push (prev_insn);
+ }
+ }
+ }
+ }
+ safe_insn_map->put (insn_in, false);
+ return false;
+}
+
+/* If we have identified that the current DEF will be modified
+ by such implicit predication, scan through all the
+ insns that USE it and bail out if any one is outside the
+ current basic block (i.e. the reg is live after the loop)
+ or if any are store insns that are unpredicated or using a
+ predicate other than the loop VPR.
+ This function returns true if the insn is not suitable for
+ implicit predication and false otherwise. */
+
+static bool
+arm_mve_check_df_chain_fwd_for_implic_predic_impact (rtx_insn *insn,
+ rtx vctp_vpr_generated)
+{
+
+ /* If this insn is indeed an unpredicated store to memory, bail out. */
+ if (arm_mve_vec_insn_is_unpredicated_or_uses_other_predicate
+ (insn, vctp_vpr_generated)
+ && (arm_is_mve_load_store_insn (insn, 2)
+ || arm_is_mve_across_vector_insn (insn)))
+ return true;
+
+ /* Next, scan forward to the various USEs of the DEFs in this insn. */
+ df_ref insn_def = NULL;
+ FOR_EACH_INSN_DEF (insn_def, insn)
+ {
+ for (df_ref use = DF_REG_USE_CHAIN (DF_REF_REGNO (insn_def)); use;
+ use = DF_REF_NEXT_REG (use))
+ {
+ rtx_insn *next_use_insn = DF_REF_INSN (use);
+ if (next_use_insn != insn
+ && NONDEBUG_INSN_P (next_use_insn))
+ {
+ /* If the USE is outside the loop body bb, or it is inside, but
+ is a differently-predicated store to memory, or it is any
+ across-vector instruction. */
+ if (BLOCK_FOR_INSN (insn) != BLOCK_FOR_INSN (next_use_insn)
+ || (arm_mve_vec_insn_is_unpredicated_or_uses_other_predicate
+ (next_use_insn, vctp_vpr_generated)
+ && (arm_is_mve_load_store_insn (next_use_insn, 2)
+ || arm_is_mve_across_vector_insn (next_use_insn))))
+ return true;
+ }
+ }
+ }
+ return false;
+}
+
+/* Helper function to `arm_mve_dlstp_check_inc_counter` and to
+ `arm_mve_dlstp_check_dec_counter`. In the situations where the loop counter
+ is incrementing by 1 or decrementing by 1 in each iteration, ensure that the
+ target value or the initialisation value, respectively, was a calculation
+ of the number of iterations of the loop, which is expected to be an ASHIFTRT
+ by VCTP_STEP. */
+
+static bool
+arm_mve_check_reg_origin_is_num_elems (basic_block body, rtx reg, rtx vctp_step)
+{
+ /* Ok, we now know the loop starts from zero and increments by one.
+ Now just show that the max value of the counter came from an
+ appropriate ASHIFTRT expr of the correct amount. */
+ basic_block pre_loop_bb = body->prev_bb;
+ while (pre_loop_bb && BB_END (pre_loop_bb)
+ && !df_bb_regno_only_def_find (pre_loop_bb, REGNO (reg)))
+ pre_loop_bb = pre_loop_bb->prev_bb;
+
+ df_ref counter_max_last_def = df_bb_regno_only_def_find (pre_loop_bb, REGNO (reg));
+ if (!counter_max_last_def)
+ return false;
+ rtx counter_max_last_set = single_set (DF_REF_INSN (counter_max_last_def));
+ if (!counter_max_last_set)
+ return false;
+
+ /* If we encounter a simple SET from a REG, follow it through. */
+ if (REG_P (SET_SRC (counter_max_last_set)))
+ return arm_mve_check_reg_origin_is_num_elems
+ (pre_loop_bb->next_bb, SET_SRC (counter_max_last_set), vctp_step);
+
+ /* If we encounter a SET from an IF_THEN_ELSE where one of the operands is a
+ constant and the other is a REG, follow through to that REG. */
+ if (GET_CODE (SET_SRC (counter_max_last_set)) == IF_THEN_ELSE
+ && REG_P (XEXP (SET_SRC (counter_max_last_set), 1))
+ && CONST_INT_P (XEXP (SET_SRC (counter_max_last_set), 2)))
+ return arm_mve_check_reg_origin_is_num_elems
+ (pre_loop_bb->next_bb, XEXP (SET_SRC (counter_max_last_set), 1), vctp_step);
+
+ if (GET_CODE (SET_SRC (counter_max_last_set)) == ASHIFTRT
+ && CONST_INT_P (XEXP (SET_SRC (counter_max_last_set), 1))
+ && ((1 << INTVAL (XEXP (SET_SRC (counter_max_last_set), 1)))
+ == abs (INTVAL (vctp_step))))
+ return true;
+
+ return false;
+}
+
+/* If we have identified the loop to have an incrementing counter, we need to
+ make sure that it increments by 1 and that the loop is structured correctly:
+ * The counter starts from 0
+ * The counter terminates at (num_of_elem + num_of_lanes - 1) / num_of_lanes
+ * The vctp insn uses a reg that decrements appropriately in each iteration.
+*/
+
+static rtx_insn*
+arm_mve_dlstp_check_inc_counter (basic_block body, rtx_insn* vctp_insn,
+ rtx condconst, rtx condcount)
+{
+ rtx vctp_reg = XVECEXP (XEXP (PATTERN (vctp_insn), 1), 0, 0);
+ /* The loop latch has to be empty. When compiling all the known MVE LoLs in
+ user applications, none of those with incrementing counters had any real
+ insns in the loop latch. As such, this function has only been tested with
+ an empty latch and may misbehave or ICE if we somehow get here with an
+ increment in the latch, so, for correctness, error out early. */
+ if (!empty_block_p (body->loop_father->latch))
+ return NULL;
+
+ class rtx_iv vctp_reg_iv;
+ /* For loops of type B) the loop counter is independent of the decrement
+ of the reg used in the vctp_insn. So run iv analysis on that reg. This
+ has to succeed for such loops to be supported. */
+ if (!iv_analyze (vctp_insn, as_a<scalar_int_mode> (GET_MODE (vctp_reg)),
+ vctp_reg, &vctp_reg_iv))
+ return NULL;
+
+ /* Extract the decrementnum of the vctp reg from the iv. This decrementnum
+ is the number of lanes/elements it decrements from the remaining number of
+ lanes/elements to process in the loop; for this reason it is always a
+ negative number, but to simplify later checks we use its absolute value. */
+ int decrementnum = INTVAL (vctp_reg_iv.step);
+ if (decrementnum >= 0)
+ return NULL;
+ decrementnum = abs (decrementnum);
+
+ /* Find where both of those are modified in the loop body bb. */
+ df_ref condcount_reg_set_df = df_bb_regno_only_def_find (body, REGNO (condcount));
+ df_ref vctp_reg_set_df = df_bb_regno_only_def_find (body, REGNO (vctp_reg));
+ if (!condcount_reg_set_df || !vctp_reg_set_df)
+ return NULL;
+ rtx condcount_reg_set = single_set (DF_REF_INSN (condcount_reg_set_df));
+ rtx vctp_reg_set = single_set (DF_REF_INSN (vctp_reg_set_df));
+ if (!condcount_reg_set || !vctp_reg_set)
+ return NULL;
+
+ /* Ensure the modification of the vctp reg from df is consistent with
+ the iv and the number of lanes on the vctp insn. */
+ if (GET_CODE (SET_SRC (vctp_reg_set)) != PLUS
+ || !REG_P (SET_DEST (vctp_reg_set))
+ || !REG_P (XEXP (SET_SRC (vctp_reg_set), 0))
+ || REGNO (SET_DEST (vctp_reg_set))
+ != REGNO (XEXP (SET_SRC (vctp_reg_set), 0))
+ || !CONST_INT_P (XEXP (SET_SRC (vctp_reg_set), 1))
+ || INTVAL (XEXP (SET_SRC (vctp_reg_set), 1)) >= 0
+ || decrementnum != abs (INTVAL (XEXP (SET_SRC (vctp_reg_set), 1)))
+ || decrementnum != arm_mve_get_vctp_lanes (vctp_insn))
+ return NULL;
+
+ if (REG_P (condcount) && REG_P (condconst))
+ {
+ /* First we need to prove that the loop is going 0..condconst with an
+ inc of 1 in each iteration. */
+ if (GET_CODE (SET_SRC (condcount_reg_set)) == PLUS
+ && CONST_INT_P (XEXP (SET_SRC (condcount_reg_set), 1))
+ && INTVAL (XEXP (SET_SRC (condcount_reg_set), 1)) == 1)
+ {
+ rtx counter_reg = SET_DEST (condcount_reg_set);
+ /* Check that the counter did indeed start from zero. */
+ df_ref this_set = DF_REG_DEF_CHAIN (REGNO (counter_reg));
+ if (!this_set)
+ return NULL;
+ df_ref last_set_def = DF_REF_NEXT_REG (this_set);
+ if (!last_set_def)
+ return NULL;
+ rtx_insn* last_set_insn = DF_REF_INSN (last_set_def);
+ rtx last_set = single_set (last_set_insn);
+ if (!last_set)
+ return NULL;
+ rtx counter_orig_set = SET_SRC (last_set);
+ if (!CONST_INT_P (counter_orig_set)
+ || (INTVAL (counter_orig_set) != 0))
+ return NULL;
+ /* And finally check that the target value of the counter,
+ condconst, is of the correct shape. */
+ if (!arm_mve_check_reg_origin_is_num_elems (body, condconst,
+ vctp_reg_iv.step))
+ return NULL;
+ }
+ else
+ return NULL;
+ }
+ else
+ return NULL;
+
+ /* Everything looks valid. */
+ return vctp_insn;
+}
+
+/* Helper function to `arm_mve_loop_valid_for_dlstp`. In the case of a
+ counter that is decrementing, ensure that it is decrementing by the
+ right amount in each iteration and that the target condition is what
+ we expect. */
+
+static rtx_insn*
+arm_mve_dlstp_check_dec_counter (basic_block body, rtx_insn* vctp_insn,
+ rtx condconst, rtx condcount)
+{
+ rtx vctp_reg = XVECEXP (XEXP (PATTERN (vctp_insn), 1), 0, 0);
+ class rtx_iv vctp_reg_iv;
+ int decrementnum;
+ /* For decrementing loops of type A), the counter is usually present in the
+ loop latch. Here we simply need to verify that this counter is the same
+ reg that is also used in the vctp_insn and that it is not otherwise
+ modified. */
+ rtx_insn *dec_insn = BB_END (body->loop_father->latch);
+ /* If not in the loop latch, try to find the decrement in the loop body. */
+ if (!NONDEBUG_INSN_P (dec_insn))
+ {
+ df_ref temp = df_bb_regno_only_def_find (body, REGNO (condcount));
+ /* If we haven't been able to find the decrement, bail out. */
+ if (!temp)
+ return NULL;
+ dec_insn = DF_REF_INSN (temp);
+ }
+
+ rtx dec_set = single_set (dec_insn);
+
+ /* Next, ensure that it is a PLUS of the form:
+ (set (reg a) (plus (reg a) (const_int)))
+ where (reg a) is the same as condcount. */
+ if (!dec_set
+ || !REG_P (SET_DEST (dec_set))
+ || !REG_P (XEXP (SET_SRC (dec_set), 0))
+ || !CONST_INT_P (XEXP (SET_SRC (dec_set), 1))
+ || REGNO (SET_DEST (dec_set))
+ != REGNO (XEXP (SET_SRC (dec_set), 0))
+ || REGNO (SET_DEST (dec_set)) != REGNO (condcount))
+ return NULL;
+
+ decrementnum = INTVAL (XEXP (SET_SRC (dec_set), 1));
+
+ /* This decrementnum is the number of lanes/elements it decrements from the
+ remaining number of lanes/elements to process in the loop, for this reason
+ this is always a negative number, but to simplify later checks we use its
+ absolute value. */
+ if (decrementnum >= 0)
+ return NULL;
+ decrementnum = abs (decrementnum);
+
+ /* Ok, so we now know the loop decrement. If it is a 1, then we need to
+ look at the loop vctp_reg and verify that it also decrements correctly.
+ Then, we need to establish that the starting value of the loop decrement
+ originates from the starting value of the vctp decrement. */
+ if (decrementnum == 1)
+ {
+ class rtx_iv vctp_reg_iv;
+ /* The loop counter is decremented independently of the reg used in
+ the vctp_insn, so run IV analysis on the latter again. Ensure that
+ IV analysis succeeds and check the step. */
+ if (!iv_analyze (vctp_insn, as_a<scalar_int_mode> (GET_MODE (vctp_reg)),
+ vctp_reg, &vctp_reg_iv))
+ return NULL;
+ /* Ensure it matches the number of lanes of the vctp instruction. */
+ if (abs (INTVAL (vctp_reg_iv.step))
+ != arm_mve_get_vctp_lanes (vctp_insn))
+ return NULL;
+ if (!arm_mve_check_reg_origin_is_num_elems (body, condcount, vctp_reg_iv.step))
+ return NULL;
+ }
+ /* If the decrements are the same, then the situation is simple: either they
+ are also the same reg, which is safe, or they are different registers, in
+ which case make sure that there is only a simple SET from one to the
+ other inside the loop. */
+ else if (decrementnum == arm_mve_get_vctp_lanes (vctp_insn))
+ {
+ if (REGNO (condcount) != REGNO (vctp_reg))
+ {
+ /* It wasn't the same reg, but it could be behind a
+ (set (vctp_reg) (condcount)), so instead find where
+ the VCTP insn is DEF'd inside the loop. */
+ rtx_insn *vctp_reg_insn
+ = DF_REF_INSN (df_bb_regno_only_def_find (body, REGNO (vctp_reg)));
+ rtx vctp_reg_set = single_set (vctp_reg_insn);
+ /* This must just be a simple SET from the condcount. */
+ if (!vctp_reg_set
+ || !REG_P (SET_DEST (vctp_reg_set))
+ || !REG_P (SET_SRC (vctp_reg_set))
+ || REGNO (SET_SRC (vctp_reg_set)) != REGNO (condcount))
+ return NULL;
+ }
+ }
+ else
+ return NULL;
+
+ /* We now only need to find out that the loop terminates with a LE
+ zero condition. If condconst is a const_int, then this is easy.
+ If it's a REG, look at the last condition+jump in a bb before
+ the loop, because that usually will have a branch jumping over
+ the loop body. */
+ if (CONST_INT_P (condconst)
+ && !(INTVAL (condconst) == 0 && JUMP_P (BB_END (body))
+ && GET_CODE (XEXP (PATTERN (BB_END (body)), 1)) == IF_THEN_ELSE
+ && (GET_CODE (XEXP (XEXP (PATTERN (BB_END (body)), 1), 0)) == NE
+ || GET_CODE (XEXP (XEXP (PATTERN (BB_END (body)), 1), 0)) == GT)))
+ return NULL;
+ else if (REG_P (condconst))
+ {
+ basic_block pre_loop_bb = body;
+ while (pre_loop_bb->prev_bb && BB_END (pre_loop_bb->prev_bb)
+ && !JUMP_P (BB_END (pre_loop_bb->prev_bb)))
+ pre_loop_bb = pre_loop_bb->prev_bb;
+ if (pre_loop_bb && BB_END (pre_loop_bb))
+ pre_loop_bb = pre_loop_bb->prev_bb;
+ else
+ return NULL;
+ rtx initial_compare = NULL_RTX;
+ if (!(prev_nonnote_nondebug_insn_bb (BB_END (pre_loop_bb))
+ && INSN_P (prev_nonnote_nondebug_insn_bb (BB_END (pre_loop_bb)))))
+ return NULL;
+ else
+ initial_compare
+ = single_set (prev_nonnote_nondebug_insn_bb (BB_END (pre_loop_bb)));
+ if (!(initial_compare
+ && cc_register (SET_DEST (initial_compare), VOIDmode)
+ && GET_CODE (SET_SRC (initial_compare)) == COMPARE
+ && CONST_INT_P (XEXP (SET_SRC (initial_compare), 1))
+ && INTVAL (XEXP (SET_SRC (initial_compare), 1)) == 0))
+ return NULL;
+
+ /* Usually this is a LE condition, but it can also just be a GT or an EQ
+ condition (if the value is unsigned or the compiler knows it's not
+ negative). */
+ rtx_insn *loop_jumpover = BB_END (pre_loop_bb);
+ if (!(JUMP_P (loop_jumpover)
+ && GET_CODE (XEXP (PATTERN (loop_jumpover), 1)) == IF_THEN_ELSE
+ && (GET_CODE (XEXP (XEXP (PATTERN (loop_jumpover), 1), 0)) == LE
+ || GET_CODE (XEXP (XEXP (PATTERN (loop_jumpover), 1), 0)) == GT
+ || GET_CODE (XEXP (XEXP (PATTERN (loop_jumpover), 1), 0)) == EQ)))
+ return NULL;
+ }
+
+ /* Everything looks valid. */
+ return vctp_insn;
+}
+
+/* Function to check a loop's structure to see if it is a valid candidate for
+ an MVE Tail Predicated Low-Overhead Loop. Returns the loop's VCTP_INSN if
+ it is valid, or NULL if it isn't. */
+
+static rtx_insn*
+arm_mve_loop_valid_for_dlstp (basic_block body)
+{
+ /* Doloop can only be done "elementwise" with predicated dlstp/letp if it
+ contains a VCTP on the number of elements processed by the loop.
+ Find the VCTP predicate generation inside the loop body BB. */
+ rtx_insn *vctp_insn = arm_mve_get_loop_vctp (body);
+ if (!vctp_insn)
+ return NULL;
+
+ /* There are only two types of loops that can be turned into dlstp/letp
+ loops:
+ A) Loops of the form:
+ while (num_of_elem > 0)
+ {
+ p = vctp<size> (num_of_elem)
+ num_of_elem -= num_of_lanes;
+ }
+ B) Loops of the form:
+ int num_of_iters = (num_of_elem + num_of_lanes - 1) / num_of_lanes
+ for (i = 0; i < num_of_iters; i++)
+ {
+ p = vctp<size> (num_of_elem)
+ num_of_elem -= num_of_lanes;
+ }
+
+ Then, depending on the type of loop above, we will need to do
+ different sets of checks. */
+ iv_analysis_loop_init (body->loop_father);
+
+ /* In order to find out if the loop is of type A or B above look for the
+ loop counter: it will either be incrementing by one per iteration or
+ it will be decrementing by num_of_lanes. We can find the loop counter
+ in the condition at the end of the loop. */
+ rtx_insn *loop_cond = prev_nonnote_nondebug_insn_bb (BB_END (body));
+ if (!(cc_register (XEXP (PATTERN (loop_cond), 0), VOIDmode)
+ && GET_CODE (XEXP (PATTERN (loop_cond), 1)) == COMPARE))
+ return NULL;
+
+ /* The operands in the condition: Try to identify which one is the
+ constant and which is the counter and run IV analysis on the latter. */
+ rtx cond_arg_1 = XEXP (XEXP (PATTERN (loop_cond), 1), 0);
+ rtx cond_arg_2 = XEXP (XEXP (PATTERN (loop_cond), 1), 1);
+
+ rtx loop_cond_constant;
+ rtx loop_counter;
+ class rtx_iv cond_counter_iv, cond_temp_iv;
+
+ if (CONST_INT_P (cond_arg_1))
+ {
+ /* cond_arg_1 is the constant and cond_arg_2 is the counter. */
+ loop_cond_constant = cond_arg_1;
+ loop_counter = cond_arg_2;
+ iv_analyze (loop_cond, as_a<scalar_int_mode> (GET_MODE (cond_arg_2)),
+ cond_arg_2, &cond_counter_iv);
+ }
+ else if (CONST_INT_P (cond_arg_2))
+ {
+ /* cond_arg_2 is the constant and cond_arg_1 is the counter. */
+ loop_cond_constant = cond_arg_2;
+ loop_counter = cond_arg_1;
+ iv_analyze (loop_cond, as_a<scalar_int_mode> (GET_MODE (cond_arg_1)),
+ cond_arg_1, &cond_counter_iv);
+ }
+ else if (REG_P (cond_arg_1) && REG_P (cond_arg_2))
+ {
+ /* If both operands to the compare are REGs, we can safely
+ run IV analysis on both and then determine which is the
+ constant by looking at the step.
+ First assume cond_arg_1 is the counter. */
+ loop_counter = cond_arg_1;
+ loop_cond_constant = cond_arg_2;
+ iv_analyze (loop_cond, as_a<scalar_int_mode> (GET_MODE (cond_arg_1)),
+ cond_arg_1, &cond_counter_iv);
+ iv_analyze (loop_cond, as_a<scalar_int_mode> (GET_MODE (cond_arg_2)),
+ cond_arg_2, &cond_temp_iv);
+
+ /* Look at the steps and swap around the rtx's if needed. Error out if
+ one of them cannot be identified as constant. */
+ if (!CONST_INT_P (cond_counter_iv.step) || !CONST_INT_P (cond_temp_iv.step))
+ return NULL;
+ if (INTVAL (cond_counter_iv.step) != 0 && INTVAL (cond_temp_iv.step) != 0)
+ return NULL;
+ if (INTVAL (cond_counter_iv.step) == 0 && INTVAL (cond_temp_iv.step) != 0)
+ {
+ loop_counter = cond_arg_2;
+ loop_cond_constant = cond_arg_1;
+ cond_counter_iv = cond_temp_iv;
+ }
+ }
+ else
+ return NULL;
+
+ if (!REG_P (loop_counter))
+ return NULL;
+ if (!(REG_P (loop_cond_constant) || CONST_INT_P (loop_cond_constant)))
+ return NULL;
+
+ /* Now we have extracted the IV step of the loop counter, call the
+ appropriate checking function. */
+ if (INTVAL (cond_counter_iv.step) > 0)
+ return arm_mve_dlstp_check_inc_counter (body, vctp_insn,
+ loop_cond_constant, loop_counter);
+ else if (INTVAL (cond_counter_iv.step) < 0)
+ return arm_mve_dlstp_check_dec_counter (body, vctp_insn,
+ loop_cond_constant, loop_counter);
+ else
+ return NULL;
+}
+
+/* Predict whether the given loop in gimple will be transformed in the RTL
+ doloop_optimize pass. It could be argued that turning large enough loops
+ into low-overhead loops would not show a significant performance boost.
+ However, in the case of tail predication we would still avoid using VPT/VPST
+ instructions inside the loop, and in either case using low-overhead loops
+ would not be detrimental, so we decided not to consider size, avoiding the
+ need for a heuristic to determine what an appropriate size boundary is. */
+
+static bool
+arm_predict_doloop_p (struct loop *loop)
+{
+ gcc_assert (loop);
+ /* On arm, targetm.can_use_doloop_p is actually
+ can_use_doloop_if_innermost. Ensure the loop is innermost, that it
+ is valid as per arm_target_bb_ok_for_lob, and that the correct
+ architecture flags are enabled. */
+ if (!(TARGET_HAVE_LOB && optimize > 0))
+ {
+ if (dump_file && (dump_flags & TDF_DETAILS))
+ fprintf (dump_file, "Predict doloop failure due to"
+ " target architecture or optimisation flags.\n");
+ return false;
+ }
+ else if (loop->inner != NULL)
+ {
+ if (dump_file && (dump_flags & TDF_DETAILS))
+ fprintf (dump_file, "Predict doloop failure due to"
+ " loop nesting.\n");
+ return false;
+ }
+ else if (!arm_target_bb_ok_for_lob (loop->header->next_bb))
+ {
+ if (dump_file && (dump_flags & TDF_DETAILS))
+ fprintf (dump_file, "Predict doloop failure due to"
+ " loop bb complexity.\n");
+ return false;
+ }
+
+ return true;
+}
+
+/* Implement targetm.loop_unroll_adjust. Use this to block unrolling of loops
+ that may later be turned into MVE Tail Predicated Low Overhead Loops. The
+ performance benefit of an MVE LoL is likely to be much higher than that of
+ the unrolling. */
+
+unsigned
+arm_loop_unroll_adjust (unsigned nunroll, struct loop *loop)
+{
+ if (TARGET_HAVE_MVE
+ && arm_target_bb_ok_for_lob (loop->latch)
+ && arm_mve_loop_valid_for_dlstp (loop->header))
+ return 0;
+ else
+ return nunroll;
+}
+
+/* Function to handle emitting a VPT-unpredicated version of a VPT-predicated
+ insn to a sequence. */
+
+static bool
+arm_emit_mve_unpredicated_insn_to_seq (rtx_insn* insn)
+{
+ rtx insn_vpr_reg_operand = arm_get_required_vpr_reg_param (insn);
+ int new_icode = get_attr_mve_unpredicated_insn (insn);
+ if (!in_sequence_p ()
+ || !MVE_VPT_PREDICATED_INSN_P (insn)
+ || (!insn_vpr_reg_operand)
+ || (!new_icode))
+ return false;
+
+ extract_insn (insn);
+ rtx arr[8];
+ int j = 0;
+
+ /* When transforming a VPT-predicated instruction
+ into its unpredicated equivalent we need to drop
+ the VPR operand and we may need to also drop a
+ merge "vuninit" input operand, depending on the
+ instruction pattern. Here ensure that we have at
+ most a two-operand difference between the two
+ instructions. */
+ int n_operands_diff
+ = recog_data.n_operands - insn_data[new_icode].n_operands;
+ if (!(n_operands_diff > 0 && n_operands_diff <= 2))
+ return false;
+
+ /* Then, loop through the operands of the predicated
+ instruction, and retain the ones that map to the
+ unpredicated instruction. */
+ for (int i = 0; i < recog_data.n_operands; i++)
+ {
+ /* Ignore the VPR and, if needed, the vuninit
+ operand. */
+ if (insn_vpr_reg_operand == recog_data.operand[i]
+ || (n_operands_diff == 2
+ && !strcmp (recog_data.constraints[i], "0")))
+ continue;
+ else
+ {
+ arr[j] = recog_data.operand[i];
+ j++;
+ }
+ }
+
+ /* Finally, emit the unpredicated instruction. */
+ rtx_insn *new_insn;
+ switch (j)
+ {
+ case 1:
+ new_insn = emit_insn (GEN_FCN (new_icode) (arr[0]));
+ break;
+ case 2:
+ new_insn = emit_insn (GEN_FCN (new_icode) (arr[0], arr[1]));
+ break;
+ case 3:
+ new_insn = emit_insn (GEN_FCN (new_icode) (arr[0], arr[1], arr[2]));
+ break;
+ case 4:
+ new_insn = emit_insn (GEN_FCN (new_icode) (arr[0], arr[1], arr[2],
+ arr[3]));
+ break;
+ case 5:
+ new_insn = emit_insn (GEN_FCN (new_icode) (arr[0], arr[1], arr[2],
+ arr[3], arr[4]));
+ break;
+ case 6:
+ new_insn = emit_insn (GEN_FCN (new_icode) (arr[0], arr[1], arr[2],
+ arr[3], arr[4], arr[5]));
+ break;
+ case 7:
+ new_insn = emit_insn (GEN_FCN (new_icode) (arr[0], arr[1], arr[2],
+ arr[3], arr[4], arr[5],
+ arr[6]));
+ break;
+ default:
+ gcc_unreachable ();
+ }
+ INSN_LOCATION (new_insn) = INSN_LOCATION (insn);
+ return true;
+}
+
+/* When a vctp insn is used, its output is often followed by
+ a zero-extend insn to SImode, which is then SUBREG'd into a
+ vector form of mode VALID_MVE_PRED_MODE: this vector form is
+ what is then used as an input to the instructions within the
+ loop. Hence, store that vector form of the VPR reg into
+ vctp_vpr_generated, so that we can match it with instructions
+ in the loop to determine if they are predicated on this same
+ VPR. If there is no zero-extend and subreg or it is otherwise
+ invalid, then return NULL to cancel the dlstp transform. */
+
+static rtx
+arm_mve_get_vctp_vec_form (rtx_insn *insn)
+{
+ rtx vctp_vpr_generated = NULL_RTX;
+ rtx_insn *next_use1 = NULL;
+ df_ref use;
+ for (use
+ = DF_REG_USE_CHAIN
+ (DF_REF_REGNO (DF_INSN_INFO_DEFS (DF_INSN_INFO_GET (insn))));
+ use; use = DF_REF_NEXT_REG (use))
+ if (!next_use1 && NONDEBUG_INSN_P (DF_REF_INSN (use)))
+ next_use1 = DF_REF_INSN (use);
+
+ rtx next_use1_set = single_set (next_use1);
+ if (next_use1_set
+ && GET_CODE (SET_SRC (next_use1_set)) == ZERO_EXTEND)
+ {
+ rtx_insn *next_use2 = NULL;
+ for (use
+ = DF_REG_USE_CHAIN
+ (DF_REF_REGNO
+ (DF_INSN_INFO_DEFS (DF_INSN_INFO_GET (next_use1))));
+ use; use = DF_REF_NEXT_REG (use))
+ if (!next_use2 && NONDEBUG_INSN_P (DF_REF_INSN (use)))
+ next_use2 = DF_REF_INSN (use);
+
+ rtx next_use2_set = single_set (next_use2);
+ if (next_use2_set
+ && GET_CODE (SET_SRC (next_use2_set)) == SUBREG)
+ vctp_vpr_generated = SET_DEST (next_use2_set);
+ }
+
+ if (!vctp_vpr_generated || !REG_P (vctp_vpr_generated)
+ || !VALID_MVE_PRED_MODE (GET_MODE (vctp_vpr_generated)))
+ return NULL_RTX;
+
+ return vctp_vpr_generated;
+}
+
+/* Attempt to transform the loop contents of loop basic block from VPT
+ predicated insns into unpredicated insns for a dlstp/letp loop. Returns
+ rtx constant value to decrement from the total number of elements. Return
+ (const_int 1) if we can't use tail predication and fall back to scalar
+ low-overhead loops. */
+
+rtx
+arm_attempt_dlstp_transform (rtx label)
+{
+ basic_block body = BLOCK_FOR_INSN (label)->prev_bb;
+
+ /* Ensure that the bb is within a loop that has all required metadata. */
+ if (!body->loop_father || !body->loop_father->header
+ || !body->loop_father->simple_loop_desc)
+ return const1_rtx;
+
+ rtx_insn *vctp_insn = arm_mve_loop_valid_for_dlstp (body);
+ if (!vctp_insn)
+ return const1_rtx;
+
+ gcc_assert (single_set (vctp_insn));
+
+ rtx vctp_vpr_generated = arm_mve_get_vctp_vec_form (vctp_insn);
+ if (!vctp_vpr_generated)
+ return const1_rtx;
+
+ /* decrementnum is already known to be valid at this point. */
+ int decrementnum = arm_mve_get_vctp_lanes (vctp_insn);
+
+ rtx_insn *insn = 0;
+ rtx_insn *cur_insn = 0;
+ rtx_insn *seq;
+ hash_map <rtx_insn *, bool> *safe_insn_map
+ = new hash_map <rtx_insn *, bool>;
+
+ /* Scan through the insns in the loop bb and emit the transformed bb
+ insns to a sequence. */
+ start_sequence ();
+ FOR_BB_INSNS (body, insn)
+ {
+ if (GET_CODE (insn) == CODE_LABEL || NOTE_INSN_BASIC_BLOCK_P (insn))
+ continue;
+ else if (NOTE_P (insn))
+ emit_note ((enum insn_note)NOTE_KIND (insn));
+ else if (DEBUG_INSN_P (insn))
+ emit_debug_insn (PATTERN (insn));
+ else if (!INSN_P (insn))
+ {
+ end_sequence ();
+ return const1_rtx;
+ }
+ /* When we find the vctp instruction: continue. */
+ else if (insn == vctp_insn)
+ continue;
+ /* If the insn pattern requires the use of the VPR value from the
+ vctp as an input parameter for predication. */
+ else if (arm_mve_vec_insn_is_predicated_with_this_predicate
+ (insn, vctp_vpr_generated))
+ {
+ bool success = arm_emit_mve_unpredicated_insn_to_seq (insn);
+ if (!success)
+ {
+ end_sequence ();
+ return const1_rtx;
+ }
+ }
+ /* If the insn isn't VPT predicated on vctp_vpr_generated, we need to
+ make sure that it is still valid within the dlstp/letp loop. */
+ else
+ {
+ /* If this instruction USE-s the vctp_vpr_generated other than for
+ predication, this blocks the transformation as we are not allowed
+ to optimise the VPR value away. */
+ df_ref insn_uses = NULL;
+ FOR_EACH_INSN_USE (insn_uses, insn)
+ {
+ if (rtx_equal_p (vctp_vpr_generated, DF_REF_REG (insn_uses)))
+ {
+ end_sequence ();
+ return const1_rtx;
+ }
+ }
+ /* If within the loop we have an MVE vector instruction that is
+ unpredicated, the dlstp/letp looping will add implicit
+ predication to it. This will result in a change in behaviour
+ of the instruction, so we need to find out if any instructions
+ that feed into the current instruction were implicitly
+ predicated. */
+ if (arm_mve_check_df_chain_back_for_implic_predic
+ (safe_insn_map, insn, vctp_vpr_generated))
+ {
+ if (arm_mve_check_df_chain_fwd_for_implic_predic_impact
+ (insn, vctp_vpr_generated))
+ {
+ end_sequence ();
+ return const1_rtx;
+ }
+ }
+ emit_insn (PATTERN (insn));
+ }
+ }
+ seq = get_insns ();
+ end_sequence ();
+
+ /* Re-write the entire BB contents with the transformed
+ sequence. */
+ FOR_BB_INSNS_SAFE (body, insn, cur_insn)
+ if (!(GET_CODE (insn) == CODE_LABEL || NOTE_INSN_BASIC_BLOCK_P (insn)))
+ delete_insn (insn);
+
+ emit_insn_after (seq, BB_END (body));
+
+ /* The transformation has succeeded, so now modify the "count"
+ (a.k.a. niter_expr) for the middle-end. Also set noloop_assumptions
+ to NULL to stop the middle-end from making assumptions about the
+ number of iterations. */
+ simple_loop_desc (body->loop_father)->niter_expr
+ = XVECEXP (SET_SRC (PATTERN (vctp_insn)), 0, 0);
+ simple_loop_desc (body->loop_father)->noloop_assumptions = NULL_RTX;
+ return GEN_INT (decrementnum);
}
#if CHECKING_P
diff --git a/gcc/config/arm/arm.md b/gcc/config/arm/arm.md
index 8efdebecc3c..da745288f26 100644
--- a/gcc/config/arm/arm.md
+++ b/gcc/config/arm/arm.md
@@ -124,6 +124,11 @@ (define_attr "fpu" "none,vfp"
; and not all ARM insns do.
(define_attr "predicated" "yes,no" (const_string "no"))
+
+; An attribute that encodes the CODE_FOR_<insn> of the MVE VPT unpredicated
+; version of a VPT-predicated instruction. For unpredicated instructions
+; that are predicable, encode the same pattern's CODE_FOR_<insn> as a way to
+; encode that it is a predicable instruction.
(define_attr "mve_unpredicated_insn" "" (const_int 0))
; LENGTH of an instruction (in bytes)
diff --git a/gcc/config/arm/iterators.md b/gcc/config/arm/iterators.md
index 5ea2d9e8668..9398702cddd 100644
--- a/gcc/config/arm/iterators.md
+++ b/gcc/config/arm/iterators.md
@@ -2673,6 +2673,17 @@ (define_int_iterator MRRCI [VUNSPEC_MRRC VUNSPEC_MRRC2])
(define_int_attr mrrc [(VUNSPEC_MRRC "mrrc") (VUNSPEC_MRRC2 "mrrc2")])
(define_int_attr MRRC [(VUNSPEC_MRRC "MRRC") (VUNSPEC_MRRC2 "MRRC2")])
+(define_int_attr dlstp_elemsize [(DLSTP8 "8") (DLSTP16 "16") (DLSTP32 "32")
+ (DLSTP64 "64")])
+
+(define_int_attr letp_num_lanes [(LETP8 "16") (LETP16 "8") (LETP32 "4")
+ (LETP64 "2")])
+(define_int_attr letp_num_lanes_neg [(LETP8 "-16") (LETP16 "-8") (LETP32 "-4")
+ (LETP64 "-2")])
+
+(define_int_attr letp_num_lanes_minus_1 [(LETP8 "15") (LETP16 "7") (LETP32 "3")
+ (LETP64 "1")])
+
(define_int_attr opsuffix [(UNSPEC_DOT_S "s8")
(UNSPEC_DOT_U "u8")
(UNSPEC_DOT_US "s8")
@@ -2916,6 +2927,10 @@ (define_int_iterator SQRSHRLQ [SQRSHRL_64 SQRSHRL_48])
(define_int_iterator VSHLCQ_M [VSHLCQ_M_S VSHLCQ_M_U])
(define_int_iterator VQSHLUQ_M_N [VQSHLUQ_M_N_S])
(define_int_iterator VQSHLUQ_N [VQSHLUQ_N_S])
+(define_int_iterator DLSTP [DLSTP8 DLSTP16 DLSTP32
+ DLSTP64])
+(define_int_iterator LETP [LETP8 LETP16 LETP32
+ LETP64])
;; Define iterators for VCMLA operations
(define_int_iterator VCMLA_OP [UNSPEC_VCMLA
diff --git a/gcc/config/arm/mve.md b/gcc/config/arm/mve.md
index 62df022ef19..5748e2333eb 100644
--- a/gcc/config/arm/mve.md
+++ b/gcc/config/arm/mve.md
@@ -6922,23 +6922,24 @@ (define_expand "@arm_mve_reinterpret<mode>"
;; Originally expanded by 'predicated_doloop_end'.
;; In the rare situation where the branch is too far, we do also need to
;; revert FPSCR.LTPSIZE back to 0x100 after the last iteration.
-(define_insn "*predicated_doloop_end_internal"
+(define_insn "predicated_doloop_end_internal<letp_num_lanes>"
[(set (pc)
(if_then_else
- (ge (plus:SI (reg:SI LR_REGNUM)
- (match_operand:SI 0 "const_int_operand" ""))
- (const_int 0))
- (label_ref (match_operand 1 "" ""))
+ (gtu (unspec:SI [(plus:SI (match_operand:SI 0 "s_register_operand" "=r")
+ (const_int <letp_num_lanes_neg>))]
+ LETP)
+ (const_int <letp_num_lanes_minus_1>))
+ (match_operand 1 "" "")
(pc)))
- (set (reg:SI LR_REGNUM)
- (plus:SI (reg:SI LR_REGNUM) (match_dup 0)))
+ (set (match_dup 0)
+ (plus:SI (match_dup 0) (const_int <letp_num_lanes_neg>)))
(clobber (reg:CC CC_REGNUM))]
- "TARGET_32BIT && TARGET_HAVE_LOB && TARGET_HAVE_MVE && TARGET_THUMB2"
+ "TARGET_HAVE_MVE"
{
if (get_attr_length (insn) == 4)
return "letp\t%|lr, %l1";
else
- return "subs\t%|lr, #%n0\n\tbgt\t%l1\n\tlctp";
+ return "subs\t%|lr, #<letp_num_lanes>\n\tbhi\t%l1\n\tlctp";
}
[(set (attr "length")
(if_then_else
@@ -6947,11 +6948,11 @@ (define_insn "*predicated_doloop_end_internal"
(const_int 6)))
(set_attr "type" "branch")])
-(define_insn "dlstp<mode1>_insn"
+(define_insn "dlstp<dlstp_elemsize>_insn"
[
(set (reg:SI LR_REGNUM)
(unspec:SI [(match_operand:SI 0 "s_register_operand" "r")]
DLSTP))
]
- "TARGET_32BIT && TARGET_HAVE_LOB && TARGET_HAVE_MVE && TARGET_THUMB2"
- "dlstp.<mode1>\t%|lr, %0")
+ "TARGET_HAVE_MVE"
+ "dlstp.<dlstp_elemsize>\t%|lr, %0")
diff --git a/gcc/config/arm/thumb2.md b/gcc/config/arm/thumb2.md
index e1e013befa7..f2801cea36a 100644
--- a/gcc/config/arm/thumb2.md
+++ b/gcc/config/arm/thumb2.md
@@ -1613,7 +1613,7 @@ (define_expand "doloop_end"
(use (match_operand 1 "" ""))] ; label
"TARGET_32BIT"
"
- {
+{
/* Currently SMS relies on the do-loop pattern to recognize loops
where (1) the control part consists of all insns defining and/or
using a certain 'count' register and (2) the loop count can be
@@ -1623,41 +1623,77 @@ (define_expand "doloop_end"
Also used to implement the low over head loops feature, which is part of
the Armv8.1-M Mainline Low Overhead Branch (LOB) extension. */
- if (optimize > 0 && (flag_modulo_sched || TARGET_HAVE_LOB))
- {
- rtx s0;
- rtx bcomp;
- rtx loc_ref;
- rtx cc_reg;
- rtx insn;
- rtx cmp;
-
- if (GET_MODE (operands[0]) != SImode)
- FAIL;
-
- s0 = operands [0];
-
- /* Low over head loop instructions require the first operand to be LR. */
- if (TARGET_HAVE_LOB && arm_target_insn_ok_for_lob (operands [1]))
- s0 = gen_rtx_REG (SImode, LR_REGNUM);
-
- if (TARGET_THUMB2)
- insn = emit_insn (gen_thumb2_addsi3_compare0 (s0, s0, GEN_INT (-1)));
- else
- insn = emit_insn (gen_addsi3_compare0 (s0, s0, GEN_INT (-1)));
-
- cmp = XVECEXP (PATTERN (insn), 0, 0);
- cc_reg = SET_DEST (cmp);
- bcomp = gen_rtx_NE (VOIDmode, cc_reg, const0_rtx);
- loc_ref = gen_rtx_LABEL_REF (VOIDmode, operands [1]);
- emit_jump_insn (gen_rtx_SET (pc_rtx,
- gen_rtx_IF_THEN_ELSE (VOIDmode, bcomp,
- loc_ref, pc_rtx)));
- DONE;
- }
- else
- FAIL;
- }")
+ if (optimize > 0 && (flag_modulo_sched || TARGET_HAVE_LOB))
+ {
+ rtx s0;
+ rtx bcomp;
+ rtx loc_ref;
+ rtx cc_reg;
+ rtx insn;
+ rtx cmp;
+ rtx decrement_num;
+
+ if (GET_MODE (operands[0]) != SImode)
+ FAIL;
+
+ s0 = operands[0];
+
+ if (TARGET_HAVE_LOB && arm_target_bb_ok_for_lob (BLOCK_FOR_INSN (operands[1])))
+ {
+ s0 = gen_rtx_REG (SImode, LR_REGNUM);
+
+ /* If we have a compatible MVE target, try and analyse the loop
+ contents to determine if we can use predicated dlstp/letp
+ looping. */
+ if (TARGET_HAVE_MVE
+ && (decrement_num = arm_attempt_dlstp_transform (operands[1]))
+ && (INTVAL (decrement_num) != 1))
+ {
+ loc_ref = gen_rtx_LABEL_REF (VOIDmode, operands[1]);
+ switch (INTVAL (decrement_num))
+ {
+ case 2:
+ insn = emit_jump_insn (gen_predicated_doloop_end_internal2
+ (s0, loc_ref));
+ break;
+ case 4:
+ insn = emit_jump_insn (gen_predicated_doloop_end_internal4
+ (s0, loc_ref));
+ break;
+ case 8:
+ insn = emit_jump_insn (gen_predicated_doloop_end_internal8
+ (s0, loc_ref));
+ break;
+ case 16:
+ insn = emit_jump_insn (gen_predicated_doloop_end_internal16
+ (s0, loc_ref));
+ break;
+ default:
+ gcc_unreachable ();
+ }
+ DONE;
+ }
+ }
+
+ /* Otherwise, try standard decrement-by-one dls/le looping. */
+ if (TARGET_THUMB2)
+ insn = emit_insn (gen_thumb2_addsi3_compare0 (s0, s0,
+ GEN_INT (-1)));
+ else
+ insn = emit_insn (gen_addsi3_compare0 (s0, s0, GEN_INT (-1)));
+
+ cmp = XVECEXP (PATTERN (insn), 0, 0);
+ cc_reg = SET_DEST (cmp);
+ bcomp = gen_rtx_NE (VOIDmode, cc_reg, const0_rtx);
+ loc_ref = gen_rtx_LABEL_REF (VOIDmode, operands[1]);
+ emit_jump_insn (gen_rtx_SET (pc_rtx,
+ gen_rtx_IF_THEN_ELSE (VOIDmode, bcomp,
+ loc_ref, pc_rtx)));
+ DONE;
+ }
+ else
+ FAIL;
+}")
(define_insn "*clear_apsr"
[(unspec_volatile:SI [(const_int 0)] VUNSPEC_CLRM_APSR)
@@ -1755,7 +1791,37 @@ (define_expand "doloop_begin"
{
if (REGNO (operands[0]) == LR_REGNUM)
{
- emit_insn (gen_dls_insn (operands[0]));
+ /* Pick out the number by which we are decrementing the loop counter
+ in every iteration. If it's > 1, then use dlstp. */
+ int const_int_dec_num
+ = abs (INTVAL (XEXP (XEXP (XVECEXP (PATTERN (operands[1]), 0, 1),
+ 1),
+ 1)));
+ switch (const_int_dec_num)
+ {
+ case 16:
+ emit_insn (gen_dlstp8_insn (operands[0]));
+ break;
+
+ case 8:
+ emit_insn (gen_dlstp16_insn (operands[0]));
+ break;
+
+ case 4:
+ emit_insn (gen_dlstp32_insn (operands[0]));
+ break;
+
+ case 2:
+ emit_insn (gen_dlstp64_insn (operands[0]));
+ break;
+
+ case 1:
+ emit_insn (gen_dls_insn (operands[0]));
+ break;
+
+ default:
+ gcc_unreachable ();
+ }
DONE;
}
else
diff --git a/gcc/config/arm/unspecs.md b/gcc/config/arm/unspecs.md
index 4713ec840ab..2d6f27c14f4 100644
--- a/gcc/config/arm/unspecs.md
+++ b/gcc/config/arm/unspecs.md
@@ -583,6 +583,14 @@ (define_c_enum "unspec" [
VADDLVQ_U
VCTP
VCTP_M
+ DLSTP8
+ DLSTP16
+ DLSTP32
+ DLSTP64
+ LETP8
+ LETP16
+ LETP32
+ LETP64
VPNOT
VCREATEQ_F
VCVTQ_N_TO_F_S
diff --git a/gcc/df-core.cc b/gcc/df-core.cc
index d4812b04a7c..4fcc14bf790 100644
--- a/gcc/df-core.cc
+++ b/gcc/df-core.cc
@@ -1964,6 +1964,21 @@ df_bb_regno_last_def_find (basic_block bb, unsigned int regno)
return NULL;
}
+/* Return the one and only def of REGNO within BB. If there is no def or
+ there are multiple defs, return NULL. */
+
+df_ref
+df_bb_regno_only_def_find (basic_block bb, unsigned int regno)
+{
+ df_ref temp = df_bb_regno_first_def_find (bb, regno);
+ if (!temp)
+ return NULL;
+ else if (temp == df_bb_regno_last_def_find (bb, regno))
+ return temp;
+ else
+ return NULL;
+}
+
/* Finds the reference corresponding to the definition of REG in INSN.
DF is the dataflow object. */
diff --git a/gcc/df.h b/gcc/df.h
index 402657a7076..98623637f9c 100644
--- a/gcc/df.h
+++ b/gcc/df.h
@@ -987,6 +987,7 @@ extern void df_check_cfg_clean (void);
#endif
extern df_ref df_bb_regno_first_def_find (basic_block, unsigned int);
extern df_ref df_bb_regno_last_def_find (basic_block, unsigned int);
+extern df_ref df_bb_regno_only_def_find (basic_block, unsigned int);
extern df_ref df_find_def (rtx_insn *, rtx);
extern bool df_reg_defined (rtx_insn *, rtx);
extern df_ref df_find_use (rtx_insn *, rtx);
diff --git a/gcc/loop-doloop.cc b/gcc/loop-doloop.cc
index 4feb0a25ab9..d919207505c 100644
--- a/gcc/loop-doloop.cc
+++ b/gcc/loop-doloop.cc
@@ -85,10 +85,10 @@ doloop_condition_get (rtx_insn *doloop_pat)
forms:
1) (parallel [(set (pc) (if_then_else (condition)
- (label_ref (label))
- (pc)))
- (set (reg) (plus (reg) (const_int -1)))
- (additional clobbers and uses)])
+ (label_ref (label))
+ (pc)))
+ (set (reg) (plus (reg) (const_int -1)))
+ (additional clobbers and uses)])
The branch must be the first entry of the parallel (also required
by jump.cc), and the second entry of the parallel must be a set of
@@ -96,19 +96,34 @@ doloop_condition_get (rtx_insn *doloop_pat)
the loop counter in an if_then_else too.
2) (set (reg) (plus (reg) (const_int -1))
- (set (pc) (if_then_else (reg != 0)
- (label_ref (label))
- (pc))).
+ (set (pc) (if_then_else (reg != 0)
+ (label_ref (label))
+ (pc))).
Some targets (ARM) do the comparison before the branch, as in the
following form:
- 3) (parallel [(set (cc) (compare ((plus (reg) (const_int -1), 0)))
- (set (reg) (plus (reg) (const_int -1)))])
- (set (pc) (if_then_else (cc == NE)
- (label_ref (label))
- (pc))) */
-
+ 3) (parallel [(set (cc) (compare (plus (reg) (const_int -1)) 0))
+ (set (reg) (plus (reg) (const_int -1)))])
+ (set (pc) (if_then_else (cc == NE)
+ (label_ref (label))
+ (pc)))
+
+ The ARM target also supports a special case of a counter that decrements
+ by `n` and terminates in a GTU condition. In that case, the compare and
+ branch are all part of one insn, containing an UNSPEC:
+
+ 4) (parallel [
+ (set (pc)
+ (if_then_else (gtu (unspec:SI [(plus:SI (reg:SI 14 lr)
+ (const_int -n))])
+ (const_int n-1))
+ (label_ref)
+ (pc)))
+ (set (reg:SI 14 lr)
+ (plus:SI (reg:SI 14 lr)
+ (const_int -n)))])
+ */
pattern = PATTERN (doloop_pat);
if (GET_CODE (pattern) != PARALLEL)
@@ -143,7 +158,7 @@ doloop_condition_get (rtx_insn *doloop_pat)
|| GET_CODE (cmp_arg1) != PLUS)
return 0;
reg_orig = XEXP (cmp_arg1, 0);
- if (XEXP (cmp_arg1, 1) != GEN_INT (-1)
+ if (XEXP (cmp_arg1, 1) != GEN_INT (-1)
|| !REG_P (reg_orig))
return 0;
cc_reg = SET_DEST (cmp_orig);
@@ -173,15 +188,17 @@ doloop_condition_get (rtx_insn *doloop_pat)
if (! REG_P (reg))
return 0;
- /* Check if something = (plus (reg) (const_int -1)).
+ /* Check if something = (plus (reg) (const_int -n)).
On IA-64, this decrement is wrapped in an if_then_else. */
inc_src = SET_SRC (inc);
if (GET_CODE (inc_src) == IF_THEN_ELSE)
inc_src = XEXP (inc_src, 1);
if (GET_CODE (inc_src) != PLUS
|| XEXP (inc_src, 0) != reg
- || XEXP (inc_src, 1) != constm1_rtx)
+ || !CONST_INT_P (XEXP (inc_src, 1))
+ || INTVAL (XEXP (inc_src, 1)) >= 0)
return 0;
+ int dec_num = abs (INTVAL (XEXP (inc_src, 1)));
/* Check for (set (pc) (if_then_else (condition)
(label_ref (label))
@@ -196,60 +213,71 @@ doloop_condition_get (rtx_insn *doloop_pat)
/* Extract loop termination condition. */
condition = XEXP (SET_SRC (cmp), 0);
- /* We expect a GE or NE comparison with 0 or 1. */
- if ((GET_CODE (condition) != GE
- && GET_CODE (condition) != NE)
- || (XEXP (condition, 1) != const0_rtx
- && XEXP (condition, 1) != const1_rtx))
+ /* We expect a GE or NE comparison with 0 or 1, or a GTU comparison with
+ dec_num - 1. */
+ if (!((GET_CODE (condition) == GE
+ || GET_CODE (condition) == NE)
+ && (XEXP (condition, 1) == const0_rtx
+ || XEXP (condition, 1) == const1_rtx))
+ && !(GET_CODE (condition) == GTU
+ && INTVAL (XEXP (condition, 1)) == dec_num - 1))
return 0;
- if ((XEXP (condition, 0) == reg)
+ /* For the ARM special case of having a GTU: re-form the condition without
+ the unspec for the benefit of the middle-end. */
+ if (GET_CODE (condition) == GTU)
+ {
+ condition = gen_rtx_fmt_ee (GTU, VOIDmode, inc_src,
+ GEN_INT (dec_num - 1));
+ return condition;
+ }
+ else if ((XEXP (condition, 0) == reg)
/* For the third case: */
|| ((cc_reg != NULL_RTX)
&& (XEXP (condition, 0) == cc_reg)
&& (reg_orig == reg))
|| (GET_CODE (XEXP (condition, 0)) == PLUS
&& XEXP (XEXP (condition, 0), 0) == reg))
- {
+ {
if (GET_CODE (pattern) != PARALLEL)
/* For the second form we expect:
- (set (reg) (plus (reg) (const_int -1))
- (set (pc) (if_then_else (reg != 0)
- (label_ref (label))
- (pc))).
+ (set (reg) (plus (reg) (const_int -1))
+ (set (pc) (if_then_else (reg != 0)
+ (label_ref (label))
+ (pc))).
- is equivalent to the following:
+ is equivalent to the following:
- (parallel [(set (pc) (if_then_else (reg != 1)
- (label_ref (label))
- (pc)))
- (set (reg) (plus (reg) (const_int -1)))
- (additional clobbers and uses)])
+ (parallel [(set (pc) (if_then_else (reg != 1)
+ (label_ref (label))
+ (pc)))
+ (set (reg) (plus (reg) (const_int -1)))
+ (additional clobbers and uses)])
- For the third form we expect:
+ For the third form we expect:
- (parallel [(set (cc) (compare ((plus (reg) (const_int -1)), 0))
- (set (reg) (plus (reg) (const_int -1)))])
- (set (pc) (if_then_else (cc == NE)
- (label_ref (label))
- (pc)))
+ (parallel [(set (cc) (compare ((plus (reg) (const_int -1)), 0))
+ (set (reg) (plus (reg) (const_int -1)))])
+ (set (pc) (if_then_else (cc == NE)
+ (label_ref (label))
+ (pc)))
- which is equivalent to the following:
+ which is equivalent to the following:
- (parallel [(set (cc) (compare (reg, 1))
- (set (reg) (plus (reg) (const_int -1)))
- (set (pc) (if_then_else (NE == cc)
- (label_ref (label))
- (pc))))])
+ (parallel [(set (cc) (compare (reg, 1))
+ (set (reg) (plus (reg) (const_int -1)))
+ (set (pc) (if_then_else (NE == cc)
+ (label_ref (label))
+ (pc))))])
- So we return the second form instead for the two cases.
+ So we return the second form instead for the two cases.
*/
- condition = gen_rtx_fmt_ee (NE, VOIDmode, inc_src, const1_rtx);
+ condition = gen_rtx_fmt_ee (NE, VOIDmode, inc_src, const1_rtx);
return condition;
- }
+ }
/* ??? If a machine uses a funny comparison, we could return a
canonicalized form here. */
@@ -507,6 +535,11 @@ doloop_modify (class loop *loop, class niter_desc *desc,
nonneg = 1;
break;
+ case GTU:
+ /* The iteration count does not need incrementing for a GTU test. */
+ increment_count = false;
+ break;
+
/* Abort if an invalid doloop pattern has been generated. */
default:
gcc_unreachable ();
@@ -529,6 +562,10 @@ doloop_modify (class loop *loop, class niter_desc *desc,
if (desc->noloop_assumptions)
{
+ /* The GTU case has only been implemented for the ARM target, where
+ noloop_assumptions gets explicitly set to NULL for that case, so
+ assert here for safety. */
+ gcc_assert (GET_CODE (condition) != GTU);
rtx ass = copy_rtx (desc->noloop_assumptions);
basic_block preheader = loop_preheader_edge (loop)->src;
basic_block set_zero = split_edge (loop_preheader_edge (loop));
@@ -642,7 +679,7 @@ doloop_optimize (class loop *loop)
{
scalar_int_mode mode;
rtx doloop_reg;
- rtx count;
+ rtx count = NULL_RTX;
widest_int iterations, iterations_max;
rtx_code_label *start_label;
rtx condition;
@@ -685,17 +722,6 @@ doloop_optimize (class loop *loop)
return false;
}
- max_cost
- = COSTS_N_INSNS (param_max_iterations_computation_cost);
- if (set_src_cost (desc->niter_expr, mode, optimize_loop_for_speed_p (loop))
- > max_cost)
- {
- if (dump_file)
- fprintf (dump_file,
- "Doloop: number of iterations too costly to compute.\n");
- return false;
- }
-
if (desc->const_iter)
iterations = widest_int::from (rtx_mode_t (desc->niter_expr, mode),
UNSIGNED);
@@ -716,12 +742,25 @@ doloop_optimize (class loop *loop)
/* Generate looping insn. If the pattern FAILs then give up trying
to modify the loop since there is some aspect the back-end does
- not like. */
- count = copy_rtx (desc->niter_expr);
+ not like.  If this succeeds, there is a chance that the backend has
+ altered the loop's desc->niter_expr, so only extract that data after
+ the call to gen_doloop_end. */
start_label = block_label (desc->in_edge->dest);
doloop_reg = gen_reg_rtx (mode);
rtx_insn *doloop_seq = targetm.gen_doloop_end (doloop_reg, start_label);
+ max_cost
+ = COSTS_N_INSNS (param_max_iterations_computation_cost);
+ if (set_src_cost (desc->niter_expr, mode, optimize_loop_for_speed_p (loop))
+ > max_cost)
+ {
+ if (dump_file)
+ fprintf (dump_file,
+ "Doloop: number of iterations too costly to compute.\n");
+ return false;
+ }
+
+ count = copy_rtx (desc->niter_expr);
word_mode_size = GET_MODE_PRECISION (word_mode);
word_mode_max = (HOST_WIDE_INT_1U << (word_mode_size - 1) << 1) - 1;
if (! doloop_seq
diff --git a/gcc/testsuite/gcc.target/arm/lob.h b/gcc/testsuite/gcc.target/arm/lob.h
index feaae7cc899..3941fe7a8b6 100644
--- a/gcc/testsuite/gcc.target/arm/lob.h
+++ b/gcc/testsuite/gcc.target/arm/lob.h
@@ -1,15 +1,131 @@
#include <string.h>
-
+#include <stdint.h>
/* Common code for lob tests. */
#define NO_LOB asm volatile ("@ clobber lr" : : : "lr" )
-#define N 10000
+#define N 100
+
+static void
+reset_data (int *a, int *b, int *c, int x)
+{
+ memset (a, -1, x * sizeof (*a));
+ memset (b, -1, x * sizeof (*b));
+ memset (c, 0, x * sizeof (*c));
+}
+
+static void
+reset_data8 (int8_t *a, int8_t *b, int8_t *c, int x)
+{
+ memset (a, -1, x * sizeof (*a));
+ memset (b, -1, x * sizeof (*b));
+ memset (c, 0, x * sizeof (*c));
+}
+
+static void
+reset_data16 (int16_t *a, int16_t *b, int16_t *c, int x)
+{
+ memset (a, -1, x * sizeof (*a));
+ memset (b, -1, x * sizeof (*b));
+ memset (c, 0, x * sizeof (*c));
+}
+
+static void
+reset_data32 (int32_t *a, int32_t *b, int32_t *c, int x)
+{
+ memset (a, -1, x * sizeof (*a));
+ memset (b, -1, x * sizeof (*b));
+ memset (c, 0, x * sizeof (*c));
+}
+
+static void
+reset_data64 (int64_t *a, int64_t *c, int x)
+{
+ memset (a, -1, x * sizeof (*a));
+ memset (c, 0, x * sizeof (*c));
+}
+
+static void
+check_plus (int *a, int *b, int *c, int x)
+{
+ for (int i = 0; i < N; i++)
+ {
+ NO_LOB;
+ if (i < x)
+ {
+ if (c[i] != (a[i] + b[i])) abort ();
+ }
+ else
+ {
+ if (c[i] != 0) abort ();
+ }
+ }
+}
+
+static void
+check_plus8 (int8_t *a, int8_t *b, int8_t *c, int x)
+{
+ for (int i = 0; i < N; i++)
+ {
+ NO_LOB;
+ if (i < x)
+ {
+ if (c[i] != (a[i] + b[i])) abort ();
+ }
+ else
+ {
+ if (c[i] != 0) abort ();
+ }
+ }
+}
+
+static void
+check_plus16 (int16_t *a, int16_t *b, int16_t *c, int x)
+{
+ for (int i = 0; i < N; i++)
+ {
+ NO_LOB;
+ if (i < x)
+ {
+ if (c[i] != (a[i] + b[i])) abort ();
+ }
+ else
+ {
+ if (c[i] != 0) abort ();
+ }
+ }
+}
+
+static void
+check_plus32 (int32_t *a, int32_t *b, int32_t *c, int x)
+{
+ for (int i = 0; i < N; i++)
+ {
+ NO_LOB;
+ if (i < x)
+ {
+ if (c[i] != (a[i] + b[i])) abort ();
+ }
+ else
+ {
+ if (c[i] != 0) abort ();
+ }
+ }
+}
static void
-reset_data (int *a, int *b, int *c)
+check_memcpy64 (int64_t *a, int64_t *c, int x)
{
- memset (a, -1, N * sizeof (*a));
- memset (b, -1, N * sizeof (*b));
- memset (c, -1, N * sizeof (*c));
+ for (int i = 0; i < N; i++)
+ {
+ NO_LOB;
+ if (i < x)
+ {
+ if (c[i] != a[i]) abort ();
+ }
+ else
+ {
+ if (c[i] != 0) abort ();
+ }
+ }
}
diff --git a/gcc/testsuite/gcc.target/arm/lob1.c b/gcc/testsuite/gcc.target/arm/lob1.c
index ba5c82cd55c..c8ce653a5c3 100644
--- a/gcc/testsuite/gcc.target/arm/lob1.c
+++ b/gcc/testsuite/gcc.target/arm/lob1.c
@@ -54,29 +54,18 @@ loop3 (int *a, int *b, int *c)
} while (i < N);
}
-void
-check (int *a, int *b, int *c)
-{
- for (int i = 0; i < N; i++)
- {
- NO_LOB;
- if (c[i] != a[i] + b[i])
- abort ();
- }
-}
-
int
main (void)
{
- reset_data (a, b, c);
+ reset_data (a, b, c, N);
loop1 (a, b ,c);
- check (a, b ,c);
- reset_data (a, b, c);
+ check_plus (a, b, c, N);
+ reset_data (a, b, c, N);
loop2 (a, b ,c);
- check (a, b ,c);
- reset_data (a, b, c);
+ check_plus (a, b, c, N);
+ reset_data (a, b, c, N);
loop3 (a, b ,c);
- check (a, b ,c);
+ check_plus (a, b, c, N);
return 0;
}
diff --git a/gcc/testsuite/gcc.target/arm/lob6.c b/gcc/testsuite/gcc.target/arm/lob6.c
index 17b6124295e..4fe116e2c2b 100644
--- a/gcc/testsuite/gcc.target/arm/lob6.c
+++ b/gcc/testsuite/gcc.target/arm/lob6.c
@@ -79,14 +79,14 @@ check (void)
int
main (void)
{
- reset_data (a1, b1, c1);
- reset_data (a2, b2, c2);
+ reset_data (a1, b1, c1, N);
+ reset_data (a2, b2, c2, N);
loop1 (a1, b1, c1);
ref1 (a2, b2, c2);
check ();
- reset_data (a1, b1, c1);
- reset_data (a2, b2, c2);
+ reset_data (a1, b1, c1, N);
+ reset_data (a2, b2, c2, N);
loop2 (a1, b1, c1);
ref2 (a2, b2, c2);
check ();
diff --git a/gcc/testsuite/gcc.target/arm/mve/dlstp-compile-asm.c b/gcc/testsuite/gcc.target/arm/mve/dlstp-compile-asm.c
new file mode 100644
index 00000000000..5ddd994e53d
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/dlstp-compile-asm.c
@@ -0,0 +1,561 @@
+/* { dg-do compile { target { arm*-*-* } } } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-options "-O3 -save-temps" } */
+/* { dg-add-options arm_v8_1m_mve } */
+
+#include <arm_mve.h>
+
+#define IMM 5
+
+#define TEST_COMPILE_IN_DLSTP_TERNARY(BITS, LANES, LDRSTRYTPE, TYPE, SIGN, NAME, PRED) \
+void test_##NAME##PRED##_##SIGN##BITS (TYPE##BITS##_t *a, TYPE##BITS##_t *b, TYPE##BITS##_t *c, int n) \
+{ \
+ while (n > 0) \
+ { \
+ mve_pred16_t p = vctp##BITS##q (n); \
+ TYPE##BITS##x##LANES##_t va = vldr##LDRSTRYTPE##q_z_##SIGN##BITS (a, p); \
+ TYPE##BITS##x##LANES##_t vb = vldr##LDRSTRYTPE##q_z_##SIGN##BITS (b, p); \
+ TYPE##BITS##x##LANES##_t vc = NAME##PRED##_##SIGN##BITS (va, vb, p); \
+ vstr##LDRSTRYTPE##q_p_##SIGN##BITS (c, vc, p); \
+ c += LANES; \
+ a += LANES; \
+ b += LANES; \
+ n -= LANES; \
+ } \
+}
+
+#define TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY(BITS, LANES, LDRSTRYTPE, NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_TERNARY (BITS, LANES, LDRSTRYTPE, int, s, NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_TERNARY (BITS, LANES, LDRSTRYTPE, uint, u, NAME, PRED)
+
+#define TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY(NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY (8, 16, b, NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY (16, 8, h, NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY (32, 4, w, NAME, PRED)
+
+
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY (vaddq, _x)
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY (vmulq, _x)
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY (vsubq, _x)
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY (vhaddq, _x)
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY (vorrq, _x)
+
+
+#define TEST_COMPILE_IN_DLSTP_TERNARY_M(BITS, LANES, LDRSTRYTPE, TYPE, SIGN, NAME, PRED) \
+void test_##NAME##PRED##_##SIGN##BITS (TYPE##BITS##x##LANES##_t __inactive, TYPE##BITS##_t *a, TYPE##BITS##_t *b, TYPE##BITS##_t *c, int n) \
+{ \
+ while (n > 0) \
+ { \
+ mve_pred16_t p = vctp##BITS##q (n); \
+ TYPE##BITS##x##LANES##_t va = vldr##LDRSTRYTPE##q_z_##SIGN##BITS (a, p); \
+ TYPE##BITS##x##LANES##_t vb = vldr##LDRSTRYTPE##q_z_##SIGN##BITS (b, p); \
+ TYPE##BITS##x##LANES##_t vc = NAME##PRED##_##SIGN##BITS (__inactive, va, vb, p); \
+ vstr##LDRSTRYTPE##q_p_##SIGN##BITS (c, vc, p); \
+ c += LANES; \
+ a += LANES; \
+ b += LANES; \
+ n -= LANES; \
+ } \
+}
+
+#define TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY_M(BITS, LANES, LDRSTRYTPE, NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_TERNARY_M (BITS, LANES, LDRSTRYTPE, int, s, NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_TERNARY_M (BITS, LANES, LDRSTRYTPE, uint, u, NAME, PRED)
+
+#define TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_M(NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY_M (8, 16, b, NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY_M (16, 8, h, NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY_M (32, 4, w, NAME, PRED)
+
+
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_M (vaddq, _m)
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_M (vmulq, _m)
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_M (vsubq, _m)
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_M (vhaddq, _m)
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_M (vorrq, _m)
+
+#define TEST_COMPILE_IN_DLSTP_TERNARY_N(BITS, LANES, LDRSTRYTPE, TYPE, SIGN, NAME, PRED) \
+void test_##NAME##PRED##_n_##SIGN##BITS (TYPE##BITS##_t *a, TYPE##BITS##_t *c, int n) \
+{ \
+ while (n > 0) \
+ { \
+ mve_pred16_t p = vctp##BITS##q (n); \
+ TYPE##BITS##x##LANES##_t va = vldr##LDRSTRYTPE##q_z_##SIGN##BITS (a, p); \
+ TYPE##BITS##x##LANES##_t vc = NAME##PRED##_n_##SIGN##BITS (va, IMM, p); \
+ vstr##LDRSTRYTPE##q_p_##SIGN##BITS (c, vc, p); \
+ c += LANES; \
+ a += LANES; \
+ n -= LANES; \
+ } \
+}
+
+#define TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY_N(BITS, LANES, LDRSTRYTPE, NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_TERNARY_N (BITS, LANES, LDRSTRYTPE, int, s, NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_TERNARY_N (BITS, LANES, LDRSTRYTPE, uint, u, NAME, PRED)
+
+#define TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_N(NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY_N (8, 16, b, NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY_N (16, 8, h, NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY_N (32, 4, w, NAME, PRED)
+
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_N (vaddq, _x)
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_N (vmulq, _x)
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_N (vsubq, _x)
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_N (vhaddq, _x)
+
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_N (vbrsrq, _x)
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_N (vshlq, _x)
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_N (vshrq, _x)
+
+#define TEST_COMPILE_IN_DLSTP_TERNARY_M_N(BITS, LANES, LDRSTRYTPE, TYPE, SIGN, NAME, PRED) \
+void test_##NAME##PRED##_n_##SIGN##BITS (TYPE##BITS##x##LANES##_t __inactive, TYPE##BITS##_t *a, TYPE##BITS##_t *c, int n) \
+{ \
+ while (n > 0) \
+ { \
+ mve_pred16_t p = vctp##BITS##q (n); \
+ TYPE##BITS##x##LANES##_t va = vldr##LDRSTRYTPE##q_z_##SIGN##BITS (a, p); \
+ TYPE##BITS##x##LANES##_t vc = NAME##PRED##_n_##SIGN##BITS (__inactive, va, IMM, p); \
+ vstr##LDRSTRYTPE##q_p_##SIGN##BITS (c, vc, p); \
+ c += LANES; \
+ a += LANES; \
+ n -= LANES; \
+ } \
+}
+
+#define TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY_M_N(BITS, LANES, LDRSTRYTPE, NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_TERNARY_M_N (BITS, LANES, LDRSTRYTPE, int, s, NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_TERNARY_M_N (BITS, LANES, LDRSTRYTPE, uint, u, NAME, PRED)
+
+#define TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_M_N(NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY_M_N (8, 16, b, NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY_M_N (16, 8, h, NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY_M_N (32, 4, w, NAME, PRED)
+
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_M_N (vaddq, _m)
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_M_N (vmulq, _m)
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_M_N (vsubq, _m)
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_M_N (vhaddq, _m)
+
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_M_N (vbrsrq, _m)
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_M_N (vshlq, _m)
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_M_N (vshrq, _m)
+
+/* Now test some more configurations. */
+
+/* Using a >=1 condition. */
+void test1 (int32_t *a, int32_t *b, int32_t *c, int n)
+{
+ while (n >= 1)
+ {
+ mve_pred16_t p = vctp32q (n);
+ int32x4_t va = vldrwq_z_s32 (a, p);
+ int32x4_t vb = vldrwq_z_s32 (b, p);
+ int32x4_t vc = vaddq_x_s32 (va, vb, p);
+ vstrwq_p_s32 (c, vc, p);
+ c+=4;
+ a+=4;
+ b+=4;
+ n-=4;
+ }
+}
+
+/* Test a for-loop that decrements to zero.  */
+int32_t a[] = {0, 1, 2, 3, 4, 5, 6, 7};
+void test2 (int32_t *b, int num_elems)
+{
+ for (int i = num_elems; i > 0; i-= 4)
+ {
+ mve_pred16_t p = vctp32q (i);
+ int32x4_t va = vldrwq_z_s32 (&(a[i]), p);
+ vstrwq_p_s32 (b + i, va, p);
+ }
+}
+
+/* Iteration counter counting up to num_iter. */
+void test3 (uint8_t *a, uint8_t *b, uint8_t *c, int n)
+{
+ int num_iter = (n + 15)/16;
+ for (int i = 0; i < num_iter; i++)
+ {
+ mve_pred16_t p = vctp8q (n);
+ uint8x16_t va = vldrbq_z_u8 (a, p);
+ uint8x16_t vb = vldrbq_z_u8 (b, p);
+ uint8x16_t vc = vaddq_x_u8 (va, vb, p);
+ vstrbq_p_u8 (c, vc, p);
+ n-=16;
+ }
+}
+
+/* Iteration counter counting down from num_iter. */
+void test4 (uint8_t *a, uint8_t *b, uint8_t *c, int n)
+{
+ int num_iter = (n + 15)/16;
+ for (int i = num_iter; i > 0; i--)
+ {
+ mve_pred16_t p = vctp8q (n);
+ uint8x16_t va = vldrbq_z_u8 (a, p);
+ uint8x16_t vb = vldrbq_z_u8 (b, p);
+ uint8x16_t vc = vaddq_x_u8 (va, vb, p);
+ vstrbq_p_u8 (c, vc, p);
+ n-=16;
+ }
+}
+
+/* Using an unpredicated arithmetic instruction within the loop. */
+void test5 (uint8_t *a, uint8_t *b, uint8_t *c, uint8_t *d, int n)
+{
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp8q (n);
+ uint8x16_t va = vldrbq_z_u8 (a, p);
+ uint8x16_t vb = vldrbq_u8 (b);
+ /* vc is affected by implicit predication, because vb came
+ from an unpredicated load, but there is no functional
+ problem, because the result is used in a predicated store. */
+ uint8x16_t vc = vaddq_u8 (va, vb);
+ uint8x16_t vd = vaddq_x_u8 (va, vb, p);
+ vstrbq_p_u8 (c, vc, p);
+ vstrbq_p_u8 (d, vd, p);
+ n-=16;
+ }
+}
+
+/* Using a different VPR value for one instruction in the loop. */
+void test6 (int32_t *a, int32_t *b, int32_t *c, int n, mve_pred16_t p1)
+{
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp32q (n);
+ int32x4_t va = vldrwq_z_s32 (a, p);
+ int32x4_t vb = vldrwq_z_s32 (b, p1);
+ int32x4_t vc = vaddq_x_s32 (va, vb, p);
+ vstrwq_p_s32 (c, vc, p);
+ c += 4;
+ a += 4;
+ b += 4;
+ n -= 4;
+ }
+}
+
+/* Generating and using another VPR value in the loop, with a vctp.
+ The doloop logic will always try to do the transform on the first
+ vctp it encounters, so this is still expected to work. */
+void test7 (int32_t *a, int32_t *b, int32_t *c, int n, int g)
+{
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp32q (n);
+ int32x4_t va = vldrwq_z_s32 (a, p);
+ mve_pred16_t p1 = vctp32q (g);
+ int32x4_t vb = vldrwq_z_s32 (b, p1);
+ int32x4_t vc = vaddq_x_s32 (va, vb, p);
+ vstrwq_p_s32 (c, vc, p);
+ c += 4;
+ a += 4;
+ b += 4;
+ n -= 4;
+ }
+}
+
+/* Generating and using a different VPR value in the loop, with a vctp,
+ but this time the p1 will also change in every loop (still fine) */
+void test8 (int32_t *a, int32_t *b, int32_t *c, int n, int g)
+{
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp32q (n);
+ int32x4_t va = vldrwq_z_s32 (a, p);
+ mve_pred16_t p1 = vctp32q (g);
+ int32x4_t vb = vldrwq_z_s32 (b, p1);
+ int32x4_t vc = vaddq_x_s32 (va, vb, p);
+ vstrwq_p_s32 (c, vc, p);
+ c += 4;
+ a += 4;
+ b += 4;
+ n -= 4;
+ g++;
+ }
+}
+
+/* Generating and using a different VPR value in the loop, with a vctp_m
+ that is independent of the loop vctp VPR. */
+void test9 (int32_t *a, int32_t *b, int32_t *c, int n, mve_pred16_t p1)
+{
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp32q (n);
+ int32x4_t va = vldrwq_z_s32 (a, p);
+ mve_pred16_t p2 = vctp32q_m (n, p1);
+ int32x4_t vb = vldrwq_z_s32 (b, p1);
+ int32x4_t vc = vaddq_x_s32 (va, vb, p2);
+ vstrwq_p_s32 (c, vc, p);
+ c += 4;
+ a += 4;
+ b += 4;
+ n -= 4;
+ }
+}
+
+/* Generating and using a different VPR value in the loop,
+ with a vctp_m that is tied to the base vctp VPR. This
+ is still fine, because the vctp_m will be transformed
+ into a vctp and be implicitly predicated. */
+void test10 (int32_t *a, int32_t *b, int32_t *c, int n)
+{
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp32q (n);
+ int32x4_t va = vldrwq_z_s32 (a, p);
+ mve_pred16_t p1 = vctp32q_m (n, p);
+ int32x4_t vb = vldrwq_z_s32 (b, p1);
+ int32x4_t vc = vaddq_x_s32 (va, vb, p1);
+ vstrwq_p_s32 (c, vc, p);
+ c += 4;
+ a += 4;
+ b += 4;
+ n -= 4;
+ }
+}
+
+/* Generating and using a different VPR value in the loop, with a vcmp. */
+void test11 (int32_t *a, int32_t *b, int32_t *c, int n)
+{
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp32q (n);
+ int32x4_t va = vldrwq_z_s32 (a, p);
+ int32x4_t vb = vldrwq_z_s32 (b, p);
+ mve_pred16_t p1 = vcmpeqq_s32 (va, vb);
+ int32x4_t vc = vaddq_x_s32 (va, vb, p1);
+ vstrwq_p_s32 (c, vc, p);
+ c += 4;
+ a += 4;
+ b += 4;
+ n -= 4;
+ }
+}
+
+/* Generating and using a different VPR value in the loop, with a vcmp_m. */
+void test12 (int32_t *a, int32_t *b, int32_t *c, int n, mve_pred16_t p1)
+{
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp32q (n);
+ int32x4_t va = vldrwq_z_s32 (a, p);
+ int32x4_t vb = vldrwq_z_s32 (b, p);
+ mve_pred16_t p2 = vcmpeqq_m_s32 (va, vb, p1);
+ int32x4_t vc = vaddq_x_s32 (va, vb, p2);
+ vstrwq_p_s32 (c, vc, p);
+ c += 4;
+ a += 4;
+ b += 4;
+ n -= 4;
+ }
+}
+
+/* Generating and using a different VPR value in the loop, with a vcmp_m
+ that is tied to the base vctp VPR (same as above, this will be turned
+ into a vcmp and be implicitly predicated). */
+void test13 (int32_t *a, int32_t *b, int32_t *c, int n, mve_pred16_t p1)
+{
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp32q (n);
+ int32x4_t va = vldrwq_z_s32 (a, p);
+ int32x4_t vb = vldrwq_z_s32 (b, p);
+ mve_pred16_t p2 = vcmpeqq_m_s32 (va, vb, p);
+ int32x4_t vc = vaddq_x_s32 (va, vb, p2);
+ vstrwq_p_s32 (c, vc, p);
+ c += 4;
+ a += 4;
+ b += 4;
+ n -= 4;
+ }
+}
+
+/* Using an unpredicated op with a scalar output, where the result is valid
+ outside the bb. This is valid, because all the inputs to the unpredicated
+ op are correctly predicated. */
+uint8_t test14 (uint8_t *a, uint8_t *b, uint8_t *c, int n, uint8x16_t vx)
+{
+ uint8_t sum = 0;
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp8q (n);
+ uint8x16_t va = vldrbq_z_u8 (a, p);
+ uint8x16_t vb = vldrbq_z_u8 (b, p);
+ uint8x16_t vc = vaddq_m_u8 (vx, va, vb, p);
+ sum += vaddvq_u8 (vc);
+ a += 16;
+ b += 16;
+ n -= 16;
+ }
+ return sum;
+}
+
+/* Same as above, but with another scalar op between the unpredicated op and
+ the scalar op outside the loop. */
+uint8_t test15 (uint8_t *a, uint8_t *b, uint8_t *c, int n, uint8x16_t vx, int g)
+{
+ uint8_t sum = 0;
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp8q (n);
+ uint8x16_t va = vldrbq_z_u8 (a, p);
+ uint8x16_t vb = vldrbq_z_u8 (b, p);
+ uint8x16_t vc = vaddq_m_u8 (vx, va, vb, p);
+ sum += vaddvq_u8 (vc);
+ sum += g;
+ a += 16;
+ b += 16;
+ n -= 16;
+ }
+ return sum;
+}
+
+/* Using an unpredicated vcmp to generate a new predicate value in the
+ loop and then using it in a predicated store insn. */
+void test16 (int32_t *a, int32_t *b, int32_t *c, int n)
+{
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp32q (n);
+ int32x4_t va = vldrwq_z_s32 (a, p);
+ int32x4_t vb = vldrwq_s32 (b);
+ int32x4_t vc = vaddq_x_s32 (va, vb, p);
+ mve_pred16_t p1 = vcmpeqq_s32 (va, vc);
+ vstrwq_p_s32 (c, vc, p1);
+ c += 4;
+ a += 4;
+ b += 4;
+ n -= 4;
+ }
+}
+
+/* Using a predicated vcmp to generate a new predicate value in the
+ loop and then using it in a predicated store insn. */
+void test17 (int32_t *a, int32_t *b, int32_t *c, int n)
+{
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp32q (n);
+ int32x4_t va = vldrwq_z_s32 (a, p);
+ int32x4_t vb = vldrwq_z_s32 (b, p);
+ int32x4_t vc = vaddq_s32 (va, vb);
+ mve_pred16_t p1 = vcmpeqq_m_s32 (va, vc, p);
+ vstrwq_p_s32 (c, vc, p1);
+ c += 4;
+ a += 4;
+ b += 4;
+ n -= 4;
+ }
+}
+
+/* Using an across-vector unpredicated instruction in a valid way.
+ This tests that "vc" has correctly masked the risky "vb". */
+uint16_t test18 (uint16_t *a, uint16_t *b, uint16_t *c, int n)
+{
+ uint16x8_t vb = vldrhq_u16 (b);
+ uint16_t res = 0;
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp16q (n);
+ uint16x8_t va = vldrhq_z_u16 (a, p);
+ uint16x8_t vc = vaddq_x_u16 (va, vb, p);
+ res = vaddvq_u16 (vc);
+ c += 8;
+ a += 8;
+ b += 8;
+ n -= 8;
+ }
+ return res;
+}
+
+/* Using an across-vector unpredicated instruction with a scalar from outside the loop. */
+uint16_t test19 (uint16_t *a, uint16_t *b, uint16_t *c, int n)
+{
+ uint16x8_t vb = vldrhq_u16 (b);
+ uint16_t res = 0;
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp16q (n);
+ uint16x8_t va = vldrhq_z_u16 (a, p);
+ uint16x8_t vc = vaddq_x_u16 (va, vb, p);
+ res = vaddvaq_u16 (res, vc);
+ c += 8;
+ a += 8;
+ b += 8;
+ n -= 8;
+ }
+ return res;
+}
+
+/* Using an across-vector predicated instruction in a valid way. */
+uint16_t test20 (uint16_t *a, uint16_t *b, uint16_t *c, int n)
+{
+ uint16_t res = 0;
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp16q (n);
+ uint16x8_t vb = vldrhq_u16 (b);
+ uint16x8_t va = vldrhq_z_u16 (a, p);
+ res = vaddvaq_p_u16 (res, vb, p);
+ c += 8;
+ a += 8;
+ b += 8;
+ n -= 8;
+ }
+ return res;
+}
+
+/* Using an across-vector predicated instruction in a valid way. */
+uint16_t test21 (uint16_t *a, uint16_t *b, uint16_t *c, int n)
+{
+ uint16_t res = 0;
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp16q (n);
+ uint16x8_t vb = vldrhq_u16 (b);
+ uint16x8_t va = vldrhq_z_u16 (a, p);
+ res++;
+ res = vaddvaq_p_u16 (res, vb, p);
+ c += 8;
+ a += 8;
+ b += 8;
+ n -= 8;
+ }
+ return res;
+}
+
+int test22 (uint8_t *a, uint8_t *b, uint8_t *c, int n)
+{
+ int res = 0;
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp8q (n);
+ uint8x16_t va = vldrbq_z_u8 (a, p);
+ res = vmaxvq (res, va);
+ n-=16;
+ a+=16;
+ }
+ return res;
+}
+
+int test23 (int8_t *a, int8_t *b, int8_t *c, int n)
+{
+ int res = 0;
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp8q (n);
+ int8x16_t va = vldrbq_z_s8 (a, p);
+ res = vmaxavq (res, va);
+ n-=16;
+ a+=16;
+ }
+ return res;
+}
+
+/* The final number of DLSTPs currently is calculated by the number of
+ `TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY.*` macros * 6 + 23. */
+/* { dg-final { scan-assembler-times {\tdlstp} 167 } } */
+/* { dg-final { scan-assembler-times {\tletp} 167 } } */
diff --git a/gcc/testsuite/gcc.target/arm/mve/dlstp-int16x8-run.c b/gcc/testsuite/gcc.target/arm/mve/dlstp-int16x8-run.c
new file mode 100644
index 00000000000..6966a396604
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/dlstp-int16x8-run.c
@@ -0,0 +1,44 @@
+/* { dg-do run { target { arm*-*-* } } } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-require-effective-target arm_mve_hw } */
+/* { dg-options "-O2 -save-temps" } */
+/* { dg-add-options arm_v8_1m_mve } */
+#include "dlstp-int16x8.c"
+
+int main ()
+{
+ int i;
+ int16_t temp1[N];
+ int16_t temp2[N];
+ int16_t temp3[N];
+ reset_data16 (temp1, temp2, temp3, N);
+ test (temp1, temp2, temp3, 0);
+ check_plus16 (temp1, temp2, temp3, 0);
+
+ reset_data16 (temp1, temp2, temp3, N);
+ test (temp1, temp2, temp3, 1);
+ check_plus16 (temp1, temp2, temp3, 1);
+
+ reset_data16 (temp1, temp2, temp3, N);
+ test (temp1, temp2, temp3, 7);
+ check_plus16 (temp1, temp2, temp3, 7);
+
+ reset_data16 (temp1, temp2, temp3, N);
+ test (temp1, temp2, temp3, 8);
+ check_plus16 (temp1, temp2, temp3, 8);
+
+ reset_data16 (temp1, temp2, temp3, N);
+ test (temp1, temp2, temp3, 9);
+ check_plus16 (temp1, temp2, temp3, 9);
+
+ reset_data16 (temp1, temp2, temp3, N);
+ test (temp1, temp2, temp3, 16);
+ check_plus16 (temp1, temp2, temp3, 16);
+
+ reset_data16 (temp1, temp2, temp3, N);
+ test (temp1, temp2, temp3, 17);
+ check_plus16 (temp1, temp2, temp3, 17);
+
+ reset_data16 (temp1, temp2, temp3, N);
+}
+
diff --git a/gcc/testsuite/gcc.target/arm/mve/dlstp-int16x8.c b/gcc/testsuite/gcc.target/arm/mve/dlstp-int16x8.c
new file mode 100644
index 00000000000..33632c5f14d
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/dlstp-int16x8.c
@@ -0,0 +1,31 @@
+/* { dg-do compile { target { arm*-*-* } } } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-options "-O2 -save-temps" } */
+/* { dg-add-options arm_v8_1m_mve } */
+
+#include <arm_mve.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include "../lob.h"
+
+void __attribute__ ((noinline)) test (int16_t *a, int16_t *b, int16_t *c, int n)
+{
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp16q (n);
+ int16x8_t va = vldrhq_z_s16 (a, p);
+ int16x8_t vb = vldrhq_z_s16 (b, p);
+ int16x8_t vc = vaddq_x_s16 (va, vb, p);
+ vstrhq_p_s16 (c, vc, p);
+ c+=8;
+ a+=8;
+ b+=8;
+ n-=8;
+ }
+}
+
+/* { dg-final { scan-assembler-times {\tdlstp.16} 1 } } */
+/* { dg-final { scan-assembler-times {\tletp} 1 } } */
+/* { dg-final { scan-assembler-not "\tvctp" } } */
+/* { dg-final { scan-assembler-not "\tvpst" } } */
+/* { dg-final { scan-assembler-not "p0" } } */
diff --git a/gcc/testsuite/gcc.target/arm/mve/dlstp-int32x4-run.c b/gcc/testsuite/gcc.target/arm/mve/dlstp-int32x4-run.c
new file mode 100644
index 00000000000..6833dddde92
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/dlstp-int32x4-run.c
@@ -0,0 +1,45 @@
+/* { dg-do run { target { arm*-*-* } } } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-require-effective-target arm_mve_hw } */
+/* { dg-options "-O2 -save-temps" } */
+/* { dg-add-options arm_v8_1m_mve } */
+
+#include "dlstp-int32x4.c"
+
+int main ()
+{
+ int i;
+ int32_t temp1[N];
+ int32_t temp2[N];
+ int32_t temp3[N];
+ reset_data32 (temp1, temp2, temp3, N);
+ test (temp1, temp2, temp3, 0);
+ check_plus32 (temp1, temp2, temp3, 0);
+
+ reset_data32 (temp1, temp2, temp3, N);
+ test (temp1, temp2, temp3, 1);
+ check_plus32 (temp1, temp2, temp3, 1);
+
+ reset_data32 (temp1, temp2, temp3, N);
+ test (temp1, temp2, temp3, 3);
+ check_plus32 (temp1, temp2, temp3, 3);
+
+ reset_data32 (temp1, temp2, temp3, N);
+ test (temp1, temp2, temp3, 4);
+ check_plus32 (temp1, temp2, temp3, 4);
+
+ reset_data32 (temp1, temp2, temp3, N);
+ test (temp1, temp2, temp3, 5);
+ check_plus32 (temp1, temp2, temp3, 5);
+
+ reset_data32 (temp1, temp2, temp3, N);
+ test (temp1, temp2, temp3, 8);
+ check_plus32 (temp1, temp2, temp3, 8);
+
+ reset_data32 (temp1, temp2, temp3, N);
+ test (temp1, temp2, temp3, 9);
+ check_plus32 (temp1, temp2, temp3, 9);
+
+ reset_data32 (temp1, temp2, temp3, N);
+}
+
diff --git a/gcc/testsuite/gcc.target/arm/mve/dlstp-int32x4.c b/gcc/testsuite/gcc.target/arm/mve/dlstp-int32x4.c
new file mode 100644
index 00000000000..5d09f784b77
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/dlstp-int32x4.c
@@ -0,0 +1,31 @@
+/* { dg-do compile { target { arm*-*-* } } } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-options "-O2 -save-temps" } */
+/* { dg-add-options arm_v8_1m_mve } */
+
+#include <arm_mve.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include "../lob.h"
+
+void __attribute__ ((noinline)) test (int32_t *a, int32_t *b, int32_t *c, int n)
+{
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp32q (n);
+ int32x4_t va = vldrwq_z_s32 (a, p);
+ int32x4_t vb = vldrwq_z_s32 (b, p);
+ int32x4_t vc = vaddq_x_s32 (va, vb, p);
+ vstrwq_p_s32 (c, vc, p);
+ c+=4;
+ a+=4;
+ b+=4;
+ n-=4;
+ }
+}
+
+/* { dg-final { scan-assembler-times {\tdlstp.32} 1 } } */
+/* { dg-final { scan-assembler-times {\tletp} 1 } } */
+/* { dg-final { scan-assembler-not "\tvctp" } } */
+/* { dg-final { scan-assembler-not "\tvpst" } } */
+/* { dg-final { scan-assembler-not "p0" } } */
diff --git a/gcc/testsuite/gcc.target/arm/mve/dlstp-int64x2-run.c b/gcc/testsuite/gcc.target/arm/mve/dlstp-int64x2-run.c
new file mode 100644
index 00000000000..cc0b9ce7ee9
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/dlstp-int64x2-run.c
@@ -0,0 +1,48 @@
+/* { dg-do run { target { arm*-*-* } } } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-require-effective-target arm_mve_hw } */
+/* { dg-options "-O2 -save-temps" } */
+/* { dg-add-options arm_v8_1m_mve } */
+
+#include "dlstp-int64x2.c"
+
+int main ()
+{
+ int i;
+ int64_t temp1[N];
+ int64_t temp3[N];
+ reset_data64 (temp1, temp3, N);
+ test (temp1, temp3, 0);
+ check_memcpy64 (temp1, temp3, 0);
+
+ reset_data64 (temp1, temp3, N);
+ test (temp1, temp3, 1);
+ check_memcpy64 (temp1, temp3, 1);
+
+ reset_data64 (temp1, temp3, N);
+ test (temp1, temp3, 2);
+ check_memcpy64 (temp1, temp3, 2);
+
+ reset_data64 (temp1, temp3, N);
+ test (temp1, temp3, 3);
+ check_memcpy64 (temp1, temp3, 3);
+
+ reset_data64 (temp1, temp3, N);
+ test (temp1, temp3, 4);
+ check_memcpy64 (temp1, temp3, 4);
+
+ reset_data64 (temp1, temp3, N);
+ test (temp1, temp3, 5);
+ check_memcpy64 (temp1, temp3, 5);
+
+ reset_data64 (temp1, temp3, N);
+ test (temp1, temp3, 6);
+ check_memcpy64 (temp1, temp3, 6);
+
+ reset_data64 (temp1, temp3, N);
+ test (temp1, temp3, 7);
+ check_memcpy64 (temp1, temp3, 7);
+
+ reset_data64 (temp1, temp3, N);
+}
+
diff --git a/gcc/testsuite/gcc.target/arm/mve/dlstp-int64x2.c b/gcc/testsuite/gcc.target/arm/mve/dlstp-int64x2.c
new file mode 100644
index 00000000000..21e882424ec
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/dlstp-int64x2.c
@@ -0,0 +1,28 @@
+/* { dg-do compile { target { arm*-*-* } } } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-options "-O2 -save-temps" } */
+/* { dg-add-options arm_v8_1m_mve } */
+
+#include <arm_mve.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include "../lob.h"
+
+void __attribute__ ((noinline)) test (int64_t *a, int64_t *c, int n)
+{
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp64q (n);
+ int64x2_t va = vldrdq_gather_offset_z_s64 (a, vcreateq_u64 (0, 8), p);
+ vstrdq_scatter_offset_p_s64 (c, vcreateq_u64 (0, 8), va, p);
+ c+=2;
+ a+=2;
+ n-=2;
+ }
+}
+
+/* { dg-final { scan-assembler-times {\tdlstp.64} 1 } } */
+/* { dg-final { scan-assembler-times {\tletp} 1 } } */
+/* { dg-final { scan-assembler-not "\tvctp" } } */
+/* { dg-final { scan-assembler-not "\tvpst" } } */
+/* { dg-final { scan-assembler-not "p0" } } */
diff --git a/gcc/testsuite/gcc.target/arm/mve/dlstp-int8x16.c b/gcc/testsuite/gcc.target/arm/mve/dlstp-int8x16.c
new file mode 100644
index 00000000000..8ea181c82d4
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/dlstp-int8x16.c
@@ -0,0 +1,69 @@
+/* { dg-do run { target { arm*-*-* } } } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-require-effective-target arm_mve_hw } */
+/* { dg-options "-O2 -save-temps" } */
+/* { dg-add-options arm_v8_1m_mve } */
+
+#include <arm_mve.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include "../lob.h"
+
+void __attribute__ ((noinline)) test (int8_t *a, int8_t *b, int8_t *c, int n)
+{
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp8q (n);
+ int8x16_t va = vldrbq_z_s8 (a, p);
+ int8x16_t vb = vldrbq_z_s8 (b, p);
+ int8x16_t vc = vaddq_x_s8 (va, vb, p);
+ vstrbq_p_s8 (c, vc, p);
+ c+=16;
+ a+=16;
+ b+=16;
+ n-=16;
+ }
+}
+
+int main ()
+{
+ int i;
+ int8_t temp1[N];
+ int8_t temp2[N];
+ int8_t temp3[N];
+ reset_data8 (temp1, temp2, temp3, N);
+ test (temp1, temp2, temp3, 0);
+ check_plus8 (temp1, temp2, temp3, 0);
+
+ reset_data8 (temp1, temp2, temp3, N);
+ test (temp1, temp2, temp3, 1);
+ check_plus8 (temp1, temp2, temp3, 1);
+
+ reset_data8 (temp1, temp2, temp3, N);
+ test (temp1, temp2, temp3, 15);
+ check_plus8 (temp1, temp2, temp3, 15);
+
+ reset_data8 (temp1, temp2, temp3, N);
+ test (temp1, temp2, temp3, 16);
+ check_plus8 (temp1, temp2, temp3, 16);
+
+ reset_data8 (temp1, temp2, temp3, N);
+ test (temp1, temp2, temp3, 17);
+ check_plus8 (temp1, temp2, temp3, 17);
+
+ reset_data8 (temp1, temp2, temp3, N);
+ test (temp1, temp2, temp3, 32);
+ check_plus8 (temp1, temp2, temp3, 32);
+
+ reset_data8 (temp1, temp2, temp3, N);
+ test (temp1, temp2, temp3, 33);
+ check_plus8 (temp1, temp2, temp3, 33);
+
+ reset_data8 (temp1, temp2, temp3, N);
+}
+
+/* { dg-final { scan-assembler-times {\tdlstp.8} 1 } } */
+/* { dg-final { scan-assembler-times {\tletp} 1 } } */
+/* { dg-final { scan-assembler-not "\tvctp" } } */
+/* { dg-final { scan-assembler-not "\tvpst" } } */
+/* { dg-final { scan-assembler-not "p0" } } */
diff --git a/gcc/testsuite/gcc.target/arm/mve/dlstp-invalid-asm.c b/gcc/testsuite/gcc.target/arm/mve/dlstp-invalid-asm.c
new file mode 100644
index 00000000000..f7c3e04f883
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/dlstp-invalid-asm.c
@@ -0,0 +1,391 @@
+/* { dg-do compile { target { arm*-*-* } } } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-options "-O3 -save-temps" } */
+/* { dg-add-options arm_v8_1m_mve } */
+
+#include <limits.h>
+#include <arm_mve.h>
+
+/* Terminating on a non-zero number of elements. */
+void test0 (uint8_t *a, uint8_t *b, uint8_t *c, int n)
+{
+ while (n > 1)
+ {
+ mve_pred16_t p = vctp8q (n);
+ uint8x16_t va = vldrbq_z_u8 (a, p);
+ uint8x16_t vb = vldrbq_z_u8 (b, p);
+ uint8x16_t vc = vaddq_x_u8 (va, vb, p);
+ vstrbq_p_u8 (c, vc, p);
+ n -= 16;
+ }
+}
+
+/* Terminating on n >= 0. */
+void test1 (uint8_t *a, uint8_t *b, uint8_t *c, int n)
+{
+ while (n >= 0)
+ {
+ mve_pred16_t p = vctp8q (n);
+ uint8x16_t va = vldrbq_z_u8 (a, p);
+ uint8x16_t vb = vldrbq_z_u8 (b, p);
+ uint8x16_t vc = vaddq_x_u8 (va, vb, p);
+ vstrbq_p_u8 (c, vc, p);
+ n -= 16;
+ }
+}
+
+/* Similar, terminating on a non-zero number of elements, but in a for loop
+ format. */
+int32_t a[] = {0, 1, 2, 3, 4, 5, 6, 7};
+void test2 (int32_t *b, int num_elems)
+{
+ for (int i = num_elems; i >= 2; i-= 4)
+ {
+ mve_pred16_t p = vctp32q (i);
+ int32x4_t va = vldrwq_z_s32 (&(a[i]), p);
+ vstrwq_p_s32 (b + i, va, p);
+ }
+}
+
+/* Iteration counter counting up to num_iter, with a non-zero starting num. */
+void test3 (uint8_t *a, uint8_t *b, uint8_t *c, int n)
+{
+ int num_iter = (n + 15)/16;
+ for (int i = 1; i < num_iter; i++)
+ {
+ mve_pred16_t p = vctp8q (n);
+ uint8x16_t va = vldrbq_z_u8 (a, p);
+ uint8x16_t vb = vldrbq_z_u8 (b, p);
+ uint8x16_t vc = vaddq_x_u8 (va, vb, p);
+ vstrbq_p_u8 (c, vc, p);
+ n -= 16;
+ }
+}
+
+/* Iteration counter counting up to num_iter, with a larger increment */
+void test4 (uint8_t *a, uint8_t *b, uint8_t *c, int n)
+{
+ int num_iter = (n + 15)/16;
+ for (int i = 0; i < num_iter; i+=2)
+ {
+ mve_pred16_t p = vctp8q (n);
+ uint8x16_t va = vldrbq_z_u8 (a, p);
+ uint8x16_t vb = vldrbq_z_u8 (b, p);
+ uint8x16_t vc = vaddq_x_u8 (va, vb, p);
+ vstrbq_p_u8 (c, vc, p);
+ n -= 16;
+ }
+}
+
+/* Using an unpredicated store instruction within the loop. */
+void test5 (uint8_t *a, uint8_t *b, uint8_t *c, uint8_t *d, int n)
+{
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp8q (n);
+ uint8x16_t va = vldrbq_z_u8 (a, p);
+ uint8x16_t vb = vldrbq_z_u8 (b, p);
+ uint8x16_t vc = vaddq_u8 (va, vb);
+ uint8x16_t vd = vaddq_x_u8 (va, vb, p);
+ vstrbq_u8 (d, vd);
+ n -= 16;
+ }
+}
+
+/* Using an unpredicated store outside the loop. */
+void test6 (uint8_t *a, uint8_t *b, uint8_t *c, int n, uint8x16_t vx)
+{
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp8q (n);
+ uint8x16_t va = vldrbq_z_u8 (a, p);
+ uint8x16_t vb = vldrbq_z_u8 (b, p);
+ uint8x16_t vc = vaddq_m_u8 (vx, va, vb, p);
+ vx = vaddq_u8 (vx, vc);
+ a += 16;
+ b += 16;
+ n -= 16;
+ }
+ vstrbq_u8 (c, vx);
+}
+
+/* Using a VPR that gets modified within the loop. */
+void test9 (int32_t *a, int32_t *b, int32_t *c, int n)
+{
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp32q (n);
+ int32x4_t va = vldrwq_z_s32 (a, p);
+ p++;
+ int32x4_t vb = vldrwq_z_s32 (b, p);
+ int32x4_t vc = vaddq_x_s32 (va, vb, p);
+ vstrwq_p_s32 (c, vc, p);
+ c += 4;
+ a += 4;
+ b += 4;
+ n -= 4;
+ }
+}
+
+/* Using a VPR that gets re-generated within the loop. */
+void test10 (int32_t *a, int32_t *b, int32_t *c, int n)
+{
+ mve_pred16_t p = vctp32q (n);
+ while (n > 0)
+ {
+ int32x4_t va = vldrwq_z_s32 (a, p);
+ p = vctp32q (n);
+ int32x4_t vb = vldrwq_z_s32 (b, p);
+ int32x4_t vc = vaddq_x_s32 (va, vb, p);
+ vstrwq_p_s32 (c, vc, p);
+ c += 4;
+ a += 4;
+ b += 4;
+ n -= 4;
+ }
+}
+
+/* Using vctp32q_m instead of vctp32q. */
+void test11 (int32_t *a, int32_t *b, int32_t *c, int n, mve_pred16_t p0)
+{
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp32q_m (n, p0);
+ int32x4_t va = vldrwq_z_s32 (a, p);
+ int32x4_t vb = vldrwq_z_s32 (b, p);
+ int32x4_t vc = vaddq_x_s32 (va, vb, p);
+ vstrwq_p_s32 (c, vc, p);
+ c += 4;
+ a += 4;
+ b += 4;
+ n -= 4;
+ }
+}
+
+/* Using an unpredicated op with a scalar output, where the result is valid
+ outside the bb. This is invalid, because one of the inputs to the
+ unpredicated op is also unpredicated. */
+uint8_t test12 (uint8_t *a, uint8_t *b, uint8_t *c, int n, uint8x16_t vx)
+{
+ uint8_t sum = 0;
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp8q (n);
+ uint8x16_t va = vldrbq_z_u8 (a, p);
+ uint8x16_t vb = vldrbq_u8 (b);
+ uint8x16_t vc = vaddq_u8 (va, vb);
+ sum += vaddvq_u8 (vc);
+ a += 16;
+ b += 16;
+ n -= 16;
+ }
+ return sum;
+}
+
+/* Using an unpredicated vcmp to generate a new predicate value in the
+ loop and then using that VPR to predicate a store insn. */
+void test13 (int32_t *a, int32_t *b, int32x4_t vc, int32_t *c, int n)
+{
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp32q (n);
+ int32x4_t va = vldrwq_s32 (a);
+ int32x4_t vb = vldrwq_z_s32 (b, p);
+ int32x4_t vc = vaddq_s32 (va, vb);
+ mve_pred16_t p1 = vcmpeqq_s32 (va, vc);
+ vstrwq_p_s32 (c, vc, p1);
+ c += 4;
+ a += 4;
+ b += 4;
+ n -= 4;
+ }
+}
+
+/* Using an across-vector unpredicated instruction. "vb" is the risk. */
+uint16_t test14 (uint16_t *a, uint16_t *b, uint16_t *c, int n)
+{
+ uint16x8_t vb = vldrhq_u16 (b);
+ uint16_t res = 0;
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp16q (n);
+ uint16x8_t va = vldrhq_z_u16 (a, p);
+ vb = vaddq_u16 (va, vb);
+ res = vaddvq_u16 (vb);
+ c += 8;
+ a += 8;
+ b += 8;
+ n -= 8;
+ }
+ return res;
+}
+
+/* Using an across-vector unpredicated instruction. "vc" is the risk. */
+uint16_t test15 (uint16_t *a, uint16_t *b, uint16_t *c, int n)
+{
+ uint16x8_t vb = vldrhq_u16 (b);
+ uint16_t res = 0;
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp16q (n);
+ uint16x8_t va = vldrhq_z_u16 (a, p);
+ uint16x8_t vc = vaddq_u16 (va, vb);
+ res = vaddvaq_u16 (res, vc);
+ c += 8;
+ a += 8;
+ b += 8;
+ n -= 8;
+ }
+ return res;
+}
+
+uint16_t test16 (uint16_t *a, uint16_t *b, uint16_t *c, int n)
+{
+ uint16_t res =0;
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp16q (n);
+ uint16x8_t vb = vldrhq_u16 (b);
+ uint16x8_t va = vldrhq_z_u16 (a, p);
+ res = vaddvaq_u16 (res, vb);
+ res = vaddvaq_p_u16 (res, va, p);
+ c += 8;
+ a += 8;
+ b += 8;
+ n -= 8;
+ }
+ return res;
+}
+
+int test17 (int8_t *a, int8_t *b, int8_t *c, int n)
+{
+ int res = 0;
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp8q (n);
+ int8x16_t va = vldrbq_z_s8 (a, p);
+ res = vmaxvq (res, va);
+ n-=16;
+ a+=16;
+ }
+ return res;
+}
+
+
+
+int test18 (int8_t *a, int8_t *b, int8_t *c, int n)
+{
+ int res = 0;
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp8q (n);
+ int8x16_t va = vldrbq_z_s8 (a, p);
+ res = vminvq (res, va);
+ n-=16;
+ a+=16;
+ }
+ return res;
+}
+
+int test19 (int8_t *a, int8_t *b, int8_t *c, int n)
+{
+ int res = 0;
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp8q (n);
+ int8x16_t va = vldrbq_z_s8 (a, p);
+ res = vminavq (res, va);
+ n-=16;
+ a+=16;
+ }
+ return res;
+}
+
+int test20 (uint8_t *a, uint8_t *b, uint8_t *c, int n)
+{
+ int res = 0;
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp8q (n);
+ uint8x16_t va = vldrbq_z_u8 (a, p);
+ res = vminvq (res, va);
+ n-=16;
+ a+=16;
+ }
+ return res;
+}
+
+uint8x16_t test21 (uint8_t *a, uint32_t *b, int n, uint8x16_t res)
+{
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp8q (n);
+ uint8x16_t va = vldrbq_z_u8 (a, p);
+ res = vshlcq_u8 (va, b, 1);
+ n-=16;
+ a+=16;
+ }
+ return res;
+}
+
+int8x16_t test22 (int8_t *a, int32_t *b, int n, int8x16_t res)
+{
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp8q (n);
+ int8x16_t va = vldrbq_z_s8 (a, p);
+ res = vshlcq_s8 (va, b, 1);
+ n-=16;
+ a+=16;
+ }
+ return res;
+}
+
+/* Using an unsigned number of elements to count down from, with a > 0. */
+void test23 (int32_t *a, int32_t *b, int32_t *c, unsigned int n)
+{
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp32q (n);
+ int32x4_t va = vldrwq_z_s32 (a, p);
+ int32x4_t vb = vldrwq_z_s32 (b, p);
+ int32x4_t vc = vaddq_x_s32 (va, vb, p);
+ vstrwq_p_s32 (c, vc, p);
+ c+=4;
+ a+=4;
+ b+=4;
+ n-=4;
+ }
+}
+
+/* Using an unsigned number of elements to count up to, with a < n. */
+void test24 (uint8_t *a, uint8_t *b, uint8_t *c, unsigned int n)
+{
+ for (int i = 0; i < n; i+=16)
+ {
+ mve_pred16_t p = vctp8q (n-i);
+ uint8x16_t va = vldrbq_z_u8 (a, p);
+ uint8x16_t vb = vldrbq_z_u8 (b, p);
+ uint8x16_t vc = vaddq_x_u8 (va, vb, p);
+ vstrbq_p_u8 (c, vc, p);
+ n-=16;
+ }
+}
+
+
+/* Using an unsigned number of elements to count up to, with a <= n. */
+void test25 (uint8_t *a, uint8_t *b, uint8_t *c, unsigned int n)
+{
+ for (int i = 1; i <= n; i+=16)
+ {
+ mve_pred16_t p = vctp8q (n-i+1);
+ uint8x16_t va = vldrbq_z_u8 (a, p);
+ uint8x16_t vb = vldrbq_z_u8 (b, p);
+ uint8x16_t vc = vaddq_x_u8 (va, vb, p);
+ vstrbq_p_u8 (c, vc, p);
+ n-=16;
+ }
+}
+
+/* { dg-final { scan-assembler-not "\tdlstp" } } */
+/* { dg-final { scan-assembler-not "\tletp" } } */
\ No newline at end of file
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [PATCH 2/2] arm: Add support for MVE Tail-Predicated Low Overhead Loops
2023-12-18 11:53 ` [PATCH 2/2] arm: Add support for MVE Tail-Predicated Low Overhead Loops Andre Vieira
@ 2023-12-20 16:54 ` Andre Vieira (lists)
0 siblings, 0 replies; 10+ messages in thread
From: Andre Vieira (lists) @ 2023-12-20 16:54 UTC (permalink / raw)
To: gcc-patches; +Cc: Richard.Earnshaw, Stam Markianos-Wright
[-- Attachment #1: Type: text/plain, Size: 1124 bytes --]
Squashed the definition and changes to predicated_doloop_end_internal
and dlstp*_insn into this patch to make sure the first patch builds
independently
On 18/12/2023 11:53, Andre Vieira wrote:
>
> Reworked Stam's patch after comments in:
> https://gcc.gnu.org/pipermail/gcc-patches/2023-December/640362.html
>
> The original gcc ChangeLog remains unchanged, but I did split up some tests so
> here is the testsuite ChangeLog.
>
>
> gcc/testsuite/ChangeLog:
>
> * gcc.target/arm/lob.h: Update framework.
> * gcc.target/arm/lob1.c: Likewise.
> * gcc.target/arm/lob6.c: Likewise.
> * gcc.target/arm/mve/dlstp-compile-asm.c: New test.
> * gcc.target/arm/mve/dlstp-int16x8.c: New test.
> * gcc.target/arm/mve/dlstp-int16x8-run.c: New test.
> * gcc.target/arm/mve/dlstp-int32x4.c: New test.
> * gcc.target/arm/mve/dlstp-int32x4-run.c: New test.
> * gcc.target/arm/mve/dlstp-int64x2.c: New test.
> * gcc.target/arm/mve/dlstp-int64x2-run.c: New test.
> * gcc.target/arm/mve/dlstp-int8x16.c: New test.
> * gcc.target/arm/mve/dlstp-int8x16-run.c: New test.
> * gcc.target/arm/mve/dlstp-invalid-asm.c: New test.
>
[-- Attachment #2: 0002-arm-Add-support-for-MVE-Tail-Predicated-Low-Overhead_v2.patch --]
[-- Type: text/plain, Size: 107215 bytes --]
diff --git a/gcc/config/arm/arm-protos.h b/gcc/config/arm/arm-protos.h
index 2f5ca79ed8ddd647b212782a0454ee4fefc07257..4f164c547406c43219900c111401540c7ef9d7d1 100644
--- a/gcc/config/arm/arm-protos.h
+++ b/gcc/config/arm/arm-protos.h
@@ -65,8 +65,8 @@ extern void arm_emit_speculation_barrier_function (void);
extern void arm_decompose_di_binop (rtx, rtx, rtx *, rtx *, rtx *, rtx *);
extern bool arm_q_bit_access (void);
extern bool arm_ge_bits_access (void);
-extern bool arm_target_insn_ok_for_lob (rtx);
-
+extern bool arm_target_bb_ok_for_lob (basic_block);
+extern rtx arm_attempt_dlstp_transform (rtx);
#ifdef RTX_CODE
enum reg_class
arm_mode_base_reg_class (machine_mode);
diff --git a/gcc/config/arm/arm.cc b/gcc/config/arm/arm.cc
index 0c0cb14a8a4f043357b8acd7042a9f9386af1eb1..1ee72bcb7ec4bea5feea8453ceef7702b0088a73 100644
--- a/gcc/config/arm/arm.cc
+++ b/gcc/config/arm/arm.cc
@@ -668,6 +668,12 @@ static const scoped_attribute_specs *const arm_attribute_table[] =
#undef TARGET_HAVE_CONDITIONAL_EXECUTION
#define TARGET_HAVE_CONDITIONAL_EXECUTION arm_have_conditional_execution
+#undef TARGET_LOOP_UNROLL_ADJUST
+#define TARGET_LOOP_UNROLL_ADJUST arm_loop_unroll_adjust
+
+#undef TARGET_PREDICT_DOLOOP_P
+#define TARGET_PREDICT_DOLOOP_P arm_predict_doloop_p
+
#undef TARGET_LEGITIMATE_CONSTANT_P
#define TARGET_LEGITIMATE_CONSTANT_P arm_legitimate_constant_p
@@ -34483,19 +34489,1147 @@ arm_invalid_within_doloop (const rtx_insn *insn)
}
bool
-arm_target_insn_ok_for_lob (rtx insn)
+arm_target_bb_ok_for_lob (basic_block bb)
{
- basic_block bb = BLOCK_FOR_INSN (insn);
/* Make sure the basic block of the target insn is a simple latch
having as single predecessor and successor the body of the loop
itself. Only simple loops with a single basic block as body are
supported for 'low over head loop' making sure that LE target is
above LE itself in the generated code. */
-
return single_succ_p (bb)
- && single_pred_p (bb)
- && single_succ_edge (bb)->dest == single_pred_edge (bb)->src
- && contains_no_active_insn_p (bb);
+ && single_pred_p (bb)
+ && single_succ_edge (bb)->dest == single_pred_edge (bb)->src;
+}
+
+/* Utility function: Given a VCTP or a VCTP_M insn, return the number of MVE
+ lanes based on the machine mode being used. */
+
+static int
+arm_mve_get_vctp_lanes (rtx_insn *insn)
+{
+ rtx insn_set = single_set (insn);
+ if (insn_set
+ && GET_CODE (SET_SRC (insn_set)) == UNSPEC
+ && (XINT (SET_SRC (insn_set), 1) == VCTP
+ || XINT (SET_SRC (insn_set), 1) == VCTP_M))
+ {
+ machine_mode mode = GET_MODE (SET_SRC (insn_set));
+ return (VECTOR_MODE_P (mode) && VALID_MVE_PRED_MODE (mode))
+ ? GET_MODE_NUNITS (mode) : 0;
+ }
+ return 0;
+}
+
+/* Check if INSN requires the use of the VPR reg, if it does, return the
+ sub-rtx of the VPR reg. The TYPE argument controls whether
+ this function should:
+ * For TYPE == 0, check all operands, including the OUT operands,
+ and return the first occurrence of the VPR reg.
+ * For TYPE == 1, only check the input operands.
+ * For TYPE == 2, only check the output operands.
+ (INOUT operands are considered both as input and output operands)
+*/
+static rtx
+arm_get_required_vpr_reg (rtx_insn *insn, unsigned int type = 0)
+{
+ gcc_assert (type < 3);
+ if (!NONJUMP_INSN_P (insn))
+ return NULL_RTX;
+
+ bool requires_vpr;
+ extract_constrain_insn (insn);
+ int n_operands = recog_data.n_operands;
+ if (recog_data.n_alternatives == 0)
+ return NULL_RTX;
+
+ /* Fill in recog_op_alt with information about the constraints of
+ this insn. */
+ preprocess_constraints (insn);
+
+ for (int op = 0; op < n_operands; op++)
+ {
+ requires_vpr = true;
+ if (type == 1 && recog_data.operand_type[op] == OP_OUT)
+ continue;
+ else if (type == 2 && recog_data.operand_type[op] == OP_IN)
+ continue;
+
+ /* Iterate through alternatives of operand "op" in recog_op_alt and
+ identify if the operand is required to be the VPR. */
+ for (int alt = 0; alt < recog_data.n_alternatives; alt++)
+ {
+ const operand_alternative *op_alt
+ = &recog_op_alt[alt * n_operands];
+ /* Fetch the reg_class for each entry and check it against the
+ VPR_REG reg_class. */
+ if (alternative_class (op_alt, op) != VPR_REG)
+ requires_vpr = false;
+ }
+ /* If all alternatives of the insn require the VPR reg for this operand,
+ it means that either this is a VPR-generating instruction, like a vctp,
+ vcmp, etc., or it is a VPT-predicated instruction. Return the subrtx
+ of the VPR reg operand. */
+ if (requires_vpr)
+ return recog_data.operand[op];
+ }
+ return NULL_RTX;
+}
+
+/* Wrapper function of arm_get_required_vpr_reg with TYPE == 1, so return
+ something only if the VPR reg is an input operand to the insn. */
+
+static rtx
+arm_get_required_vpr_reg_param (rtx_insn *insn)
+{
+ return arm_get_required_vpr_reg (insn, 1);
+}
+
+/* Wrapper function of arm_get_required_vpr_reg with TYPE == 2, so return
+ something only if the VPR reg is the return value, an output of, or is
+ clobbered by the insn. */
+
+static rtx
+arm_get_required_vpr_reg_ret_val (rtx_insn *insn)
+{
+ return arm_get_required_vpr_reg (insn, 2);
+}
+
+/* Scan the basic block of a loop body for a vctp instruction. If there is
+ at least one vctp instruction, return the first rtx_insn *. */
+
+static rtx_insn *
+arm_mve_get_loop_vctp (basic_block bb)
+{
+ rtx_insn *insn = BB_HEAD (bb);
+
+ /* Now scan through all the instruction patterns and pick out the VCTP
+ instruction. We require arm_get_required_vpr_reg_param to be false
+ to make sure we pick up a VCTP, rather than a VCTP_M. */
+ FOR_BB_INSNS (bb, insn)
+ if (NONDEBUG_INSN_P (insn))
+ if (arm_get_required_vpr_reg_ret_val (insn)
+ && (arm_mve_get_vctp_lanes (insn) != 0)
+ && !arm_get_required_vpr_reg_param (insn))
+ return insn;
+ return NULL;
+}
+
+/* Return true if INSN is a MVE instruction that is VPT-predicable, but in
+ its unpredicated form, or if it is predicated, but on a predicate other
+ than VPR_REG. */
+
+static bool
+arm_mve_vec_insn_is_unpredicated_or_uses_other_predicate (rtx_insn *insn,
+ rtx vpr_reg)
+{
+ rtx insn_vpr_reg_operand;
+ if (MVE_VPT_UNPREDICATED_INSN_P (insn)
+ || (MVE_VPT_PREDICATED_INSN_P (insn)
+ && (insn_vpr_reg_operand = arm_get_required_vpr_reg_param (insn))
+ && !rtx_equal_p (vpr_reg, insn_vpr_reg_operand)))
+ return true;
+ else
+ return false;
+}
+
+/* Return true if INSN is a MVE instruction that is VPT-predicable and is
+ predicated on VPR_REG. */
+
+static bool
+arm_mve_vec_insn_is_predicated_with_this_predicate (rtx_insn *insn,
+ rtx vpr_reg)
+{
+ rtx insn_vpr_reg_operand;
+ if (MVE_VPT_PREDICATED_INSN_P (insn)
+ && (insn_vpr_reg_operand = arm_get_required_vpr_reg_param (insn))
+ && rtx_equal_p (vpr_reg, insn_vpr_reg_operand))
+ return true;
+ else
+ return false;
+}
+
+/* Utility function to identify if INSN is an MVE instruction that performs
+ some across-vector operation (and as a result does not align with normal
+ lane predication rules). All such instructions give only one scalar
+ output, except for vshlcq which gives a PARALLEL of a vector and a scalar
+ (one vector result and one carry output). */
+
+static bool
+arm_is_mve_across_vector_insn (rtx_insn* insn)
+{
+ df_ref insn_defs = NULL;
+ if (!MVE_VPT_PREDICABLE_INSN_P (insn))
+ return false;
+
+ bool is_across_vector = false;
+ FOR_EACH_INSN_DEF (insn_defs, insn)
+ if (!VALID_MVE_MODE (GET_MODE (DF_REF_REG (insn_defs)))
+ && !arm_get_required_vpr_reg_ret_val (insn))
+ is_across_vector = true;
+
+ return is_across_vector;
+}
+
+/* Utility function to identify if INSN is an MVE load or store instruction.
+ * For TYPE == 0, check all operands. If the function returns true,
+ INSN is a load or a store insn.
+ * For TYPE == 1, only check the input operands. If the function returns
+ true, INSN is a load insn.
+ * For TYPE == 2, only check the output operands. If the function returns
+ true, INSN is a store insn. */
+
+static bool
+arm_is_mve_load_store_insn (rtx_insn* insn, int type = 0)
+{
+ int n_operands = recog_data.n_operands;
+ extract_insn (insn);
+
+ for (int op = 0; op < n_operands; op++)
+ {
+ if (type == 1 && recog_data.operand_type[op] == OP_OUT)
+ continue;
+ else if (type == 2 && recog_data.operand_type[op] == OP_IN)
+ continue;
+ if (mve_memory_operand (recog_data.operand[op],
+ GET_MODE (recog_data.operand[op])))
+ return true;
+ }
+ return false;
+}
+
+/* When transforming an MVE intrinsic loop into an MVE Tail Predicated Low
+ Overhead Loop, there are a number of instructions that, if in their
+ unpredicated form, act across vector lanes, but are still safe to include
+ within the loop, despite the implicit predication added to the vector lanes.
+ This list has been compiled by carefully analyzing the instruction
+ pseudocode in the Arm-ARM.
+ All other across-vector instructions aren't allowed, because the addition
+ of implicit predication could influence the result of the operation.
+ Any new across-vector instructions added to the MVE ISA will have to be
+ assessed for inclusion in this list. */
+
+static bool
+arm_mve_is_allowed_unpredic_across_vector_insn (rtx_insn* insn)
+{
+ gcc_assert (MVE_VPT_UNPREDICATED_INSN_P (insn)
+ && arm_is_mve_across_vector_insn (insn));
+ rtx insn_set = single_set (insn);
+ if (!insn_set)
+ return false;
+ rtx unspec = SET_SRC (insn_set);
+ if (GET_CODE (unspec) != UNSPEC)
+ return false;
+ switch (XINT (unspec, 1))
+ {
+ case VADDVQ_U:
+ case VADDVQ_S:
+ case VADDVAQ_U:
+ case VADDVAQ_S:
+ case VMLADAVQ_U:
+ case VMLADAVQ_S:
+ case VMLADAVXQ_S:
+ case VMLADAVAQ_U:
+ case VMLADAVAQ_S:
+ case VMLADAVAXQ_S:
+ case VABAVQ_S:
+ case VABAVQ_U:
+ case VADDLVQ_S:
+ case VADDLVQ_U:
+ case VADDLVAQ_S:
+ case VADDLVAQ_U:
+ case VMAXVQ_U:
+ case VMAXAVQ_S:
+ case VMLALDAVQ_U:
+ case VMLALDAVXQ_U:
+ case VMLALDAVXQ_S:
+ case VMLALDAVQ_S:
+ case VMLALDAVAQ_S:
+ case VMLALDAVAQ_U:
+ case VMLALDAVAXQ_S:
+ case VMLALDAVAXQ_U:
+ case VMLSDAVQ_S:
+ case VMLSDAVXQ_S:
+ case VMLSDAVAXQ_S:
+ case VMLSDAVAQ_S:
+ case VMLSLDAVQ_S:
+ case VMLSLDAVXQ_S:
+ case VMLSLDAVAQ_S:
+ case VMLSLDAVAXQ_S:
+ case VRMLALDAVHXQ_S:
+ case VRMLALDAVHQ_U:
+ case VRMLALDAVHQ_S:
+ case VRMLALDAVHAQ_S:
+ case VRMLALDAVHAQ_U:
+ case VRMLALDAVHAXQ_S:
+ case VRMLSLDAVHQ_S:
+ case VRMLSLDAVHXQ_S:
+ case VRMLSLDAVHAQ_S:
+ case VRMLSLDAVHAXQ_S:
+ return true;
+ default:
+ break;
+ }
+ return false;
+}
+
+/* Scan through the DF chain backwards within the basic block and
+ determine if any of the USEs of the original insn (or the USEs of the insns
+ where they were DEF-ed, etc.) were affected by implicit VPT
+ predication of an MVE_VPT_UNPREDICATED_INSN_P in a dlstp/letp loop.
+ This function returns true if the insn is affected by implicit predication
+ and false otherwise.
+ Having such implicit predication on an unpredicated insn wouldn't in itself
+ block tail predication, because the output of that insn might then be used
+ in a correctly predicated store insn, where the disabled lanes will be
+ ignored. To verify this we later call:
+ `arm_mve_check_df_chain_fwd_for_implic_predic_impact`, which will check the
+ DF chains forward to see if any implicitly-predicated operand gets used in
+ an improper way. */
+
+static bool
+arm_mve_check_df_chain_back_for_implic_predic
+ (hash_map <rtx_insn *, bool> *safe_insn_map, rtx_insn *insn_in,
+ rtx vctp_vpr_generated)
+{
+
+ auto_vec<rtx_insn *> worklist;
+ worklist.safe_push (insn_in);
+
+ bool *temp = NULL;
+
+ while (worklist.length () > 0)
+ {
+ rtx_insn *insn = worklist.pop ();
+
+ if ((temp = safe_insn_map->get (insn)))
+ return *temp;
+
+ basic_block body = BLOCK_FOR_INSN (insn);
+
+ /* The circumstances under which an instruction is affected by "implicit
+ predication" are as follows:
+ * It is an UNPREDICATED_INSN_P:
+ * That loads/stores from/to memory.
+ * Where any one of its operands is an MVE vector from outside the
+ loop body bb.
+ Or:
+ * Any of its operands were affected earlier in the insn chain. */
+ if (MVE_VPT_UNPREDICATED_INSN_P (insn)
+ && (arm_is_mve_load_store_insn (insn)
+ || (arm_is_mve_across_vector_insn (insn)
+ && !arm_mve_is_allowed_unpredic_across_vector_insn (insn))))
+ {
+ safe_insn_map->put (insn, true);
+ return true;
+ }
+
+ df_ref insn_uses = NULL;
+ FOR_EACH_INSN_USE (insn_uses, insn)
+ {
+ /* If the operand is in the input reg set to the basic block,
+ (i.e. it has come from outside the loop!), consider it unsafe if:
+ * It's being used in an unpredicated insn.
+ * It is a predicable MVE vector. */
+ if (MVE_VPT_UNPREDICATED_INSN_P (insn)
+ && VALID_MVE_MODE (GET_MODE (DF_REF_REG (insn_uses)))
+ && REGNO_REG_SET_P (DF_LR_IN (body), DF_REF_REGNO (insn_uses)))
+ {
+ safe_insn_map->put (insn, true);
+ return true;
+ }
+
+ /* Scan backwards from the current INSN through the instruction chain
+ until the start of the basic block. */
+ for (rtx_insn *prev_insn = PREV_INSN (insn);
+ prev_insn && prev_insn != PREV_INSN (BB_HEAD (body));
+ prev_insn = PREV_INSN (prev_insn))
+ {
+ /* If a previous insn defines a register that INSN uses, then
+ add to the worklist to check that insn's USEs. If any of these
+ insns return true as MVE_VPT_UNPREDICATED_INSN_Ps, then the
+ whole chain is affected by the change in behaviour from being
+ placed in dlstp/letp loop. */
+ df_ref prev_insn_defs = NULL;
+ FOR_EACH_INSN_DEF (prev_insn_defs, prev_insn)
+ {
+ if (DF_REF_REGNO (insn_uses) == DF_REF_REGNO (prev_insn_defs)
+ && !arm_mve_vec_insn_is_predicated_with_this_predicate
+ (insn, vctp_vpr_generated))
+ worklist.safe_push (prev_insn);
+ }
+ }
+ }
+ }
+ safe_insn_map->put (insn_in, false);
+ return false;
+}
+
+/* If we have identified that the current DEF will be modified
+ by such implicit predication, scan through all the
+ insns that USE it and bail out if any one is outside the
+ current basic block (i.e. the reg is live after the loop)
+ or if any are store insns that are unpredicated or using a
+ predicate other than the loop VPR.
+ This function returns true if the insn is not suitable for
+ implicit predication and false otherwise. */
+
+static bool
+arm_mve_check_df_chain_fwd_for_implic_predic_impact (rtx_insn *insn,
+ rtx vctp_vpr_generated)
+{
+
+ /* If this insn is indeed an unpredicated store to memory, bail out. */
+ if (arm_mve_vec_insn_is_unpredicated_or_uses_other_predicate
+ (insn, vctp_vpr_generated)
+ && (arm_is_mve_load_store_insn (insn, 2)
+ || arm_is_mve_across_vector_insn (insn)))
+ return true;
+
+ /* Next, scan forward to the various USEs of the DEFs in this insn. */
+ df_ref insn_def = NULL;
+ FOR_EACH_INSN_DEF (insn_def, insn)
+ {
+ for (df_ref use = DF_REG_USE_CHAIN (DF_REF_REGNO (insn_def)); use;
+ use = DF_REF_NEXT_REG (use))
+ {
+ rtx_insn *next_use_insn = DF_REF_INSN (use);
+ if (next_use_insn != insn
+ && NONDEBUG_INSN_P (next_use_insn))
+ {
+ /* If the USE is outside the loop body bb, or it is inside, but
+ is a differently-predicated store to memory or it is any
+ across-vector instruction. */
+ if (BLOCK_FOR_INSN (insn) != BLOCK_FOR_INSN (next_use_insn)
+ || (arm_mve_vec_insn_is_unpredicated_or_uses_other_predicate
+ (next_use_insn, vctp_vpr_generated)
+ && (arm_is_mve_load_store_insn (next_use_insn, 2)
+ || arm_is_mve_across_vector_insn (next_use_insn))))
+ return true;
+ }
+ }
+ }
+ return false;
+}
+
+/* Helper function to `arm_mve_dlstp_check_inc_counter` and to
+ `arm_mve_dlstp_check_dec_counter`. In the situations where the loop counter
+ is incrementing by 1 or decrementing by 1 in each iteration, ensure that the
+ target value or the initialisation value, respectively, was a calculation
+ of the number of iterations of the loop, which is expected to be an ASHIFTRT
+ by VCTP_STEP. */
+
+static bool
+arm_mve_check_reg_origin_is_num_elems (basic_block body, rtx reg, rtx vctp_step)
+{
+ /* Ok, we now know the loop starts from zero and increments by one.
+ Now just show that the max value of the counter came from an
+ appropriate ASHIFTRT expr of the correct amount. */
+ basic_block pre_loop_bb = body->prev_bb;
+ while (pre_loop_bb && BB_END (pre_loop_bb)
+ && !df_bb_regno_only_def_find (pre_loop_bb, REGNO (reg)))
+ pre_loop_bb = pre_loop_bb->prev_bb;
+
+ df_ref counter_max_last_def = df_bb_regno_only_def_find (pre_loop_bb, REGNO (reg));
+ if (!counter_max_last_def)
+ return false;
+ rtx counter_max_last_set = single_set (DF_REF_INSN (counter_max_last_def));
+ if (!counter_max_last_set)
+ return false;
+
+ /* If we encounter a simple SET from a REG, follow it through. */
+ if (REG_P (SET_SRC (counter_max_last_set)))
+ return arm_mve_check_reg_origin_is_num_elems
+ (pre_loop_bb->next_bb, SET_SRC (counter_max_last_set), vctp_step);
+
+ /* If we encounter a SET from an IF_THEN_ELSE where one of the operands is a
+ constant and the other is a REG, follow through to that REG. */
+ if (GET_CODE (SET_SRC (counter_max_last_set)) == IF_THEN_ELSE
+ && REG_P (XEXP (SET_SRC (counter_max_last_set), 1))
+ && CONST_INT_P (XEXP (SET_SRC (counter_max_last_set), 2)))
+ return arm_mve_check_reg_origin_is_num_elems
+ (pre_loop_bb->next_bb, XEXP (SET_SRC (counter_max_last_set), 1), vctp_step);
+
+ if (GET_CODE (SET_SRC (counter_max_last_set)) == ASHIFTRT
+ && CONST_INT_P (XEXP (SET_SRC (counter_max_last_set), 1))
+ && ((1 << INTVAL (XEXP (SET_SRC (counter_max_last_set), 1)))
+ == abs (INTVAL (vctp_step))))
+ return true;
+
+ return false;
+}
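As a side note, the ASHIFTRT shape that `arm_mve_check_reg_origin_is_num_elems` accepts corresponds to the ceiling division `(num_of_elem + num_of_lanes - 1) / num_of_lanes` compiled as a right shift when the lane count is a power of two. The sketch below (plain C, not GCC internals) illustrates the equivalence the check relies on; the helper names are ours, for illustration only.

```c
#include <assert.h>

/* Iteration count as a right shift, the form the check above looks for:
   the shift amount must satisfy (1 << shift) == abs (vctp_step).  */
static int
num_iters_shift (int n, int log2_lanes)
{
  int lanes = 1 << log2_lanes;
  return (n + lanes - 1) >> log2_lanes;
}

/* Reference ceiling division.  */
static int
num_iters_div (int n, int lanes)
{
  return (n + lanes - 1) / lanes;
}
```

For example, a 4-lane (vctp32-style) loop over 10 elements needs 3 iterations either way.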
+
+/* If we have identified the loop to have an incrementing counter, we need to
+ make sure that it increments by 1 and that the loop is structured correctly:
+ * The counter starts from 0
+ * The counter terminates at (num_of_elem + num_of_lanes - 1) / num_of_lanes
+ * The vctp insn uses a reg that decrements appropriately in each iteration.
+*/
+
+static rtx_insn*
+arm_mve_dlstp_check_inc_counter (basic_block body, rtx_insn* vctp_insn,
+ rtx condconst, rtx condcount)
+{
+ rtx vctp_reg = XVECEXP (XEXP (PATTERN (vctp_insn), 1), 0, 0);
+ /* The loop latch has to be empty. When compiling all the known MVE LoLs in
+ user applications, none of those with incrementing counters had any real
+ insns in the loop latch. As such, this function has only been tested with
+ an empty latch and may misbehave or ICE if we somehow get here with an
+ increment in the latch, so, for correctness, error out early. */
+ if (!empty_block_p (body->loop_father->latch))
+ return NULL;
+
+ class rtx_iv vctp_reg_iv;
+ /* For loops of type B) the loop counter is independent of the decrement
+ of the reg used in the vctp_insn. So run iv analysis on that reg. This
+ has to succeed for such loops to be supported. */
+ if (!iv_analyze (vctp_insn, as_a<scalar_int_mode> (GET_MODE (vctp_reg)),
+ vctp_reg, &vctp_reg_iv))
+ return NULL;
+
+ /* Extract the decrementnum of the vctp reg from the iv. This decrementnum
+ is the number of lanes/elements it decrements from the remaining number of
+ lanes/elements to process in the loop; for this reason it is always a
+ negative number, but to simplify later checks we use its absolute value. */
+ int decrementnum = INTVAL (vctp_reg_iv.step);
+ if (decrementnum >= 0)
+ return NULL;
+ decrementnum = abs (decrementnum);
+
+ /* Find where both of those are modified in the loop body bb. */
+ df_ref condcount_reg_set_df = df_bb_regno_only_def_find (body, REGNO (condcount));
+ df_ref vctp_reg_set_df = df_bb_regno_only_def_find (body, REGNO (vctp_reg));
+ if (!condcount_reg_set_df || !vctp_reg_set_df)
+ return NULL;
+ rtx condcount_reg_set = single_set (DF_REF_INSN (condcount_reg_set_df));
+ rtx vctp_reg_set = single_set (DF_REF_INSN (vctp_reg_set_df));
+ if (!condcount_reg_set || !vctp_reg_set)
+ return NULL;
+
+ /* Ensure the modification of the vctp reg from df is consistent with
+ the iv and the number of lanes on the vctp insn. */
+ if (GET_CODE (SET_SRC (vctp_reg_set)) != PLUS
+ || !REG_P (SET_DEST (vctp_reg_set))
+ || !REG_P (XEXP (SET_SRC (vctp_reg_set), 0))
+ || REGNO (SET_DEST (vctp_reg_set))
+ != REGNO (XEXP (SET_SRC (vctp_reg_set), 0))
+ || !CONST_INT_P (XEXP (SET_SRC (vctp_reg_set), 1))
+ || INTVAL (XEXP (SET_SRC (vctp_reg_set), 1)) >= 0
+ || decrementnum != abs (INTVAL (XEXP (SET_SRC (vctp_reg_set), 1)))
+ || decrementnum != arm_mve_get_vctp_lanes (vctp_insn))
+ return NULL;
+
+ if (REG_P (condcount) && REG_P (condconst))
+ {
+ /* First we need to prove that the loop is going 0..condconst with an
+ inc of 1 in each iteration. */
+ if (GET_CODE (SET_SRC (condcount_reg_set)) == PLUS
+ && CONST_INT_P (XEXP (SET_SRC (condcount_reg_set), 1))
+ && INTVAL (XEXP (SET_SRC (condcount_reg_set), 1)) == 1)
+ {
+ rtx counter_reg = SET_DEST (condcount_reg_set);
+ /* Check that the counter did indeed start from zero. */
+ df_ref this_set = DF_REG_DEF_CHAIN (REGNO (counter_reg));
+ if (!this_set)
+ return NULL;
+ df_ref last_set_def = DF_REF_NEXT_REG (this_set);
+ if (!last_set_def)
+ return NULL;
+ rtx_insn* last_set_insn = DF_REF_INSN (last_set_def);
+ rtx last_set = single_set (last_set_insn);
+ if (!last_set)
+ return NULL;
+ rtx counter_orig_set;
+ counter_orig_set = SET_SRC (last_set);
+ if (!CONST_INT_P (counter_orig_set)
+ || (INTVAL (counter_orig_set) != 0))
+ return NULL;
+ /* And finally check that the target value of the counter,
+ condconst, is of the correct shape. */
+ if (!arm_mve_check_reg_origin_is_num_elems (body, condconst,
+ vctp_reg_iv.step))
+ return NULL;
+ }
+ else
+ return NULL;
+ }
+ else
+ return NULL;
+
+ /* Everything looks valid. */
+ return vctp_insn;
+}
+
+/* Helper function to `arm_mve_loop_valid_for_dlstp`. In the case of a
+ counter that is decrementing, ensure that it is decrementing by the
+ right amount in each iteration and that the target condition is what
+ we expect. */
+
+static rtx_insn*
+arm_mve_dlstp_check_dec_counter (basic_block body, rtx_insn* vctp_insn,
+ rtx condconst, rtx condcount)
+{
+ rtx vctp_reg = XVECEXP (XEXP (PATTERN (vctp_insn), 1), 0, 0);
+ class rtx_iv vctp_reg_iv;
+ int decrementnum;
+ /* For decrementing loops of type A), the counter is usually present in the
+ loop latch. Here we simply need to verify that this counter is the same
+ reg that is also used in the vctp_insn and that it is not otherwise
+ modified. */
+ rtx_insn *dec_insn = BB_END (body->loop_father->latch);
+ /* If not in the loop latch, try to find the decrement in the loop body. */
+ if (!NONDEBUG_INSN_P (dec_insn))
+ {
+ df_ref temp = df_bb_regno_only_def_find (body, REGNO (condcount));
+ /* If we haven't been able to find the decrement, bail out. */
+ if (!temp)
+ return NULL;
+ dec_insn = DF_REF_INSN (temp);
+ }
+
+ rtx dec_set = single_set (dec_insn);
+
+ /* Next, ensure that it is a PLUS of the form:
+ (set (reg a) (plus (reg a) (const_int)))
+ where (reg a) is the same as condcount. */
+ if (!dec_set
+ || !REG_P (SET_DEST (dec_set))
+ || !REG_P (XEXP (SET_SRC (dec_set), 0))
+ || !CONST_INT_P (XEXP (SET_SRC (dec_set), 1))
+ || REGNO (SET_DEST (dec_set))
+ != REGNO (XEXP (SET_SRC (dec_set), 0))
+ || REGNO (SET_DEST (dec_set)) != REGNO (condcount))
+ return NULL;
+
+ decrementnum = INTVAL (XEXP (SET_SRC (dec_set), 1));
+
+ /* This decrementnum is the number of lanes/elements it decrements from the
+ remaining number of lanes/elements to process in the loop; for this reason
+ it is always a negative number, but to simplify later checks we use its
+ absolute value. */
+ if (decrementnum >= 0)
+ return NULL;
+ decrementnum = abs (decrementnum);
+
+ /* Ok, so we now know the loop decrement. If it is a 1, then we need to
+ look at the loop vctp_reg and verify that it also decrements correctly.
+ Then, we need to establish that the starting value of the loop decrement
+ originates from the starting value of the vctp decrement. */
+ if (decrementnum == 1)
+ {
+ class rtx_iv vctp_reg_iv;
+ /* The loop counter is found to be independent of the decrement
+ of the reg used in the vctp_insn, again. Ensure that IV analysis
+ succeeds and check the step. */
+ if (!iv_analyze (vctp_insn, as_a<scalar_int_mode> (GET_MODE (vctp_reg)),
+ vctp_reg, &vctp_reg_iv))
+ return NULL;
+ /* Ensure it matches the number of lanes of the vctp instruction. */
+ if (abs (INTVAL (vctp_reg_iv.step))
+ != arm_mve_get_vctp_lanes (vctp_insn))
+ return NULL;
+ if (!arm_mve_check_reg_origin_is_num_elems (body, condcount, vctp_reg_iv.step))
+ return NULL;
+ }
+ /* If the decrements are the same, then the situation is simple: either they
+ are also the same reg, which is safe, or they are different registers, in
+ which case make sure that there is only a simple SET from one to the
+ other inside the loop. */
+ else if (decrementnum == arm_mve_get_vctp_lanes (vctp_insn))
+ {
+ if (REGNO (condcount) != REGNO (vctp_reg))
+ {
+ /* It wasn't the same reg, but it could be behind a
+ (set (vctp_reg) (condcount)), so instead find where
+ the vctp reg is DEF'd inside the loop. */
+ rtx_insn *vctp_reg_insn
+ = DF_REF_INSN (df_bb_regno_only_def_find (body, REGNO (vctp_reg)));
+ rtx vctp_reg_set = single_set (vctp_reg_insn);
+ /* This must just be a simple SET from the condcount. */
+ if (!vctp_reg_set
+ || !REG_P (SET_DEST (vctp_reg_set))
+ || !REG_P (SET_SRC (vctp_reg_set))
+ || REGNO (SET_SRC (vctp_reg_set)) != REGNO (condcount))
+ return NULL;
+ }
+ }
+ else
+ return NULL;
+
+ /* We now only need to find out that the loop terminates with a LE
+ zero condition. If condconst is a const_int, then this is easy.
+ If it's a REG, look at the last condition+jump in a bb before
+ the loop, because that usually will have a branch jumping over
+ the loop body. */
+ if (CONST_INT_P (condconst)
+ && !(INTVAL (condconst) == 0 && JUMP_P (BB_END (body))
+ && GET_CODE (XEXP (PATTERN (BB_END (body)), 1)) == IF_THEN_ELSE
+ && (GET_CODE (XEXP (XEXP (PATTERN (BB_END (body)), 1), 0)) == NE
+ || GET_CODE (XEXP (XEXP (PATTERN (BB_END (body)), 1), 0)) == GT)))
+ return NULL;
+ else if (REG_P (condconst))
+ {
+ basic_block pre_loop_bb = body;
+ while (pre_loop_bb->prev_bb && BB_END (pre_loop_bb->prev_bb)
+ && !JUMP_P (BB_END (pre_loop_bb->prev_bb)))
+ pre_loop_bb = pre_loop_bb->prev_bb;
+ if (pre_loop_bb && BB_END (pre_loop_bb))
+ pre_loop_bb = pre_loop_bb->prev_bb;
+ else
+ return NULL;
+ rtx initial_compare = NULL_RTX;
+ if (!(prev_nonnote_nondebug_insn_bb (BB_END (pre_loop_bb))
+ && INSN_P (prev_nonnote_nondebug_insn_bb (BB_END (pre_loop_bb)))))
+ return NULL;
+ else
+ initial_compare
+ = single_set (prev_nonnote_nondebug_insn_bb (BB_END (pre_loop_bb)));
+ if (!(initial_compare
+ && cc_register (SET_DEST (initial_compare), VOIDmode)
+ && GET_CODE (SET_SRC (initial_compare)) == COMPARE
+ && CONST_INT_P (XEXP (SET_SRC (initial_compare), 1))
+ && INTVAL (XEXP (SET_SRC (initial_compare), 1)) == 0))
+ return NULL;
+
+ /* Usually this is a LE condition, but it can also just be a GT or an EQ
+ condition (if the value is unsigned or the compiler knows it's not negative). */
+ rtx_insn *loop_jumpover = BB_END (pre_loop_bb);
+ if (!(JUMP_P (loop_jumpover)
+ && GET_CODE (XEXP (PATTERN (loop_jumpover), 1)) == IF_THEN_ELSE
+ && (GET_CODE (XEXP (XEXP (PATTERN (loop_jumpover), 1), 0)) == LE
+ || GET_CODE (XEXP (XEXP (PATTERN (loop_jumpover), 1), 0)) == GT
+ || GET_CODE (XEXP (XEXP (PATTERN (loop_jumpover), 1), 0)) == EQ)))
+ return NULL;
+ }
+
+ /* Everything looks valid. */
+ return vctp_insn;
+}
+
+/* Function to check a loop's structure to see if it is a valid candidate for
+ an MVE Tail Predicated Low-Overhead Loop. Returns the loop's VCTP_INSN if
+ it is valid, or NULL if it isn't. */
+
+static rtx_insn*
+arm_mve_loop_valid_for_dlstp (basic_block body)
+{
+ /* Doloop can only be done "elementwise" with predicated dlstp/letp if it
+ contains a VCTP on the number of elements processed by the loop.
+ Find the VCTP predicate generation inside the loop body BB. */
+ rtx_insn *vctp_insn = arm_mve_get_loop_vctp (body);
+ if (!vctp_insn)
+ return NULL;
+
+ /* There are only two types of loops that can be turned into dlstp/letp
+ loops:
+ A) Loops of the form:
+ while (num_of_elem > 0)
+ {
+ p = vctp<size> (num_of_elem)
+ num_of_elem -= num_of_lanes;
+ }
+ B) Loops of the form:
+ int num_of_iters = (num_of_elem + num_of_lanes - 1) / num_of_lanes
+ for (i = 0; i < num_of_iters; i++)
+ {
+ p = vctp<size> (num_of_elem)
+ num_of_elem -= num_of_lanes;
+ }
+
+ Then, depending on the type of loop above, we will need to do
+ different sets of checks. */
+ iv_analysis_loop_init (body->loop_father);
+
+ /* In order to find out if the loop is of type A or B above look for the
+ loop counter: it will either be incrementing by one per iteration or
+ it will be decrementing by num_of_lanes. We can find the loop counter
+ in the condition at the end of the loop. */
+ rtx_insn *loop_cond = prev_nonnote_nondebug_insn_bb (BB_END (body));
+ if (!(cc_register (XEXP (PATTERN (loop_cond), 0), VOIDmode)
+ && GET_CODE (XEXP (PATTERN (loop_cond), 1)) == COMPARE))
+ return NULL;
+
+ /* The operands in the condition: Try to identify which one is the
+ constant and which is the counter and run IV analysis on the latter. */
+ rtx cond_arg_1 = XEXP (XEXP (PATTERN (loop_cond), 1), 0);
+ rtx cond_arg_2 = XEXP (XEXP (PATTERN (loop_cond), 1), 1);
+
+ rtx loop_cond_constant;
+ rtx loop_counter;
+ class rtx_iv cond_counter_iv, cond_temp_iv;
+
+ if (CONST_INT_P (cond_arg_1))
+ {
+ /* cond_arg_1 is the constant and cond_arg_2 is the counter. */
+ loop_cond_constant = cond_arg_1;
+ loop_counter = cond_arg_2;
+ iv_analyze (loop_cond, as_a<scalar_int_mode> (GET_MODE (cond_arg_2)),
+ cond_arg_2, &cond_counter_iv);
+ }
+ else if (CONST_INT_P (cond_arg_2))
+ {
+ /* cond_arg_2 is the constant and cond_arg_1 is the counter. */
+ loop_cond_constant = cond_arg_2;
+ loop_counter = cond_arg_1;
+ iv_analyze (loop_cond, as_a<scalar_int_mode> (GET_MODE (cond_arg_1)),
+ cond_arg_1, &cond_counter_iv);
+ }
+ else if (REG_P (cond_arg_1) && REG_P (cond_arg_2))
+ {
+ /* If both operands to the compare are REGs, we can safely
+ run IV analysis on both and then determine which is the
+ constant by looking at the step.
+ First assume cond_arg_1 is the counter. */
+ loop_counter = cond_arg_1;
+ loop_cond_constant = cond_arg_2;
+ iv_analyze (loop_cond, as_a<scalar_int_mode> (GET_MODE (cond_arg_1)),
+ cond_arg_1, &cond_counter_iv);
+ iv_analyze (loop_cond, as_a<scalar_int_mode> (GET_MODE (cond_arg_2)),
+ cond_arg_2, &cond_temp_iv);
+
+ /* Look at the steps and swap around the rtx's if needed. Error out if
+ one of them cannot be identified as constant. */
+ if (!CONST_INT_P (cond_counter_iv.step) || !CONST_INT_P (cond_temp_iv.step))
+ return NULL;
+ if (INTVAL (cond_counter_iv.step) != 0 && INTVAL (cond_temp_iv.step) != 0)
+ return NULL;
+ if (INTVAL (cond_counter_iv.step) == 0 && INTVAL (cond_temp_iv.step) != 0)
+ {
+ loop_counter = cond_arg_2;
+ loop_cond_constant = cond_arg_1;
+ cond_counter_iv = cond_temp_iv;
+ }
+ }
+ else
+ return NULL;
+
+ if (!REG_P (loop_counter))
+ return NULL;
+ if (!(REG_P (loop_cond_constant) || CONST_INT_P (loop_cond_constant)))
+ return NULL;
+
+ /* Now we have extracted the IV step of the loop counter, call the
+ appropriate checking function. */
+ if (INTVAL (cond_counter_iv.step) > 0)
+ return arm_mve_dlstp_check_inc_counter (body, vctp_insn,
+ loop_cond_constant, loop_counter);
+ else if (INTVAL (cond_counter_iv.step) < 0)
+ return arm_mve_dlstp_check_dec_counter (body, vctp_insn,
+ loop_cond_constant, loop_counter);
+ else
+ return NULL;
+}
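The two loop shapes that `arm_mve_loop_valid_for_dlstp` accepts can be sketched in plain C. In this sketch `model_vctp` stands in for what vctp<size> computes, the number of active lanes for the remaining elements (the real intrinsic produces a predicate mask); the names are ours, and this is only an illustration of the control flow, not the generated code.

```c
/* Model of vctp<size>: number of active lanes for REMAINING elements.  */
static int
model_vctp (int remaining, int lanes)
{
  return remaining < lanes ? remaining : lanes;
}

/* Form A: a decrementing counter; the same reg feeds the vctp.
   Returns the total number of elements processed.  */
static int
loop_form_a (int num_of_elem, int num_of_lanes)
{
  int processed = 0;
  while (num_of_elem > 0)
    {
      processed += model_vctp (num_of_elem, num_of_lanes);
      num_of_elem -= num_of_lanes;
    }
  return processed;
}

/* Form B: a counter incrementing from 0 up to a precomputed iteration
   count, while the vctp reg decrements independently.  */
static int
loop_form_b (int num_of_elem, int num_of_lanes)
{
  int num_of_iters = (num_of_elem + num_of_lanes - 1) / num_of_lanes;
  int processed = 0;
  for (int i = 0; i < num_of_iters; i++)
    {
      processed += model_vctp (num_of_elem, num_of_lanes);
      num_of_elem -= num_of_lanes;
    }
  return processed;
}
```

Both forms execute the same number of iterations and cover every element, which is why either can become a dlstp/letp loop.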
+
+/* Predict whether the given loop in gimple will be transformed in the RTL
+ doloop_optimize pass. It could be argued that turning large enough loops
+ into low-overhead loops would not show a significant performance boost.
+ However, in the case of tail predication we would still avoid using VPT/VPST
+ instructions inside the loop, and in either case using low-overhead loops
+ would not be detrimental, so we decided to not consider size, avoiding the
+ need of a heuristic to determine what an appropriate size boundary is. */
+
+static bool
+arm_predict_doloop_p (struct loop *loop)
+{
+ gcc_assert (loop);
+ /* On arm, targetm.can_use_doloop_p is actually
+ can_use_doloop_if_innermost. Ensure the loop is innermost,
+ it is valid and as per arm_target_bb_ok_for_lob and the
+ correct architecture flags are enabled. */
+ if (!(TARGET_HAVE_LOB && optimize > 0))
+ {
+ if (dump_file && (dump_flags & TDF_DETAILS))
+ fprintf (dump_file, "Predict doloop failure due to"
+ " target architecture or optimisation flags.\n");
+ return false;
+ }
+ else if (loop->inner != NULL)
+ {
+ if (dump_file && (dump_flags & TDF_DETAILS))
+ fprintf (dump_file, "Predict doloop failure due to"
+ " loop nesting.\n");
+ return false;
+ }
+ else if (!arm_target_bb_ok_for_lob (loop->header->next_bb))
+ {
+ if (dump_file && (dump_flags & TDF_DETAILS))
+ fprintf (dump_file, "Predict doloop failure due to"
+ " loop bb complexity.\n");
+ return false;
+ }
+
+ return true;
+}
+
+/* Implement targetm.loop_unroll_adjust. Use this to block unrolling of loops
+ that may later be turned into MVE Tail Predicated Low Overhead Loops. The
+ performance benefit of an MVE LoL is likely to be much higher than that of
+ the unrolling. */
+
+unsigned
+arm_loop_unroll_adjust (unsigned nunroll, struct loop *loop)
+{
+ if (TARGET_HAVE_MVE
+ && arm_target_bb_ok_for_lob (loop->latch)
+ && arm_mve_loop_valid_for_dlstp (loop->header))
+ return 0;
+ else
+ return nunroll;
+}
+
+/* Function to handle emitting a VPT-unpredicated version of a VPT-predicated
+ insn to a sequence. */
+
+static bool
+arm_emit_mve_unpredicated_insn_to_seq (rtx_insn* insn)
+{
+ rtx insn_vpr_reg_operand = arm_get_required_vpr_reg_param (insn);
+ int new_icode = get_attr_mve_unpredicated_insn (insn);
+ if (!in_sequence_p ()
+ || !MVE_VPT_PREDICATED_INSN_P (insn)
+ || (!insn_vpr_reg_operand)
+ || (!new_icode))
+ return false;
+
+ extract_insn (insn);
+ rtx arr[8];
+ int j = 0;
+
+ /* When transforming a VPT-predicated instruction
+ into its unpredicated equivalent we need to drop
+ the VPR operand and we may also need to drop a
+ merge "vuninit" input operand, depending on the
+ instruction pattern. Here ensure that we have at
+ most a two-operand difference between the two
+ instructions. */
+ int n_operands_diff
+ = recog_data.n_operands - insn_data[new_icode].n_operands;
+ if (!(n_operands_diff > 0 && n_operands_diff <= 2))
+ return false;
+
+ /* Then, loop through the operands of the predicated
+ instruction, and retain the ones that map to the
+ unpredicated instruction. */
+ for (int i = 0; i < recog_data.n_operands; i++)
+ {
+ /* Ignore the VPR and, if needed, the vuninit
+ operand. */
+ if (insn_vpr_reg_operand == recog_data.operand[i]
+ || (n_operands_diff == 2
+ && !strcmp (recog_data.constraints[i], "0")))
+ continue;
+ else
+ {
+ arr[j] = recog_data.operand[i];
+ j++;
+ }
+ }
+
+ /* Finally, emit the unpredicated instruction. */
+ rtx_insn *new_insn;
+ switch (j)
+ {
+ case 1:
+ new_insn = emit_insn (GEN_FCN (new_icode) (arr[0]));
+ break;
+ case 2:
+ new_insn = emit_insn (GEN_FCN (new_icode) (arr[0], arr[1]));
+ break;
+ case 3:
+ new_insn = emit_insn (GEN_FCN (new_icode) (arr[0], arr[1], arr[2]));
+ break;
+ case 4:
+ new_insn = emit_insn (GEN_FCN (new_icode) (arr[0], arr[1], arr[2],
+ arr[3]));
+ break;
+ case 5:
+ new_insn = emit_insn (GEN_FCN (new_icode) (arr[0], arr[1], arr[2],
+ arr[3], arr[4]));
+ break;
+ case 6:
+ new_insn = emit_insn (GEN_FCN (new_icode) (arr[0], arr[1], arr[2],
+ arr[3], arr[4], arr[5]));
+ break;
+ case 7:
+ new_insn = emit_insn (GEN_FCN (new_icode) (arr[0], arr[1], arr[2],
+ arr[3], arr[4], arr[5],
+ arr[6]));
+ break;
+ default:
+ gcc_unreachable ();
+ }
+ INSN_LOCATION (new_insn) = INSN_LOCATION (insn);
+ return true;
+}
+
+/* When a vctp insn is used, its output is often followed by
+ a zero-extend insn to SImode, which is then SUBREG'd into a
+ vector form of mode VALID_MVE_PRED_MODE: this vector form is
+ what is then used as an input to the instructions within the
+ loop. Hence, store that vector form of the VPR reg into
+ vctp_vpr_generated, so that we can match it with instructions
+ in the loop to determine if they are predicated on this same
+ VPR. If there is no zero-extend and subreg or it is otherwise
+ invalid, then return NULL to cancel the dlstp transform. */
+
+static rtx
+arm_mve_get_vctp_vec_form (rtx_insn *insn)
+{
+ rtx vctp_vpr_generated = NULL_RTX;
+ rtx_insn *next_use1 = NULL;
+ df_ref use;
+ for (use
+ = DF_REG_USE_CHAIN
+ (DF_REF_REGNO (DF_INSN_INFO_DEFS (DF_INSN_INFO_GET (insn))));
+ use; use = DF_REF_NEXT_REG (use))
+ if (!next_use1 && NONDEBUG_INSN_P (DF_REF_INSN (use)))
+ next_use1 = DF_REF_INSN (use);
+
+ rtx next_use1_set = single_set (next_use1);
+ if (next_use1_set
+ && GET_CODE (SET_SRC (next_use1_set)) == ZERO_EXTEND)
+ {
+ rtx_insn *next_use2 = NULL;
+ for (use
+ = DF_REG_USE_CHAIN
+ (DF_REF_REGNO
+ (DF_INSN_INFO_DEFS (DF_INSN_INFO_GET (next_use1))));
+ use; use = DF_REF_NEXT_REG (use))
+ if (!next_use2 && NONDEBUG_INSN_P (DF_REF_INSN (use)))
+ next_use2 = DF_REF_INSN (use);
+
+ rtx next_use2_set = single_set (next_use2);
+ if (next_use2_set
+ && GET_CODE (SET_SRC (next_use2_set)) == SUBREG)
+ vctp_vpr_generated = SET_DEST (next_use2_set);
+ }
+
+ if (!vctp_vpr_generated || !REG_P (vctp_vpr_generated)
+ || !VALID_MVE_PRED_MODE (GET_MODE (vctp_vpr_generated)))
+ return NULL_RTX;
+
+ return vctp_vpr_generated;
+}
+
+/* Attempt to transform the loop contents of loop basic block from VPT
+ predicated insns into unpredicated insns for a dlstp/letp loop. Returns
+ rtx constant value to decrement from the total number of elements. Return
+ (const_int 1) if we can't use tail predication and fallback to scalar
+ low-overhead loops. */
+
+rtx
+arm_attempt_dlstp_transform (rtx label)
+{
+ basic_block body = BLOCK_FOR_INSN (label)->prev_bb;
+
+ /* Ensure that the bb is within a loop that has all required metadata. */
+ if (!body->loop_father || !body->loop_father->header
+ || !body->loop_father->simple_loop_desc)
+ return const1_rtx;
+
+ rtx_insn *vctp_insn = arm_mve_loop_valid_for_dlstp (body);
+ if (!vctp_insn)
+ return const1_rtx;
+
+ gcc_assert (single_set (vctp_insn));
+
+ rtx vctp_vpr_generated = arm_mve_get_vctp_vec_form (vctp_insn);
+ if (!vctp_vpr_generated)
+ return const1_rtx;
+
+ /* decrementnum is already known to be valid at this point. */
+ int decrementnum = arm_mve_get_vctp_lanes (vctp_insn);
+
+ rtx_insn *insn = 0;
+ rtx_insn *cur_insn = 0;
+ rtx_insn *seq;
+ hash_map <rtx_insn *, bool> *safe_insn_map
+ = new hash_map <rtx_insn *, bool>;
+
+ /* Scan through the insns in the loop bb and emit the transformed bb
+ insns to a sequence. */
+ start_sequence ();
+ FOR_BB_INSNS (body, insn)
+ {
+ if (GET_CODE (insn) == CODE_LABEL || NOTE_INSN_BASIC_BLOCK_P (insn))
+ continue;
+ else if (NOTE_P (insn))
+ emit_note ((enum insn_note)NOTE_KIND (insn));
+ else if (DEBUG_INSN_P (insn))
+ emit_debug_insn (PATTERN (insn));
+ else if (!INSN_P (insn))
+ {
+ end_sequence ();
+ return const1_rtx;
+ }
+ /* When we find the vctp instruction: continue. */
+ else if (insn == vctp_insn)
+ continue;
+ /* If the insn pattern requires the use of the VPR value from the
+ vctp as an input parameter for predication. */
+ else if (arm_mve_vec_insn_is_predicated_with_this_predicate
+ (insn, vctp_vpr_generated))
+ {
+ bool success = arm_emit_mve_unpredicated_insn_to_seq (insn);
+ if (!success)
+ {
+ end_sequence ();
+ return const1_rtx;
+ }
+ }
+ /* If the insn isn't VPT predicated on vctp_vpr_generated, we need to
+ make sure that it is still valid within the dlstp/letp loop. */
+ else
+ {
+ /* If this instruction USE-s the vctp_vpr_generated other than for
+ predication, this blocks the transformation as we are not allowed
+ to optimise the VPR value away. */
+ df_ref insn_uses = NULL;
+ FOR_EACH_INSN_USE (insn_uses, insn)
+ {
+ if (rtx_equal_p (vctp_vpr_generated, DF_REF_REG (insn_uses)))
+ {
+ end_sequence ();
+ return const1_rtx;
+ }
+ }
+ /* If within the loop we have an MVE vector instruction that is
+ unpredicated, the dlstp/letp looping will add implicit
+ predication to it. This will result in a change in behaviour
+ of the instruction, so we need to find out if any instructions
+ that feed into the current instruction were implicitly
+ predicated. */
+ if (arm_mve_check_df_chain_back_for_implic_predic
+ (safe_insn_map, insn, vctp_vpr_generated))
+ {
+ if (arm_mve_check_df_chain_fwd_for_implic_predic_impact
+ (insn, vctp_vpr_generated))
+ {
+ end_sequence ();
+ return const1_rtx;
+ }
+ }
+ emit_insn (PATTERN (insn));
+ }
+ }
+ seq = get_insns ();
+ end_sequence ();
+
+ /* Re-write the entire BB contents with the transformed
+ sequence. */
+ FOR_BB_INSNS_SAFE (body, insn, cur_insn)
+ if (!(GET_CODE (insn) == CODE_LABEL || NOTE_INSN_BASIC_BLOCK_P (insn)))
+ delete_insn (insn);
+
+ emit_insn_after (seq, BB_END (body));
+
+ /* The transformation has succeeded, so now modify the "count"
+ (a.k.a. niter_expr) for the middle-end. Also set noloop_assumptions
+ to NULL to stop the middle-end from making assumptions about the
+ number of iterations. */
+ simple_loop_desc (body->loop_father)->niter_expr
+ = XVECEXP (SET_SRC (PATTERN (vctp_insn)), 0, 0);
+ simple_loop_desc (body->loop_father)->noloop_assumptions = NULL_RTX;
+ return GEN_INT (decrementnum);
}
#if CHECKING_P
diff --git a/gcc/config/arm/iterators.md b/gcc/config/arm/iterators.md
index 5ea2d9e866891bdb3dc73fcf6cbd6cdd2f989951..9398702cddd076a7eacf1ca6eac6c5a1fbd9a3d0 100644
--- a/gcc/config/arm/iterators.md
+++ b/gcc/config/arm/iterators.md
@@ -2673,6 +2673,17 @@ (define_int_iterator MRRCI [VUNSPEC_MRRC VUNSPEC_MRRC2])
(define_int_attr mrrc [(VUNSPEC_MRRC "mrrc") (VUNSPEC_MRRC2 "mrrc2")])
(define_int_attr MRRC [(VUNSPEC_MRRC "MRRC") (VUNSPEC_MRRC2 "MRRC2")])
+(define_int_attr dlstp_elemsize [(DLSTP8 "8") (DLSTP16 "16") (DLSTP32 "32")
+ (DLSTP64 "64")])
+
+(define_int_attr letp_num_lanes [(LETP8 "16") (LETP16 "8") (LETP32 "4")
+ (LETP64 "2")])
+(define_int_attr letp_num_lanes_neg [(LETP8 "-16") (LETP16 "-8") (LETP32 "-4")
+ (LETP64 "-2")])
+
+(define_int_attr letp_num_lanes_minus_1 [(LETP8 "15") (LETP16 "7") (LETP32 "3")
+ (LETP64 "1")])
+
(define_int_attr opsuffix [(UNSPEC_DOT_S "s8")
(UNSPEC_DOT_U "u8")
(UNSPEC_DOT_US "s8")
@@ -2916,6 +2927,10 @@ (define_int_iterator SQRSHRLQ [SQRSHRL_64 SQRSHRL_48])
(define_int_iterator VSHLCQ_M [VSHLCQ_M_S VSHLCQ_M_U])
(define_int_iterator VQSHLUQ_M_N [VQSHLUQ_M_N_S])
(define_int_iterator VQSHLUQ_N [VQSHLUQ_N_S])
+(define_int_iterator DLSTP [DLSTP8 DLSTP16 DLSTP32
+ DLSTP64])
+(define_int_iterator LETP [LETP8 LETP16 LETP32
+ LETP64])
;; Define iterators for VCMLA operations
(define_int_iterator VCMLA_OP [UNSPEC_VCMLA
diff --git a/gcc/config/arm/mve.md b/gcc/config/arm/mve.md
index b1862d7977e91605cd971e634105bed3fa6e75cb..5748e2333eb3a88d659892f2bcc72849bcf388b5 100644
--- a/gcc/config/arm/mve.md
+++ b/gcc/config/arm/mve.md
@@ -6918,3 +6918,41 @@ (define_expand "@arm_mve_reinterpret<mode>"
}
}
)
+
+;; Originally expanded by 'predicated_doloop_end'.
+;; In the rare situation where the branch is too far, we also need to
+;; revert FPSCR.LTPSIZE back to 0x100 after the last iteration.
+(define_insn "predicated_doloop_end_internal<letp_num_lanes>"
+ [(set (pc)
+ (if_then_else
+ (gtu (unspec:SI [(plus:SI (match_operand:SI 0 "s_register_operand" "=r")
+ (const_int <letp_num_lanes_neg>))]
+ LETP)
+ (const_int <letp_num_lanes_minus_1>))
+ (match_operand 1 "" "")
+ (pc)))
+ (set (match_dup 0)
+ (plus:SI (match_dup 0) (const_int <letp_num_lanes_neg>)))
+ (clobber (reg:CC CC_REGNUM))]
+ "TARGET_HAVE_MVE"
+ {
+ if (get_attr_length (insn) == 4)
+ return "letp\t%|lr, %l1";
+ else
+ return "subs\t%|lr, #<letp_num_lanes>\n\tbhi\t%l1\n\tlctp";
+ }
+ [(set (attr "length")
+ (if_then_else
+ (ltu (minus (pc) (match_dup 1)) (const_int 1024))
+ (const_int 4)
+ (const_int 6)))
+ (set_attr "type" "branch")])
+
+(define_insn "dlstp<dlstp_elemsize>_insn"
+ [
+ (set (reg:SI LR_REGNUM)
+ (unspec:SI [(match_operand:SI 0 "s_register_operand" "r")]
+ DLSTP))
+ ]
+ "TARGET_HAVE_MVE"
+ "dlstp.<dlstp_elemsize>\t%|lr, %0")
diff --git a/gcc/config/arm/thumb2.md b/gcc/config/arm/thumb2.md
index e1e013befa7a67ddbf517bf22797bdaeeb96b94f..f2801cea36a34d326fd6f3a213e0e149c3e0784f 100644
--- a/gcc/config/arm/thumb2.md
+++ b/gcc/config/arm/thumb2.md
@@ -1613,7 +1613,7 @@ (define_expand "doloop_end"
(use (match_operand 1 "" ""))] ; label
"TARGET_32BIT"
"
- {
+{
/* Currently SMS relies on the do-loop pattern to recognize loops
where (1) the control part consists of all insns defining and/or
using a certain 'count' register and (2) the loop count can be
@@ -1623,41 +1623,77 @@ (define_expand "doloop_end"
Also used to implement the low over head loops feature, which is part of
the Armv8.1-M Mainline Low Overhead Branch (LOB) extension. */
- if (optimize > 0 && (flag_modulo_sched || TARGET_HAVE_LOB))
- {
- rtx s0;
- rtx bcomp;
- rtx loc_ref;
- rtx cc_reg;
- rtx insn;
- rtx cmp;
-
- if (GET_MODE (operands[0]) != SImode)
- FAIL;
-
- s0 = operands [0];
-
- /* Low over head loop instructions require the first operand to be LR. */
- if (TARGET_HAVE_LOB && arm_target_insn_ok_for_lob (operands [1]))
- s0 = gen_rtx_REG (SImode, LR_REGNUM);
-
- if (TARGET_THUMB2)
- insn = emit_insn (gen_thumb2_addsi3_compare0 (s0, s0, GEN_INT (-1)));
- else
- insn = emit_insn (gen_addsi3_compare0 (s0, s0, GEN_INT (-1)));
-
- cmp = XVECEXP (PATTERN (insn), 0, 0);
- cc_reg = SET_DEST (cmp);
- bcomp = gen_rtx_NE (VOIDmode, cc_reg, const0_rtx);
- loc_ref = gen_rtx_LABEL_REF (VOIDmode, operands [1]);
- emit_jump_insn (gen_rtx_SET (pc_rtx,
- gen_rtx_IF_THEN_ELSE (VOIDmode, bcomp,
- loc_ref, pc_rtx)));
- DONE;
- }
- else
- FAIL;
- }")
+ if (optimize > 0 && (flag_modulo_sched || TARGET_HAVE_LOB))
+ {
+ rtx s0;
+ rtx bcomp;
+ rtx loc_ref;
+ rtx cc_reg;
+ rtx insn;
+ rtx cmp;
+ rtx decrement_num;
+
+ if (GET_MODE (operands[0]) != SImode)
+ FAIL;
+
+ s0 = operands[0];
+
+ if (TARGET_HAVE_LOB && arm_target_bb_ok_for_lob (BLOCK_FOR_INSN (operands[1])))
+ {
+ s0 = gen_rtx_REG (SImode, LR_REGNUM);
+
+ /* If we have a compatible MVE target, try and analyse the loop
+ contents to determine if we can use predicated dlstp/letp
+ looping. */
+ if (TARGET_HAVE_MVE
+ && (decrement_num = arm_attempt_dlstp_transform (operands[1]))
+ && (INTVAL (decrement_num) != 1))
+ {
+ loc_ref = gen_rtx_LABEL_REF (VOIDmode, operands[1]);
+ switch (INTVAL (decrement_num))
+ {
+ case 2:
+ insn = emit_jump_insn (gen_predicated_doloop_end_internal2
+ (s0, loc_ref));
+ break;
+ case 4:
+ insn = emit_jump_insn (gen_predicated_doloop_end_internal4
+ (s0, loc_ref));
+ break;
+ case 8:
+ insn = emit_jump_insn (gen_predicated_doloop_end_internal8
+ (s0, loc_ref));
+ break;
+ case 16:
+ insn = emit_jump_insn (gen_predicated_doloop_end_internal16
+ (s0, loc_ref));
+ break;
+ default:
+ gcc_unreachable ();
+ }
+ DONE;
+ }
+ }
+
+ /* Otherwise, try standard decrement-by-one dls/le looping. */
+ if (TARGET_THUMB2)
+ insn = emit_insn (gen_thumb2_addsi3_compare0 (s0, s0,
+ GEN_INT (-1)));
+ else
+ insn = emit_insn (gen_addsi3_compare0 (s0, s0, GEN_INT (-1)));
+
+ cmp = XVECEXP (PATTERN (insn), 0, 0);
+ cc_reg = SET_DEST (cmp);
+ bcomp = gen_rtx_NE (VOIDmode, cc_reg, const0_rtx);
+ loc_ref = gen_rtx_LABEL_REF (VOIDmode, operands[1]);
+ emit_jump_insn (gen_rtx_SET (pc_rtx,
+ gen_rtx_IF_THEN_ELSE (VOIDmode, bcomp,
+ loc_ref, pc_rtx)));
+ DONE;
+ }
+ else
+ FAIL;
+}")
(define_insn "*clear_apsr"
[(unspec_volatile:SI [(const_int 0)] VUNSPEC_CLRM_APSR)
@@ -1755,7 +1791,37 @@ (define_expand "doloop_begin"
{
if (REGNO (operands[0]) == LR_REGNUM)
{
- emit_insn (gen_dls_insn (operands[0]));
+ /* Pick out the number by which we are decrementing the loop counter
+ in every iteration. If it's > 1, then use dlstp. */
+ int const_int_dec_num
+ = abs (INTVAL (XEXP (XEXP (XVECEXP (PATTERN (operands[1]), 0, 1),
+ 1),
+ 1)));
+ switch (const_int_dec_num)
+ {
+ case 16:
+ emit_insn (gen_dlstp8_insn (operands[0]));
+ break;
+
+ case 8:
+ emit_insn (gen_dlstp16_insn (operands[0]));
+ break;
+
+ case 4:
+ emit_insn (gen_dlstp32_insn (operands[0]));
+ break;
+
+ case 2:
+ emit_insn (gen_dlstp64_insn (operands[0]));
+ break;
+
+ case 1:
+ emit_insn (gen_dls_insn (operands[0]));
+ break;
+
+ default:
+ gcc_unreachable ();
+ }
DONE;
}
else
diff --git a/gcc/config/arm/unspecs.md b/gcc/config/arm/unspecs.md
index 4713ec840abae48ca70f418dbc0d4028ad4ad527..2d6f27c14f4a1e7db05b9684a8958a76a1c79ef2 100644
--- a/gcc/config/arm/unspecs.md
+++ b/gcc/config/arm/unspecs.md
@@ -583,6 +583,14 @@ (define_c_enum "unspec" [
VADDLVQ_U
VCTP
VCTP_M
+ DLSTP8
+ DLSTP16
+ DLSTP32
+ DLSTP64
+ LETP8
+ LETP16
+ LETP32
+ LETP64
VPNOT
VCREATEQ_F
VCVTQ_N_TO_F_S
diff --git a/gcc/df-core.cc b/gcc/df-core.cc
index d4812b04a7cb97ea1606082e26e910472da5bcc1..4fcc14bf790d43e792b3c926fe1f80073d908c17 100644
--- a/gcc/df-core.cc
+++ b/gcc/df-core.cc
@@ -1964,6 +1964,21 @@ df_bb_regno_last_def_find (basic_block bb, unsigned int regno)
return NULL;
}
+/* Return the one and only def of REGNO within BB. If there is no def or
+ there are multiple defs, return NULL. */
+
+df_ref
+df_bb_regno_only_def_find (basic_block bb, unsigned int regno)
+{
+ df_ref temp = df_bb_regno_first_def_find (bb, regno);
+ if (!temp)
+ return NULL;
+ else if (temp == df_bb_regno_last_def_find (bb, regno))
+ return temp;
+ else
+ return NULL;
+}
+
/* Finds the reference corresponding to the definition of REG in INSN.
DF is the dataflow object. */
diff --git a/gcc/df.h b/gcc/df.h
index 402657a7076f1bcad24e9c50682e033e57f432f9..98623637f9c839c799222e99df2a7173a770b2ac 100644
--- a/gcc/df.h
+++ b/gcc/df.h
@@ -987,6 +987,7 @@ extern void df_check_cfg_clean (void);
#endif
extern df_ref df_bb_regno_first_def_find (basic_block, unsigned int);
extern df_ref df_bb_regno_last_def_find (basic_block, unsigned int);
+extern df_ref df_bb_regno_only_def_find (basic_block, unsigned int);
extern df_ref df_find_def (rtx_insn *, rtx);
extern bool df_reg_defined (rtx_insn *, rtx);
extern df_ref df_find_use (rtx_insn *, rtx);
diff --git a/gcc/loop-doloop.cc b/gcc/loop-doloop.cc
index 4feb0a25ab9331b7124df900f73c9fc6fb3eb10b..d919207505c472c8a54a2c9c982a09061584177b 100644
--- a/gcc/loop-doloop.cc
+++ b/gcc/loop-doloop.cc
@@ -85,10 +85,10 @@ doloop_condition_get (rtx_insn *doloop_pat)
forms:
1) (parallel [(set (pc) (if_then_else (condition)
- (label_ref (label))
- (pc)))
- (set (reg) (plus (reg) (const_int -1)))
- (additional clobbers and uses)])
+ (label_ref (label))
+ (pc)))
+ (set (reg) (plus (reg) (const_int -1)))
+ (additional clobbers and uses)])
The branch must be the first entry of the parallel (also required
by jump.cc), and the second entry of the parallel must be a set of
@@ -96,19 +96,34 @@ doloop_condition_get (rtx_insn *doloop_pat)
the loop counter in an if_then_else too.
2) (set (reg) (plus (reg) (const_int -1))
- (set (pc) (if_then_else (reg != 0)
- (label_ref (label))
- (pc))).
+ (set (pc) (if_then_else (reg != 0)
+ (label_ref (label))
+ (pc))).
Some targets (ARM) do the comparison before the branch, as in the
following form:
- 3) (parallel [(set (cc) (compare ((plus (reg) (const_int -1), 0)))
- (set (reg) (plus (reg) (const_int -1)))])
- (set (pc) (if_then_else (cc == NE)
- (label_ref (label))
- (pc))) */
-
+ 3) (parallel [(set (cc) (compare (plus (reg) (const_int -1)) 0))
+ (set (reg) (plus (reg) (const_int -1)))])
+ (set (pc) (if_then_else (cc == NE)
+ (label_ref (label))
+ (pc)))
+
+   The ARM target also supports a special case of a counter that decrements
+   by `n`, terminating in a GTU condition.  In that case, the compare and
+   branch are all part of one insn, containing an UNSPEC:
+
+   4) (parallel [
+       (set (pc)
+	    (if_then_else (gtu (unspec:SI [(plus:SI (reg:SI 14 lr)
+						    (const_int -n))])
+			       (const_int n-1))
+	      (label_ref)
+	      (pc)))
+       (set (reg:SI 14 lr)
+	    (plus:SI (reg:SI 14 lr)
+		     (const_int -n)))
+   ])
+   */
pattern = PATTERN (doloop_pat);
if (GET_CODE (pattern) != PARALLEL)
@@ -143,7 +158,7 @@ doloop_condition_get (rtx_insn *doloop_pat)
|| GET_CODE (cmp_arg1) != PLUS)
return 0;
reg_orig = XEXP (cmp_arg1, 0);
- if (XEXP (cmp_arg1, 1) != GEN_INT (-1)
+ if (XEXP (cmp_arg1, 1) != GEN_INT (-1)
|| !REG_P (reg_orig))
return 0;
cc_reg = SET_DEST (cmp_orig);
@@ -173,15 +188,17 @@ doloop_condition_get (rtx_insn *doloop_pat)
if (! REG_P (reg))
return 0;
- /* Check if something = (plus (reg) (const_int -1)).
+ /* Check if something = (plus (reg) (const_int -n)).
On IA-64, this decrement is wrapped in an if_then_else. */
inc_src = SET_SRC (inc);
if (GET_CODE (inc_src) == IF_THEN_ELSE)
inc_src = XEXP (inc_src, 1);
if (GET_CODE (inc_src) != PLUS
|| XEXP (inc_src, 0) != reg
- || XEXP (inc_src, 1) != constm1_rtx)
+ || !CONST_INT_P (XEXP (inc_src, 1))
+ || INTVAL (XEXP (inc_src, 1)) >= 0)
return 0;
+ int dec_num = abs (INTVAL (XEXP (inc_src, 1)));
/* Check for (set (pc) (if_then_else (condition)
(label_ref (label))
@@ -196,60 +213,71 @@ doloop_condition_get (rtx_insn *doloop_pat)
/* Extract loop termination condition. */
condition = XEXP (SET_SRC (cmp), 0);
- /* We expect a GE or NE comparison with 0 or 1. */
- if ((GET_CODE (condition) != GE
- && GET_CODE (condition) != NE)
- || (XEXP (condition, 1) != const0_rtx
- && XEXP (condition, 1) != const1_rtx))
+ /* We expect a GE or NE comparison with 0 or 1, or a GTU comparison with
+ dec_num - 1. */
+ if (!((GET_CODE (condition) == GE
+ || GET_CODE (condition) == NE)
+ && (XEXP (condition, 1) == const0_rtx
+ || XEXP (condition, 1) == const1_rtx))
+ && !(GET_CODE (condition) == GTU
+ && ((INTVAL (XEXP (condition, 1))) == (dec_num - 1))))
return 0;
- if ((XEXP (condition, 0) == reg)
+ /* For the ARM special case of having a GTU: re-form the condition without
+ the unspec for the benefit of the middle-end. */
+ if (GET_CODE (condition) == GTU)
+ {
+ condition = gen_rtx_fmt_ee (GTU, VOIDmode, inc_src,
+ GEN_INT (dec_num - 1));
+ return condition;
+ }
+ else if ((XEXP (condition, 0) == reg)
/* For the third case: */
|| ((cc_reg != NULL_RTX)
&& (XEXP (condition, 0) == cc_reg)
&& (reg_orig == reg))
|| (GET_CODE (XEXP (condition, 0)) == PLUS
&& XEXP (XEXP (condition, 0), 0) == reg))
- {
+ {
if (GET_CODE (pattern) != PARALLEL)
/* For the second form we expect:
- (set (reg) (plus (reg) (const_int -1))
- (set (pc) (if_then_else (reg != 0)
- (label_ref (label))
- (pc))).
+ (set (reg) (plus (reg) (const_int -1))
+ (set (pc) (if_then_else (reg != 0)
+ (label_ref (label))
+ (pc))).
- is equivalent to the following:
+ is equivalent to the following:
- (parallel [(set (pc) (if_then_else (reg != 1)
- (label_ref (label))
- (pc)))
- (set (reg) (plus (reg) (const_int -1)))
- (additional clobbers and uses)])
+ (parallel [(set (pc) (if_then_else (reg != 1)
+ (label_ref (label))
+ (pc)))
+ (set (reg) (plus (reg) (const_int -1)))
+ (additional clobbers and uses)])
- For the third form we expect:
+ For the third form we expect:
- (parallel [(set (cc) (compare ((plus (reg) (const_int -1)), 0))
- (set (reg) (plus (reg) (const_int -1)))])
- (set (pc) (if_then_else (cc == NE)
- (label_ref (label))
- (pc)))
+ (parallel [(set (cc) (compare ((plus (reg) (const_int -1)), 0))
+ (set (reg) (plus (reg) (const_int -1)))])
+ (set (pc) (if_then_else (cc == NE)
+ (label_ref (label))
+ (pc)))
- which is equivalent to the following:
+ which is equivalent to the following:
- (parallel [(set (cc) (compare (reg, 1))
- (set (reg) (plus (reg) (const_int -1)))
- (set (pc) (if_then_else (NE == cc)
- (label_ref (label))
- (pc))))])
+ (parallel [(set (cc) (compare (reg, 1))
+ (set (reg) (plus (reg) (const_int -1)))
+ (set (pc) (if_then_else (NE == cc)
+ (label_ref (label))
+ (pc))))])
- So we return the second form instead for the two cases.
+ So we return the second form instead for the two cases.
*/
- condition = gen_rtx_fmt_ee (NE, VOIDmode, inc_src, const1_rtx);
+ condition = gen_rtx_fmt_ee (NE, VOIDmode, inc_src, const1_rtx);
return condition;
- }
+ }
/* ??? If a machine uses a funny comparison, we could return a
canonicalized form here. */
@@ -507,6 +535,11 @@ doloop_modify (class loop *loop, class niter_desc *desc,
nonneg = 1;
break;
+ case GTU:
+ /* The iteration count does not need incrementing for a GTU test. */
+ increment_count = false;
+ break;
+
/* Abort if an invalid doloop pattern has been generated. */
default:
gcc_unreachable ();
@@ -529,6 +562,10 @@ doloop_modify (class loop *loop, class niter_desc *desc,
if (desc->noloop_assumptions)
{
+ /* The GTU case has only been implemented for the ARM target, where
+ noloop_assumptions gets explicitly set to NULL for that case, so
+ assert here for safety. */
+ gcc_assert (GET_CODE (condition) != GTU);
rtx ass = copy_rtx (desc->noloop_assumptions);
basic_block preheader = loop_preheader_edge (loop)->src;
basic_block set_zero = split_edge (loop_preheader_edge (loop));
@@ -642,7 +679,7 @@ doloop_optimize (class loop *loop)
{
scalar_int_mode mode;
rtx doloop_reg;
- rtx count;
+ rtx count = NULL_RTX;
widest_int iterations, iterations_max;
rtx_code_label *start_label;
rtx condition;
@@ -685,17 +722,6 @@ doloop_optimize (class loop *loop)
return false;
}
- max_cost
- = COSTS_N_INSNS (param_max_iterations_computation_cost);
- if (set_src_cost (desc->niter_expr, mode, optimize_loop_for_speed_p (loop))
- > max_cost)
- {
- if (dump_file)
- fprintf (dump_file,
- "Doloop: number of iterations too costly to compute.\n");
- return false;
- }
-
if (desc->const_iter)
iterations = widest_int::from (rtx_mode_t (desc->niter_expr, mode),
UNSIGNED);
@@ -716,12 +742,25 @@ doloop_optimize (class loop *loop)
/* Generate looping insn. If the pattern FAILs then give up trying
to modify the loop since there is some aspect the back-end does
- not like. */
- count = copy_rtx (desc->niter_expr);
+ not like. If this succeeds, there is a chance that the loop
+ desc->niter_expr has been altered by the backend, so only extract
+ that data after the gen_doloop_end. */
start_label = block_label (desc->in_edge->dest);
doloop_reg = gen_reg_rtx (mode);
rtx_insn *doloop_seq = targetm.gen_doloop_end (doloop_reg, start_label);
+ max_cost
+ = COSTS_N_INSNS (param_max_iterations_computation_cost);
+ if (set_src_cost (desc->niter_expr, mode, optimize_loop_for_speed_p (loop))
+ > max_cost)
+ {
+ if (dump_file)
+ fprintf (dump_file,
+ "Doloop: number of iterations too costly to compute.\n");
+ return false;
+ }
+
+ count = copy_rtx (desc->niter_expr);
word_mode_size = GET_MODE_PRECISION (word_mode);
word_mode_max = (HOST_WIDE_INT_1U << (word_mode_size - 1) << 1) - 1;
if (! doloop_seq
diff --git a/gcc/testsuite/gcc.target/arm/lob.h b/gcc/testsuite/gcc.target/arm/lob.h
index feaae7cc89959b3147368980120700bbc3e85ecb..3941fe7a8b620e62a5f742722be1ba2d031f5a8d 100644
--- a/gcc/testsuite/gcc.target/arm/lob.h
+++ b/gcc/testsuite/gcc.target/arm/lob.h
@@ -1,15 +1,131 @@
#include <string.h>
-
+#include <stdint.h>
/* Common code for lob tests. */
#define NO_LOB asm volatile ("@ clobber lr" : : : "lr" )
-#define N 10000
+#define N 100
+
+static void
+reset_data (int *a, int *b, int *c, int x)
+{
+ memset (a, -1, x * sizeof (*a));
+ memset (b, -1, x * sizeof (*b));
+ memset (c, 0, x * sizeof (*c));
+}
+
+static void
+reset_data8 (int8_t *a, int8_t *b, int8_t *c, int x)
+{
+ memset (a, -1, x * sizeof (*a));
+ memset (b, -1, x * sizeof (*b));
+ memset (c, 0, x * sizeof (*c));
+}
+
+static void
+reset_data16 (int16_t *a, int16_t *b, int16_t *c, int x)
+{
+ memset (a, -1, x * sizeof (*a));
+ memset (b, -1, x * sizeof (*b));
+ memset (c, 0, x * sizeof (*c));
+}
+
+static void
+reset_data32 (int32_t *a, int32_t *b, int32_t *c, int x)
+{
+ memset (a, -1, x * sizeof (*a));
+ memset (b, -1, x * sizeof (*b));
+ memset (c, 0, x * sizeof (*c));
+}
+
+static void
+reset_data64 (int64_t *a, int64_t *c, int x)
+{
+ memset (a, -1, x * sizeof (*a));
+ memset (c, 0, x * sizeof (*c));
+}
+
+static void
+check_plus (int *a, int *b, int *c, int x)
+{
+ for (int i = 0; i < N; i++)
+ {
+ NO_LOB;
+ if (i < x)
+ {
+ if (c[i] != (a[i] + b[i])) abort ();
+ }
+ else
+ {
+ if (c[i] != 0) abort ();
+ }
+ }
+}
+
+static void
+check_plus8 (int8_t *a, int8_t *b, int8_t *c, int x)
+{
+ for (int i = 0; i < N; i++)
+ {
+ NO_LOB;
+ if (i < x)
+ {
+ if (c[i] != (a[i] + b[i])) abort ();
+ }
+ else
+ {
+ if (c[i] != 0) abort ();
+ }
+ }
+}
+
+static void
+check_plus16 (int16_t *a, int16_t *b, int16_t *c, int x)
+{
+ for (int i = 0; i < N; i++)
+ {
+ NO_LOB;
+ if (i < x)
+ {
+ if (c[i] != (a[i] + b[i])) abort ();
+ }
+ else
+ {
+ if (c[i] != 0) abort ();
+ }
+ }
+}
+
+static void
+check_plus32 (int32_t *a, int32_t *b, int32_t *c, int x)
+{
+ for (int i = 0; i < N; i++)
+ {
+ NO_LOB;
+ if (i < x)
+ {
+ if (c[i] != (a[i] + b[i])) abort ();
+ }
+ else
+ {
+ if (c[i] != 0) abort ();
+ }
+ }
+}
static void
-reset_data (int *a, int *b, int *c)
+check_memcpy64 (int64_t *a, int64_t *c, int x)
{
- memset (a, -1, N * sizeof (*a));
- memset (b, -1, N * sizeof (*b));
- memset (c, -1, N * sizeof (*c));
+ for (int i = 0; i < N; i++)
+ {
+ NO_LOB;
+ if (i < x)
+ {
+ if (c[i] != a[i]) abort ();
+ }
+ else
+ {
+ if (c[i] != 0) abort ();
+ }
+ }
}
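[Editorial sketch, not part of the patch: the behaviour the `check_plus*` helpers above verify is that lanes active under the vctp predicate receive `a[i] + b[i]` while lanes past the trip count stay zero. A portable-C model of that semantics, assuming a hypothetical 4-lane vector (illustrative only, not actual MVE code):]

```c
#include <assert.h>

/* Illustrative model of the tail-predicated loops the check_plus*
   helpers verify: a lane is active while elements remain (as under a
   vctp predicate), so c[i] = a[i] + b[i] for i < x and the tail
   beyond x is left untouched (zero after reset_data).  */
#define LANES 4
static void
model_predicated_add (const int *a, const int *b, int *c, int x)
{
  for (int i = 0; i < x; i += LANES)
    for (int l = 0; l < LANES && i + l < x; l++)
      c[i + l] = a[i + l] + b[i + l];
}
```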
diff --git a/gcc/testsuite/gcc.target/arm/lob1.c b/gcc/testsuite/gcc.target/arm/lob1.c
index ba5c82cd55c582c96a18ad417a3041e43d843613..c8ce653a5c39fb1ffcf82a6e584d9a0467a130c0 100644
--- a/gcc/testsuite/gcc.target/arm/lob1.c
+++ b/gcc/testsuite/gcc.target/arm/lob1.c
@@ -54,29 +54,18 @@ loop3 (int *a, int *b, int *c)
} while (i < N);
}
-void
-check (int *a, int *b, int *c)
-{
- for (int i = 0; i < N; i++)
- {
- NO_LOB;
- if (c[i] != a[i] + b[i])
- abort ();
- }
-}
-
int
main (void)
{
- reset_data (a, b, c);
+ reset_data (a, b, c, N);
loop1 (a, b ,c);
- check (a, b ,c);
- reset_data (a, b, c);
+ check_plus (a, b, c, N);
+ reset_data (a, b, c, N);
loop2 (a, b ,c);
- check (a, b ,c);
- reset_data (a, b, c);
+ check_plus (a, b, c, N);
+ reset_data (a, b, c, N);
loop3 (a, b ,c);
- check (a, b ,c);
+ check_plus (a, b, c, N);
return 0;
}
diff --git a/gcc/testsuite/gcc.target/arm/lob6.c b/gcc/testsuite/gcc.target/arm/lob6.c
index 17b6124295e8ae9e1cb57e41fa43a954b3390eec..4fe116e2c2be3748d1bb6da7bb9092db8f962abc 100644
--- a/gcc/testsuite/gcc.target/arm/lob6.c
+++ b/gcc/testsuite/gcc.target/arm/lob6.c
@@ -79,14 +79,14 @@ check (void)
int
main (void)
{
- reset_data (a1, b1, c1);
- reset_data (a2, b2, c2);
+ reset_data (a1, b1, c1, N);
+ reset_data (a2, b2, c2, N);
loop1 (a1, b1, c1);
ref1 (a2, b2, c2);
check ();
- reset_data (a1, b1, c1);
- reset_data (a2, b2, c2);
+ reset_data (a1, b1, c1, N);
+ reset_data (a2, b2, c2, N);
loop2 (a1, b1, c1);
ref2 (a2, b2, c2);
check ();
diff --git a/gcc/testsuite/gcc.target/arm/mve/dlstp-compile-asm.c b/gcc/testsuite/gcc.target/arm/mve/dlstp-compile-asm.c
new file mode 100644
index 0000000000000000000000000000000000000000..5ddd994e53d55c7b4d05bfb858e6078ce7da4ce4
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/dlstp-compile-asm.c
@@ -0,0 +1,561 @@
+/* { dg-do compile { target { arm*-*-* } } } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-options "-O3 -save-temps" } */
+/* { dg-add-options arm_v8_1m_mve } */
+
+#include <arm_mve.h>
+
+#define IMM 5
+
+#define TEST_COMPILE_IN_DLSTP_TERNARY(BITS, LANES, LDRSTRYTPE, TYPE, SIGN, NAME, PRED) \
+void test_##NAME##PRED##_##SIGN##BITS (TYPE##BITS##_t *a, TYPE##BITS##_t *b, TYPE##BITS##_t *c, int n) \
+{ \
+ while (n > 0) \
+ { \
+ mve_pred16_t p = vctp##BITS##q (n); \
+ TYPE##BITS##x##LANES##_t va = vldr##LDRSTRYTPE##q_z_##SIGN##BITS (a, p); \
+ TYPE##BITS##x##LANES##_t vb = vldr##LDRSTRYTPE##q_z_##SIGN##BITS (b, p); \
+ TYPE##BITS##x##LANES##_t vc = NAME##PRED##_##SIGN##BITS (va, vb, p); \
+ vstr##LDRSTRYTPE##q_p_##SIGN##BITS (c, vc, p); \
+ c += LANES; \
+ a += LANES; \
+ b += LANES; \
+ n -= LANES; \
+ } \
+}
+
+#define TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY(BITS, LANES, LDRSTRYTPE, NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_TERNARY (BITS, LANES, LDRSTRYTPE, int, s, NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_TERNARY (BITS, LANES, LDRSTRYTPE, uint, u, NAME, PRED)
+
+#define TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY(NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY (8, 16, b, NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY (16, 8, h, NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY (32, 4, w, NAME, PRED)
+
+
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY (vaddq, _x)
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY (vmulq, _x)
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY (vsubq, _x)
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY (vhaddq, _x)
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY (vorrq, _x)
+
+
+#define TEST_COMPILE_IN_DLSTP_TERNARY_M(BITS, LANES, LDRSTRYTPE, TYPE, SIGN, NAME, PRED) \
+void test_##NAME##PRED##_##SIGN##BITS (TYPE##BITS##x##LANES##_t __inactive, TYPE##BITS##_t *a, TYPE##BITS##_t *b, TYPE##BITS##_t *c, int n) \
+{ \
+ while (n > 0) \
+ { \
+ mve_pred16_t p = vctp##BITS##q (n); \
+ TYPE##BITS##x##LANES##_t va = vldr##LDRSTRYTPE##q_z_##SIGN##BITS (a, p); \
+ TYPE##BITS##x##LANES##_t vb = vldr##LDRSTRYTPE##q_z_##SIGN##BITS (b, p); \
+ TYPE##BITS##x##LANES##_t vc = NAME##PRED##_##SIGN##BITS (__inactive, va, vb, p); \
+ vstr##LDRSTRYTPE##q_p_##SIGN##BITS (c, vc, p); \
+ c += LANES; \
+ a += LANES; \
+ b += LANES; \
+ n -= LANES; \
+ } \
+}
+
+#define TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY_M(BITS, LANES, LDRSTRYTPE, NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_TERNARY_M (BITS, LANES, LDRSTRYTPE, int, s, NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_TERNARY_M (BITS, LANES, LDRSTRYTPE, uint, u, NAME, PRED)
+
+#define TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_M(NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY_M (8, 16, b, NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY_M (16, 8, h, NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY_M (32, 4, w, NAME, PRED)
+
+
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_M (vaddq, _m)
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_M (vmulq, _m)
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_M (vsubq, _m)
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_M (vhaddq, _m)
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_M (vorrq, _m)
+
+#define TEST_COMPILE_IN_DLSTP_TERNARY_N(BITS, LANES, LDRSTRYTPE, TYPE, SIGN, NAME, PRED) \
+void test_##NAME##PRED##_n_##SIGN##BITS (TYPE##BITS##_t *a, TYPE##BITS##_t *c, int n) \
+{ \
+ while (n > 0) \
+ { \
+ mve_pred16_t p = vctp##BITS##q (n); \
+ TYPE##BITS##x##LANES##_t va = vldr##LDRSTRYTPE##q_z_##SIGN##BITS (a, p); \
+ TYPE##BITS##x##LANES##_t vc = NAME##PRED##_n_##SIGN##BITS (va, IMM, p); \
+ vstr##LDRSTRYTPE##q_p_##SIGN##BITS (c, vc, p); \
+ c += LANES; \
+ a += LANES; \
+ n -= LANES; \
+ } \
+}
+
+#define TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY_N(BITS, LANES, LDRSTRYTPE, NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_TERNARY_N (BITS, LANES, LDRSTRYTPE, int, s, NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_TERNARY_N (BITS, LANES, LDRSTRYTPE, uint, u, NAME, PRED)
+
+#define TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_N(NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY_N (8, 16, b, NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY_N (16, 8, h, NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY_N (32, 4, w, NAME, PRED)
+
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_N (vaddq, _x)
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_N (vmulq, _x)
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_N (vsubq, _x)
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_N (vhaddq, _x)
+
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_N (vbrsrq, _x)
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_N (vshlq, _x)
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_N (vshrq, _x)
+
+#define TEST_COMPILE_IN_DLSTP_TERNARY_M_N(BITS, LANES, LDRSTRYTPE, TYPE, SIGN, NAME, PRED) \
+void test_##NAME##PRED##_n_##SIGN##BITS (TYPE##BITS##x##LANES##_t __inactive, TYPE##BITS##_t *a, TYPE##BITS##_t *c, int n) \
+{ \
+ while (n > 0) \
+ { \
+ mve_pred16_t p = vctp##BITS##q (n); \
+ TYPE##BITS##x##LANES##_t va = vldr##LDRSTRYTPE##q_z_##SIGN##BITS (a, p); \
+ TYPE##BITS##x##LANES##_t vc = NAME##PRED##_n_##SIGN##BITS (__inactive, va, IMM, p); \
+ vstr##LDRSTRYTPE##q_p_##SIGN##BITS (c, vc, p); \
+ c += LANES; \
+ a += LANES; \
+ n -= LANES; \
+ } \
+}
+
+#define TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY_M_N(BITS, LANES, LDRSTRYTPE, NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_TERNARY_M_N (BITS, LANES, LDRSTRYTPE, int, s, NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_TERNARY_M_N (BITS, LANES, LDRSTRYTPE, uint, u, NAME, PRED)
+
+#define TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_M_N(NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY_M_N (8, 16, b, NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY_M_N (16, 8, h, NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY_M_N (32, 4, w, NAME, PRED)
+
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_M_N (vaddq, _m)
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_M_N (vmulq, _m)
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_M_N (vsubq, _m)
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_M_N (vhaddq, _m)
+
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_M_N (vbrsrq, _m)
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_M_N (vshlq, _m)
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_M_N (vshrq, _m)
+
+/* Now test some more configurations. */
+
+/* Using a >=1 condition. */
+void test1 (int32_t *a, int32_t *b, int32_t *c, int n)
+{
+ while (n >= 1)
+ {
+ mve_pred16_t p = vctp32q (n);
+ int32x4_t va = vldrwq_z_s32 (a, p);
+ int32x4_t vb = vldrwq_z_s32 (b, p);
+ int32x4_t vc = vaddq_x_s32 (va, vb, p);
+ vstrwq_p_s32 (c, vc, p);
+ c+=4;
+ a+=4;
+ b+=4;
+ n-=4;
+ }
+}
+
+/* Test a for loop that decrements to zero.  */
+int32_t a[] = {0, 1, 2, 3, 4, 5, 6, 7};
+void test2 (int32_t *b, int num_elems)
+{
+ for (int i = num_elems; i > 0; i-= 4)
+ {
+ mve_pred16_t p = vctp32q (i);
+ int32x4_t va = vldrwq_z_s32 (&(a[i]), p);
+ vstrwq_p_s32 (b + i, va, p);
+ }
+}
+
+/* Iteration counter counting up to num_iter. */
+void test3 (uint8_t *a, uint8_t *b, uint8_t *c, int n)
+{
+ int num_iter = (n + 15)/16;
+ for (int i = 0; i < num_iter; i++)
+ {
+ mve_pred16_t p = vctp8q (n);
+ uint8x16_t va = vldrbq_z_u8 (a, p);
+ uint8x16_t vb = vldrbq_z_u8 (b, p);
+ uint8x16_t vc = vaddq_x_u8 (va, vb, p);
+ vstrbq_p_u8 (c, vc, p);
+ n-=16;
+ }
+}
+
+/* Iteration counter counting down from num_iter. */
+void test4 (uint8_t *a, uint8_t *b, uint8_t *c, int n)
+{
+ int num_iter = (n + 15)/16;
+ for (int i = num_iter; i > 0; i--)
+ {
+ mve_pred16_t p = vctp8q (n);
+ uint8x16_t va = vldrbq_z_u8 (a, p);
+ uint8x16_t vb = vldrbq_z_u8 (b, p);
+ uint8x16_t vc = vaddq_x_u8 (va, vb, p);
+ vstrbq_p_u8 (c, vc, p);
+ n-=16;
+ }
+}
+
+/* Using an unpredicated arithmetic instruction within the loop. */
+void test5 (uint8_t *a, uint8_t *b, uint8_t *c, uint8_t *d, int n)
+{
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp8q (n);
+ uint8x16_t va = vldrbq_z_u8 (a, p);
+ uint8x16_t vb = vldrbq_u8 (b);
+      /* vc is affected by implicit predication, because vb also
+	 came from an unpredicated load, but there is no functional
+	 problem, because the result is only used in a predicated store.  */
+ uint8x16_t vc = vaddq_u8 (va, vb);
+ uint8x16_t vd = vaddq_x_u8 (va, vb, p);
+ vstrbq_p_u8 (c, vc, p);
+ vstrbq_p_u8 (d, vd, p);
+ n-=16;
+ }
+}
+
+/* Using a different VPR value for one instruction in the loop. */
+void test6 (int32_t *a, int32_t *b, int32_t *c, int n, mve_pred16_t p1)
+{
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp32q (n);
+ int32x4_t va = vldrwq_z_s32 (a, p);
+ int32x4_t vb = vldrwq_z_s32 (b, p1);
+ int32x4_t vc = vaddq_x_s32 (va, vb, p);
+ vstrwq_p_s32 (c, vc, p);
+ c += 4;
+ a += 4;
+ b += 4;
+ n -= 4;
+ }
+}
+
+/* Generating and using another VPR value in the loop, with a vctp.
+ The doloop logic will always try to do the transform on the first
+ vctp it encounters, so this is still expected to work. */
+void test7 (int32_t *a, int32_t *b, int32_t *c, int n, int g)
+{
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp32q (n);
+ int32x4_t va = vldrwq_z_s32 (a, p);
+ mve_pred16_t p1 = vctp32q (g);
+ int32x4_t vb = vldrwq_z_s32 (b, p1);
+ int32x4_t vc = vaddq_x_s32 (va, vb, p);
+ vstrwq_p_s32 (c, vc, p);
+ c += 4;
+ a += 4;
+ b += 4;
+ n -= 4;
+ }
+}
+
+/* Generating and using a different VPR value in the loop, with a vctp,
+   but this time p1 will also change in every iteration (still fine).  */
+void test8 (int32_t *a, int32_t *b, int32_t *c, int n, int g)
+{
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp32q (n);
+ int32x4_t va = vldrwq_z_s32 (a, p);
+ mve_pred16_t p1 = vctp32q (g);
+ int32x4_t vb = vldrwq_z_s32 (b, p1);
+ int32x4_t vc = vaddq_x_s32 (va, vb, p);
+ vstrwq_p_s32 (c, vc, p);
+ c += 4;
+ a += 4;
+ b += 4;
+ n -= 4;
+ g++;
+ }
+}
+
+/* Generating and using a different VPR value in the loop, with a vctp_m
+ that is independent of the loop vctp VPR. */
+void test9 (int32_t *a, int32_t *b, int32_t *c, int n, mve_pred16_t p1)
+{
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp32q (n);
+ int32x4_t va = vldrwq_z_s32 (a, p);
+ mve_pred16_t p2 = vctp32q_m (n, p1);
+ int32x4_t vb = vldrwq_z_s32 (b, p1);
+ int32x4_t vc = vaddq_x_s32 (va, vb, p2);
+ vstrwq_p_s32 (c, vc, p);
+ c += 4;
+ a += 4;
+ b += 4;
+ n -= 4;
+ }
+}
+
+/* Generating and using a different VPR value in the loop,
+ with a vctp_m that is tied to the base vctp VPR. This
+ is still fine, because the vctp_m will be transformed
+ into a vctp and be implicitly predicated. */
+void test10 (int32_t *a, int32_t *b, int32_t *c, int n)
+{
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp32q (n);
+ int32x4_t va = vldrwq_z_s32 (a, p);
+ mve_pred16_t p1 = vctp32q_m (n, p);
+ int32x4_t vb = vldrwq_z_s32 (b, p1);
+ int32x4_t vc = vaddq_x_s32 (va, vb, p1);
+ vstrwq_p_s32 (c, vc, p);
+ c += 4;
+ a += 4;
+ b += 4;
+ n -= 4;
+ }
+}
+
+/* Generating and using a different VPR value in the loop, with a vcmp. */
+void test11 (int32_t *a, int32_t *b, int32_t *c, int n)
+{
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp32q (n);
+ int32x4_t va = vldrwq_z_s32 (a, p);
+ int32x4_t vb = vldrwq_z_s32 (b, p);
+ mve_pred16_t p1 = vcmpeqq_s32 (va, vb);
+ int32x4_t vc = vaddq_x_s32 (va, vb, p1);
+ vstrwq_p_s32 (c, vc, p);
+ c += 4;
+ a += 4;
+ b += 4;
+ n -= 4;
+ }
+}
+
+/* Generating and using a different VPR value in the loop, with a vcmp_m. */
+void test12 (int32_t *a, int32_t *b, int32_t *c, int n, mve_pred16_t p1)
+{
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp32q (n);
+ int32x4_t va = vldrwq_z_s32 (a, p);
+ int32x4_t vb = vldrwq_z_s32 (b, p);
+ mve_pred16_t p2 = vcmpeqq_m_s32 (va, vb, p1);
+ int32x4_t vc = vaddq_x_s32 (va, vb, p2);
+ vstrwq_p_s32 (c, vc, p);
+ c += 4;
+ a += 4;
+ b += 4;
+ n -= 4;
+ }
+}
+
+/* Generating and using a different VPR value in the loop, with a vcmp_m
+ that is tied to the base vctp VPR (same as above, this will be turned
+ into a vcmp and be implicitly predicated). */
+void test13 (int32_t *a, int32_t *b, int32_t *c, int n, mve_pred16_t p1)
+{
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp32q (n);
+ int32x4_t va = vldrwq_z_s32 (a, p);
+ int32x4_t vb = vldrwq_z_s32 (b, p);
+ mve_pred16_t p2 = vcmpeqq_m_s32 (va, vb, p);
+ int32x4_t vc = vaddq_x_s32 (va, vb, p2);
+ vstrwq_p_s32 (c, vc, p);
+ c += 4;
+ a += 4;
+ b += 4;
+ n -= 4;
+ }
+}
+
+/* Using an unpredicated op with a scalar output, where the result is valid
+ outside the bb. This is valid, because all the inputs to the unpredicated
+ op are correctly predicated. */
+uint8_t test14 (uint8_t *a, uint8_t *b, uint8_t *c, int n, uint8x16_t vx)
+{
+ uint8_t sum = 0;
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp8q (n);
+ uint8x16_t va = vldrbq_z_u8 (a, p);
+ uint8x16_t vb = vldrbq_z_u8 (b, p);
+ uint8x16_t vc = vaddq_m_u8 (vx, va, vb, p);
+ sum += vaddvq_u8 (vc);
+ a += 16;
+ b += 16;
+ n -= 16;
+ }
+ return sum;
+}
+
+/* Same as above, but with another scalar op between the unpredicated op and
+ the scalar op outside the loop. */
+uint8_t test15 (uint8_t *a, uint8_t *b, uint8_t *c, int n, uint8x16_t vx, int g)
+{
+ uint8_t sum = 0;
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp8q (n);
+ uint8x16_t va = vldrbq_z_u8 (a, p);
+ uint8x16_t vb = vldrbq_z_u8 (b, p);
+ uint8x16_t vc = vaddq_m_u8 (vx, va, vb, p);
+ sum += vaddvq_u8 (vc);
+ sum += g;
+ a += 16;
+ b += 16;
+ n -= 16;
+ }
+ return sum;
+}
+
+/* Using an unpredicated vcmp to generate a new predicate value in the
+ loop and then using it in a predicated store insn. */
+void test16 (int32_t *a, int32_t *b, int32_t *c, int n)
+{
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp32q (n);
+ int32x4_t va = vldrwq_z_s32 (a, p);
+ int32x4_t vb = vldrwq_s32 (b);
+ int32x4_t vc = vaddq_x_s32 (va, vb, p);
+ mve_pred16_t p1 = vcmpeqq_s32 (va, vc);
+ vstrwq_p_s32 (c, vc, p1);
+ c += 4;
+ a += 4;
+ b += 4;
+ n -= 4;
+ }
+}
+
+/* Using a predicated vcmp to generate a new predicate value in the
+ loop and then using it in a predicated store insn. */
+void test17 (int32_t *a, int32_t *b, int32_t *c, int n)
+{
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp32q (n);
+ int32x4_t va = vldrwq_z_s32 (a, p);
+ int32x4_t vb = vldrwq_z_s32 (b, p);
+ int32x4_t vc = vaddq_s32 (va, vb);
+ mve_pred16_t p1 = vcmpeqq_m_s32 (va, vc, p);
+ vstrwq_p_s32 (c, vc, p1);
+ c += 4;
+ a += 4;
+ b += 4;
+ n -= 4;
+ }
+}
+
+/* Using an across-vector unpredicated instruction in a valid way.
+ This tests that "vc" has correctly masked the risky "vb". */
+uint16_t test18 (uint16_t *a, uint16_t *b, uint16_t *c, int n)
+{
+ uint16x8_t vb = vldrhq_u16 (b);
+ uint16_t res = 0;
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp16q (n);
+ uint16x8_t va = vldrhq_z_u16 (a, p);
+ uint16x8_t vc = vaddq_x_u16 (va, vb, p);
+ res = vaddvq_u16 (vc);
+ c += 8;
+ a += 8;
+ b += 8;
+ n -= 8;
+ }
+ return res;
+}
+
+/* Using an across-vector unpredicated instruction with a scalar from outside the loop. */
+uint16_t test19 (uint16_t *a, uint16_t *b, uint16_t *c, int n)
+{
+ uint16x8_t vb = vldrhq_u16 (b);
+ uint16_t res = 0;
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp16q (n);
+ uint16x8_t va = vldrhq_z_u16 (a, p);
+ uint16x8_t vc = vaddq_x_u16 (va, vb, p);
+ res = vaddvaq_u16 (res, vc);
+ c += 8;
+ a += 8;
+ b += 8;
+ n -= 8;
+ }
+ return res;
+}
+
+/* Using an across-vector predicated instruction in a valid way. */
+uint16_t test20 (uint16_t *a, uint16_t *b, uint16_t *c, int n)
+{
+ uint16_t res = 0;
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp16q (n);
+ uint16x8_t vb = vldrhq_u16 (b);
+ uint16x8_t va = vldrhq_z_u16 (a, p);
+ res = vaddvaq_p_u16 (res, vb, p);
+ c += 8;
+ a += 8;
+ b += 8;
+ n -= 8;
+ }
+ return res;
+}
+
+/* Using an across-vector predicated instruction in a valid way. */
+uint16_t test21 (uint16_t *a, uint16_t *b, uint16_t *c, int n)
+{
+ uint16_t res = 0;
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp16q (n);
+ uint16x8_t vb = vldrhq_u16 (b);
+ uint16x8_t va = vldrhq_z_u16 (a, p);
+ res++;
+ res = vaddvaq_p_u16 (res, vb, p);
+ c += 8;
+ a += 8;
+ b += 8;
+ n -= 8;
+ }
+ return res;
+}
+
+int test22 (uint8_t *a, uint8_t *b, uint8_t *c, int n)
+{
+ int res = 0;
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp8q (n);
+ uint8x16_t va = vldrbq_z_u8 (a, p);
+ res = vmaxvq (res, va);
+ n-=16;
+ a+=16;
+ }
+ return res;
+}
+
+int test23 (int8_t *a, int8_t *b, int8_t *c, int n)
+{
+ int res = 0;
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp8q (n);
+ int8x16_t va = vldrbq_z_s8 (a, p);
+ res = vmaxavq (res, va);
+ n-=16;
+ a+=16;
+ }
+ return res;
+}
+
+/* The expected number of dlstp/letp instructions: each of the 24
+   `TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY.*` macro
+   invocations expands to 6 functions, plus the 23 explicit test
+   functions: 24 * 6 + 23 = 167.  */
+/* { dg-final { scan-assembler-times {\tdlstp} 167 } } */
+/* { dg-final { scan-assembler-times {\tletp} 167 } } */
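[Editorial cross-check, not part of the patch: the 167 in the scan-assembler counts above comes from 4 macro groups with 5, 5, 7 and 7 invocations, each expanding to 6 functions (3 element sizes, signed and unsigned), plus the 23 explicit `test*` functions. Illustrative arithmetic:]

```c
#include <assert.h>

/* Cross-check of the expected dlstp/letp count used by the
   scan-assembler directives above.  */
static int
expected_dlstp_count (void)
{
  int macro_invocations = 5 + 5 + 7 + 7; /* TERNARY, _M, _N and _M_N groups */
  int funcs_per_invocation = 6;          /* {8,16,32}-bit x {signed,unsigned} */
  int explicit_tests = 23;               /* test1 .. test23 */
  return macro_invocations * funcs_per_invocation + explicit_tests;
}
```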
diff --git a/gcc/testsuite/gcc.target/arm/mve/dlstp-int16x8-run.c b/gcc/testsuite/gcc.target/arm/mve/dlstp-int16x8-run.c
new file mode 100644
index 0000000000000000000000000000000000000000..6966a3966046fce59bdabda639c048ed398cac20
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/dlstp-int16x8-run.c
@@ -0,0 +1,44 @@
+/* { dg-do run { target { arm*-*-* } } } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-require-effective-target arm_mve_hw } */
+/* { dg-options "-O2 -save-temps" } */
+/* { dg-add-options arm_v8_1m_mve } */
+#include "dlstp-int16x8.c"
+
+int main ()
+{
+ int i;
+ int16_t temp1[N];
+ int16_t temp2[N];
+ int16_t temp3[N];
+ reset_data16 (temp1, temp2, temp3, N);
+ test (temp1, temp2, temp3, 0);
+ check_plus16 (temp1, temp2, temp3, 0);
+
+ reset_data16 (temp1, temp2, temp3, N);
+ test (temp1, temp2, temp3, 1);
+ check_plus16 (temp1, temp2, temp3, 1);
+
+ reset_data16 (temp1, temp2, temp3, N);
+ test (temp1, temp2, temp3, 7);
+ check_plus16 (temp1, temp2, temp3, 7);
+
+ reset_data16 (temp1, temp2, temp3, N);
+ test (temp1, temp2, temp3, 8);
+ check_plus16 (temp1, temp2, temp3, 8);
+
+ reset_data16 (temp1, temp2, temp3, N);
+ test (temp1, temp2, temp3, 9);
+ check_plus16 (temp1, temp2, temp3, 9);
+
+ reset_data16 (temp1, temp2, temp3, N);
+ test (temp1, temp2, temp3, 16);
+ check_plus16 (temp1, temp2, temp3, 16);
+
+ reset_data16 (temp1, temp2, temp3, N);
+ test (temp1, temp2, temp3, 17);
+ check_plus16 (temp1, temp2, temp3, 17);
+
+ reset_data16 (temp1, temp2, temp3, N);
+}
+
diff --git a/gcc/testsuite/gcc.target/arm/mve/dlstp-int16x8.c b/gcc/testsuite/gcc.target/arm/mve/dlstp-int16x8.c
new file mode 100644
index 0000000000000000000000000000000000000000..33632c5f14dc6603d56934dfdd0072a980fbd01e
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/dlstp-int16x8.c
@@ -0,0 +1,31 @@
+/* { dg-do compile { target { arm*-*-* } } } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-options "-O2 -save-temps" } */
+/* { dg-add-options arm_v8_1m_mve } */
+
+#include <arm_mve.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include "../lob.h"
+
+void __attribute__ ((noinline)) test (int16_t *a, int16_t *b, int16_t *c, int n)
+{
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp16q (n);
+ int16x8_t va = vldrhq_z_s16 (a, p);
+ int16x8_t vb = vldrhq_z_s16 (b, p);
+ int16x8_t vc = vaddq_x_s16 (va, vb, p);
+ vstrhq_p_s16 (c, vc, p);
+ c+=8;
+ a+=8;
+ b+=8;
+ n-=8;
+ }
+}
+
+/* { dg-final { scan-assembler-times {\tdlstp.16} 1 } } */
+/* { dg-final { scan-assembler-times {\tletp} 1 } } */
+/* { dg-final { scan-assembler-not "\tvctp" } } */
+/* { dg-final { scan-assembler-not "\tvpst" } } */
+/* { dg-final { scan-assembler-not "p0" } } */
diff --git a/gcc/testsuite/gcc.target/arm/mve/dlstp-int32x4-run.c b/gcc/testsuite/gcc.target/arm/mve/dlstp-int32x4-run.c
new file mode 100644
index 0000000000000000000000000000000000000000..6833dddde92b7cf16a18d42c003ee5bd2b9da847
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/dlstp-int32x4-run.c
@@ -0,0 +1,45 @@
+/* { dg-do run { target { arm*-*-* } } } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-require-effective-target arm_mve_hw } */
+/* { dg-options "-O2 -save-temps" } */
+/* { dg-add-options arm_v8_1m_mve } */
+
+#include "dlstp-int32x4.c"
+
+int main ()
+{
+ int i;
+ int32_t temp1[N];
+ int32_t temp2[N];
+ int32_t temp3[N];
+ reset_data32 (temp1, temp2, temp3, N);
+ test (temp1, temp2, temp3, 0);
+ check_plus32 (temp1, temp2, temp3, 0);
+
+ reset_data32 (temp1, temp2, temp3, N);
+ test (temp1, temp2, temp3, 1);
+ check_plus32 (temp1, temp2, temp3, 1);
+
+ reset_data32 (temp1, temp2, temp3, N);
+ test (temp1, temp2, temp3, 3);
+ check_plus32 (temp1, temp2, temp3, 3);
+
+ reset_data32 (temp1, temp2, temp3, N);
+ test (temp1, temp2, temp3, 4);
+ check_plus32 (temp1, temp2, temp3, 4);
+
+ reset_data32 (temp1, temp2, temp3, N);
+ test (temp1, temp2, temp3, 5);
+ check_plus32 (temp1, temp2, temp3, 5);
+
+ reset_data32 (temp1, temp2, temp3, N);
+ test (temp1, temp2, temp3, 8);
+ check_plus32 (temp1, temp2, temp3, 8);
+
+ reset_data32 (temp1, temp2, temp3, N);
+ test (temp1, temp2, temp3, 9);
+ check_plus32 (temp1, temp2, temp3, 9);
+
+ reset_data32 (temp1, temp2, temp3, N);
+}
+
diff --git a/gcc/testsuite/gcc.target/arm/mve/dlstp-int32x4.c b/gcc/testsuite/gcc.target/arm/mve/dlstp-int32x4.c
new file mode 100644
index 0000000000000000000000000000000000000000..5d09f784b7716c14e56086b7e66eb12b31772a45
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/dlstp-int32x4.c
@@ -0,0 +1,31 @@
+/* { dg-do compile { target { arm*-*-* } } } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-options "-O2 -save-temps" } */
+/* { dg-add-options arm_v8_1m_mve } */
+
+#include <arm_mve.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include "../lob.h"
+
+void __attribute__ ((noinline)) test (int32_t *a, int32_t *b, int32_t *c, int n)
+{
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp32q (n);
+ int32x4_t va = vldrwq_z_s32 (a, p);
+ int32x4_t vb = vldrwq_z_s32 (b, p);
+ int32x4_t vc = vaddq_x_s32 (va, vb, p);
+ vstrwq_p_s32 (c, vc, p);
+ c+=4;
+ a+=4;
+ b+=4;
+ n-=4;
+ }
+}
+
+/* { dg-final { scan-assembler-times {\tdlstp.32} 1 } } */
+/* { dg-final { scan-assembler-times {\tletp} 1 } } */
+/* { dg-final { scan-assembler-not "\tvctp" } } */
+/* { dg-final { scan-assembler-not "\tvpst" } } */
+/* { dg-final { scan-assembler-not "p0" } } */
diff --git a/gcc/testsuite/gcc.target/arm/mve/dlstp-int64x2-run.c b/gcc/testsuite/gcc.target/arm/mve/dlstp-int64x2-run.c
new file mode 100644
index 0000000000000000000000000000000000000000..cc0b9ce7ee9a5a8400b18f539ff96b8e675414cb
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/dlstp-int64x2-run.c
@@ -0,0 +1,48 @@
+/* { dg-do run { target { arm*-*-* } } } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-require-effective-target arm_mve_hw } */
+/* { dg-options "-O2 -save-temps" } */
+/* { dg-add-options arm_v8_1m_mve } */
+
+#include "dlstp-int64x2.c"
+
+int main ()
+{
+ int i;
+ int64_t temp1[N];
+ int64_t temp3[N];
+ reset_data64 (temp1, temp3, N);
+ test (temp1, temp3, 0);
+ check_memcpy64 (temp1, temp3, 0);
+
+ reset_data64 (temp1, temp3, N);
+ test (temp1, temp3, 1);
+ check_memcpy64 (temp1, temp3, 1);
+
+ reset_data64 (temp1, temp3, N);
+ test (temp1, temp3, 2);
+ check_memcpy64 (temp1, temp3, 2);
+
+ reset_data64 (temp1, temp3, N);
+ test (temp1, temp3, 3);
+ check_memcpy64 (temp1, temp3, 3);
+
+ reset_data64 (temp1, temp3, N);
+ test (temp1, temp3, 4);
+ check_memcpy64 (temp1, temp3, 4);
+
+ reset_data64 (temp1, temp3, N);
+ test (temp1, temp3, 5);
+ check_memcpy64 (temp1, temp3, 5);
+
+ reset_data64 (temp1, temp3, N);
+ test (temp1, temp3, 6);
+ check_memcpy64 (temp1, temp3, 6);
+
+ reset_data64 (temp1, temp3, N);
+ test (temp1, temp3, 7);
+ check_memcpy64 (temp1, temp3, 7);
+
+ reset_data64 (temp1, temp3, N);
+}
+
diff --git a/gcc/testsuite/gcc.target/arm/mve/dlstp-int64x2.c b/gcc/testsuite/gcc.target/arm/mve/dlstp-int64x2.c
new file mode 100644
index 0000000000000000000000000000000000000000..21e882424ec3ba4e7a141eadb0f4e593146e81ad
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/dlstp-int64x2.c
@@ -0,0 +1,28 @@
+/* { dg-do compile { target { arm*-*-* } } } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-options "-O2 -save-temps" } */
+/* { dg-add-options arm_v8_1m_mve } */
+
+#include <arm_mve.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include "../lob.h"
+
+void __attribute__ ((noinline)) test (int64_t *a, int64_t *c, int n)
+{
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp64q (n);
+ int64x2_t va = vldrdq_gather_offset_z_s64 (a, vcreateq_u64 (0, 8), p);
+ vstrdq_scatter_offset_p_s64 (c, vcreateq_u64 (0, 8), va, p);
+ c+=2;
+ a+=2;
+ n-=2;
+ }
+}
+
+/* { dg-final { scan-assembler-times {\tdlstp.64} 1 } } */
+/* { dg-final { scan-assembler-times {\tletp} 1 } } */
+/* { dg-final { scan-assembler-not "\tvctp" } } */
+/* { dg-final { scan-assembler-not "\tvpst" } } */
+/* { dg-final { scan-assembler-not "p0" } } */
diff --git a/gcc/testsuite/gcc.target/arm/mve/dlstp-int8x16.c b/gcc/testsuite/gcc.target/arm/mve/dlstp-int8x16.c
new file mode 100644
index 0000000000000000000000000000000000000000..8ea181c82d45a008d60a66c1f9e9b289c5f05611
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/dlstp-int8x16.c
@@ -0,0 +1,69 @@
+/* { dg-do run { target { arm*-*-* } } } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-require-effective-target arm_mve_hw } */
+/* { dg-options "-O2 -save-temps" } */
+/* { dg-add-options arm_v8_1m_mve } */
+
+#include <arm_mve.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include "../lob.h"
+
+void __attribute__ ((noinline)) test (int8_t *a, int8_t *b, int8_t *c, int n)
+{
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp8q (n);
+ int8x16_t va = vldrbq_z_s8 (a, p);
+ int8x16_t vb = vldrbq_z_s8 (b, p);
+ int8x16_t vc = vaddq_x_s8 (va, vb, p);
+ vstrbq_p_s8 (c, vc, p);
+ c+=16;
+ a+=16;
+ b+=16;
+ n-=16;
+ }
+}
+
+int main ()
+{
+ int i;
+ int8_t temp1[N];
+ int8_t temp2[N];
+ int8_t temp3[N];
+ reset_data8 (temp1, temp2, temp3, N);
+ test (temp1, temp2, temp3, 0);
+ check_plus8 (temp1, temp2, temp3, 0);
+
+ reset_data8 (temp1, temp2, temp3, N);
+ test (temp1, temp2, temp3, 1);
+ check_plus8 (temp1, temp2, temp3, 1);
+
+ reset_data8 (temp1, temp2, temp3, N);
+ test (temp1, temp2, temp3, 15);
+ check_plus8 (temp1, temp2, temp3, 15);
+
+ reset_data8 (temp1, temp2, temp3, N);
+ test (temp1, temp2, temp3, 16);
+ check_plus8 (temp1, temp2, temp3, 16);
+
+ reset_data8 (temp1, temp2, temp3, N);
+ test (temp1, temp2, temp3, 17);
+ check_plus8 (temp1, temp2, temp3, 17);
+
+ reset_data8 (temp1, temp2, temp3, N);
+ test (temp1, temp2, temp3, 32);
+ check_plus8 (temp1, temp2, temp3, 32);
+
+ reset_data8 (temp1, temp2, temp3, N);
+ test (temp1, temp2, temp3, 33);
+ check_plus8 (temp1, temp2, temp3, 33);
+
+ reset_data8 (temp1, temp2, temp3, N);
+}
+
+/* { dg-final { scan-assembler-times {\tdlstp.8} 1 } } */
+/* { dg-final { scan-assembler-times {\tletp} 1 } } */
+/* { dg-final { scan-assembler-not "\tvctp" } } */
+/* { dg-final { scan-assembler-not "\tvpst" } } */
+/* { dg-final { scan-assembler-not "p0" } } */
diff --git a/gcc/testsuite/gcc.target/arm/mve/dlstp-invalid-asm.c b/gcc/testsuite/gcc.target/arm/mve/dlstp-invalid-asm.c
new file mode 100644
index 0000000000000000000000000000000000000000..f7c3e04f8831e6b6eb709c8f3b0a0a896313ca64
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/dlstp-invalid-asm.c
@@ -0,0 +1,391 @@
+/* { dg-do compile { target { arm*-*-* } } } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-options "-O3 -save-temps" } */
+/* { dg-add-options arm_v8_1m_mve } */
+
+#include <limits.h>
+#include <arm_mve.h>
+
+/* Terminating on a non-zero number of elements. */
+void test0 (uint8_t *a, uint8_t *b, uint8_t *c, int n)
+{
+ while (n > 1)
+ {
+ mve_pred16_t p = vctp8q (n);
+ uint8x16_t va = vldrbq_z_u8 (a, p);
+ uint8x16_t vb = vldrbq_z_u8 (b, p);
+ uint8x16_t vc = vaddq_x_u8 (va, vb, p);
+ vstrbq_p_u8 (c, vc, p);
+ n -= 16;
+ }
+}
+
+/* Terminating on n >= 0. */
+void test1 (uint8_t *a, uint8_t *b, uint8_t *c, int n)
+{
+ while (n >= 0)
+ {
+ mve_pred16_t p = vctp8q (n);
+ uint8x16_t va = vldrbq_z_u8 (a, p);
+ uint8x16_t vb = vldrbq_z_u8 (b, p);
+ uint8x16_t vc = vaddq_x_u8 (va, vb, p);
+ vstrbq_p_u8 (c, vc, p);
+ n -= 16;
+ }
+}
+
+/* Similar, terminating on a non-zero number of elements, but in a for loop
+ format. */
+int32_t a[] = {0, 1, 2, 3, 4, 5, 6, 7};
+void test2 (int32_t *b, int num_elems)
+{
+ for (int i = num_elems; i >= 2; i-= 4)
+ {
+ mve_pred16_t p = vctp32q (i);
+ int32x4_t va = vldrwq_z_s32 (&(a[i]), p);
+ vstrwq_p_s32 (b + i, va, p);
+ }
+}
+
+/* Iteration counter counting up to num_iter, with a non-zero starting num. */
+void test3 (uint8_t *a, uint8_t *b, uint8_t *c, int n)
+{
+ int num_iter = (n + 15)/16;
+ for (int i = 1; i < num_iter; i++)
+ {
+ mve_pred16_t p = vctp8q (n);
+ uint8x16_t va = vldrbq_z_u8 (a, p);
+ uint8x16_t vb = vldrbq_z_u8 (b, p);
+ uint8x16_t vc = vaddq_x_u8 (va, vb, p);
+ vstrbq_p_u8 (c, vc, p);
+ n -= 16;
+ }
+}
+
+/* Iteration counter counting up to num_iter, with a larger increment */
+void test4 (uint8_t *a, uint8_t *b, uint8_t *c, int n)
+{
+ int num_iter = (n + 15)/16;
+ for (int i = 0; i < num_iter; i+=2)
+ {
+ mve_pred16_t p = vctp8q (n);
+ uint8x16_t va = vldrbq_z_u8 (a, p);
+ uint8x16_t vb = vldrbq_z_u8 (b, p);
+ uint8x16_t vc = vaddq_x_u8 (va, vb, p);
+ vstrbq_p_u8 (c, vc, p);
+ n -= 16;
+ }
+}
+
+/* Using an unpredicated store instruction within the loop. */
+void test5 (uint8_t *a, uint8_t *b, uint8_t *c, uint8_t *d, int n)
+{
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp8q (n);
+ uint8x16_t va = vldrbq_z_u8 (a, p);
+ uint8x16_t vb = vldrbq_z_u8 (b, p);
+ uint8x16_t vc = vaddq_u8 (va, vb);
+ uint8x16_t vd = vaddq_x_u8 (va, vb, p);
+ vstrbq_u8 (d, vd);
+ n -= 16;
+ }
+}
+
+/* Using an unpredicated store outside the loop. */
+void test6 (uint8_t *a, uint8_t *b, uint8_t *c, int n, uint8x16_t vx)
+{
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp8q (n);
+ uint8x16_t va = vldrbq_z_u8 (a, p);
+ uint8x16_t vb = vldrbq_z_u8 (b, p);
+ uint8x16_t vc = vaddq_m_u8 (vx, va, vb, p);
+ vx = vaddq_u8 (vx, vc);
+ a += 16;
+ b += 16;
+ n -= 16;
+ }
+ vstrbq_u8 (c, vx);
+}
+
+/* Using a VPR that gets modified within the loop. */
+void test9 (int32_t *a, int32_t *b, int32_t *c, int n)
+{
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp32q (n);
+ int32x4_t va = vldrwq_z_s32 (a, p);
+ p++;
+ int32x4_t vb = vldrwq_z_s32 (b, p);
+ int32x4_t vc = vaddq_x_s32 (va, vb, p);
+ vstrwq_p_s32 (c, vc, p);
+ c += 4;
+ a += 4;
+ b += 4;
+ n -= 4;
+ }
+}
+
+/* Using a VPR that gets re-generated within the loop. */
+void test10 (int32_t *a, int32_t *b, int32_t *c, int n)
+{
+ mve_pred16_t p = vctp32q (n);
+ while (n > 0)
+ {
+ int32x4_t va = vldrwq_z_s32 (a, p);
+ p = vctp32q (n);
+ int32x4_t vb = vldrwq_z_s32 (b, p);
+ int32x4_t vc = vaddq_x_s32 (va, vb, p);
+ vstrwq_p_s32 (c, vc, p);
+ c += 4;
+ a += 4;
+ b += 4;
+ n -= 4;
+ }
+}
+
+/* Using vctp32q_m instead of vctp32q. */
+void test11 (int32_t *a, int32_t *b, int32_t *c, int n, mve_pred16_t p0)
+{
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp32q_m (n, p0);
+ int32x4_t va = vldrwq_z_s32 (a, p);
+ int32x4_t vb = vldrwq_z_s32 (b, p);
+ int32x4_t vc = vaddq_x_s32 (va, vb, p);
+ vstrwq_p_s32 (c, vc, p);
+ c += 4;
+ a += 4;
+ b += 4;
+ n -= 4;
+ }
+}
+
+/* Using an unpredicated op with a scalar output, where the result is valid
+ outside the bb. This is invalid, because one of the inputs to the
+ unpredicated op is also unpredicated. */
+uint8_t test12 (uint8_t *a, uint8_t *b, uint8_t *c, int n, uint8x16_t vx)
+{
+ uint8_t sum = 0;
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp8q (n);
+ uint8x16_t va = vldrbq_z_u8 (a, p);
+ uint8x16_t vb = vldrbq_u8 (b);
+ uint8x16_t vc = vaddq_u8 (va, vb);
+ sum += vaddvq_u8 (vc);
+ a += 16;
+ b += 16;
+ n -= 16;
+ }
+ return sum;
+}
+
+/* Using an unpredicated vcmp to generate a new predicate value in the
+ loop and then using that VPR to predicate a store insn. */
+void test13 (int32_t *a, int32_t *b, int32x4_t vc, int32_t *c, int n)
+{
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp32q (n);
+ int32x4_t va = vldrwq_s32 (a);
+ int32x4_t vb = vldrwq_z_s32 (b, p);
+ int32x4_t vc = vaddq_s32 (va, vb);
+ mve_pred16_t p1 = vcmpeqq_s32 (va, vc);
+ vstrwq_p_s32 (c, vc, p1);
+ c += 4;
+ a += 4;
+ b += 4;
+ n -= 4;
+ }
+}
+
+/* Using an across-vector unpredicated instruction. "vb" is the risk. */
+uint16_t test14 (uint16_t *a, uint16_t *b, uint16_t *c, int n)
+{
+ uint16x8_t vb = vldrhq_u16 (b);
+ uint16_t res = 0;
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp16q (n);
+ uint16x8_t va = vldrhq_z_u16 (a, p);
+ vb = vaddq_u16 (va, vb);
+ res = vaddvq_u16 (vb);
+ c += 8;
+ a += 8;
+ b += 8;
+ n -= 8;
+ }
+ return res;
+}
+
+/* Using an across-vector unpredicated instruction. "vc" is the risk. */
+uint16_t test15 (uint16_t *a, uint16_t *b, uint16_t *c, int n)
+{
+ uint16x8_t vb = vldrhq_u16 (b);
+ uint16_t res = 0;
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp16q (n);
+ uint16x8_t va = vldrhq_z_u16 (a, p);
+ uint16x8_t vc = vaddq_u16 (va, vb);
+ res = vaddvaq_u16 (res, vc);
+ c += 8;
+ a += 8;
+ b += 8;
+ n -= 8;
+ }
+ return res;
+}
+
+uint16_t test16 (uint16_t *a, uint16_t *b, uint16_t *c, int n)
+{
+ uint16_t res =0;
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp16q (n);
+ uint16x8_t vb = vldrhq_u16 (b);
+ uint16x8_t va = vldrhq_z_u16 (a, p);
+ res = vaddvaq_u16 (res, vb);
+ res = vaddvaq_p_u16 (res, va, p);
+ c += 8;
+ a += 8;
+ b += 8;
+ n -= 8;
+ }
+ return res;
+}
+
+int test17 (int8_t *a, int8_t *b, int8_t *c, int n)
+{
+ int res = 0;
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp8q (n);
+ int8x16_t va = vldrbq_z_s8 (a, p);
+ res = vmaxvq (res, va);
+ n-=16;
+ a+=16;
+ }
+ return res;
+}
+
+
+
+int test18 (int8_t *a, int8_t *b, int8_t *c, int n)
+{
+ int res = 0;
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp8q (n);
+ int8x16_t va = vldrbq_z_s8 (a, p);
+ res = vminvq (res, va);
+ n-=16;
+ a+=16;
+ }
+ return res;
+}
+
+int test19 (int8_t *a, int8_t *b, int8_t *c, int n)
+{
+ int res = 0;
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp8q (n);
+ int8x16_t va = vldrbq_z_s8 (a, p);
+ res = vminavq (res, va);
+ n-=16;
+ a+=16;
+ }
+ return res;
+}
+
+int test20 (uint8_t *a, uint8_t *b, uint8_t *c, int n)
+{
+ int res = 0;
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp8q (n);
+ uint8x16_t va = vldrbq_z_u8 (a, p);
+ res = vminvq (res, va);
+ n-=16;
+ a+=16;
+ }
+ return res;
+}
+
+uint8x16_t test21 (uint8_t *a, uint32_t *b, int n, uint8x16_t res)
+{
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp8q (n);
+ uint8x16_t va = vldrbq_z_u8 (a, p);
+ res = vshlcq_u8 (va, b, 1);
+ n-=16;
+ a+=16;
+ }
+ return res;
+}
+
+int8x16_t test22 (int8_t *a, int32_t *b, int n, int8x16_t res)
+{
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp8q (n);
+ int8x16_t va = vldrbq_z_s8 (a, p);
+ res = vshlcq_s8 (va, b, 1);
+ n-=16;
+ a+=16;
+ }
+ return res;
+}
+
+/* Using an unsigned number of elements to count down from, with a > 0 condition.  */
+void test23 (int32_t *a, int32_t *b, int32_t *c, unsigned int n)
+{
+ while (n > 0)
+ {
+ mve_pred16_t p = vctp32q (n);
+ int32x4_t va = vldrwq_z_s32 (a, p);
+ int32x4_t vb = vldrwq_z_s32 (b, p);
+ int32x4_t vc = vaddq_x_s32 (va, vb, p);
+ vstrwq_p_s32 (c, vc, p);
+ c+=4;
+ a+=4;
+ b+=4;
+ n-=4;
+ }
+}
+
+/* Using an unsigned number of elements to count up to, with an i < n condition.  */
+void test24 (uint8_t *a, uint8_t *b, uint8_t *c, unsigned int n)
+{
+ for (int i = 0; i < n; i+=16)
+ {
+ mve_pred16_t p = vctp8q (n-i);
+ uint8x16_t va = vldrbq_z_u8 (a, p);
+ uint8x16_t vb = vldrbq_z_u8 (b, p);
+ uint8x16_t vc = vaddq_x_u8 (va, vb, p);
+ vstrbq_p_u8 (c, vc, p);
+ n-=16;
+ }
+}
+
+
+/* Using an unsigned number of elements to count up to, with an i <= n condition.  */
+void test25 (uint8_t *a, uint8_t *b, uint8_t *c, unsigned int n)
+{
+ for (int i = 1; i <= n; i+=16)
+ {
+ mve_pred16_t p = vctp8q (n-i+1);
+ uint8x16_t va = vldrbq_z_u8 (a, p);
+ uint8x16_t vb = vldrbq_z_u8 (b, p);
+ uint8x16_t vc = vaddq_x_u8 (va, vb, p);
+ vstrbq_p_u8 (c, vc, p);
+ n-=16;
+ }
+}
+
+/* { dg-final { scan-assembler-not "\tdlstp" } } */
+/* { dg-final { scan-assembler-not "\tletp" } } */
\ No newline at end of file
^ permalink raw reply [flat|nested] 10+ messages in thread
* [PATCH 1/2] arm: Add define_attr to create a mapping between MVE predicated and unpredicated insns
@ 2023-11-06 11:20 Stamatis Markianos-Wright
2023-12-12 10:33 ` Richard Earnshaw
0 siblings, 1 reply; 10+ messages in thread
From: Stamatis Markianos-Wright @ 2023-11-06 11:20 UTC (permalink / raw)
To: gcc-patches
Cc: Kyrylo Tkachov, Richard Earnshaw, richard.sandiford, ramana.gcc
[-- Attachment #1: Type: text/plain, Size: 180 bytes --]
Patch has already been approved at:
https://gcc.gnu.org/pipermail/gcc-patches/2023-September/630326.html
... But I'm sending this again for archiving on the list after rebasing
[-- Attachment #2: 1.patch --]
[-- Type: text/x-patch, Size: 127650 bytes --]
commit 5919a33d0280d35b0ebcbc07f10b2a09461b1508
Author: Stam Markianos-Wright <stam.markianos-wright@arm.com>
Date: Tue Oct 18 17:42:56 2022 +0100
arm: Add define_attr to create a mapping between MVE predicated and unpredicated insns
I'd like to submit two patches that add support for Arm's MVE
Tail Predicated Low Overhead Loop feature.
--- Introduction ---
The M-class Arm-ARM:
https://developer.arm.com/documentation/ddi0553/bu/?lang=en
Section B5.5.1 "Loop tail predication" describes the feature
we are adding support for with this patch (although
we only add codegen for DLSTP/LETP instruction loops).
Previously with commit d2ed233cb94 we'd added support for
non-MVE DLS/LE loops through the loop-doloop pass, which, given
a standard MVE loop like:
```
void __attribute__ ((noinline)) test (int16_t *a, int16_t *b, int16_t *c, int n)
{
while (n > 0)
{
mve_pred16_t p = vctp16q (n);
int16x8_t va = vldrhq_z_s16 (a, p);
int16x8_t vb = vldrhq_z_s16 (b, p);
int16x8_t vc = vaddq_x_s16 (va, vb, p);
vstrhq_p_s16 (c, vc, p);
c+=8;
a+=8;
b+=8;
n-=8;
}
}
```
.. would output:
```
<pre-calculate the number of iterations and place it into lr>
dls lr, lr
.L3:
vctp.16 r3
vmrs ip, P0 @ movhi
sxth ip, ip
vmsr P0, ip @ movhi
mov r4, r0
vpst
vldrht.16 q2, [r4]
mov r4, r1
vmov q3, q0
vpst
vldrht.16 q1, [r4]
mov r4, r2
vpst
vaddt.i16 q3, q2, q1
subs r3, r3, #8
vpst
vstrht.16 q3, [r4]
adds r0, r0, #16
adds r1, r1, #16
adds r2, r2, #16
le lr, .L3
```
where the LE instruction will decrement LR by 1, compare and
branch if needed.
(There are also other inefficiencies in the above code, like the
pointless vmrs/sxth/vmsr on the VPR, the adds not being merged
into the vldrht/vstrht as #16 offsets, and some random movs!
But those are separate problems...)
The MVE version is similar, except that:
* Instead of DLS/LE the instructions are DLSTP/LETP.
* Instead of pre-calculating the number of iterations of the
loop, we place the number of elements to be processed by the
loop into LR.
* Instead of decrementing the LR by one, LETP will decrement it
by FPSCR.LTPSIZE, which is the number of elements being
processed in each iteration: 16 for 8-bit elements, 8 for 16-bit
elements, etc.
* On the final iteration, automatic Loop Tail Predication is
performed, as if the instructions within the loop had been VPT
predicated with a VCTP generating the VPR predicate in every
loop iteration.
The dlstp/letp loop now looks like:
```
<place n into r3>
dlstp.16 lr, r3
.L14:
mov r3, r0
vldrh.16 q3, [r3]
mov r3, r1
vldrh.16 q2, [r3]
mov r3, r2
vadd.i16 q3, q3, q2
adds r0, r0, #16
vstrh.16 q3, [r3]
adds r1, r1, #16
adds r2, r2, #16
letp lr, .L14
```
Since the loop tail predication is automatic, we have eliminated
the VCTP that had been specified by the user in the intrinsic
and converted the VPT-predicated instructions into their
unpredicated equivalents (which also saves us from VPST insns).
The LETP instruction here decrements LR by 8 in each iteration.
--- This 1/2 patch ---
This first patch lays some groundwork by adding an attribute to
md patterns, and then the second patch contains the functional
changes.
One major difficulty in implementing MVE Tail-Predicated Low
Overhead Loops was the need to transform VPT-predicated insns
in the insn chain into their unpredicated equivalents, like:
`mve_vldrbq_z_<supf><mode> -> mve_vldrbq_<supf><mode>`.
This requires us to have a deterministic link between two
different patterns in mve.md -- this _could_ be done by
re-ordering the entirety of mve.md such that the patterns are
at some constant icode proximity (e.g. having the _z immediately
after the unpredicated version would mean that to map from the
former to the latter you could use icode-1), but that is a very
messy solution that would introduce fragile, hidden dependencies
on the ordering of patterns.
This patch provides an alternative way of doing that: using an insn
attribute to encode the icode of the unpredicated instruction.
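As a rough sketch, this works by giving each predicated pattern an
insn attribute holding the icode of its unpredicated twin, along
these lines (illustrative only: the attribute name is taken from the
ChangeLog below, the pattern body is elided, and the exact
definitions are the ones in the patch itself):
```
;; In arm.md: by default an insn has no unpredicated twin.
(define_attr "mve_unpredicated_insn" "" (symbol_ref "CODE_FOR_nothing"))

;; In mve.md: a predicated pattern records its unpredicated form, so
;; a later pass can map mve_vldrbq_z_<supf><mode> back to
;; mve_vldrbq_<supf><mode> without relying on pattern ordering.
(define_insn "mve_vldrbq_z_<supf><mode>"
  [... pattern body elided ...]
  [(set (attr "mve_unpredicated_insn")
	(symbol_ref "CODE_FOR_mve_vldrbq_<supf><mode>"))])
```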
No regressions on arm-none-eabi with an MVE target.
Thank you,
Stam Markianos-Wright
gcc/ChangeLog:
* config/arm/arm.md (mve_unpredicated_insn): New attribute.
* config/arm/arm.h (MVE_VPT_PREDICATED_INSN_P): New define.
(MVE_VPT_UNPREDICATED_INSN_P): Likewise.
(MVE_VPT_PREDICABLE_INSN_P): Likewise.
* config/arm/vec-common.md (mve_vshlq_<supf><mode>): Add attribute.
* config/arm/mve.md (arm_vcx1q<a>_p_v16qi): Add attribute.
(arm_vcx1q<a>v16qi): Likewise.
(arm_vcx1qav16qi): Likewise.
(arm_vcx1qv16qi): Likewise.
(arm_vcx2q<a>_p_v16qi): Likewise.
(arm_vcx2q<a>v16qi): Likewise.
(arm_vcx2qav16qi): Likewise.
(arm_vcx2qv16qi): Likewise.
(arm_vcx3q<a>_p_v16qi): Likewise.
(arm_vcx3q<a>v16qi): Likewise.
(arm_vcx3qav16qi): Likewise.
(arm_vcx3qv16qi): Likewise.
(mve_vabavq_<supf><mode>): Likewise.
(mve_vabavq_p_<supf><mode>): Likewise.
(mve_vabdq_<supf><mode>): Likewise.
(mve_vabdq_f<mode>): Likewise.
(mve_vabdq_m_<supf><mode>): Likewise.
(mve_vabdq_m_f<mode>): Likewise.
(mve_vabsq_f<mode>): Likewise.
(mve_vabsq_m_f<mode>): Likewise.
(mve_vabsq_m_s<mode>): Likewise.
(mve_vabsq_s<mode>): Likewise.
(mve_vadciq_<supf>v4si): Likewise.
(mve_vadciq_m_<supf>v4si): Likewise.
(mve_vadcq_<supf>v4si): Likewise.
(mve_vadcq_m_<supf>v4si): Likewise.
(mve_vaddlvaq_<supf>v4si): Likewise.
(mve_vaddlvaq_p_<supf>v4si): Likewise.
(mve_vaddlvq_<supf>v4si): Likewise.
(mve_vaddlvq_p_<supf>v4si): Likewise.
(mve_vaddq_f<mode>): Likewise.
(mve_vaddq_m_<supf><mode>): Likewise.
(mve_vaddq_m_f<mode>): Likewise.
(mve_vaddq_m_n_<supf><mode>): Likewise.
(mve_vaddq_m_n_f<mode>): Likewise.
(mve_vaddq_n_<supf><mode>): Likewise.
(mve_vaddq_n_f<mode>): Likewise.
(mve_vaddq<mode>): Likewise.
(mve_vaddvaq_<supf><mode>): Likewise.
(mve_vaddvaq_p_<supf><mode>): Likewise.
(mve_vaddvq_<supf><mode>): Likewise.
(mve_vaddvq_p_<supf><mode>): Likewise.
(mve_vandq_<supf><mode>): Likewise.
(mve_vandq_f<mode>): Likewise.
(mve_vandq_m_<supf><mode>): Likewise.
(mve_vandq_m_f<mode>): Likewise.
(mve_vandq_s<mode>): Likewise.
(mve_vandq_u<mode>): Likewise.
(mve_vbicq_<supf><mode>): Likewise.
(mve_vbicq_f<mode>): Likewise.
(mve_vbicq_m_<supf><mode>): Likewise.
(mve_vbicq_m_f<mode>): Likewise.
(mve_vbicq_m_n_<supf><mode>): Likewise.
(mve_vbicq_n_<supf><mode>): Likewise.
(mve_vbicq_s<mode>): Likewise.
(mve_vbicq_u<mode>): Likewise.
(mve_vbrsrq_m_n_<supf><mode>): Likewise.
(mve_vbrsrq_m_n_f<mode>): Likewise.
(mve_vbrsrq_n_<supf><mode>): Likewise.
(mve_vbrsrq_n_f<mode>): Likewise.
(mve_vcaddq_rot270_m_<supf><mode>): Likewise.
(mve_vcaddq_rot270_m_f<mode>): Likewise.
(mve_vcaddq_rot270<mode>): Likewise.
(mve_vcaddq_rot270<mode>): Likewise.
(mve_vcaddq_rot90_m_<supf><mode>): Likewise.
(mve_vcaddq_rot90_m_f<mode>): Likewise.
(mve_vcaddq_rot90<mode>): Likewise.
(mve_vcaddq_rot90<mode>): Likewise.
(mve_vcaddq<mve_rot><mode>): Likewise.
(mve_vcaddq<mve_rot><mode>): Likewise.
(mve_vclsq_m_s<mode>): Likewise.
(mve_vclsq_s<mode>): Likewise.
(mve_vclzq_<supf><mode>): Likewise.
(mve_vclzq_m_<supf><mode>): Likewise.
(mve_vclzq_s<mode>): Likewise.
(mve_vclzq_u<mode>): Likewise.
(mve_vcmlaq_m_f<mode>): Likewise.
(mve_vcmlaq_rot180_m_f<mode>): Likewise.
(mve_vcmlaq_rot180<mode>): Likewise.
(mve_vcmlaq_rot270_m_f<mode>): Likewise.
(mve_vcmlaq_rot270<mode>): Likewise.
(mve_vcmlaq_rot90_m_f<mode>): Likewise.
(mve_vcmlaq_rot90<mode>): Likewise.
(mve_vcmlaq<mode>): Likewise.
(mve_vcmlaq<mve_rot><mode>): Likewise.
(mve_vcmp<mve_cmp_op>q_<mode>): Likewise.
(mve_vcmp<mve_cmp_op>q_f<mode>): Likewise.
(mve_vcmp<mve_cmp_op>q_n_<mode>): Likewise.
(mve_vcmp<mve_cmp_op>q_n_f<mode>): Likewise.
(mve_vcmpcsq_<mode>): Likewise.
(mve_vcmpcsq_m_n_u<mode>): Likewise.
(mve_vcmpcsq_m_u<mode>): Likewise.
(mve_vcmpcsq_n_<mode>): Likewise.
(mve_vcmpeqq_<mode>): Likewise.
(mve_vcmpeqq_f<mode>): Likewise.
(mve_vcmpeqq_m_<supf><mode>): Likewise.
(mve_vcmpeqq_m_f<mode>): Likewise.
(mve_vcmpeqq_m_n_<supf><mode>): Likewise.
(mve_vcmpeqq_m_n_f<mode>): Likewise.
(mve_vcmpeqq_n_<mode>): Likewise.
(mve_vcmpeqq_n_f<mode>): Likewise.
(mve_vcmpgeq_<mode>): Likewise.
(mve_vcmpgeq_f<mode>): Likewise.
(mve_vcmpgeq_m_f<mode>): Likewise.
(mve_vcmpgeq_m_n_f<mode>): Likewise.
(mve_vcmpgeq_m_n_s<mode>): Likewise.
(mve_vcmpgeq_m_s<mode>): Likewise.
(mve_vcmpgeq_n_<mode>): Likewise.
(mve_vcmpgeq_n_f<mode>): Likewise.
(mve_vcmpgtq_<mode>): Likewise.
(mve_vcmpgtq_f<mode>): Likewise.
(mve_vcmpgtq_m_f<mode>): Likewise.
(mve_vcmpgtq_m_n_f<mode>): Likewise.
(mve_vcmpgtq_m_n_s<mode>): Likewise.
(mve_vcmpgtq_m_s<mode>): Likewise.
(mve_vcmpgtq_n_<mode>): Likewise.
(mve_vcmpgtq_n_f<mode>): Likewise.
(mve_vcmphiq_<mode>): Likewise.
(mve_vcmphiq_m_n_u<mode>): Likewise.
(mve_vcmphiq_m_u<mode>): Likewise.
(mve_vcmphiq_n_<mode>): Likewise.
(mve_vcmpleq_<mode>): Likewise.
(mve_vcmpleq_f<mode>): Likewise.
(mve_vcmpleq_m_f<mode>): Likewise.
(mve_vcmpleq_m_n_f<mode>): Likewise.
(mve_vcmpleq_m_n_s<mode>): Likewise.
(mve_vcmpleq_m_s<mode>): Likewise.
(mve_vcmpleq_n_<mode>): Likewise.
(mve_vcmpleq_n_f<mode>): Likewise.
(mve_vcmpltq_<mode>): Likewise.
(mve_vcmpltq_f<mode>): Likewise.
(mve_vcmpltq_m_f<mode>): Likewise.
(mve_vcmpltq_m_n_f<mode>): Likewise.
(mve_vcmpltq_m_n_s<mode>): Likewise.
(mve_vcmpltq_m_s<mode>): Likewise.
(mve_vcmpltq_n_<mode>): Likewise.
(mve_vcmpltq_n_f<mode>): Likewise.
(mve_vcmpneq_<mode>): Likewise.
(mve_vcmpneq_f<mode>): Likewise.
(mve_vcmpneq_m_<supf><mode>): Likewise.
(mve_vcmpneq_m_f<mode>): Likewise.
(mve_vcmpneq_m_n_<supf><mode>): Likewise.
(mve_vcmpneq_m_n_f<mode>): Likewise.
(mve_vcmpneq_n_<mode>): Likewise.
(mve_vcmpneq_n_f<mode>): Likewise.
(mve_vcmulq_m_f<mode>): Likewise.
(mve_vcmulq_rot180_m_f<mode>): Likewise.
(mve_vcmulq_rot180<mode>): Likewise.
(mve_vcmulq_rot270_m_f<mode>): Likewise.
(mve_vcmulq_rot270<mode>): Likewise.
(mve_vcmulq_rot90_m_f<mode>): Likewise.
(mve_vcmulq_rot90<mode>): Likewise.
(mve_vcmulq<mode>): Likewise.
(mve_vcmulq<mve_rot><mode>): Likewise.
(mve_vctp<mode1>q_mhi): Likewise.
(mve_vctp<mode1>qhi): Likewise.
(mve_vcvtaq_<supf><mode>): Likewise.
(mve_vcvtaq_m_<supf><mode>): Likewise.
(mve_vcvtbq_f16_f32v8hf): Likewise.
(mve_vcvtbq_f32_f16v4sf): Likewise.
(mve_vcvtbq_m_f16_f32v8hf): Likewise.
(mve_vcvtbq_m_f32_f16v4sf): Likewise.
(mve_vcvtmq_<supf><mode>): Likewise.
(mve_vcvtmq_m_<supf><mode>): Likewise.
(mve_vcvtnq_<supf><mode>): Likewise.
(mve_vcvtnq_m_<supf><mode>): Likewise.
(mve_vcvtpq_<supf><mode>): Likewise.
(mve_vcvtpq_m_<supf><mode>): Likewise.
(mve_vcvtq_from_f_<supf><mode>): Likewise.
(mve_vcvtq_m_from_f_<supf><mode>): Likewise.
(mve_vcvtq_m_n_from_f_<supf><mode>): Likewise.
(mve_vcvtq_m_n_to_f_<supf><mode>): Likewise.
(mve_vcvtq_m_to_f_<supf><mode>): Likewise.
(mve_vcvtq_n_from_f_<supf><mode>): Likewise.
(mve_vcvtq_n_to_f_<supf><mode>): Likewise.
(mve_vcvtq_to_f_<supf><mode>): Likewise.
(mve_vcvttq_f16_f32v8hf): Likewise.
(mve_vcvttq_f32_f16v4sf): Likewise.
(mve_vcvttq_m_f16_f32v8hf): Likewise.
(mve_vcvttq_m_f32_f16v4sf): Likewise.
(mve_vddupq_m_wb_u<mode>_insn): Likewise.
(mve_vddupq_u<mode>_insn): Likewise.
(mve_vdupq_m_n_<supf><mode>): Likewise.
(mve_vdupq_m_n_f<mode>): Likewise.
(mve_vdupq_n_<supf><mode>): Likewise.
(mve_vdupq_n_f<mode>): Likewise.
(mve_vdwdupq_m_wb_u<mode>_insn): Likewise.
(mve_vdwdupq_wb_u<mode>_insn): Likewise.
(mve_veorq_<supf><mode>): Likewise.
(mve_veorq_f<mode>): Likewise.
(mve_veorq_m_<supf><mode>): Likewise.
(mve_veorq_m_f<mode>): Likewise.
(mve_veorq_s<mode>): Likewise.
(mve_veorq_u<mode>): Likewise.
(mve_vfmaq_f<mode>): Likewise.
(mve_vfmaq_m_f<mode>): Likewise.
(mve_vfmaq_m_n_f<mode>): Likewise.
(mve_vfmaq_n_f<mode>): Likewise.
(mve_vfmasq_m_n_f<mode>): Likewise.
(mve_vfmasq_n_f<mode>): Likewise.
(mve_vfmsq_f<mode>): Likewise.
(mve_vfmsq_m_f<mode>): Likewise.
(mve_vhaddq_<supf><mode>): Likewise.
(mve_vhaddq_m_<supf><mode>): Likewise.
(mve_vhaddq_m_n_<supf><mode>): Likewise.
(mve_vhaddq_n_<supf><mode>): Likewise.
(mve_vhcaddq_rot270_m_s<mode>): Likewise.
(mve_vhcaddq_rot270_s<mode>): Likewise.
(mve_vhcaddq_rot90_m_s<mode>): Likewise.
(mve_vhcaddq_rot90_s<mode>): Likewise.
(mve_vhsubq_<supf><mode>): Likewise.
(mve_vhsubq_m_<supf><mode>): Likewise.
(mve_vhsubq_m_n_<supf><mode>): Likewise.
(mve_vhsubq_n_<supf><mode>): Likewise.
(mve_vidupq_m_wb_u<mode>_insn): Likewise.
(mve_vidupq_u<mode>_insn): Likewise.
(mve_viwdupq_m_wb_u<mode>_insn): Likewise.
(mve_viwdupq_wb_u<mode>_insn): Likewise.
(mve_vldrbq_<supf><mode>): Likewise.
(mve_vldrbq_gather_offset_<supf><mode>): Likewise.
(mve_vldrbq_gather_offset_z_<supf><mode>): Likewise.
(mve_vldrbq_z_<supf><mode>): Likewise.
(mve_vldrdq_gather_base_<supf>v2di): Likewise.
(mve_vldrdq_gather_base_wb_<supf>v2di_insn): Likewise.
(mve_vldrdq_gather_base_wb_z_<supf>v2di_insn): Likewise.
(mve_vldrdq_gather_base_z_<supf>v2di): Likewise.
(mve_vldrdq_gather_offset_<supf>v2di): Likewise.
(mve_vldrdq_gather_offset_z_<supf>v2di): Likewise.
(mve_vldrdq_gather_shifted_offset_<supf>v2di): Likewise.
(mve_vldrdq_gather_shifted_offset_z_<supf>v2di): Likewise.
(mve_vldrhq_<supf><mode>): Likewise.
(mve_vldrhq_fv8hf): Likewise.
(mve_vldrhq_gather_offset_<supf><mode>): Likewise.
(mve_vldrhq_gather_offset_fv8hf): Likewise.
(mve_vldrhq_gather_offset_z_<supf><mode>): Likewise.
(mve_vldrhq_gather_offset_z_fv8hf): Likewise.
(mve_vldrhq_gather_shifted_offset_<supf><mode>): Likewise.
(mve_vldrhq_gather_shifted_offset_fv8hf): Likewise.
(mve_vldrhq_gather_shifted_offset_z_<supf><mode>): Likewise.
(mve_vldrhq_gather_shifted_offset_z_fv8hf): Likewise.
(mve_vldrhq_z_<supf><mode>): Likewise.
(mve_vldrhq_z_fv8hf): Likewise.
(mve_vldrwq_<supf>v4si): Likewise.
(mve_vldrwq_fv4sf): Likewise.
(mve_vldrwq_gather_base_<supf>v4si): Likewise.
(mve_vldrwq_gather_base_fv4sf): Likewise.
(mve_vldrwq_gather_base_wb_<supf>v4si_insn): Likewise.
(mve_vldrwq_gather_base_wb_fv4sf_insn): Likewise.
(mve_vldrwq_gather_base_wb_z_<supf>v4si_insn): Likewise.
(mve_vldrwq_gather_base_wb_z_fv4sf_insn): Likewise.
(mve_vldrwq_gather_base_z_<supf>v4si): Likewise.
(mve_vldrwq_gather_base_z_fv4sf): Likewise.
(mve_vldrwq_gather_offset_<supf>v4si): Likewise.
(mve_vldrwq_gather_offset_fv4sf): Likewise.
(mve_vldrwq_gather_offset_z_<supf>v4si): Likewise.
(mve_vldrwq_gather_offset_z_fv4sf): Likewise.
(mve_vldrwq_gather_shifted_offset_<supf>v4si): Likewise.
(mve_vldrwq_gather_shifted_offset_fv4sf): Likewise.
(mve_vldrwq_gather_shifted_offset_z_<supf>v4si): Likewise.
(mve_vldrwq_gather_shifted_offset_z_fv4sf): Likewise.
(mve_vldrwq_z_<supf>v4si): Likewise.
(mve_vldrwq_z_fv4sf): Likewise.
(mve_vmaxaq_m_s<mode>): Likewise.
(mve_vmaxaq_s<mode>): Likewise.
(mve_vmaxavq_p_s<mode>): Likewise.
(mve_vmaxavq_s<mode>): Likewise.
(mve_vmaxnmaq_f<mode>): Likewise.
(mve_vmaxnmaq_m_f<mode>): Likewise.
(mve_vmaxnmavq_f<mode>): Likewise.
(mve_vmaxnmavq_p_f<mode>): Likewise.
(mve_vmaxnmq_f<mode>): Likewise.
(mve_vmaxnmq_m_f<mode>): Likewise.
(mve_vmaxnmvq_f<mode>): Likewise.
(mve_vmaxnmvq_p_f<mode>): Likewise.
(mve_vmaxq_<supf><mode>): Likewise.
(mve_vmaxq_m_<supf><mode>): Likewise.
(mve_vmaxq_s<mode>): Likewise.
(mve_vmaxq_u<mode>): Likewise.
(mve_vmaxvq_<supf><mode>): Likewise.
(mve_vmaxvq_p_<supf><mode>): Likewise.
(mve_vminaq_m_s<mode>): Likewise.
(mve_vminaq_s<mode>): Likewise.
(mve_vminavq_p_s<mode>): Likewise.
(mve_vminavq_s<mode>): Likewise.
(mve_vminnmaq_f<mode>): Likewise.
(mve_vminnmaq_m_f<mode>): Likewise.
(mve_vminnmavq_f<mode>): Likewise.
(mve_vminnmavq_p_f<mode>): Likewise.
(mve_vminnmq_f<mode>): Likewise.
(mve_vminnmq_m_f<mode>): Likewise.
(mve_vminnmvq_f<mode>): Likewise.
(mve_vminnmvq_p_f<mode>): Likewise.
(mve_vminq_<supf><mode>): Likewise.
(mve_vminq_m_<supf><mode>): Likewise.
(mve_vminq_s<mode>): Likewise.
(mve_vminq_u<mode>): Likewise.
(mve_vminvq_<supf><mode>): Likewise.
(mve_vminvq_p_<supf><mode>): Likewise.
(mve_vmladavaq_<supf><mode>): Likewise.
(mve_vmladavaq_p_<supf><mode>): Likewise.
(mve_vmladavaxq_p_s<mode>): Likewise.
(mve_vmladavaxq_s<mode>): Likewise.
(mve_vmladavq_<supf><mode>): Likewise.
(mve_vmladavq_p_<supf><mode>): Likewise.
(mve_vmladavxq_p_s<mode>): Likewise.
(mve_vmladavxq_s<mode>): Likewise.
(mve_vmlaldavaq_<supf><mode>): Likewise.
(mve_vmlaldavaq_p_<supf><mode>): Likewise.
(mve_vmlaldavaxq_<supf><mode>): Likewise.
(mve_vmlaldavaxq_p_<supf><mode>): Likewise.
(mve_vmlaldavaxq_s<mode>): Likewise.
(mve_vmlaldavq_<supf><mode>): Likewise.
(mve_vmlaldavq_p_<supf><mode>): Likewise.
(mve_vmlaldavxq_p_s<mode>): Likewise.
(mve_vmlaldavxq_s<mode>): Likewise.
(mve_vmlaq_m_n_<supf><mode>): Likewise.
(mve_vmlaq_n_<supf><mode>): Likewise.
(mve_vmlasq_m_n_<supf><mode>): Likewise.
(mve_vmlasq_n_<supf><mode>): Likewise.
(mve_vmlsdavaq_p_s<mode>): Likewise.
(mve_vmlsdavaq_s<mode>): Likewise.
(mve_vmlsdavaxq_p_s<mode>): Likewise.
(mve_vmlsdavaxq_s<mode>): Likewise.
(mve_vmlsdavq_p_s<mode>): Likewise.
(mve_vmlsdavq_s<mode>): Likewise.
(mve_vmlsdavxq_p_s<mode>): Likewise.
(mve_vmlsdavxq_s<mode>): Likewise.
(mve_vmlsldavaq_p_s<mode>): Likewise.
(mve_vmlsldavaq_s<mode>): Likewise.
(mve_vmlsldavaxq_p_s<mode>): Likewise.
(mve_vmlsldavaxq_s<mode>): Likewise.
(mve_vmlsldavq_p_s<mode>): Likewise.
(mve_vmlsldavq_s<mode>): Likewise.
(mve_vmlsldavxq_p_s<mode>): Likewise.
(mve_vmlsldavxq_s<mode>): Likewise.
(mve_vmovlbq_<supf><mode>): Likewise.
(mve_vmovlbq_m_<supf><mode>): Likewise.
(mve_vmovltq_<supf><mode>): Likewise.
(mve_vmovltq_m_<supf><mode>): Likewise.
(mve_vmovnbq_<supf><mode>): Likewise.
(mve_vmovnbq_m_<supf><mode>): Likewise.
(mve_vmovntq_<supf><mode>): Likewise.
(mve_vmovntq_m_<supf><mode>): Likewise.
(mve_vmulhq_<supf><mode>): Likewise.
(mve_vmulhq_m_<supf><mode>): Likewise.
(mve_vmullbq_int_<supf><mode>): Likewise.
(mve_vmullbq_int_m_<supf><mode>): Likewise.
(mve_vmullbq_poly_m_p<mode>): Likewise.
(mve_vmullbq_poly_p<mode>): Likewise.
(mve_vmulltq_int_<supf><mode>): Likewise.
(mve_vmulltq_int_m_<supf><mode>): Likewise.
(mve_vmulltq_poly_m_p<mode>): Likewise.
(mve_vmulltq_poly_p<mode>): Likewise.
(mve_vmulq_<supf><mode>): Likewise.
(mve_vmulq_f<mode>): Likewise.
(mve_vmulq_m_<supf><mode>): Likewise.
(mve_vmulq_m_f<mode>): Likewise.
(mve_vmulq_m_n_<supf><mode>): Likewise.
(mve_vmulq_m_n_f<mode>): Likewise.
(mve_vmulq_n_<supf><mode>): Likewise.
(mve_vmulq_n_f<mode>): Likewise.
(mve_vmvnq_<supf><mode>): Likewise.
(mve_vmvnq_m_<supf><mode>): Likewise.
(mve_vmvnq_m_n_<supf><mode>): Likewise.
(mve_vmvnq_n_<supf><mode>): Likewise.
(mve_vmvnq_s<mode>): Likewise.
(mve_vmvnq_u<mode>): Likewise.
(mve_vnegq_f<mode>): Likewise.
(mve_vnegq_m_f<mode>): Likewise.
(mve_vnegq_m_s<mode>): Likewise.
(mve_vnegq_s<mode>): Likewise.
(mve_vornq_<supf><mode>): Likewise.
(mve_vornq_f<mode>): Likewise.
(mve_vornq_m_<supf><mode>): Likewise.
(mve_vornq_m_f<mode>): Likewise.
(mve_vornq_s<mode>): Likewise.
(mve_vornq_u<mode>): Likewise.
(mve_vorrq_<supf><mode>): Likewise.
(mve_vorrq_f<mode>): Likewise.
(mve_vorrq_m_<supf><mode>): Likewise.
(mve_vorrq_m_f<mode>): Likewise.
(mve_vorrq_m_n_<supf><mode>): Likewise.
(mve_vorrq_n_<supf><mode>): Likewise.
(mve_vorrq_s<mode>): Likewise.
(mve_vqabsq_m_s<mode>): Likewise.
(mve_vqabsq_s<mode>): Likewise.
(mve_vqaddq_<supf><mode>): Likewise.
(mve_vqaddq_m_<supf><mode>): Likewise.
(mve_vqaddq_m_n_<supf><mode>): Likewise.
(mve_vqaddq_n_<supf><mode>): Likewise.
(mve_vqdmladhq_m_s<mode>): Likewise.
(mve_vqdmladhq_s<mode>): Likewise.
(mve_vqdmladhxq_m_s<mode>): Likewise.
(mve_vqdmladhxq_s<mode>): Likewise.
(mve_vqdmlahq_m_n_s<mode>): Likewise.
(mve_vqdmlahq_n_<supf><mode>): Likewise.
(mve_vqdmlahq_n_s<mode>): Likewise.
(mve_vqdmlashq_m_n_s<mode>): Likewise.
(mve_vqdmlashq_n_<supf><mode>): Likewise.
(mve_vqdmlashq_n_s<mode>): Likewise.
(mve_vqdmlsdhq_m_s<mode>): Likewise.
(mve_vqdmlsdhq_s<mode>): Likewise.
(mve_vqdmlsdhxq_m_s<mode>): Likewise.
(mve_vqdmlsdhxq_s<mode>): Likewise.
(mve_vqdmulhq_m_n_s<mode>): Likewise.
(mve_vqdmulhq_m_s<mode>): Likewise.
(mve_vqdmulhq_n_s<mode>): Likewise.
(mve_vqdmulhq_s<mode>): Likewise.
(mve_vqdmullbq_m_n_s<mode>): Likewise.
(mve_vqdmullbq_m_s<mode>): Likewise.
(mve_vqdmullbq_n_s<mode>): Likewise.
(mve_vqdmullbq_s<mode>): Likewise.
(mve_vqdmulltq_m_n_s<mode>): Likewise.
(mve_vqdmulltq_m_s<mode>): Likewise.
(mve_vqdmulltq_n_s<mode>): Likewise.
(mve_vqdmulltq_s<mode>): Likewise.
(mve_vqmovnbq_<supf><mode>): Likewise.
(mve_vqmovnbq_m_<supf><mode>): Likewise.
(mve_vqmovntq_<supf><mode>): Likewise.
(mve_vqmovntq_m_<supf><mode>): Likewise.
(mve_vqmovunbq_m_s<mode>): Likewise.
(mve_vqmovunbq_s<mode>): Likewise.
(mve_vqmovuntq_m_s<mode>): Likewise.
(mve_vqmovuntq_s<mode>): Likewise.
(mve_vqnegq_m_s<mode>): Likewise.
(mve_vqnegq_s<mode>): Likewise.
(mve_vqrdmladhq_m_s<mode>): Likewise.
(mve_vqrdmladhq_s<mode>): Likewise.
(mve_vqrdmladhxq_m_s<mode>): Likewise.
(mve_vqrdmladhxq_s<mode>): Likewise.
(mve_vqrdmlahq_m_n_s<mode>): Likewise.
(mve_vqrdmlahq_n_<supf><mode>): Likewise.
(mve_vqrdmlahq_n_s<mode>): Likewise.
(mve_vqrdmlashq_m_n_s<mode>): Likewise.
(mve_vqrdmlashq_n_<supf><mode>): Likewise.
(mve_vqrdmlashq_n_s<mode>): Likewise.
(mve_vqrdmlsdhq_m_s<mode>): Likewise.
(mve_vqrdmlsdhq_s<mode>): Likewise.
(mve_vqrdmlsdhxq_m_s<mode>): Likewise.
(mve_vqrdmlsdhxq_s<mode>): Likewise.
(mve_vqrdmulhq_m_n_s<mode>): Likewise.
(mve_vqrdmulhq_m_s<mode>): Likewise.
(mve_vqrdmulhq_n_s<mode>): Likewise.
(mve_vqrdmulhq_s<mode>): Likewise.
(mve_vqrshlq_<supf><mode>): Likewise.
(mve_vqrshlq_m_<supf><mode>): Likewise.
(mve_vqrshlq_m_n_<supf><mode>): Likewise.
(mve_vqrshlq_n_<supf><mode>): Likewise.
(mve_vqrshrnbq_m_n_<supf><mode>): Likewise.
(mve_vqrshrnbq_n_<supf><mode>): Likewise.
(mve_vqrshrntq_m_n_<supf><mode>): Likewise.
(mve_vqrshrntq_n_<supf><mode>): Likewise.
(mve_vqrshrunbq_m_n_s<mode>): Likewise.
(mve_vqrshrunbq_n_s<mode>): Likewise.
(mve_vqrshruntq_m_n_s<mode>): Likewise.
(mve_vqrshruntq_n_s<mode>): Likewise.
(mve_vqshlq_<supf><mode>): Likewise.
(mve_vqshlq_m_<supf><mode>): Likewise.
(mve_vqshlq_m_n_<supf><mode>): Likewise.
(mve_vqshlq_m_r_<supf><mode>): Likewise.
(mve_vqshlq_n_<supf><mode>): Likewise.
(mve_vqshlq_r_<supf><mode>): Likewise.
(mve_vqshluq_m_n_s<mode>): Likewise.
(mve_vqshluq_n_s<mode>): Likewise.
(mve_vqshrnbq_m_n_<supf><mode>): Likewise.
(mve_vqshrnbq_n_<supf><mode>): Likewise.
(mve_vqshrntq_m_n_<supf><mode>): Likewise.
(mve_vqshrntq_n_<supf><mode>): Likewise.
(mve_vqshrunbq_m_n_s<mode>): Likewise.
(mve_vqshrunbq_n_s<mode>): Likewise.
(mve_vqshruntq_m_n_s<mode>): Likewise.
(mve_vqshruntq_n_s<mode>): Likewise.
(mve_vqsubq_<supf><mode>): Likewise.
(mve_vqsubq_m_<supf><mode>): Likewise.
(mve_vqsubq_m_n_<supf><mode>): Likewise.
(mve_vqsubq_n_<supf><mode>): Likewise.
(mve_vrev16q_<supf>v16qi): Likewise.
(mve_vrev16q_m_<supf>v16qi): Likewise.
(mve_vrev32q_<supf><mode>): Likewise.
(mve_vrev32q_fv8hf): Likewise.
(mve_vrev32q_m_<supf><mode>): Likewise.
(mve_vrev32q_m_fv8hf): Likewise.
(mve_vrev64q_<supf><mode>): Likewise.
(mve_vrev64q_f<mode>): Likewise.
(mve_vrev64q_m_<supf><mode>): Likewise.
(mve_vrev64q_m_f<mode>): Likewise.
(mve_vrhaddq_<supf><mode>): Likewise.
(mve_vrhaddq_m_<supf><mode>): Likewise.
(mve_vrmlaldavhaq_<supf>v4si): Likewise.
(mve_vrmlaldavhaq_p_sv4si): Likewise.
(mve_vrmlaldavhaq_p_uv4si): Likewise.
(mve_vrmlaldavhaq_sv4si): Likewise.
(mve_vrmlaldavhaq_uv4si): Likewise.
(mve_vrmlaldavhaxq_p_sv4si): Likewise.
(mve_vrmlaldavhaxq_sv4si): Likewise.
(mve_vrmlaldavhq_<supf>v4si): Likewise.
(mve_vrmlaldavhq_p_<supf>v4si): Likewise.
(mve_vrmlaldavhxq_p_sv4si): Likewise.
(mve_vrmlaldavhxq_sv4si): Likewise.
(mve_vrmlsldavhaq_p_sv4si): Likewise.
(mve_vrmlsldavhaq_sv4si): Likewise.
(mve_vrmlsldavhaxq_p_sv4si): Likewise.
(mve_vrmlsldavhaxq_sv4si): Likewise.
(mve_vrmlsldavhq_p_sv4si): Likewise.
(mve_vrmlsldavhq_sv4si): Likewise.
(mve_vrmlsldavhxq_p_sv4si): Likewise.
(mve_vrmlsldavhxq_sv4si): Likewise.
(mve_vrmulhq_<supf><mode>): Likewise.
(mve_vrmulhq_m_<supf><mode>): Likewise.
(mve_vrndaq_f<mode>): Likewise.
(mve_vrndaq_m_f<mode>): Likewise.
(mve_vrndmq_f<mode>): Likewise.
(mve_vrndmq_m_f<mode>): Likewise.
(mve_vrndnq_f<mode>): Likewise.
(mve_vrndnq_m_f<mode>): Likewise.
(mve_vrndpq_f<mode>): Likewise.
(mve_vrndpq_m_f<mode>): Likewise.
(mve_vrndq_f<mode>): Likewise.
(mve_vrndq_m_f<mode>): Likewise.
(mve_vrndxq_f<mode>): Likewise.
(mve_vrndxq_m_f<mode>): Likewise.
(mve_vrshlq_<supf><mode>): Likewise.
(mve_vrshlq_m_<supf><mode>): Likewise.
(mve_vrshlq_m_n_<supf><mode>): Likewise.
(mve_vrshlq_n_<supf><mode>): Likewise.
(mve_vrshrnbq_m_n_<supf><mode>): Likewise.
(mve_vrshrnbq_n_<supf><mode>): Likewise.
(mve_vrshrntq_m_n_<supf><mode>): Likewise.
(mve_vrshrntq_n_<supf><mode>): Likewise.
(mve_vrshrq_m_n_<supf><mode>): Likewise.
(mve_vrshrq_n_<supf><mode>): Likewise.
(mve_vsbciq_<supf>v4si): Likewise.
(mve_vsbciq_m_<supf>v4si): Likewise.
(mve_vsbcq_<supf>v4si): Likewise.
(mve_vsbcq_m_<supf>v4si): Likewise.
(mve_vshlcq_<supf><mode>): Likewise.
(mve_vshlcq_m_<supf><mode>): Likewise.
(mve_vshllbq_m_n_<supf><mode>): Likewise.
(mve_vshllbq_n_<supf><mode>): Likewise.
(mve_vshlltq_m_n_<supf><mode>): Likewise.
(mve_vshlltq_n_<supf><mode>): Likewise.
(mve_vshlq_<supf><mode>): Likewise.
(mve_vshlq_m_<supf><mode>): Likewise.
(mve_vshlq_m_n_<supf><mode>): Likewise.
(mve_vshlq_m_r_<supf><mode>): Likewise.
(mve_vshlq_n_<supf><mode>): Likewise.
(mve_vshlq_r_<supf><mode>): Likewise.
(mve_vshrnbq_m_n_<supf><mode>): Likewise.
(mve_vshrnbq_n_<supf><mode>): Likewise.
(mve_vshrntq_m_n_<supf><mode>): Likewise.
(mve_vshrntq_n_<supf><mode>): Likewise.
(mve_vshrq_m_n_<supf><mode>): Likewise.
(mve_vshrq_n_<supf><mode>): Likewise.
(mve_vsliq_m_n_<supf><mode>): Likewise.
(mve_vsliq_n_<supf><mode>): Likewise.
(mve_vsriq_m_n_<supf><mode>): Likewise.
(mve_vsriq_n_<supf><mode>): Likewise.
(mve_vstrbq_<supf><mode>): Likewise.
(mve_vstrbq_p_<supf><mode>): Likewise.
(mve_vstrbq_scatter_offset_<supf><mode>_insn): Likewise.
(mve_vstrbq_scatter_offset_p_<supf><mode>_insn): Likewise.
(mve_vstrdq_scatter_base_<supf>v2di): Likewise.
(mve_vstrdq_scatter_base_p_<supf>v2di): Likewise.
(mve_vstrdq_scatter_base_wb_<supf>v2di): Likewise.
(mve_vstrdq_scatter_base_wb_p_<supf>v2di): Likewise.
(mve_vstrdq_scatter_offset_<supf>v2di_insn): Likewise.
(mve_vstrdq_scatter_offset_p_<supf>v2di_insn): Likewise.
(mve_vstrdq_scatter_shifted_offset_<supf>v2di_insn): Likewise.
(mve_vstrdq_scatter_shifted_offset_p_<supf>v2di_insn): Likewise.
(mve_vstrhq_<supf><mode>): Likewise.
(mve_vstrhq_fv8hf): Likewise.
(mve_vstrhq_p_<supf><mode>): Likewise.
(mve_vstrhq_p_fv8hf): Likewise.
(mve_vstrhq_scatter_offset_<supf><mode>_insn): Likewise.
(mve_vstrhq_scatter_offset_fv8hf_insn): Likewise.
(mve_vstrhq_scatter_offset_p_<supf><mode>_insn): Likewise.
(mve_vstrhq_scatter_offset_p_fv8hf_insn): Likewise.
(mve_vstrhq_scatter_shifted_offset_<supf><mode>_insn): Likewise.
(mve_vstrhq_scatter_shifted_offset_fv8hf_insn): Likewise.
(mve_vstrhq_scatter_shifted_offset_p_<supf><mode>_insn): Likewise.
(mve_vstrhq_scatter_shifted_offset_p_fv8hf_insn): Likewise.
(mve_vstrwq_<supf>v4si): Likewise.
(mve_vstrwq_fv4sf): Likewise.
(mve_vstrwq_p_<supf>v4si): Likewise.
(mve_vstrwq_p_fv4sf): Likewise.
(mve_vstrwq_scatter_base_<supf>v4si): Likewise.
(mve_vstrwq_scatter_base_fv4sf): Likewise.
(mve_vstrwq_scatter_base_p_<supf>v4si): Likewise.
(mve_vstrwq_scatter_base_p_fv4sf): Likewise.
(mve_vstrwq_scatter_base_wb_<supf>v4si): Likewise.
(mve_vstrwq_scatter_base_wb_fv4sf): Likewise.
(mve_vstrwq_scatter_base_wb_p_<supf>v4si): Likewise.
(mve_vstrwq_scatter_base_wb_p_fv4sf): Likewise.
(mve_vstrwq_scatter_offset_<supf>v4si_insn): Likewise.
(mve_vstrwq_scatter_offset_fv4sf_insn): Likewise.
(mve_vstrwq_scatter_offset_p_<supf>v4si_insn): Likewise.
(mve_vstrwq_scatter_offset_p_fv4sf_insn): Likewise.
(mve_vstrwq_scatter_shifted_offset_<supf>v4si_insn): Likewise.
(mve_vstrwq_scatter_shifted_offset_fv4sf_insn): Likewise.
(mve_vstrwq_scatter_shifted_offset_p_<supf>v4si_insn): Likewise.
(mve_vstrwq_scatter_shifted_offset_p_fv4sf_insn): Likewise.
(mve_vsubq_<supf><mode>): Likewise.
(mve_vsubq_f<mode>): Likewise.
(mve_vsubq_m_<supf><mode>): Likewise.
(mve_vsubq_m_f<mode>): Likewise.
(mve_vsubq_m_n_<supf><mode>): Likewise.
(mve_vsubq_m_n_f<mode>): Likewise.
(mve_vsubq_n_<supf><mode>): Likewise.
(mve_vsubq_n_f<mode>): Likewise.
diff --git a/gcc/config/arm/arm.h b/gcc/config/arm/arm.h
index a9c2752c0ea..0b0e8620717 100644
--- a/gcc/config/arm/arm.h
+++ b/gcc/config/arm/arm.h
@@ -2375,6 +2375,21 @@ extern int making_const_table;
else if (TARGET_THUMB1) \
thumb1_final_prescan_insn (INSN)
+/* These defines are useful to refer to the value of the mve_unpredicated_insn
+ insn attribute. Note that, because these use the get_attr_* function, these
+ will change recog_data if (INSN) isn't current_insn. */
+#define MVE_VPT_PREDICABLE_INSN_P(INSN) \
+ (recog_memoized (INSN) >= 0 \
+ && get_attr_mve_unpredicated_insn (INSN) != 0) \
+
+#define MVE_VPT_PREDICATED_INSN_P(INSN) \
+ (MVE_VPT_PREDICABLE_INSN_P (INSN) \
+ && recog_memoized (INSN) != get_attr_mve_unpredicated_insn (INSN)) \
+
+#define MVE_VPT_UNPREDICATED_INSN_P(INSN) \
+ (MVE_VPT_PREDICABLE_INSN_P (INSN) \
+ && recog_memoized (INSN) == get_attr_mve_unpredicated_insn (INSN)) \
+
#define ARM_SIGN_EXTEND(x) ((HOST_WIDE_INT) \
(HOST_BITS_PER_WIDE_INT <= 32 ? (unsigned HOST_WIDE_INT) (x) \
: ((((unsigned HOST_WIDE_INT)(x)) & (unsigned HOST_WIDE_INT) 0xffffffff) |\
diff --git a/gcc/config/arm/arm.md b/gcc/config/arm/arm.md
index 07eaf06cdea..8efdebecc3c 100644
--- a/gcc/config/arm/arm.md
+++ b/gcc/config/arm/arm.md
@@ -124,6 +124,8 @@
; and not all ARM insns do.
(define_attr "predicated" "yes,no" (const_string "no"))
+(define_attr "mve_unpredicated_insn" "" (const_int 0))
+
; LENGTH of an instruction (in bytes)
(define_attr "length" ""
(const_int 4))
diff --git a/gcc/config/arm/iterators.md b/gcc/config/arm/iterators.md
index a9803538101..5ea2d9e8668 100644
--- a/gcc/config/arm/iterators.md
+++ b/gcc/config/arm/iterators.md
@@ -2305,6 +2305,7 @@
(define_int_attr mmla_sfx [(UNSPEC_MATMUL_S "s8") (UNSPEC_MATMUL_U "u8")
(UNSPEC_MATMUL_US "s8")])
+
;;MVE int attribute.
(define_int_attr supf [(VCVTQ_TO_F_S "s") (VCVTQ_TO_F_U "u") (VREV16Q_S "s")
(VREV16Q_U "u") (VMVNQ_N_S "s") (VMVNQ_N_U "u")
diff --git a/gcc/config/arm/mve.md b/gcc/config/arm/mve.md
index 366cec0812a..44a04b86cb5 100644
--- a/gcc/config/arm/mve.md
+++ b/gcc/config/arm/mve.md
@@ -17,7 +17,7 @@
;; along with GCC; see the file COPYING3. If not see
;; <http://www.gnu.org/licenses/>.
-(define_insn "*mve_mov<mode>"
+(define_insn "mve_mov<mode>"
[(set (match_operand:MVE_types 0 "nonimmediate_operand" "=w,w,r,w , w, r,Ux,w")
(match_operand:MVE_types 1 "general_operand" " w,r,w,DnDm,UxUi,r,w, Ul"))]
"TARGET_HAVE_MVE || TARGET_HAVE_MVE_FLOAT"
@@ -81,18 +81,27 @@
return "";
}
}
- [(set_attr "type" "mve_move,mve_move,mve_move,mve_move,mve_load,multiple,mve_store,mve_load")
+ [(set_attr_alternative "mve_unpredicated_insn" [(symbol_ref "CODE_FOR_mve_mov<mode>")
+ (symbol_ref "CODE_FOR_nothing")
+ (symbol_ref "CODE_FOR_nothing")
+ (symbol_ref "CODE_FOR_mve_mov<mode>")
+ (symbol_ref "CODE_FOR_mve_mov<mode>")
+ (symbol_ref "CODE_FOR_nothing")
+ (symbol_ref "CODE_FOR_mve_mov<mode>")
+ (symbol_ref "CODE_FOR_nothing")])
+ (set_attr "type" "mve_move,mve_move,mve_move,mve_move,mve_load,multiple,mve_store,mve_load")
(set_attr "length" "4,8,8,4,4,8,4,8")
(set_attr "thumb2_pool_range" "*,*,*,*,1018,*,*,*")
(set_attr "neg_pool_range" "*,*,*,*,996,*,*,*")])
-(define_insn "*mve_vdup<mode>"
+(define_insn "mve_vdup<mode>"
[(set (match_operand:MVE_vecs 0 "s_register_operand" "=w")
(vec_duplicate:MVE_vecs
(match_operand:<V_elem> 1 "s_register_operand" "r")))]
"TARGET_HAVE_MVE || TARGET_HAVE_MVE_FLOAT"
"vdup.<V_sz_elem>\t%q0, %1"
- [(set_attr "length" "4")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vdup<mode>"))
+ (set_attr "length" "4")
(set_attr "type" "mve_move")])
;;
@@ -145,7 +154,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_mnemo>.f%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -159,7 +169,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -173,7 +184,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"v<absneg_str>.f%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_v<absneg_str>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -187,7 +199,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.%#<V_sz_elem>\t%q0, %1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -201,7 +214,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
;; [vcvttq_f32_f16])
@@ -214,7 +228,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvtt.f32.f16\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvttq_f32_f16v4sf"))
+ (set_attr "type" "mve_move")
])
;;
@@ -228,7 +243,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvtb.f32.f16\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtbq_f32_f16v4sf"))
+ (set_attr "type" "mve_move")
])
;;
@@ -242,7 +258,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvt.f%#<V_sz_elem>.<supf>%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtq_to_f_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -256,7 +273,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -270,7 +288,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtq_from_f_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -284,7 +303,8 @@
]
"TARGET_HAVE_MVE"
"v<absneg_str>.s%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_v<absneg_str>q_s<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -297,7 +317,8 @@
]
"TARGET_HAVE_MVE"
"vmvn\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmvnq_u<mode>"))
+ (set_attr "type" "mve_move")
])
(define_expand "mve_vmvnq_s<mode>"
[
@@ -318,7 +339,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.%#<V_sz_elem>\t%q0, %1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -331,7 +353,8 @@
]
"TARGET_HAVE_MVE"
"vclz.i%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vclzq_s<mode>"))
+ (set_attr "type" "mve_move")
])
(define_expand "mve_vclzq_u<mode>"
[
@@ -354,7 +377,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -368,7 +392,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -382,7 +407,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -397,7 +423,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -411,7 +438,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvtp.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtpq_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -425,7 +453,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvtn.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtnq_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -439,7 +468,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvtm.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtmq_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -453,7 +483,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvta.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtaq_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -467,7 +498,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.i%#<V_sz_elem>\t%q0, %1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -481,7 +513,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -495,7 +528,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>32\t%Q0, %R0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf>v4si"))
+ (set_attr "type" "mve_move")
])
;;
@@ -509,7 +543,8 @@
]
"TARGET_HAVE_MVE"
"vctp.<MVE_vctp>\t%1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vctp<MVE_vctp>q<MVE_vpred>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -523,7 +558,8 @@
]
"TARGET_HAVE_MVE"
"vpnot"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vpnotv16bi"))
+ (set_attr "type" "mve_move")
])
;;
@@ -538,7 +574,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.<V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -553,7 +590,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvt.f<V_sz_elem>.<supf><V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtq_n_to_f_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;; [vcreateq_f])
@@ -599,7 +637,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf><V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;; Versions that take constant vectors as operand 2 (with all elements
@@ -617,7 +656,8 @@
VALID_NEON_QREG_MODE (<MODE>mode),
true);
}
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vshrq_n_s<mode>_imm"))
+ (set_attr "type" "mve_move")
])
(define_insn "mve_vshrq_n_u<mode>_imm"
[
@@ -632,7 +672,8 @@
VALID_NEON_QREG_MODE (<MODE>mode),
true);
}
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vshrq_n_u<mode>_imm"))
+ (set_attr "type" "mve_move")
])
;;
@@ -647,7 +688,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvt.<supf><V_sz_elem>.f<V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtq_n_from_f_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -662,8 +704,9 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>32\t%Q0, %R0, %q1"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf>v4si"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")])
;;
;; [vcmpneq_, vcmpcsq_, vcmpeqq_, vcmpgeq_, vcmpgtq_, vcmphiq_, vcmpleq_, vcmpltq_])
@@ -676,7 +719,8 @@
]
"TARGET_HAVE_MVE"
"vcmp.<mve_cmp_type>%#<V_sz_elem>\t<mve_cmp_op>, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmp<mve_cmp_op>q_<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -691,7 +735,8 @@
]
"TARGET_HAVE_MVE"
"vcmp.<mve_cmp_type>%#<V_sz_elem> <mve_cmp_op>, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmp<mve_cmp_op>q_n_<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -722,7 +767,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -739,7 +785,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.i%#<V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -754,7 +801,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -769,7 +817,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -789,8 +838,11 @@
"@
vand\t%q0, %q1, %q2
* return neon_output_logic_immediate (\"vand\", &operands[2], <MODE>mode, 1, VALID_NEON_QREG_MODE (<MODE>mode));"
- [(set_attr "type" "mve_move")
+ [(set_attr_alternative "mve_unpredicated_insn" [(symbol_ref "CODE_FOR_mve_vandq_u<mode>")
+ (symbol_ref "CODE_FOR_nothing")])
+ (set_attr "type" "mve_move")
])
+
(define_expand "mve_vandq_s<mode>"
[
(set (match_operand:MVE_2 0 "s_register_operand")
@@ -811,7 +863,8 @@
]
"TARGET_HAVE_MVE"
"vbic\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vbicq_u<mode>"))
+ (set_attr "type" "mve_move")
])
(define_expand "mve_vbicq_s<mode>"
@@ -835,7 +888,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.%#<V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -853,7 +907,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<isu>%#<V_sz_elem>\t%q0, %q1, %q2, #<rot>"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q<mve_rot>_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;; Auto vectorizer pattern for int vcadd
@@ -876,7 +931,8 @@
]
"TARGET_HAVE_MVE"
"veor\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_veorq_u<mode>"))
+ (set_attr "type" "mve_move")
])
(define_expand "mve_veorq_s<mode>"
[
@@ -904,7 +960,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -920,7 +977,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.s%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -935,7 +993,8 @@
]
"TARGET_HAVE_MVE"
"<max_min_su_str>.<max_min_supf>%#<V_sz_elem>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<max_min_su_str>q_<max_min_supf><mode>"))
+ (set_attr "type" "mve_move")
])
@@ -954,7 +1013,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -972,7 +1032,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -988,7 +1049,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<isu>%#<V_sz_elem>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_int_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1004,7 +1066,8 @@
]
"TARGET_HAVE_MVE"
"<mve_addsubmul>.i%#<V_sz_elem>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_addsubmul>q<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1018,7 +1081,8 @@
]
"TARGET_HAVE_MVE"
"vorn\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vornq_s<mode>"))
+ (set_attr "type" "mve_move")
])
(define_expand "mve_vornq_u<mode>"
@@ -1047,7 +1111,8 @@
"@
vorr\t%q0, %q1, %q2
* return neon_output_logic_immediate (\"vorr\", &operands[2], <MODE>mode, 0, VALID_NEON_QREG_MODE (<MODE>mode));"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vorrq_s<mode>"))
+ (set_attr "type" "mve_move")
])
(define_expand "mve_vorrq_u<mode>"
[
@@ -1071,7 +1136,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1087,7 +1153,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1103,7 +1170,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_r_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1118,7 +1186,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1133,7 +1202,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.f%#<V_sz_elem>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1148,7 +1218,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>32\t%Q0, %R0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf>v4si"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1165,7 +1236,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.f%#<V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1179,7 +1251,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vand\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vandq_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1193,7 +1266,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vbic\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vbicq_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1209,7 +1283,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.f%#<V_sz_elem>\t%q0, %q1, %q2, #<rot>"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q<mve_rot>_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1223,7 +1298,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcmp.f%#<V_sz_elem> <mve_cmp_op>, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmp<mve_cmp_op>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1238,7 +1314,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcmp.f%#<V_sz_elem> <mve_cmp_op>, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmp<mve_cmp_op>q_n_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1253,8 +1330,10 @@
]
"TARGET_HAVE_MVE"
"vpst\;vctpt.<MVE_vctp>\t%1"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vctp<MVE_vctp>q<MVE_vpred>"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")
+])
;;
;; [vcvtbq_f16_f32])
@@ -1268,7 +1347,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvtb.f16.f32\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtbq_f16_f32v8hf"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1283,7 +1363,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvtt.f16.f32\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvttq_f16_f32v8hf"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1297,7 +1378,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"veor\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_veorq_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1313,7 +1395,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.f%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1331,7 +1414,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.f%#<V_sz_elem>\t%0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1346,7 +1430,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<max_min_f_str>.f%#<V_sz_elem> %q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<max_min_f_str>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1364,7 +1449,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%Q0, %R0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1384,7 +1470,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<isu>%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1400,7 +1487,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_addsubmul>.f%#<V_sz_elem>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_addsubmul>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1414,7 +1502,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vorn\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vornq_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1428,7 +1517,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vorr\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vorrq_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1444,7 +1534,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.i%#<V_sz_elem> %q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1460,7 +1551,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.s%#<V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1476,7 +1568,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.s%#<V_sz_elem>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1494,7 +1587,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>32\t%Q0, %R0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf>v4si"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1510,7 +1604,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1526,7 +1621,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_poly_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1547,8 +1643,9 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmpt.f%#<V_sz_elem>\t<mve_cmp_op1>, %q1, %q2"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmp<mve_cmp_op1>q_f<mode>"))
+ (set_attr "length""8")])
+
;;
;; [vcvtaq_m_u, vcvtaq_m_s])
;;
@@ -1562,8 +1659,10 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtat.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtaq_<supf><mode>"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")])
+
;;
;; [vcvtq_m_to_f_s, vcvtq_m_to_f_u])
;;
@@ -1577,8 +1676,9 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtt.f%#<V_sz_elem>.<supf>%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtq_to_f_<supf><mode>"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")])
;;
;; [vqrshrnbq_n_u, vqrshrnbq_n_s]
@@ -1604,7 +1704,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<isu>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1623,7 +1724,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>32\t%Q0, %R0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf>v4si"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1639,7 +1741,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1685,7 +1788,10 @@
(match_dup 4)]
VSHLCQ))]
"TARGET_HAVE_MVE"
- "vshlc\t%q0, %1, %4")
+ "vshlc\t%q0, %1, %4"
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vshlcq_<supf><mode>"))
+ (set_attr "type" "mve_move")
+])
;;
;; [vabsq_m_s]
@@ -1705,7 +1811,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<isu>%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1721,7 +1828,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1744,7 +1852,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vcmpt.<isu>%#<V_sz_elem>\t<mve_cmp_op1>, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmp<mve_cmp_op1>q_n_<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1767,7 +1876,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vcmpt.<isu>%#<V_sz_elem>\t<mve_cmp_op1>, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmp<mve_cmp_op1>q_<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1783,7 +1893,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.%#<V_sz_elem>\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1800,7 +1911,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.s%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1819,7 +1931,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1838,7 +1951,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1857,7 +1971,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1878,7 +1993,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1894,7 +2010,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1910,7 +2027,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1933,7 +2051,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.s%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1950,7 +2069,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1967,7 +2087,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_r_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1983,7 +2104,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1999,7 +2121,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -2015,7 +2138,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -2038,7 +2162,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_mnemo>t.f%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2054,7 +2179,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>32\t%Q0, %R0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
;; [vcmlaq, vcmlaq_rot90, vcmlaq_rot180, vcmlaq_rot270])
@@ -2072,7 +2198,9 @@
"@
vcmul.f%#<V_sz_elem> %q0, %q2, %q3, #<rot>
vcmla.f%#<V_sz_elem> %q0, %q2, %q3, #<rot>"
- [(set_attr "type" "mve_move")
+ [(set_attr_alternative "mve_unpredicated_insn" [(symbol_ref "CODE_FOR_mve_<mve_insn>q<mve_rot>_f<mode>")
+ (symbol_ref "CODE_FOR_mve_<mve_insn>q<mve_rot>_f<mode>")])
+ (set_attr "type" "mve_move")
])
;;
@@ -2093,7 +2221,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmpt.f%#<V_sz_elem>\t<mve_cmp_op1>, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmp<mve_cmp_op1>q_n_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2109,7 +2238,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtbt.f16.f32\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtbq_f16_f32v8hf"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2125,7 +2255,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtbt.f32.f16\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtbq_f32_f16v4sf"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2141,7 +2272,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvttt.f16.f32\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvttq_f16_f32v8hf"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2157,8 +2289,9 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvttt.f32.f16\t%q0, %q2"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvttq_f32_f16v4sf"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")])
;;
;; [vdupq_m_n_f])
@@ -2173,7 +2306,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t.%#<V_sz_elem>\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2190,7 +2324,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.f%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -2207,7 +2342,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.f%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -2224,7 +2360,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t.f%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2243,7 +2380,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t.f%#<V_sz_elem>\t%0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2262,7 +2400,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%Q0, %R0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -2281,7 +2420,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%Q0, %R0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2298,7 +2438,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2319,7 +2460,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<isu>%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2335,7 +2477,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.i%#<V_sz_elem>\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2352,7 +2495,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.i%#<V_sz_elem>\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2368,7 +2512,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -2384,7 +2529,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t.<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2400,7 +2546,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2416,7 +2563,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t.%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2435,7 +2583,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>32\t%Q0, %R0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2451,7 +2600,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtmt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtmq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2467,7 +2617,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtpt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtpq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2483,7 +2634,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtnt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtnq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2500,7 +2652,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtq_n_from_f_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2516,7 +2669,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2532,8 +2686,9 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtq_from_f_<supf><mode>"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")])
;;
;; [vabavq_p_s, vabavq_p_u])
@@ -2549,7 +2704,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length" "8")])
;;
@@ -2566,8 +2722,9 @@
]
"TARGET_HAVE_MVE"
"vpst\n\t<mve_insn>t.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
- (set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
+ (set_attr "length" "8")])
;;
;; [vsriq_m_n_s, vsriq_m_n_u])
@@ -2583,8 +2740,9 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
- (set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
+ (set_attr "length" "8")])
;;
;; [vcvtq_m_n_to_f_u, vcvtq_m_n_to_f_s])
@@ -2600,7 +2758,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtt.f%#<V_sz_elem>.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtq_n_to_f_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2640,7 +2799,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2659,8 +2819,9 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.i%#<V_sz_elem> %q0, %q2, %3"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")])
;;
;; [vaddq_m_u, vaddq_m_s]
@@ -2678,7 +2839,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.i%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2698,7 +2860,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2715,8 +2878,9 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")])
;;
;; [vcaddq_rot90_m_u, vcaddq_rot90_m_s]
@@ -2735,7 +2899,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<isu>%#<V_sz_elem>\t%q0, %q2, %q3, #<rot>"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q<mve_rot>_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2763,7 +2928,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2784,7 +2950,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2802,7 +2969,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_int_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2819,7 +2987,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vornt\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vornq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2837,7 +3006,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2855,7 +3025,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2872,7 +3043,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2892,7 +3064,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%Q0, %R0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2920,7 +3093,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<isu>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2940,7 +3114,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>32\t%Q0, %R0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2958,7 +3133,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2976,7 +3152,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_poly_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2994,7 +3171,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.s%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3012,7 +3190,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.s%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3036,7 +3215,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t.f%#<V_sz_elem> %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3057,7 +3237,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t.f%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3077,7 +3258,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3094,7 +3276,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t.%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3116,7 +3299,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t.f%#<V_sz_elem>\t%q0, %q2, %q3, #<rot>"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q<mve_rot>_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3136,7 +3320,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t.f%#<V_sz_elem>\t%q0, %q2, %q3, #<rot>"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q<mve_rot>_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3153,7 +3338,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vornt\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vornq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3173,7 +3359,8 @@
output_asm_insn("vstrb.<V_sz_elem>\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrbq_<supf><mode>"))
+ (set_attr "length" "4")])
;;
;; [vstrbq_scatter_offset_s vstrbq_scatter_offset_u]
@@ -3201,7 +3388,8 @@
VSTRBSOQ))]
"TARGET_HAVE_MVE"
"vstrb.<V_sz_elem>\t%q2, [%0, %q1]"
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrbq_scatter_offset_<supf><mode>_insn"))
+ (set_attr "length" "4")])
;;
;; [vstrwq_scatter_base_s vstrwq_scatter_base_u]
@@ -3223,7 +3411,8 @@
output_asm_insn("vstrw.u32\t%q2, [%q0, %1]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_<supf>v4si"))
+ (set_attr "length" "4")])
;;
;; [vldrbq_gather_offset_s vldrbq_gather_offset_u]
@@ -3246,7 +3435,8 @@
output_asm_insn ("vldrb.<supf><V_sz_elem>\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrbq_gather_offset_<supf><mode>"))
+ (set_attr "length" "4")])
;;
;; [vldrbq_s vldrbq_u]
@@ -3268,7 +3458,8 @@
output_asm_insn ("vldrb.<supf><V_sz_elem>\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrbq_<supf><mode>"))
+ (set_attr "length" "4")])
;;
;; [vldrwq_gather_base_s vldrwq_gather_base_u]
@@ -3288,7 +3479,8 @@
output_asm_insn ("vldrw.u32\t%q0, [%q1, %2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_<supf>v4si"))
+ (set_attr "length" "4")])
;;
;; [vstrbq_scatter_offset_p_s vstrbq_scatter_offset_p_u]
@@ -3320,7 +3512,8 @@
VSTRBSOQ))]
"TARGET_HAVE_MVE"
"vpst\;vstrbt.<V_sz_elem>\t%q2, [%0, %q1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrbq_scatter_offset_<supf><mode>_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_base_p_s vstrwq_scatter_base_p_u]
@@ -3343,7 +3536,8 @@
output_asm_insn ("vpst\n\tvstrwt.u32\t%q2, [%q0, %1]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_<supf>v4si"))
+ (set_attr "length" "8")])
(define_insn "mve_vstrbq_p_<supf><mode>"
[(set (match_operand:<MVE_B_ELEM> 0 "mve_memory_operand" "=Ux")
@@ -3361,7 +3555,8 @@
output_asm_insn ("vpst\;vstrbt.<V_sz_elem>\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrbq_<supf><mode>"))
+ (set_attr "length" "8")])
;;
;; [vldrbq_gather_offset_z_s vldrbq_gather_offset_z_u]
@@ -3386,7 +3581,8 @@
output_asm_insn ("vpst\n\tvldrbt.<supf><V_sz_elem>\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrbq_gather_offset_<supf><mode>"))
+ (set_attr "length" "8")])
;;
;; [vldrbq_z_s vldrbq_z_u]
@@ -3409,7 +3605,8 @@
output_asm_insn ("vpst\;vldrbt.<supf><V_sz_elem>\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrbq_<supf><mode>"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_gather_base_z_s vldrwq_gather_base_z_u]
@@ -3430,7 +3627,8 @@
output_asm_insn ("vpst\n\tvldrwt.u32\t%q0, [%q1, %2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_<supf>v4si"))
+ (set_attr "length" "8")])
;;
;; [vldrhq_f]
@@ -3449,7 +3647,8 @@
output_asm_insn ("vldrh.16\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_fv8hf"))
+ (set_attr "length" "4")])
;;
;; [vldrhq_gather_offset_s vldrhq_gather_offset_u]
@@ -3472,7 +3671,8 @@
output_asm_insn ("vldrh.<supf><V_sz_elem>\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_offset_<supf><mode>"))
+ (set_attr "length" "4")])
;;
;; [vldrhq_gather_offset_z_s vldrhq_gather_offset_z_u]
@@ -3497,7 +3697,8 @@
output_asm_insn ("vpst\n\tvldrht.<supf><V_sz_elem>\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_offset_<supf><mode>"))
+ (set_attr "length" "8")])
;;
;; [vldrhq_gather_shifted_offset_s vldrhq_gather_shifted_offset_u]
@@ -3520,7 +3721,8 @@
output_asm_insn ("vldrh.<supf><V_sz_elem>\t%q0, [%m1, %q2, uxtw #1]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_shifted_offset_<supf><mode>"))
+ (set_attr "length" "4")])
;;
;; [vldrhq_gather_shifted_offset_z_s vldrhq_gather_shited_offset_z_u]
@@ -3545,7 +3747,8 @@
output_asm_insn ("vpst\n\tvldrht.<supf><V_sz_elem>\t%q0, [%m1, %q2, uxtw #1]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_shifted_offset_<supf><mode>"))
+ (set_attr "length" "8")])
;;
;; [vldrhq_s, vldrhq_u]
@@ -3567,7 +3770,8 @@
output_asm_insn ("vldrh.<supf><V_sz_elem>\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_<supf><mode>"))
+ (set_attr "length" "4")])
;;
;; [vldrhq_z_f]
@@ -3587,7 +3791,8 @@
output_asm_insn ("vpst\;vldrht.16\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_fv8hf"))
+ (set_attr "length" "8")])
;;
;; [vldrhq_z_s vldrhq_z_u]
@@ -3610,7 +3815,8 @@
output_asm_insn ("vpst\;vldrht.<supf><V_sz_elem>\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_<supf><mode>"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_f]
@@ -3629,7 +3835,8 @@
output_asm_insn ("vldrw.32\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_fv4sf"))
+ (set_attr "length" "4")])
;;
;; [vldrwq_s vldrwq_u]
@@ -3648,7 +3855,8 @@
output_asm_insn ("vldrw.32\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_<supf>v4si"))
+ (set_attr "length" "4")])
;;
;; [vldrwq_z_f]
@@ -3668,7 +3876,8 @@
output_asm_insn ("vpst\;vldrwt.32\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_fv4sf"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_z_s vldrwq_z_u]
@@ -3688,7 +3897,8 @@
output_asm_insn ("vpst\;vldrwt.32\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_<supf>v4si"))
+ (set_attr "length" "8")])
(define_expand "mve_vld1q_f<mode>"
[(match_operand:MVE_0 0 "s_register_operand")
@@ -3728,7 +3938,8 @@
output_asm_insn ("vldrd.64\t%q0, [%q1, %2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_base_<supf>v2di"))
+ (set_attr "length" "4")])
;;
;; [vldrdq_gather_base_z_s vldrdq_gather_base_z_u]
@@ -3749,7 +3960,8 @@
output_asm_insn ("vpst\n\tvldrdt.u64\t%q0, [%q1, %2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_base_<supf>v2di"))
+ (set_attr "length" "8")])
;;
;; [vldrdq_gather_offset_s vldrdq_gather_offset_u]
@@ -3769,7 +3981,8 @@
output_asm_insn ("vldrd.u64\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_offset_<supf>v2di"))
+ (set_attr "length" "4")])
;;
;; [vldrdq_gather_offset_z_s vldrdq_gather_offset_z_u]
@@ -3790,7 +4003,8 @@
output_asm_insn ("vpst\n\tvldrdt.u64\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_offset_<supf>v2di"))
+ (set_attr "length" "8")])
;;
;; [vldrdq_gather_shifted_offset_s vldrdq_gather_shifted_offset_u]
@@ -3810,7 +4024,8 @@
output_asm_insn ("vldrd.u64\t%q0, [%m1, %q2, uxtw #3]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_shifted_offset_<supf>v2di"))
+ (set_attr "length" "4")])
;;
;; [vldrdq_gather_shifted_offset_z_s vldrdq_gather_shifted_offset_z_u]
@@ -3831,7 +4046,8 @@
output_asm_insn ("vpst\n\tvldrdt.u64\t%q0, [%m1, %q2, uxtw #3]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_shifted_offset_<supf>v2di"))
+ (set_attr "length" "8")])
;;
;; [vldrhq_gather_offset_f]
@@ -3851,7 +4067,8 @@
output_asm_insn ("vldrh.f16\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_offset_fv8hf"))
+ (set_attr "length" "4")])
;;
;; [vldrhq_gather_offset_z_f]
@@ -3873,7 +4090,8 @@
output_asm_insn ("vpst\n\tvldrht.f16\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_offset_fv8hf"))
+ (set_attr "length" "8")])
;;
;; [vldrhq_gather_shifted_offset_f]
@@ -3893,7 +4111,8 @@
output_asm_insn ("vldrh.f16\t%q0, [%m1, %q2, uxtw #1]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_shifted_offset_fv8hf"))
+ (set_attr "length" "4")])
;;
;; [vldrhq_gather_shifted_offset_z_f]
@@ -3915,7 +4134,8 @@
output_asm_insn ("vpst\n\tvldrht.f16\t%q0, [%m1, %q2, uxtw #1]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_shifted_offset_fv8hf"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_gather_base_f]
@@ -3935,7 +4155,8 @@
output_asm_insn ("vldrw.u32\t%q0, [%q1, %2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_fv4sf"))
+ (set_attr "length" "4")])
;;
;; [vldrwq_gather_base_z_f]
@@ -3956,7 +4177,8 @@
output_asm_insn ("vpst\n\tvldrwt.u32\t%q0, [%q1, %2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_fv4sf"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_gather_offset_f]
@@ -3976,7 +4198,8 @@
output_asm_insn ("vldrw.u32\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_offset_fv4sf"))
+ (set_attr "length" "4")])
;;
;; [vldrwq_gather_offset_s vldrwq_gather_offset_u]
@@ -3996,7 +4219,8 @@
output_asm_insn ("vldrw.u32\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_offset_<supf>v4si"))
+ (set_attr "length" "4")])
;;
;; [vldrwq_gather_offset_z_f]
@@ -4018,7 +4242,8 @@
output_asm_insn ("vpst\n\tvldrwt.u32\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_offset_fv4sf"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_gather_offset_z_s vldrwq_gather_offset_z_u]
@@ -4040,7 +4265,8 @@
output_asm_insn ("vpst\n\tvldrwt.u32\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_offset_<supf>v4si"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_gather_shifted_offset_f]
@@ -4060,7 +4286,8 @@
output_asm_insn ("vldrw.u32\t%q0, [%m1, %q2, uxtw #2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_shifted_offset_fv4sf"))
+ (set_attr "length" "4")])
;;
;; [vldrwq_gather_shifted_offset_s vldrwq_gather_shifted_offset_u]
@@ -4080,7 +4307,8 @@
output_asm_insn ("vldrw.u32\t%q0, [%m1, %q2, uxtw #2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_shifted_offset_<supf>v4si"))
+ (set_attr "length" "4")])
;;
;; [vldrwq_gather_shifted_offset_z_f]
@@ -4102,7 +4330,8 @@
output_asm_insn ("vpst\n\tvldrwt.u32\t%q0, [%m1, %q2, uxtw #2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_shifted_offset_fv4sf"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_gather_shifted_offset_z_s vldrwq_gather_shifted_offset_z_u]
@@ -4124,7 +4353,8 @@
output_asm_insn ("vpst\n\tvldrwt.u32\t%q0, [%m1, %q2, uxtw #2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_shifted_offset_<supf>v4si"))
+ (set_attr "length" "8")])
;;
;; [vstrhq_f]
@@ -4143,7 +4373,8 @@
output_asm_insn ("vstrh.16\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_fv8hf"))
+ (set_attr "length" "4")])
;;
;; [vstrhq_p_f]
@@ -4164,7 +4395,8 @@
output_asm_insn ("vpst\;vstrht.16\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_fv8hf"))
+ (set_attr "length" "8")])
;;
;; [vstrhq_p_s vstrhq_p_u]
@@ -4186,7 +4418,8 @@
output_asm_insn ("vpst\;vstrht.<V_sz_elem>\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_<supf><mode>"))
+ (set_attr "length" "8")])
;;
;; [vstrhq_scatter_offset_p_s vstrhq_scatter_offset_p_u]
@@ -4218,7 +4451,8 @@
VSTRHSOQ))]
"TARGET_HAVE_MVE"
"vpst\;vstrht.<V_sz_elem>\t%q2, [%0, %q1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_offset_<supf><mode>_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrhq_scatter_offset_s vstrhq_scatter_offset_u]
@@ -4246,7 +4480,8 @@
VSTRHSOQ))]
"TARGET_HAVE_MVE"
"vstrh.<V_sz_elem>\t%q2, [%0, %q1]"
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_offset_<supf><mode>_insn"))
+ (set_attr "length" "4")])
;;
;; [vstrhq_scatter_shifted_offset_p_s vstrhq_scatter_shifted_offset_p_u]
@@ -4278,7 +4513,8 @@
VSTRHSSOQ))]
"TARGET_HAVE_MVE"
"vpst\;vstrht.<V_sz_elem>\t%q2, [%0, %q1, uxtw #1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_shifted_offset_<supf><mode>_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrhq_scatter_shifted_offset_s vstrhq_scatter_shifted_offset_u]
@@ -4307,7 +4543,8 @@
VSTRHSSOQ))]
"TARGET_HAVE_MVE"
"vstrh.<V_sz_elem>\t%q2, [%0, %q1, uxtw #1]"
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_shifted_offset_<supf><mode>_insn"))
+ (set_attr "length" "4")])
;;
;; [vstrhq_s, vstrhq_u]
@@ -4326,7 +4563,8 @@
output_asm_insn ("vstrh.<V_sz_elem>\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_<supf><mode>"))
+ (set_attr "length" "4")])
;;
;; [vstrwq_f]
@@ -4345,7 +4583,8 @@
output_asm_insn ("vstrw.32\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_fv4sf"))
+ (set_attr "length" "4")])
;;
;; [vstrwq_p_f]
@@ -4366,7 +4605,8 @@
output_asm_insn ("vpst\;vstrwt.32\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_fv4sf"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_p_s vstrwq_p_u]
@@ -4387,7 +4627,8 @@
output_asm_insn ("vpst\;vstrwt.32\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_<supf>v4si"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_s vstrwq_u]
@@ -4406,7 +4647,8 @@
output_asm_insn ("vstrw.32\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_<supf>v4si"))
+ (set_attr "length" "4")])
(define_expand "mve_vst1q_f<mode>"
[(match_operand:<MVE_CNVT> 0 "mve_memory_operand")
@@ -4449,7 +4691,8 @@
output_asm_insn ("vpst\;\tvstrdt.u64\t%q2, [%q0, %1]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_base_<supf>v2di"))
+ (set_attr "length" "8")])
;;
;; [vstrdq_scatter_base_s vstrdq_scatter_base_u]
@@ -4471,7 +4714,8 @@
output_asm_insn ("vstrd.u64\t%q2, [%q0, %1]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_base_<supf>v2di"))
+ (set_attr "length" "4")])
;;
;; [vstrdq_scatter_offset_p_s vstrdq_scatter_offset_p_u]
@@ -4502,7 +4746,8 @@
VSTRDSOQ))]
"TARGET_HAVE_MVE"
"vpst\;vstrdt.64\t%q2, [%0, %q1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_offset_<supf>v2di_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrdq_scatter_offset_s vstrdq_scatter_offset_u]
@@ -4530,7 +4775,8 @@
VSTRDSOQ))]
"TARGET_HAVE_MVE"
"vstrd.64\t%q2, [%0, %q1]"
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_offset_<supf>v2di_insn"))
+ (set_attr "length" "4")])
;;
;; [vstrdq_scatter_shifted_offset_p_s vstrdq_scatter_shifted_offset_p_u]
@@ -4562,7 +4808,8 @@
VSTRDSSOQ))]
"TARGET_HAVE_MVE"
"vpst\;vstrdt.64\t%q2, [%0, %q1, uxtw #3]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_shifted_offset_<supf>v2di_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrdq_scatter_shifted_offset_s vstrdq_scatter_shifted_offset_u]
@@ -4591,7 +4838,8 @@
VSTRDSSOQ))]
"TARGET_HAVE_MVE"
"vstrd.64\t%q2, [%0, %q1, uxtw #3]"
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_shifted_offset_<supf>v2di_insn"))
+ (set_attr "length" "4")])
;;
;; [vstrhq_scatter_offset_f]
@@ -4619,7 +4867,8 @@
VSTRHQSO_F))]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vstrh.16\t%q2, [%0, %q1]"
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_offset_fv8hf_insn"))
+ (set_attr "length" "4")])
;;
;; [vstrhq_scatter_offset_p_f]
@@ -4650,7 +4899,8 @@
VSTRHQSO_F))]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vstrht.16\t%q2, [%0, %q1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_offset_fv8hf_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrhq_scatter_shifted_offset_f]
@@ -4678,7 +4928,8 @@
VSTRHQSSO_F))]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vstrh.16\t%q2, [%0, %q1, uxtw #1]"
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_shifted_offset_fv8hf_insn"))
+ (set_attr "length" "4")])
;;
;; [vstrhq_scatter_shifted_offset_p_f]
@@ -4710,7 +4961,8 @@
VSTRHQSSO_F))]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vstrht.16\t%q2, [%0, %q1, uxtw #1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_shifted_offset_fv8hf_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_base_f]
@@ -4732,7 +4984,8 @@
output_asm_insn ("vstrw.u32\t%q2, [%q0, %1]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_fv4sf"))
+ (set_attr "length" "4")])
;;
;; [vstrwq_scatter_base_p_f]
@@ -4755,7 +5008,8 @@
output_asm_insn ("vpst\n\tvstrwt.u32\t%q2, [%q0, %1]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_fv4sf"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_offset_f]
@@ -4783,7 +5037,8 @@
VSTRWQSO_F))]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vstrw.32\t%q2, [%0, %q1]"
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_offset_fv4sf_insn"))
+ (set_attr "length" "4")])
;;
;; [vstrwq_scatter_offset_p_f]
@@ -4814,7 +5069,8 @@
VSTRWQSO_F))]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vstrwt.32\t%q2, [%0, %q1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_offset_fv4sf_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_offset_s vstrwq_scatter_offset_u]
@@ -4845,7 +5101,8 @@
VSTRWSOQ))]
"TARGET_HAVE_MVE"
"vpst\;vstrwt.32\t%q2, [%0, %q1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_offset_<supf>v4si_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_offset_s vstrwq_scatter_offset_u]
@@ -4873,7 +5130,8 @@
VSTRWSOQ))]
"TARGET_HAVE_MVE"
"vstrw.32\t%q2, [%0, %q1]"
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_offset_<supf>v4si_insn"))
+ (set_attr "length" "4")])
;;
;; [vstrwq_scatter_shifted_offset_f]
@@ -4901,7 +5159,8 @@
VSTRWQSSO_F))]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vstrw.32\t%q2, [%0, %q1, uxtw #2]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_shifted_offset_fv4sf_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_shifted_offset_p_f]
@@ -4933,7 +5192,8 @@
VSTRWQSSO_F))]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vstrwt.32\t%q2, [%0, %q1, uxtw #2]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_shifted_offset_fv4sf_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_shifted_offset_p_s vstrwq_scatter_shifted_offset_p_u]
@@ -4965,7 +5225,8 @@
VSTRWSSOQ))]
"TARGET_HAVE_MVE"
"vpst\;vstrwt.32\t%q2, [%0, %q1, uxtw #2]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_shifted_offset_<supf>v4si_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_shifted_offset_s vstrwq_scatter_shifted_offset_u]
@@ -4994,7 +5255,8 @@
VSTRWSSOQ))]
"TARGET_HAVE_MVE"
"vstrw.32\t%q2, [%0, %q1, uxtw #2]"
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_shifted_offset_<supf>v4si_insn"))
+ (set_attr "length" "4")])
;;
;; [vidupq_n_u])
@@ -5062,7 +5324,8 @@
(match_operand:SI 6 "immediate_operand" "i")))]
"TARGET_HAVE_MVE"
"vpst\;\tvidupt.u%#<V_sz_elem>\t%q0, %2, %4"
- [(set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vidupq_u<mode>_insn"))
+ (set_attr "length""8")])
;;
;; [vddupq_n_u])
@@ -5130,7 +5393,8 @@
(match_operand:SI 6 "immediate_operand" "i")))]
"TARGET_HAVE_MVE"
"vpst\;vddupt.u%#<V_sz_elem>\t%q0, %2, %4"
- [(set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vddupq_u<mode>_insn"))
+ (set_attr "length""8")])
;;
;; [vdwdupq_n_u])
@@ -5246,8 +5510,9 @@
]
"TARGET_HAVE_MVE"
"vpst\;vdwdupt.u%#<V_sz_elem>\t%q2, %3, %R4, %5"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vdwdupq_wb_u<mode>_insn"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")])
;;
;; [viwdupq_n_u])
@@ -5363,7 +5628,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;\tviwdupt.u%#<V_sz_elem>\t%q2, %3, %R4, %5"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_viwdupq_wb_u<mode>_insn"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5389,7 +5655,8 @@
output_asm_insn ("vstrw.u32\t%q2, [%q0, %1]!",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_wb_<supf>v4si"))
+ (set_attr "length" "4")])
;;
;; [vstrwq_scatter_base_wb_p_s vstrwq_scatter_base_wb_p_u]
@@ -5415,7 +5682,8 @@
output_asm_insn ("vpst\;\tvstrwt.u32\t%q2, [%q0, %1]!",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_wb_<supf>v4si"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_base_wb_f]
@@ -5440,7 +5708,8 @@
output_asm_insn ("vstrw.u32\t%q2, [%q0, %1]!",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_wb_fv4sf"))
+ (set_attr "length" "4")])
;;
;; [vstrwq_scatter_base_wb_p_f]
@@ -5466,7 +5735,8 @@
output_asm_insn ("vpst\;vstrwt.u32\t%q2, [%q0, %1]!",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_wb_fv4sf"))
+ (set_attr "length" "8")])
;;
;; [vstrdq_scatter_base_wb_s vstrdq_scatter_base_wb_u]
@@ -5491,7 +5761,8 @@
output_asm_insn ("vstrd.u64\t%q2, [%q0, %1]!",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_base_wb_<supf>v2di"))
+ (set_attr "length" "4")])
;;
;; [vstrdq_scatter_base_wb_p_s vstrdq_scatter_base_wb_p_u]
@@ -5517,7 +5788,8 @@
output_asm_insn ("vpst\;vstrdt.u64\t%q2, [%q0, %1]!",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_base_wb_<supf>v2di"))
+ (set_attr "length" "8")])
(define_expand "mve_vldrwq_gather_base_wb_<supf>v4si"
[(match_operand:V4SI 0 "s_register_operand")
@@ -5569,7 +5841,8 @@
output_asm_insn ("vldrw.u32\t%q0, [%q1, %2]!",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_wb_<supf>v4si_insn"))
+ (set_attr "length" "4")])
(define_expand "mve_vldrwq_gather_base_wb_z_<supf>v4si"
[(match_operand:V4SI 0 "s_register_operand")
@@ -5625,7 +5898,8 @@
output_asm_insn ("vpst\;vldrwt.u32\t%q0, [%q1, %2]!",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_wb_<supf>v4si_insn"))
+ (set_attr "length" "8")])
(define_expand "mve_vldrwq_gather_base_wb_fv4sf"
[(match_operand:V4SI 0 "s_register_operand")
@@ -5677,7 +5951,8 @@
output_asm_insn ("vldrw.u32\t%q0, [%q1, %2]!",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_wb_fv4sf_insn"))
+ (set_attr "length" "4")])
(define_expand "mve_vldrwq_gather_base_wb_z_fv4sf"
[(match_operand:V4SI 0 "s_register_operand")
@@ -5734,7 +6009,8 @@
output_asm_insn ("vpst\;vldrwt.u32\t%q0, [%q1, %2]!",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_wb_fv4sf_insn"))
+ (set_attr "length" "8")])
(define_expand "mve_vldrdq_gather_base_wb_<supf>v2di"
[(match_operand:V2DI 0 "s_register_operand")
@@ -5787,7 +6063,8 @@
output_asm_insn ("vldrd.64\t%q0, [%q1, %2]!",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_base_wb_<supf>v2di_insn"))
+ (set_attr "length" "4")])
(define_expand "mve_vldrdq_gather_base_wb_z_<supf>v2di"
[(match_operand:V2DI 0 "s_register_operand")
@@ -5826,7 +6103,7 @@
(unspec_volatile:SI [(reg:SI VFPCC_REGNUM)] UNSPEC_GET_FPSCR_NZCVQC))]
"TARGET_HAVE_MVE"
"vmrs\\t%0, FPSCR_nzcvqc"
- [(set_attr "type" "mve_move")])
+ [(set_attr "type" "mve_move")])
(define_insn "set_fpscr_nzcvqc"
[(set (reg:SI VFPCC_REGNUM)
@@ -5834,7 +6111,7 @@
VUNSPEC_SET_FPSCR_NZCVQC))]
"TARGET_HAVE_MVE"
"vmsr\\tFPSCR_nzcvqc, %0"
- [(set_attr "type" "mve_move")])
+ [(set_attr "type" "mve_move")])
;;
;; [vldrdq_gather_base_wb_z_s vldrdq_gather_base_wb_z_u]
@@ -5859,7 +6136,8 @@
output_asm_insn ("vpst\;vldrdt.u64\t%q0, [%q1, %2]!",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_base_wb_<supf>v2di_insn"))
+ (set_attr "length" "8")])
;;
;; [vadciq_m_s, vadciq_m_u])
;;
@@ -5876,7 +6154,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vadcit.i32\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vadciq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length" "8")])
;;
@@ -5893,7 +6172,8 @@
]
"TARGET_HAVE_MVE"
"vadci.i32\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vadciq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length" "4")])
;;
@@ -5912,7 +6192,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vadct.i32\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vadcq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length" "8")])
;;
@@ -5929,7 +6210,8 @@
]
"TARGET_HAVE_MVE"
"vadc.i32\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vadcq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length" "4")
(set_attr "conds" "set")])
@@ -5949,7 +6231,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vsbcit.i32\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vsbciq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length" "8")])
;;
@@ -5966,7 +6249,8 @@
]
"TARGET_HAVE_MVE"
"vsbci.i32\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vsbciq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length" "4")])
;;
@@ -5985,7 +6269,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vsbct.i32\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vsbcq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length" "8")])
;;
@@ -6002,7 +6287,8 @@
]
"TARGET_HAVE_MVE"
"vsbc.i32\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vsbcq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length" "4")])
;;
@@ -6031,7 +6317,7 @@
"vst21.<V_sz_elem>\t{%q0, %q1}, %3", ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set_attr "length" "8")])
;;
;; [vld2q])
@@ -6059,7 +6345,7 @@
"vld21.<V_sz_elem>\t{%q0, %q1}, %3", ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set_attr "length" "8")])
;;
;; [vld4q])
@@ -6402,7 +6688,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vshlct\t%q0, %1, %4"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vshlcq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length" "8")])
;; CDE instructions on MVE registers.
@@ -6414,7 +6701,8 @@
UNSPEC_VCDE))]
"TARGET_CDE && TARGET_HAVE_MVE"
"vcx1\\tp%c1, %q0, #%c2"
- [(set_attr "type" "coproc")]
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx1qv16qi"))
+ (set_attr "type" "coproc")]
)
(define_insn "arm_vcx1qav16qi"
@@ -6425,7 +6713,8 @@
UNSPEC_VCDEA))]
"TARGET_CDE && TARGET_HAVE_MVE"
"vcx1a\\tp%c1, %q0, #%c3"
- [(set_attr "type" "coproc")]
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx1qav16qi"))
+ (set_attr "type" "coproc")]
)
(define_insn "arm_vcx2qv16qi"
@@ -6436,7 +6725,8 @@
UNSPEC_VCDE))]
"TARGET_CDE && TARGET_HAVE_MVE"
"vcx2\\tp%c1, %q0, %q2, #%c3"
- [(set_attr "type" "coproc")]
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx2qv16qi"))
+ (set_attr "type" "coproc")]
)
(define_insn "arm_vcx2qav16qi"
@@ -6448,7 +6738,8 @@
UNSPEC_VCDEA))]
"TARGET_CDE && TARGET_HAVE_MVE"
"vcx2a\\tp%c1, %q0, %q3, #%c4"
- [(set_attr "type" "coproc")]
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx2qav16qi"))
+ (set_attr "type" "coproc")]
)
(define_insn "arm_vcx3qv16qi"
@@ -6460,7 +6751,8 @@
UNSPEC_VCDE))]
"TARGET_CDE && TARGET_HAVE_MVE"
"vcx3\\tp%c1, %q0, %q2, %q3, #%c4"
- [(set_attr "type" "coproc")]
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx3qv16qi"))
+ (set_attr "type" "coproc")]
)
(define_insn "arm_vcx3qav16qi"
@@ -6473,7 +6765,8 @@
UNSPEC_VCDEA))]
"TARGET_CDE && TARGET_HAVE_MVE"
"vcx3a\\tp%c1, %q0, %q3, %q4, #%c5"
- [(set_attr "type" "coproc")]
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx3qav16qi"))
+ (set_attr "type" "coproc")]
)
(define_insn "arm_vcx1q<a>_p_v16qi"
@@ -6485,7 +6778,8 @@
CDE_VCX))]
"TARGET_CDE && TARGET_HAVE_MVE"
"vpst\;vcx1<a>t\\tp%c1, %q0, #%c3"
- [(set_attr "type" "coproc")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx1q<a>v16qi"))
+ (set_attr "type" "coproc")
(set_attr "length" "8")]
)
@@ -6499,7 +6793,8 @@
CDE_VCX))]
"TARGET_CDE && TARGET_HAVE_MVE"
"vpst\;vcx2<a>t\\tp%c1, %q0, %q3, #%c4"
- [(set_attr "type" "coproc")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx2q<a>v16qi"))
+ (set_attr "type" "coproc")
(set_attr "length" "8")]
)
@@ -6514,11 +6809,12 @@
CDE_VCX))]
"TARGET_CDE && TARGET_HAVE_MVE"
"vpst\;vcx3<a>t\\tp%c1, %q0, %q3, %q4, #%c5"
- [(set_attr "type" "coproc")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx3q<a>v16qi"))
+ (set_attr "type" "coproc")
(set_attr "length" "8")]
)
-(define_insn "*movmisalign<mode>_mve_store"
+(define_insn "movmisalign<mode>_mve_store"
[(set (match_operand:MVE_VLD_ST 0 "mve_memory_operand" "=Ux")
(unspec:MVE_VLD_ST [(match_operand:MVE_VLD_ST 1 "s_register_operand" " w")]
UNSPEC_MISALIGNED_ACCESS))]
@@ -6526,11 +6822,12 @@
|| (TARGET_HAVE_MVE_FLOAT && VALID_MVE_SF_MODE (<MODE>mode)))
&& !BYTES_BIG_ENDIAN && unaligned_access"
"vstr<V_sz_elem1>.<V_sz_elem>\t%q1, %E0"
- [(set_attr "type" "mve_store")]
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_movmisalign<mode>_mve_store"))
+ (set_attr "type" "mve_store")]
)
-(define_insn "*movmisalign<mode>_mve_load"
+(define_insn "movmisalign<mode>_mve_load"
[(set (match_operand:MVE_VLD_ST 0 "s_register_operand" "=w")
(unspec:MVE_VLD_ST [(match_operand:MVE_VLD_ST 1 "mve_memory_operand" " Ux")]
UNSPEC_MISALIGNED_ACCESS))]
@@ -6538,7 +6835,8 @@
|| (TARGET_HAVE_MVE_FLOAT && VALID_MVE_SF_MODE (<MODE>mode)))
&& !BYTES_BIG_ENDIAN && unaligned_access"
"vldr<V_sz_elem1>.<V_sz_elem>\t%q0, %E1"
- [(set_attr "type" "mve_load")]
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_movmisalign<mode>_mve_load"))
+ (set_attr "type" "mve_load")]
)
;; Expander for VxBI moves
@@ -6620,3 +6918,40 @@
}
}
)
+
+;; Originally expanded by 'predicated_doloop_end'.
+;; In the rare situation where the branch is too far, we do also need to
+;; revert FPSCR.LTPSIZE back to 0x100 after the last iteration.
+(define_insn "*predicated_doloop_end_internal"
+ [(set (pc)
+ (if_then_else
+ (ge (plus:SI (reg:SI LR_REGNUM)
+ (match_operand:SI 0 "const_int_operand" ""))
+ (const_int 0))
+ (label_ref (match_operand 1 "" ""))
+ (pc)))
+ (set (reg:SI LR_REGNUM)
+ (plus:SI (reg:SI LR_REGNUM) (match_dup 0)))
+ (clobber (reg:CC CC_REGNUM))]
+ "TARGET_32BIT && TARGET_HAVE_LOB && TARGET_HAVE_MVE && TARGET_THUMB2"
+ {
+ if (get_attr_length (insn) == 4)
+ return "letp\t%|lr, %l1";
+ else
+ return "subs\t%|lr, #%n0\n\tbgt\t%l1\n\tlctp";
+ }
+ [(set (attr "length")
+ (if_then_else
+ (ltu (minus (pc) (match_dup 1)) (const_int 1024))
+ (const_int 4)
+ (const_int 6)))
+ (set_attr "type" "branch")])
+
+(define_insn "dlstp<mode1>_insn"
+ [
+ (set (reg:SI LR_REGNUM)
+ (unspec:SI [(match_operand:SI 0 "s_register_operand" "r")]
+ DLSTP))
+ ]
+ "TARGET_32BIT && TARGET_HAVE_LOB && TARGET_HAVE_MVE && TARGET_THUMB2"
+ "dlstp.<mode1>\t%|lr, %0")
diff --git a/gcc/config/arm/vec-common.md b/gcc/config/arm/vec-common.md
index 9af8429968d..74871cb984b 100644
--- a/gcc/config/arm/vec-common.md
+++ b/gcc/config/arm/vec-common.md
@@ -366,7 +366,8 @@
"@
<mve_insn>.<supf>%#<V_sz_elem>\t%<V_reg>0, %<V_reg>1, %<V_reg>2
* return neon_output_shift_immediate (\"vshl\", 'i', &operands[2], <MODE>mode, VALID_NEON_QREG_MODE (<MODE>mode), true);"
- [(set_attr "type" "neon_shift_reg<q>, neon_shift_imm<q>")]
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "neon_shift_reg<q>, neon_shift_imm<q>")]
)
(define_expand "vashl<mode>3"
* Re: [PATCH 1/2] arm: Add define_attr to to create a mapping between MVE predicated and unpredicated insns
2023-11-06 11:20 [PATCH 1/2] arm: Add define_attr to to create a mapping between MVE predicated and unpredicated insns Stamatis Markianos-Wright
@ 2023-12-12 10:33 ` Richard Earnshaw
0 siblings, 0 replies; 10+ messages in thread
From: Richard Earnshaw @ 2023-12-12 10:33 UTC (permalink / raw)
To: Stamatis Markianos-Wright, gcc-patches
Cc: Kyrylo Tkachov, Richard Earnshaw, richard.sandiford, ramana.gcc
On 06/11/2023 11:20, Stamatis Markianos-Wright wrote:
> Patch has already been approved at:
>
> https://gcc.gnu.org/pipermail/gcc-patches/2023-September/630326.html
>
>
> ... But I'm sending this again for archiving on the list after rebasing
A couple of minor nits:
1)
+#define MVE_VPT_PREDICABLE_INSN_P(INSN) \
+ (recog_memoized (INSN) >= 0 \
+ && get_attr_mve_unpredicated_insn (INSN) != 0) \
I think it's better to write "!= CODE_FOR_nothing".
+(define_attr "mve_unpredicated_insn" "" (const_int 0))
+
And the default value here should similarly be 'symbol_ref
"CODE_FOR_nothing"'.
So that the style matches the symbol refs elsewhere.
2)
+(define_insn "*predicated_doloop_end_internal"
+ [(set (pc)
+ (if_then_else
+ (ge (plus:SI (reg:SI LR_REGNUM)
+ (match_operand:SI 0 "const_int_operand" ""))
+ (const_int 0))
+ (label_ref (match_operand 1 "" ""))
+ (pc)))
+ (set (reg:SI LR_REGNUM)
+ (plus:SI (reg:SI LR_REGNUM) (match_dup 0)))
+ (clobber (reg:CC CC_REGNUM))]
+ "TARGET_32BIT && TARGET_HAVE_LOB && TARGET_HAVE_MVE && TARGET_THUMB2"
TARGET_THUMB2 => TARGET_32BIT, so the first test is redundant. In fact,
given that TARGET_HAVE_LOB => armv8.1-m.main => thumb2, why do we need
either?
So
TARGET_HAVE_LOB && TARGET_HAVE_MVE
should be sufficient.
+(define_insn "dlstp<mode1>_insn"
+ [
+ (set (reg:SI LR_REGNUM)
+ (unspec:SI [(match_operand:SI 0 "s_register_operand" "r")]
+ DLSTP))
+ ]
+ "TARGET_32BIT && TARGET_HAVE_LOB && TARGET_HAVE_MVE && TARGET_THUMB2"
Same here.
Otherwise, OK.
R.
* [PATCH 1/2] arm: Add define_attr to to create a mapping between MVE predicated and unpredicated insns
@ 2023-08-17 10:30 Stamatis Markianos-Wright
0 siblings, 0 replies; 10+ messages in thread
From: Stamatis Markianos-Wright @ 2023-08-17 10:30 UTC (permalink / raw)
To: gcc-patches; +Cc: Kyrylo Tkachov, Richard Earnshaw
[-- Attachment #1: Type: text/plain, Size: 33653 bytes --]
Hi all,
I'd like to submit two patches that add support for Arm's MVE
Tail Predicated Low Overhead Loop feature.
--- Introduction ---
The M-class Arm-ARM:
https://developer.arm.com/documentation/ddi0553/bu/?lang=en
Section B5.5.1 "Loop tail predication" describes the feature
we are adding support for with this patch (although
we only add codegen for DLSTP/LETP instruction loops).
Previously with commit d2ed233cb94 we'd added support for
non-MVE DLS/LE loops through the loop-doloop pass, which, given
a standard MVE loop like:
```
void __attribute__ ((noinline)) test (int16_t *a, int16_t *b, int16_t
*c, int n)
{
while (n > 0)
{
mve_pred16_t p = vctp16q (n);
int16x8_t va = vldrhq_z_s16 (a, p);
int16x8_t vb = vldrhq_z_s16 (b, p);
int16x8_t vc = vaddq_x_s16 (va, vb, p);
vstrhq_p_s16 (c, vc, p);
c+=8;
a+=8;
b+=8;
n-=8;
}
}
```
.. would output:
```
<pre-calculate the number of iterations and place it into lr>
dls lr, lr
.L3:
vctp.16 r3
vmrs ip, P0 @ movhi
sxth ip, ip
vmsr P0, ip @ movhi
mov r4, r0
vpst
vldrht.16 q2, [r4]
mov r4, r1
vmov q3, q0
vpst
vldrht.16 q1, [r4]
mov r4, r2
vpst
vaddt.i16 q3, q2, q1
subs r3, r3, #8
vpst
vstrht.16 q3, [r4]
adds r0, r0, #16
adds r1, r1, #16
adds r2, r2, #16
le lr, .L3
```
where the LE instruction will decrement LR by 1, compare and
branch if needed.
(there are also other inefficiencies in the above code, like the
pointless vmrs/sxth/vmsr on the VPR, the adds not being merged
into the vldrht/vstrht as #16 offsets, and some random movs!
But those are separate problems...)
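The pre-calculation step above can be modelled in plain C (a hedged
illustration only, not GCC code; `dls_iteration_count` is a made-up
name):

```c
#include <assert.h>

/* Hedged illustration, not GCC code: with dls/le the iteration count
   must be pre-computed and placed in LR, since le decrements LR by
   exactly 1 per iteration.  For n elements and LANES lanes per
   iteration this is ceil (n / lanes).  */
static int
dls_iteration_count (int n, int lanes)
{
  return (n + lanes - 1) / lanes;
}
```

So the 16-bit loop above, with 8 lanes per iteration, runs
`dls_iteration_count (n, 8)` times.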
The MVE version is similar, except that:
* Instead of DLS/LE the instructions are DLSTP/LETP.
* Instead of pre-calculating the number of iterations of the
loop, we place the number of elements to be processed by the
loop into LR.
* Instead of decrementing LR by one, LETP will decrement it
by FPSCR.LTPSIZE, which is the number of elements being
processed in each iteration: 16 for 8-bit elements, 8 for 16-bit
elements, 4 for 32-bit elements, etc.
* On the final iteration, automatic Loop Tail Predication is
performed, as if the instructions within the loop had been VPT
predicated with a VCTP generating the VPR predicate in every
loop iteration.
The dlstp/letp loop now looks like:
```
<place n into r3>
dlstp.16 lr, r3
.L14:
mov r3, r0
vldrh.16 q3, [r3]
mov r3, r1
vldrh.16 q2, [r3]
mov r3, r2
vadd.i16 q3, q3, q2
adds r0, r0, #16
vstrh.16 q3, [r3]
adds r1, r1, #16
adds r2, r2, #16
letp lr, .L14
```
Since the loop tail predication is automatic, we have eliminated
the VCTP that had been specified by the user in the intrinsic
and converted the VPT-predicated instructions into their
unpredicated equivalents (which also saves us from VPST insns).
The LETP instruction here decrements LR by 8 in each iteration.
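The element-count semantics of dlstp/letp can be sketched in plain C
(a hedged model, not GCC or hardware code; `letp_iterations` and
`tail_predicate_mask` are made-up names):

```c
#include <assert.h>
#include <stdint.h>

/* Hedged model, not GCC or hardware code: letp decrements LR by the
   number of lanes (FPSCR.LTPSIZE) and loops while elements remain.  */
static int
letp_iterations (int n, int lanes)
{
  int iters = 0;
  while (n > 0)
    {
      n -= lanes;  /* letp: LR -= FPSCR.LTPSIZE */
      iters++;
    }
  return iters;
}

/* One bit per lane: on the iteration that starts with REMAINING
   elements left, only the low min (remaining, lanes) lanes stay
   enabled, as if a vctp had produced the predicate.  */
static uint16_t
tail_predicate_mask (int remaining, int lanes)
{
  uint16_t mask = 0;
  for (int i = 0; i < lanes && i < remaining; i++)
    mask |= (uint16_t) 1 << i;
  return mask;
}
```

For n = 11 with 8 lanes, the loop runs twice: a full-predicate first
iteration, then a tail iteration with only the low 3 lanes active.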
--- This 1/2 patch ---
This first patch lays some groundwork by adding an attribute to
md patterns, and then the second patch contains the functional
changes.
One major difficulty in implementing MVE Tail-Predicated Low
Overhead Loops was the need to transform VPT-predicated insns
in the insn chain into their unpredicated equivalents, like:
`mve_vldrbq_z_<supf><mode> -> mve_vldrbq_<supf><mode>`.
This requires us to have a deterministic link between two
different patterns in mve.md -- this _could_ be done by
re-ordering the entirety of mve.md such that the patterns are
at some constant icode proximity (e.g. having the _z immediately
after the unpredicated version would mean that to map from the
former to the latter you could use icode-1), but that is a
messy solution that would introduce fragile, hidden dependencies
on the ordering of patterns.
This patch provides an alternative way of doing this: using an insn
attribute to encode the icode of the unpredicated instruction.
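In effect, each predicated pattern carries an explicit reference to
its unpredicated twin instead of relying on icode adjacency.  As a
hedged sketch (not GCC internals; the names and icode values below are
made-up stand-ins, with -1 playing the role of CODE_FOR_nothing):

```c
#include <assert.h>
#include <string.h>

/* Hedged sketch, not GCC internals: a predicated pattern records the
   icode of its unpredicated equivalent directly, so the mapping no
   longer depends on pattern ordering (icode - 1).  */
struct pattern_entry
{
  const char *name;
  int unpredicated_icode;  /* -1 stands in for CODE_FOR_nothing.  */
};

static const struct pattern_entry patterns[] = {
  { "mve_vldrbq_z_sv16qi", 100 },  /* maps to mve_vldrbq_sv16qi */
  { "mve_vaddq_m_sv8hi",   200 },  /* maps to mve_vaddq_sv8hi   */
  { "mve_vaddq_sv8hi",      -1 },  /* already unpredicated      */
};

static int
unpredicated_icode (const char *name)
{
  for (size_t i = 0; i < sizeof patterns / sizeof patterns[0]; i++)
    if (strcmp (patterns[i].name, name) == 0)
      return patterns[i].unpredicated_icode;
  return -1;
}
```

This mirrors what the MVE_VPT_PREDICABLE_INSN_P macro in the patch
does via get_attr_mve_unpredicated_insn.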
No regressions on arm-none-eabi with an MVE target.
Thank you,
Stam Markianos-Wright
gcc/ChangeLog:
* config/arm/arm.md (mve_unpredicated_insn): New attribute.
* config/arm/arm.h (MVE_VPT_PREDICATED_INSN_P): New define.
(MVE_VPT_UNPREDICATED_INSN_P): Likewise.
(MVE_VPT_PREDICABLE_INSN_P): Likewise.
* config/arm/vec-common.md (mve_vshlq_<supf><mode>): Add attribute.
* config/arm/mve.md (arm_vcx1q<a>_p_v16qi): Add attribute.
(arm_vcx1q<a>v16qi): Likewise.
(arm_vcx1qav16qi): Likewise.
(arm_vcx1qv16qi): Likewise.
(arm_vcx2q<a>_p_v16qi): Likewise.
(arm_vcx2q<a>v16qi): Likewise.
(arm_vcx2qav16qi): Likewise.
(arm_vcx2qv16qi): Likewise.
(arm_vcx3q<a>_p_v16qi): Likewise.
(arm_vcx3q<a>v16qi): Likewise.
(arm_vcx3qav16qi): Likewise.
(arm_vcx3qv16qi): Likewise.
(mve_vabavq_<supf><mode>): Likewise.
(mve_vabavq_p_<supf><mode>): Likewise.
(mve_vabdq_<supf><mode>): Likewise.
(mve_vabdq_f<mode>): Likewise.
(mve_vabdq_m_<supf><mode>): Likewise.
(mve_vabdq_m_f<mode>): Likewise.
(mve_vabsq_f<mode>): Likewise.
(mve_vabsq_m_f<mode>): Likewise.
(mve_vabsq_m_s<mode>): Likewise.
(mve_vabsq_s<mode>): Likewise.
(mve_vadciq_<supf>v4si): Likewise.
(mve_vadciq_m_<supf>v4si): Likewise.
(mve_vadcq_<supf>v4si): Likewise.
(mve_vadcq_m_<supf>v4si): Likewise.
(mve_vaddlvaq_<supf>v4si): Likewise.
(mve_vaddlvaq_p_<supf>v4si): Likewise.
(mve_vaddlvq_<supf>v4si): Likewise.
(mve_vaddlvq_p_<supf>v4si): Likewise.
(mve_vaddq_f<mode>): Likewise.
(mve_vaddq_m_<supf><mode>): Likewise.
(mve_vaddq_m_f<mode>): Likewise.
(mve_vaddq_m_n_<supf><mode>): Likewise.
(mve_vaddq_m_n_f<mode>): Likewise.
(mve_vaddq_n_<supf><mode>): Likewise.
(mve_vaddq_n_f<mode>): Likewise.
(mve_vaddq<mode>): Likewise.
(mve_vaddvaq_<supf><mode>): Likewise.
(mve_vaddvaq_p_<supf><mode>): Likewise.
(mve_vaddvq_<supf><mode>): Likewise.
(mve_vaddvq_p_<supf><mode>): Likewise.
(mve_vandq_<supf><mode>): Likewise.
(mve_vandq_f<mode>): Likewise.
(mve_vandq_m_<supf><mode>): Likewise.
(mve_vandq_m_f<mode>): Likewise.
(mve_vandq_s<mode>): Likewise.
(mve_vandq_u<mode>): Likewise.
(mve_vbicq_<supf><mode>): Likewise.
(mve_vbicq_f<mode>): Likewise.
(mve_vbicq_m_<supf><mode>): Likewise.
(mve_vbicq_m_f<mode>): Likewise.
(mve_vbicq_m_n_<supf><mode>): Likewise.
(mve_vbicq_n_<supf><mode>): Likewise.
(mve_vbicq_s<mode>): Likewise.
(mve_vbicq_u<mode>): Likewise.
(mve_vbrsrq_m_n_<supf><mode>): Likewise.
(mve_vbrsrq_m_n_f<mode>): Likewise.
(mve_vbrsrq_n_<supf><mode>): Likewise.
(mve_vbrsrq_n_f<mode>): Likewise.
(mve_vcaddq_rot270_m_<supf><mode>): Likewise.
(mve_vcaddq_rot270_m_f<mode>): Likewise.
(mve_vcaddq_rot270<mode>): Likewise.
(mve_vcaddq_rot270<mode>): Likewise.
(mve_vcaddq_rot90_m_<supf><mode>): Likewise.
(mve_vcaddq_rot90_m_f<mode>): Likewise.
(mve_vcaddq_rot90<mode>): Likewise.
(mve_vcaddq_rot90<mode>): Likewise.
(mve_vcaddq<mve_rot><mode>): Likewise.
(mve_vcaddq<mve_rot><mode>): Likewise.
(mve_vclsq_m_s<mode>): Likewise.
(mve_vclsq_s<mode>): Likewise.
(mve_vclzq_<supf><mode>): Likewise.
(mve_vclzq_m_<supf><mode>): Likewise.
(mve_vclzq_s<mode>): Likewise.
(mve_vclzq_u<mode>): Likewise.
(mve_vcmlaq_m_f<mode>): Likewise.
(mve_vcmlaq_rot180_m_f<mode>): Likewise.
(mve_vcmlaq_rot180<mode>): Likewise.
(mve_vcmlaq_rot270_m_f<mode>): Likewise.
(mve_vcmlaq_rot270<mode>): Likewise.
(mve_vcmlaq_rot90_m_f<mode>): Likewise.
(mve_vcmlaq_rot90<mode>): Likewise.
(mve_vcmlaq<mode>): Likewise.
(mve_vcmlaq<mve_rot><mode>): Likewise.
(mve_vcmp<mve_cmp_op>q_<mode>): Likewise.
(mve_vcmp<mve_cmp_op>q_f<mode>): Likewise.
(mve_vcmp<mve_cmp_op>q_n_<mode>): Likewise.
(mve_vcmp<mve_cmp_op>q_n_f<mode>): Likewise.
(mve_vcmpcsq_<mode>): Likewise.
(mve_vcmpcsq_m_n_u<mode>): Likewise.
(mve_vcmpcsq_m_u<mode>): Likewise.
(mve_vcmpcsq_n_<mode>): Likewise.
(mve_vcmpeqq_<mode>): Likewise.
(mve_vcmpeqq_f<mode>): Likewise.
(mve_vcmpeqq_m_<supf><mode>): Likewise.
(mve_vcmpeqq_m_f<mode>): Likewise.
(mve_vcmpeqq_m_n_<supf><mode>): Likewise.
(mve_vcmpeqq_m_n_f<mode>): Likewise.
(mve_vcmpeqq_n_<mode>): Likewise.
(mve_vcmpeqq_n_f<mode>): Likewise.
(mve_vcmpgeq_<mode>): Likewise.
(mve_vcmpgeq_f<mode>): Likewise.
(mve_vcmpgeq_m_f<mode>): Likewise.
(mve_vcmpgeq_m_n_f<mode>): Likewise.
(mve_vcmpgeq_m_n_s<mode>): Likewise.
(mve_vcmpgeq_m_s<mode>): Likewise.
(mve_vcmpgeq_n_<mode>): Likewise.
(mve_vcmpgeq_n_f<mode>): Likewise.
(mve_vcmpgtq_<mode>): Likewise.
(mve_vcmpgtq_f<mode>): Likewise.
(mve_vcmpgtq_m_f<mode>): Likewise.
(mve_vcmpgtq_m_n_f<mode>): Likewise.
(mve_vcmpgtq_m_n_s<mode>): Likewise.
(mve_vcmpgtq_m_s<mode>): Likewise.
(mve_vcmpgtq_n_<mode>): Likewise.
(mve_vcmpgtq_n_f<mode>): Likewise.
(mve_vcmphiq_<mode>): Likewise.
(mve_vcmphiq_m_n_u<mode>): Likewise.
(mve_vcmphiq_m_u<mode>): Likewise.
(mve_vcmphiq_n_<mode>): Likewise.
(mve_vcmpleq_<mode>): Likewise.
(mve_vcmpleq_f<mode>): Likewise.
(mve_vcmpleq_m_f<mode>): Likewise.
(mve_vcmpleq_m_n_f<mode>): Likewise.
(mve_vcmpleq_m_n_s<mode>): Likewise.
(mve_vcmpleq_m_s<mode>): Likewise.
(mve_vcmpleq_n_<mode>): Likewise.
(mve_vcmpleq_n_f<mode>): Likewise.
(mve_vcmpltq_<mode>): Likewise.
(mve_vcmpltq_f<mode>): Likewise.
(mve_vcmpltq_m_f<mode>): Likewise.
(mve_vcmpltq_m_n_f<mode>): Likewise.
(mve_vcmpltq_m_n_s<mode>): Likewise.
(mve_vcmpltq_m_s<mode>): Likewise.
(mve_vcmpltq_n_<mode>): Likewise.
(mve_vcmpltq_n_f<mode>): Likewise.
(mve_vcmpneq_<mode>): Likewise.
(mve_vcmpneq_f<mode>): Likewise.
(mve_vcmpneq_m_<supf><mode>): Likewise.
(mve_vcmpneq_m_f<mode>): Likewise.
(mve_vcmpneq_m_n_<supf><mode>): Likewise.
(mve_vcmpneq_m_n_f<mode>): Likewise.
(mve_vcmpneq_n_<mode>): Likewise.
(mve_vcmpneq_n_f<mode>): Likewise.
(mve_vcmulq_m_f<mode>): Likewise.
(mve_vcmulq_rot180_m_f<mode>): Likewise.
(mve_vcmulq_rot180<mode>): Likewise.
(mve_vcmulq_rot270_m_f<mode>): Likewise.
(mve_vcmulq_rot270<mode>): Likewise.
(mve_vcmulq_rot90_m_f<mode>): Likewise.
(mve_vcmulq_rot90<mode>): Likewise.
(mve_vcmulq<mode>): Likewise.
(mve_vcmulq<mve_rot><mode>): Likewise.
(mve_vctp<mode1>q_mhi): Likewise.
(mve_vctp<mode1>qhi): Likewise.
(mve_vcvtaq_<supf><mode>): Likewise.
(mve_vcvtaq_m_<supf><mode>): Likewise.
(mve_vcvtbq_f16_f32v8hf): Likewise.
(mve_vcvtbq_f32_f16v4sf): Likewise.
(mve_vcvtbq_m_f16_f32v8hf): Likewise.
(mve_vcvtbq_m_f32_f16v4sf): Likewise.
(mve_vcvtmq_<supf><mode>): Likewise.
(mve_vcvtmq_m_<supf><mode>): Likewise.
(mve_vcvtnq_<supf><mode>): Likewise.
(mve_vcvtnq_m_<supf><mode>): Likewise.
(mve_vcvtpq_<supf><mode>): Likewise.
(mve_vcvtpq_m_<supf><mode>): Likewise.
(mve_vcvtq_from_f_<supf><mode>): Likewise.
(mve_vcvtq_m_from_f_<supf><mode>): Likewise.
(mve_vcvtq_m_n_from_f_<supf><mode>): Likewise.
(mve_vcvtq_m_n_to_f_<supf><mode>): Likewise.
(mve_vcvtq_m_to_f_<supf><mode>): Likewise.
(mve_vcvtq_n_from_f_<supf><mode>): Likewise.
(mve_vcvtq_n_to_f_<supf><mode>): Likewise.
(mve_vcvtq_to_f_<supf><mode>): Likewise.
(mve_vcvttq_f16_f32v8hf): Likewise.
(mve_vcvttq_f32_f16v4sf): Likewise.
(mve_vcvttq_m_f16_f32v8hf): Likewise.
(mve_vcvttq_m_f32_f16v4sf): Likewise.
(mve_vddupq_m_wb_u<mode>_insn): Likewise.
(mve_vddupq_u<mode>_insn): Likewise.
(mve_vdupq_m_n_<supf><mode>): Likewise.
(mve_vdupq_m_n_f<mode>): Likewise.
(mve_vdupq_n_<supf><mode>): Likewise.
(mve_vdupq_n_f<mode>): Likewise.
(mve_vdwdupq_m_wb_u<mode>_insn): Likewise.
(mve_vdwdupq_wb_u<mode>_insn): Likewise.
(mve_veorq_<supf><mode>): Likewise.
(mve_veorq_f<mode>): Likewise.
(mve_veorq_m_<supf><mode>): Likewise.
(mve_veorq_m_f<mode>): Likewise.
(mve_veorq_s<mode>): Likewise.
(mve_veorq_u<mode>): Likewise.
(mve_vfmaq_f<mode>): Likewise.
(mve_vfmaq_m_f<mode>): Likewise.
(mve_vfmaq_m_n_f<mode>): Likewise.
(mve_vfmaq_n_f<mode>): Likewise.
(mve_vfmasq_m_n_f<mode>): Likewise.
(mve_vfmasq_n_f<mode>): Likewise.
(mve_vfmsq_f<mode>): Likewise.
(mve_vfmsq_m_f<mode>): Likewise.
(mve_vhaddq_<supf><mode>): Likewise.
(mve_vhaddq_m_<supf><mode>): Likewise.
(mve_vhaddq_m_n_<supf><mode>): Likewise.
(mve_vhaddq_n_<supf><mode>): Likewise.
(mve_vhcaddq_rot270_m_s<mode>): Likewise.
(mve_vhcaddq_rot270_s<mode>): Likewise.
(mve_vhcaddq_rot90_m_s<mode>): Likewise.
(mve_vhcaddq_rot90_s<mode>): Likewise.
(mve_vhsubq_<supf><mode>): Likewise.
(mve_vhsubq_m_<supf><mode>): Likewise.
(mve_vhsubq_m_n_<supf><mode>): Likewise.
(mve_vhsubq_n_<supf><mode>): Likewise.
(mve_vidupq_m_wb_u<mode>_insn): Likewise.
(mve_vidupq_u<mode>_insn): Likewise.
(mve_viwdupq_m_wb_u<mode>_insn): Likewise.
(mve_viwdupq_wb_u<mode>_insn): Likewise.
(mve_vldrbq_<supf><mode>): Likewise.
(mve_vldrbq_gather_offset_<supf><mode>): Likewise.
(mve_vldrbq_gather_offset_z_<supf><mode>): Likewise.
(mve_vldrbq_z_<supf><mode>): Likewise.
(mve_vldrdq_gather_base_<supf>v2di): Likewise.
(mve_vldrdq_gather_base_wb_<supf>v2di_insn): Likewise.
(mve_vldrdq_gather_base_wb_z_<supf>v2di_insn): Likewise.
(mve_vldrdq_gather_base_z_<supf>v2di): Likewise.
(mve_vldrdq_gather_offset_<supf>v2di): Likewise.
(mve_vldrdq_gather_offset_z_<supf>v2di): Likewise.
(mve_vldrdq_gather_shifted_offset_<supf>v2di): Likewise.
(mve_vldrdq_gather_shifted_offset_z_<supf>v2di): Likewise.
(mve_vldrhq_<supf><mode>): Likewise.
(mve_vldrhq_fv8hf): Likewise.
(mve_vldrhq_gather_offset_<supf><mode>): Likewise.
(mve_vldrhq_gather_offset_fv8hf): Likewise.
(mve_vldrhq_gather_offset_z_<supf><mode>): Likewise.
(mve_vldrhq_gather_offset_z_fv8hf): Likewise.
(mve_vldrhq_gather_shifted_offset_<supf><mode>): Likewise.
(mve_vldrhq_gather_shifted_offset_fv8hf): Likewise.
(mve_vldrhq_gather_shifted_offset_z_<supf><mode>): Likewise.
(mve_vldrhq_gather_shifted_offset_z_fv8hf): Likewise.
(mve_vldrhq_z_<supf><mode>): Likewise.
(mve_vldrhq_z_fv8hf): Likewise.
(mve_vldrwq_<supf>v4si): Likewise.
(mve_vldrwq_fv4sf): Likewise.
(mve_vldrwq_gather_base_<supf>v4si): Likewise.
(mve_vldrwq_gather_base_fv4sf): Likewise.
(mve_vldrwq_gather_base_wb_<supf>v4si_insn): Likewise.
(mve_vldrwq_gather_base_wb_fv4sf_insn): Likewise.
(mve_vldrwq_gather_base_wb_z_<supf>v4si_insn): Likewise.
(mve_vldrwq_gather_base_wb_z_fv4sf_insn): Likewise.
(mve_vldrwq_gather_base_z_<supf>v4si): Likewise.
(mve_vldrwq_gather_base_z_fv4sf): Likewise.
(mve_vldrwq_gather_offset_<supf>v4si): Likewise.
(mve_vldrwq_gather_offset_fv4sf): Likewise.
(mve_vldrwq_gather_offset_z_<supf>v4si): Likewise.
(mve_vldrwq_gather_offset_z_fv4sf): Likewise.
(mve_vldrwq_gather_shifted_offset_<supf>v4si): Likewise.
(mve_vldrwq_gather_shifted_offset_fv4sf): Likewise.
(mve_vldrwq_gather_shifted_offset_z_<supf>v4si): Likewise.
(mve_vldrwq_gather_shifted_offset_z_fv4sf): Likewise.
(mve_vldrwq_z_<supf>v4si): Likewise.
(mve_vldrwq_z_fv4sf): Likewise.
(mve_vmaxaq_m_s<mode>): Likewise.
(mve_vmaxaq_s<mode>): Likewise.
(mve_vmaxavq_p_s<mode>): Likewise.
(mve_vmaxavq_s<mode>): Likewise.
(mve_vmaxnmaq_f<mode>): Likewise.
(mve_vmaxnmaq_m_f<mode>): Likewise.
(mve_vmaxnmavq_f<mode>): Likewise.
(mve_vmaxnmavq_p_f<mode>): Likewise.
(mve_vmaxnmq_f<mode>): Likewise.
(mve_vmaxnmq_m_f<mode>): Likewise.
(mve_vmaxnmvq_f<mode>): Likewise.
(mve_vmaxnmvq_p_f<mode>): Likewise.
(mve_vmaxq_<supf><mode>): Likewise.
(mve_vmaxq_m_<supf><mode>): Likewise.
(mve_vmaxq_s<mode>): Likewise.
(mve_vmaxq_u<mode>): Likewise.
(mve_vmaxvq_<supf><mode>): Likewise.
(mve_vmaxvq_p_<supf><mode>): Likewise.
(mve_vminaq_m_s<mode>): Likewise.
(mve_vminaq_s<mode>): Likewise.
(mve_vminavq_p_s<mode>): Likewise.
(mve_vminavq_s<mode>): Likewise.
(mve_vminnmaq_f<mode>): Likewise.
(mve_vminnmaq_m_f<mode>): Likewise.
(mve_vminnmavq_f<mode>): Likewise.
(mve_vminnmavq_p_f<mode>): Likewise.
(mve_vminnmq_f<mode>): Likewise.
(mve_vminnmq_m_f<mode>): Likewise.
(mve_vminnmvq_f<mode>): Likewise.
(mve_vminnmvq_p_f<mode>): Likewise.
(mve_vminq_<supf><mode>): Likewise.
(mve_vminq_m_<supf><mode>): Likewise.
(mve_vminq_s<mode>): Likewise.
(mve_vminq_u<mode>): Likewise.
(mve_vminvq_<supf><mode>): Likewise.
(mve_vminvq_p_<supf><mode>): Likewise.
(mve_vmladavaq_<supf><mode>): Likewise.
(mve_vmladavaq_p_<supf><mode>): Likewise.
(mve_vmladavaxq_p_s<mode>): Likewise.
(mve_vmladavaxq_s<mode>): Likewise.
(mve_vmladavq_<supf><mode>): Likewise.
(mve_vmladavq_p_<supf><mode>): Likewise.
(mve_vmladavxq_p_s<mode>): Likewise.
(mve_vmladavxq_s<mode>): Likewise.
(mve_vmlaldavaq_<supf><mode>): Likewise.
(mve_vmlaldavaq_p_<supf><mode>): Likewise.
(mve_vmlaldavaxq_<supf><mode>): Likewise.
(mve_vmlaldavaxq_p_<supf><mode>): Likewise.
(mve_vmlaldavaxq_s<mode>): Likewise.
(mve_vmlaldavq_<supf><mode>): Likewise.
(mve_vmlaldavq_p_<supf><mode>): Likewise.
(mve_vmlaldavxq_p_s<mode>): Likewise.
(mve_vmlaldavxq_s<mode>): Likewise.
(mve_vmlaq_m_n_<supf><mode>): Likewise.
(mve_vmlaq_n_<supf><mode>): Likewise.
(mve_vmlasq_m_n_<supf><mode>): Likewise.
(mve_vmlasq_n_<supf><mode>): Likewise.
(mve_vmlsdavaq_p_s<mode>): Likewise.
(mve_vmlsdavaq_s<mode>): Likewise.
(mve_vmlsdavaxq_p_s<mode>): Likewise.
(mve_vmlsdavaxq_s<mode>): Likewise.
(mve_vmlsdavq_p_s<mode>): Likewise.
(mve_vmlsdavq_s<mode>): Likewise.
(mve_vmlsdavxq_p_s<mode>): Likewise.
(mve_vmlsdavxq_s<mode>): Likewise.
(mve_vmlsldavaq_p_s<mode>): Likewise.
(mve_vmlsldavaq_s<mode>): Likewise.
(mve_vmlsldavaxq_p_s<mode>): Likewise.
(mve_vmlsldavaxq_s<mode>): Likewise.
(mve_vmlsldavq_p_s<mode>): Likewise.
(mve_vmlsldavq_s<mode>): Likewise.
(mve_vmlsldavxq_p_s<mode>): Likewise.
(mve_vmlsldavxq_s<mode>): Likewise.
(mve_vmovlbq_<supf><mode>): Likewise.
(mve_vmovlbq_m_<supf><mode>): Likewise.
(mve_vmovltq_<supf><mode>): Likewise.
(mve_vmovltq_m_<supf><mode>): Likewise.
(mve_vmovnbq_<supf><mode>): Likewise.
(mve_vmovnbq_m_<supf><mode>): Likewise.
(mve_vmovntq_<supf><mode>): Likewise.
(mve_vmovntq_m_<supf><mode>): Likewise.
(mve_vmulhq_<supf><mode>): Likewise.
(mve_vmulhq_m_<supf><mode>): Likewise.
(mve_vmullbq_int_<supf><mode>): Likewise.
(mve_vmullbq_int_m_<supf><mode>): Likewise.
(mve_vmullbq_poly_m_p<mode>): Likewise.
(mve_vmullbq_poly_p<mode>): Likewise.
(mve_vmulltq_int_<supf><mode>): Likewise.
(mve_vmulltq_int_m_<supf><mode>): Likewise.
(mve_vmulltq_poly_m_p<mode>): Likewise.
(mve_vmulltq_poly_p<mode>): Likewise.
(mve_vmulq_<supf><mode>): Likewise.
(mve_vmulq_f<mode>): Likewise.
(mve_vmulq_m_<supf><mode>): Likewise.
(mve_vmulq_m_f<mode>): Likewise.
(mve_vmulq_m_n_<supf><mode>): Likewise.
(mve_vmulq_m_n_f<mode>): Likewise.
(mve_vmulq_n_<supf><mode>): Likewise.
(mve_vmulq_n_f<mode>): Likewise.
(mve_vmvnq_<supf><mode>): Likewise.
(mve_vmvnq_m_<supf><mode>): Likewise.
(mve_vmvnq_m_n_<supf><mode>): Likewise.
(mve_vmvnq_n_<supf><mode>): Likewise.
(mve_vmvnq_s<mode>): Likewise.
(mve_vmvnq_u<mode>): Likewise.
(mve_vnegq_f<mode>): Likewise.
(mve_vnegq_m_f<mode>): Likewise.
(mve_vnegq_m_s<mode>): Likewise.
(mve_vnegq_s<mode>): Likewise.
(mve_vornq_<supf><mode>): Likewise.
(mve_vornq_f<mode>): Likewise.
(mve_vornq_m_<supf><mode>): Likewise.
(mve_vornq_m_f<mode>): Likewise.
(mve_vornq_s<mode>): Likewise.
(mve_vornq_u<mode>): Likewise.
(mve_vorrq_<supf><mode>): Likewise.
(mve_vorrq_f<mode>): Likewise.
(mve_vorrq_m_<supf><mode>): Likewise.
(mve_vorrq_m_f<mode>): Likewise.
(mve_vorrq_m_n_<supf><mode>): Likewise.
(mve_vorrq_n_<supf><mode>): Likewise.
(mve_vorrq_s<mode>): Likewise.
(mve_vorrq_s<mode>): Likewise.
(mve_vqabsq_m_s<mode>): Likewise.
(mve_vqabsq_s<mode>): Likewise.
(mve_vqaddq_<supf><mode>): Likewise.
(mve_vqaddq_m_<supf><mode>): Likewise.
(mve_vqaddq_m_n_<supf><mode>): Likewise.
(mve_vqaddq_n_<supf><mode>): Likewise.
(mve_vqdmladhq_m_s<mode>): Likewise.
(mve_vqdmladhq_s<mode>): Likewise.
(mve_vqdmladhxq_m_s<mode>): Likewise.
(mve_vqdmladhxq_s<mode>): Likewise.
(mve_vqdmlahq_m_n_s<mode>): Likewise.
(mve_vqdmlahq_n_<supf><mode>): Likewise.
(mve_vqdmlahq_n_s<mode>): Likewise.
(mve_vqdmlashq_m_n_s<mode>): Likewise.
(mve_vqdmlashq_n_<supf><mode>): Likewise.
(mve_vqdmlashq_n_s<mode>): Likewise.
(mve_vqdmlsdhq_m_s<mode>): Likewise.
(mve_vqdmlsdhq_s<mode>): Likewise.
(mve_vqdmlsdhxq_m_s<mode>): Likewise.
(mve_vqdmlsdhxq_s<mode>): Likewise.
(mve_vqdmulhq_m_n_s<mode>): Likewise.
(mve_vqdmulhq_m_s<mode>): Likewise.
(mve_vqdmulhq_n_s<mode>): Likewise.
(mve_vqdmulhq_s<mode>): Likewise.
(mve_vqdmullbq_m_n_s<mode>): Likewise.
(mve_vqdmullbq_m_s<mode>): Likewise.
(mve_vqdmullbq_n_s<mode>): Likewise.
(mve_vqdmullbq_s<mode>): Likewise.
(mve_vqdmulltq_m_n_s<mode>): Likewise.
(mve_vqdmulltq_m_s<mode>): Likewise.
(mve_vqdmulltq_n_s<mode>): Likewise.
(mve_vqdmulltq_s<mode>): Likewise.
(mve_vqmovnbq_<supf><mode>): Likewise.
(mve_vqmovnbq_m_<supf><mode>): Likewise.
(mve_vqmovntq_<supf><mode>): Likewise.
(mve_vqmovntq_m_<supf><mode>): Likewise.
(mve_vqmovunbq_m_s<mode>): Likewise.
(mve_vqmovunbq_s<mode>): Likewise.
(mve_vqmovuntq_m_s<mode>): Likewise.
(mve_vqmovuntq_s<mode>): Likewise.
(mve_vqnegq_m_s<mode>): Likewise.
(mve_vqnegq_s<mode>): Likewise.
(mve_vqrdmladhq_m_s<mode>): Likewise.
(mve_vqrdmladhq_s<mode>): Likewise.
(mve_vqrdmladhxq_m_s<mode>): Likewise.
(mve_vqrdmladhxq_s<mode>): Likewise.
(mve_vqrdmlahq_m_n_s<mode>): Likewise.
(mve_vqrdmlahq_n_<supf><mode>): Likewise.
(mve_vqrdmlahq_n_s<mode>): Likewise.
(mve_vqrdmlashq_m_n_s<mode>): Likewise.
(mve_vqrdmlashq_n_<supf><mode>): Likewise.
(mve_vqrdmlashq_n_s<mode>): Likewise.
(mve_vqrdmlsdhq_m_s<mode>): Likewise.
(mve_vqrdmlsdhq_s<mode>): Likewise.
(mve_vqrdmlsdhxq_m_s<mode>): Likewise.
(mve_vqrdmlsdhxq_s<mode>): Likewise.
(mve_vqrdmulhq_m_n_s<mode>): Likewise.
(mve_vqrdmulhq_m_s<mode>): Likewise.
(mve_vqrdmulhq_n_s<mode>): Likewise.
(mve_vqrdmulhq_s<mode>): Likewise.
(mve_vqrshlq_<supf><mode>): Likewise.
(mve_vqrshlq_m_<supf><mode>): Likewise.
(mve_vqrshlq_m_n_<supf><mode>): Likewise.
(mve_vqrshlq_n_<supf><mode>): Likewise.
(mve_vqrshrnbq_m_n_<supf><mode>): Likewise.
(mve_vqrshrnbq_n_<supf><mode>): Likewise.
(mve_vqrshrntq_m_n_<supf><mode>): Likewise.
(mve_vqrshrntq_n_<supf><mode>): Likewise.
(mve_vqrshrunbq_m_n_s<mode>): Likewise.
(mve_vqrshrunbq_n_s<mode>): Likewise.
(mve_vqrshruntq_m_n_s<mode>): Likewise.
(mve_vqrshruntq_n_s<mode>): Likewise.
(mve_vqshlq_<supf><mode>): Likewise.
(mve_vqshlq_m_<supf><mode>): Likewise.
(mve_vqshlq_m_n_<supf><mode>): Likewise.
(mve_vqshlq_m_r_<supf><mode>): Likewise.
(mve_vqshlq_n_<supf><mode>): Likewise.
(mve_vqshlq_r_<supf><mode>): Likewise.
(mve_vqshluq_m_n_s<mode>): Likewise.
(mve_vqshluq_n_s<mode>): Likewise.
(mve_vqshrnbq_m_n_<supf><mode>): Likewise.
(mve_vqshrnbq_n_<supf><mode>): Likewise.
(mve_vqshrntq_m_n_<supf><mode>): Likewise.
(mve_vqshrntq_n_<supf><mode>): Likewise.
(mve_vqshrunbq_m_n_s<mode>): Likewise.
(mve_vqshrunbq_n_s<mode>): Likewise.
(mve_vqshruntq_m_n_s<mode>): Likewise.
(mve_vqshruntq_n_s<mode>): Likewise.
(mve_vqsubq_<supf><mode>): Likewise.
(mve_vqsubq_m_<supf><mode>): Likewise.
(mve_vqsubq_m_n_<supf><mode>): Likewise.
(mve_vqsubq_n_<supf><mode>): Likewise.
(mve_vrev16q_<supf>v16qi): Likewise.
(mve_vrev16q_m_<supf>v16qi): Likewise.
(mve_vrev32q_<supf><mode>): Likewise.
(mve_vrev32q_fv8hf): Likewise.
(mve_vrev32q_m_<supf><mode>): Likewise.
(mve_vrev32q_m_fv8hf): Likewise.
(mve_vrev64q_<supf><mode>): Likewise.
(mve_vrev64q_f<mode>): Likewise.
(mve_vrev64q_m_<supf><mode>): Likewise.
(mve_vrev64q_m_f<mode>): Likewise.
(mve_vrhaddq_<supf><mode>): Likewise.
(mve_vrhaddq_m_<supf><mode>): Likewise.
(mve_vrmlaldavhaq_<supf>v4si): Likewise.
(mve_vrmlaldavhaq_p_sv4si): Likewise.
(mve_vrmlaldavhaq_p_uv4si): Likewise.
(mve_vrmlaldavhaq_sv4si): Likewise.
(mve_vrmlaldavhaq_uv4si): Likewise.
(mve_vrmlaldavhaxq_p_sv4si): Likewise.
(mve_vrmlaldavhaxq_sv4si): Likewise.
(mve_vrmlaldavhq_<supf>v4si): Likewise.
(mve_vrmlaldavhq_p_<supf>v4si): Likewise.
(mve_vrmlaldavhxq_p_sv4si): Likewise.
(mve_vrmlaldavhxq_sv4si): Likewise.
(mve_vrmlsldavhaq_p_sv4si): Likewise.
(mve_vrmlsldavhaq_sv4si): Likewise.
(mve_vrmlsldavhaxq_p_sv4si): Likewise.
(mve_vrmlsldavhaxq_sv4si): Likewise.
(mve_vrmlsldavhq_p_sv4si): Likewise.
(mve_vrmlsldavhq_sv4si): Likewise.
(mve_vrmlsldavhxq_p_sv4si): Likewise.
(mve_vrmlsldavhxq_sv4si): Likewise.
(mve_vrmulhq_<supf><mode>): Likewise.
(mve_vrmulhq_m_<supf><mode>): Likewise.
(mve_vrndaq_f<mode>): Likewise.
(mve_vrndaq_m_f<mode>): Likewise.
(mve_vrndmq_f<mode>): Likewise.
(mve_vrndmq_m_f<mode>): Likewise.
(mve_vrndnq_f<mode>): Likewise.
(mve_vrndnq_m_f<mode>): Likewise.
(mve_vrndpq_f<mode>): Likewise.
(mve_vrndpq_m_f<mode>): Likewise.
(mve_vrndq_f<mode>): Likewise.
(mve_vrndq_m_f<mode>): Likewise.
(mve_vrndxq_f<mode>): Likewise.
(mve_vrndxq_m_f<mode>): Likewise.
(mve_vrshlq_<supf><mode>): Likewise.
(mve_vrshlq_m_<supf><mode>): Likewise.
(mve_vrshlq_m_n_<supf><mode>): Likewise.
(mve_vrshlq_n_<supf><mode>): Likewise.
(mve_vrshrnbq_m_n_<supf><mode>): Likewise.
(mve_vrshrnbq_n_<supf><mode>): Likewise.
(mve_vrshrntq_m_n_<supf><mode>): Likewise.
(mve_vrshrntq_n_<supf><mode>): Likewise.
(mve_vrshrq_m_n_<supf><mode>): Likewise.
(mve_vrshrq_n_<supf><mode>): Likewise.
(mve_vsbciq_<supf>v4si): Likewise.
(mve_vsbciq_m_<supf>v4si): Likewise.
(mve_vsbcq_<supf>v4si): Likewise.
(mve_vsbcq_m_<supf>v4si): Likewise.
(mve_vshlcq_<supf><mode>): Likewise.
(mve_vshlcq_m_<supf><mode>): Likewise.
(mve_vshllbq_m_n_<supf><mode>): Likewise.
(mve_vshllbq_n_<supf><mode>): Likewise.
(mve_vshlltq_m_n_<supf><mode>): Likewise.
(mve_vshlltq_n_<supf><mode>): Likewise.
(mve_vshlq_<supf><mode>): Likewise.
(mve_vshlq_<supf><mode>): Likewise.
(mve_vshlq_m_<supf><mode>): Likewise.
(mve_vshlq_m_n_<supf><mode>): Likewise.
(mve_vshlq_m_r_<supf><mode>): Likewise.
(mve_vshlq_n_<supf><mode>): Likewise.
(mve_vshlq_r_<supf><mode>): Likewise.
(mve_vshrnbq_m_n_<supf><mode>): Likewise.
(mve_vshrnbq_n_<supf><mode>): Likewise.
(mve_vshrntq_m_n_<supf><mode>): Likewise.
(mve_vshrntq_n_<supf><mode>): Likewise.
(mve_vshrq_m_n_<supf><mode>): Likewise.
(mve_vshrq_n_<supf><mode>): Likewise.
(mve_vsliq_m_n_<supf><mode>): Likewise.
(mve_vsliq_n_<supf><mode>): Likewise.
(mve_vsriq_m_n_<supf><mode>): Likewise.
(mve_vsriq_n_<supf><mode>): Likewise.
(mve_vstrbq_<supf><mode>): Likewise.
(mve_vstrbq_p_<supf><mode>): Likewise.
(mve_vstrbq_scatter_offset_<supf><mode>_insn): Likewise.
(mve_vstrbq_scatter_offset_p_<supf><mode>_insn): Likewise.
(mve_vstrdq_scatter_base_<supf>v2di): Likewise.
(mve_vstrdq_scatter_base_p_<supf>v2di): Likewise.
(mve_vstrdq_scatter_base_wb_<supf>v2di): Likewise.
(mve_vstrdq_scatter_base_wb_p_<supf>v2di): Likewise.
(mve_vstrdq_scatter_offset_<supf>v2di_insn): Likewise.
(mve_vstrdq_scatter_offset_p_<supf>v2di_insn): Likewise.
(mve_vstrdq_scatter_shifted_offset_<supf>v2di_insn): Likewise.
(mve_vstrdq_scatter_shifted_offset_p_<supf>v2di_insn): Likewise.
(mve_vstrhq_<supf><mode>): Likewise.
(mve_vstrhq_fv8hf): Likewise.
(mve_vstrhq_p_<supf><mode>): Likewise.
(mve_vstrhq_p_fv8hf): Likewise.
(mve_vstrhq_scatter_offset_<supf><mode>_insn): Likewise.
(mve_vstrhq_scatter_offset_fv8hf_insn): Likewise.
(mve_vstrhq_scatter_offset_p_<supf><mode>_insn): Likewise.
(mve_vstrhq_scatter_offset_p_fv8hf_insn): Likewise.
(mve_vstrhq_scatter_shifted_offset_<supf><mode>_insn): Likewise.
(mve_vstrhq_scatter_shifted_offset_fv8hf_insn): Likewise.
(mve_vstrhq_scatter_shifted_offset_p_<supf><mode>_insn): Likewise.
(mve_vstrhq_scatter_shifted_offset_p_fv8hf_insn): Likewise.
(mve_vstrwq_<supf>v4si): Likewise.
(mve_vstrwq_fv4sf): Likewise.
(mve_vstrwq_p_<supf>v4si): Likewise.
(mve_vstrwq_p_fv4sf): Likewise.
(mve_vstrwq_scatter_base_<supf>v4si): Likewise.
(mve_vstrwq_scatter_base_fv4sf): Likewise.
(mve_vstrwq_scatter_base_p_<supf>v4si): Likewise.
(mve_vstrwq_scatter_base_p_fv4sf): Likewise.
(mve_vstrwq_scatter_base_wb_<supf>v4si): Likewise.
(mve_vstrwq_scatter_base_wb_fv4sf): Likewise.
(mve_vstrwq_scatter_base_wb_p_<supf>v4si): Likewise.
(mve_vstrwq_scatter_base_wb_p_fv4sf): Likewise.
(mve_vstrwq_scatter_offset_<supf>v4si_insn): Likewise.
(mve_vstrwq_scatter_offset_fv4sf_insn): Likewise.
(mve_vstrwq_scatter_offset_p_<supf>v4si_insn): Likewise.
(mve_vstrwq_scatter_offset_p_fv4sf_insn): Likewise.
(mve_vstrwq_scatter_shifted_offset_<supf>v4si_insn): Likewise.
(mve_vstrwq_scatter_shifted_offset_fv4sf_insn): Likewise.
(mve_vstrwq_scatter_shifted_offset_p_<supf>v4si_insn): Likewise.
(mve_vstrwq_scatter_shifted_offset_p_fv4sf_insn): Likewise.
(mve_vsubq_<supf><mode>): Likewise.
(mve_vsubq_f<mode>): Likewise.
(mve_vsubq_m_<supf><mode>): Likewise.
(mve_vsubq_m_f<mode>): Likewise.
(mve_vsubq_m_n_<supf><mode>): Likewise.
(mve_vsubq_m_n_f<mode>): Likewise.
(mve_vsubq_n_<supf><mode>): Likewise.
(mve_vsubq_n_f<mode>): Likewise.
[-- Attachment #2: 1.patch --]
[-- Type: text/x-patch, Size: 128695 bytes --]
commit 7a25d85f91d84e53e707bb36d052f8196e49e147
Author: Stam Markianos-Wright <stam.markianos-wright@arm.com>
Date: Tue Oct 18 17:42:56 2022 +0100
arm: Add define_attr to create a mapping between MVE predicated and unpredicated insns
I'd like to submit two patches that add support for Arm's MVE
Tail Predicated Low Overhead Loop feature.
--- Introduction ---
The M-class Arm-ARM:
https://developer.arm.com/documentation/ddi0553/bu/?lang=en
Section B5.5.1 "Loop tail predication" describes the feature
we are adding support for with this patch (although
we only add codegen for DLSTP/LETP instruction loops).
Previously, with commit d2ed233cb94, we added support for
non-MVE DLS/LE loops through the loop-doloop pass, which, given
a standard MVE loop like:
```
void __attribute__ ((noinline)) test (int16_t *a, int16_t *b, int16_t *c, int n)
{
while (n > 0)
{
mve_pred16_t p = vctp16q (n);
int16x8_t va = vldrhq_z_s16 (a, p);
int16x8_t vb = vldrhq_z_s16 (b, p);
int16x8_t vc = vaddq_x_s16 (va, vb, p);
vstrhq_p_s16 (c, vc, p);
c+=8;
a+=8;
b+=8;
n-=8;
}
}
```
.. would output:
```
<pre-calculate the number of iterations and place it into lr>
dls lr, lr
.L3:
vctp.16 r3
vmrs ip, P0 @ movhi
sxth ip, ip
vmsr P0, ip @ movhi
mov r4, r0
vpst
vldrht.16 q2, [r4]
mov r4, r1
vmov q3, q0
vpst
vldrht.16 q1, [r4]
mov r4, r2
vpst
vaddt.i16 q3, q2, q1
subs r3, r3, #8
vpst
vstrht.16 q3, [r4]
adds r0, r0, #16
adds r1, r1, #16
adds r2, r2, #16
le lr, .L3
```
where the LE instruction will decrement LR by 1, compare and
branch if needed.
(There are also other inefficiencies in the above code, like the
pointless vmrs/sxth/vmsr on the VPR, the adds not being merged
into the vldrht/vstrht as #16 offsets, and some random movs!
But those are separate problems...)
The MVE version is similar, except that:
* Instead of DLS/LE the instructions are DLSTP/LETP.
* Instead of pre-calculating the number of iterations of the
loop, we place the number of elements to be processed by the
loop into LR.
* Instead of decrementing the LR by one, LETP will decrement it
by FPSCR.LTPSIZE, which is the number of elements being
processed in each iteration: 16 for 8-bit elements, 8 for 16-bit
elements, etc.
* On the final iteration, automatic Loop Tail Predication is
performed, as if the instructions within the loop had been VPT
predicated with a VCTP generating the VPR predicate in every
loop iteration.
The dlstp/letp loop now looks like:
```
<place n into r3>
dlstp.16 lr, r3
.L14:
mov r3, r0
vldrh.16 q3, [r3]
mov r3, r1
vldrh.16 q2, [r3]
mov r3, r2
vadd.i16 q3, q3, q2
adds r0, r0, #16
vstrh.16 q3, [r3]
adds r1, r1, #16
adds r2, r2, #16
letp lr, .L14
```
Since the loop tail predication is automatic, we have eliminated
the VCTP that had been specified by the user in the intrinsic
and converted the VPT-predicated instructions into their
unpredicated equivalents (which also saves us from VPST insns).
The LETP instruction here decrements LR by 8 in each iteration.
--- This 1/2 patch ---
This first patch lays some groundwork by adding an attribute to
md patterns, and then the second patch contains the functional
changes.
One major difficulty in implementing MVE Tail-Predicated Low
Overhead Loops was the need to transform VPT-predicated insns
in the insn chain into their unpredicated equivalents, like:
`mve_vldrbq_z_<supf><mode> -> mve_vldrbq_<supf><mode>`.
This requires us to have a deterministic link between the two
different patterns in mve.md -- this _could_ be done by
re-ordering the entirety of mve.md such that the patterns sit
at some constant icode proximity (e.g. having the _z pattern
immediately after the unpredicated version would mean that to
map from the former to the latter you could use icode-1), but
that is a very messy solution that would create fragile, hidden
dependencies on the ordering of patterns.
This patch provides an alternative way of doing that: using an insn
attribute to encode the icode of the unpredicated instruction.
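As a rough sketch (operand details elided, and the pattern body here
is illustrative rather than copied from mve.md), a predicated pattern
can record its unpredicated counterpart by setting the new attribute
to a symbol_ref naming that pattern's icode:
```
(define_insn "mve_vldrbq_z_<supf><mode>"
  [;; ... predicated (_z) load RTL elided ...]
  "TARGET_HAVE_MVE"
  "vpst\;vldrbt.<supf><V_sz_elem>\t%q0, %E1"
  ;; Point back at the unpredicated variant's icode so passes can
  ;; map mve_vldrbq_z_<supf><mode> -> mve_vldrbq_<supf><mode>.
  [(set (attr "mve_unpredicated_insn")
	(symbol_ref "CODE_FOR_mve_vldrbq_<supf><mode>"))
   (set_attr "type" "mve_load")])
```
A pass can then compare recog_memoized (insn) against the attribute
value (as the MVE_VPT_*_INSN_P macros below do) to classify an insn as
predicated, unpredicated, or not predicable at all.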
No regressions on arm-none-eabi with an MVE target.
Thank you,
Stam Markianos-Wright
gcc/ChangeLog:
* config/arm/arm.md (mve_unpredicated_insn): New attribute.
* config/arm/arm.h (MVE_VPT_PREDICATED_INSN_P): New define.
(MVE_VPT_UNPREDICATED_INSN_P): Likewise.
(MVE_VPT_PREDICABLE_INSN_P): Likewise.
* config/arm/vec-common.md (mve_vshlq_<supf><mode>): Add attribute.
* config/arm/mve.md (arm_vcx1q<a>_p_v16qi): Add attribute.
(arm_vcx1q<a>v16qi): Likewise.
(arm_vcx1qav16qi): Likewise.
(arm_vcx1qv16qi): Likewise.
(arm_vcx2q<a>_p_v16qi): Likewise.
(arm_vcx2q<a>v16qi): Likewise.
(arm_vcx2qav16qi): Likewise.
(arm_vcx2qv16qi): Likewise.
(arm_vcx3q<a>_p_v16qi): Likewise.
(arm_vcx3q<a>v16qi): Likewise.
(arm_vcx3qav16qi): Likewise.
(arm_vcx3qv16qi): Likewise.
(mve_vabavq_<supf><mode>): Likewise.
(mve_vabavq_p_<supf><mode>): Likewise.
(mve_vabdq_<supf><mode>): Likewise.
(mve_vabdq_f<mode>): Likewise.
(mve_vabdq_m_<supf><mode>): Likewise.
(mve_vabdq_m_f<mode>): Likewise.
(mve_vabsq_f<mode>): Likewise.
(mve_vabsq_m_f<mode>): Likewise.
(mve_vabsq_m_s<mode>): Likewise.
(mve_vabsq_s<mode>): Likewise.
(mve_vadciq_<supf>v4si): Likewise.
(mve_vadciq_m_<supf>v4si): Likewise.
(mve_vadcq_<supf>v4si): Likewise.
(mve_vadcq_m_<supf>v4si): Likewise.
(mve_vaddlvaq_<supf>v4si): Likewise.
(mve_vaddlvaq_p_<supf>v4si): Likewise.
(mve_vaddlvq_<supf>v4si): Likewise.
(mve_vaddlvq_p_<supf>v4si): Likewise.
(mve_vaddq_f<mode>): Likewise.
(mve_vaddq_m_<supf><mode>): Likewise.
(mve_vaddq_m_f<mode>): Likewise.
(mve_vaddq_m_n_<supf><mode>): Likewise.
(mve_vaddq_m_n_f<mode>): Likewise.
(mve_vaddq_n_<supf><mode>): Likewise.
(mve_vaddq_n_f<mode>): Likewise.
(mve_vaddq<mode>): Likewise.
(mve_vaddvaq_<supf><mode>): Likewise.
(mve_vaddvaq_p_<supf><mode>): Likewise.
(mve_vaddvq_<supf><mode>): Likewise.
(mve_vaddvq_p_<supf><mode>): Likewise.
(mve_vandq_<supf><mode>): Likewise.
(mve_vandq_f<mode>): Likewise.
(mve_vandq_m_<supf><mode>): Likewise.
(mve_vandq_m_f<mode>): Likewise.
(mve_vandq_s<mode>): Likewise.
(mve_vandq_u<mode>): Likewise.
(mve_vbicq_<supf><mode>): Likewise.
(mve_vbicq_f<mode>): Likewise.
(mve_vbicq_m_<supf><mode>): Likewise.
(mve_vbicq_m_f<mode>): Likewise.
(mve_vbicq_m_n_<supf><mode>): Likewise.
(mve_vbicq_n_<supf><mode>): Likewise.
(mve_vbicq_s<mode>): Likewise.
(mve_vbicq_u<mode>): Likewise.
(mve_vbrsrq_m_n_<supf><mode>): Likewise.
(mve_vbrsrq_m_n_f<mode>): Likewise.
(mve_vbrsrq_n_<supf><mode>): Likewise.
(mve_vbrsrq_n_f<mode>): Likewise.
(mve_vcaddq_rot270_m_<supf><mode>): Likewise.
(mve_vcaddq_rot270_m_f<mode>): Likewise.
(mve_vcaddq_rot270<mode>): Likewise.
(mve_vcaddq_rot270<mode>): Likewise.
(mve_vcaddq_rot90_m_<supf><mode>): Likewise.
(mve_vcaddq_rot90_m_f<mode>): Likewise.
(mve_vcaddq_rot90<mode>): Likewise.
(mve_vcaddq_rot90<mode>): Likewise.
(mve_vcaddq<mve_rot><mode>): Likewise.
(mve_vcaddq<mve_rot><mode>): Likewise.
(mve_vclsq_m_s<mode>): Likewise.
(mve_vclsq_s<mode>): Likewise.
(mve_vclzq_<supf><mode>): Likewise.
(mve_vclzq_m_<supf><mode>): Likewise.
(mve_vclzq_s<mode>): Likewise.
(mve_vclzq_u<mode>): Likewise.
(mve_vcmlaq_m_f<mode>): Likewise.
(mve_vcmlaq_rot180_m_f<mode>): Likewise.
(mve_vcmlaq_rot180<mode>): Likewise.
(mve_vcmlaq_rot270_m_f<mode>): Likewise.
(mve_vcmlaq_rot270<mode>): Likewise.
(mve_vcmlaq_rot90_m_f<mode>): Likewise.
(mve_vcmlaq_rot90<mode>): Likewise.
(mve_vcmlaq<mode>): Likewise.
(mve_vcmlaq<mve_rot><mode>): Likewise.
(mve_vcmp<mve_cmp_op>q_<mode>): Likewise.
(mve_vcmp<mve_cmp_op>q_f<mode>): Likewise.
(mve_vcmp<mve_cmp_op>q_n_<mode>): Likewise.
(mve_vcmp<mve_cmp_op>q_n_f<mode>): Likewise.
(mve_vcmpcsq_<mode>): Likewise.
(mve_vcmpcsq_m_n_u<mode>): Likewise.
(mve_vcmpcsq_m_u<mode>): Likewise.
(mve_vcmpcsq_n_<mode>): Likewise.
(mve_vcmpeqq_<mode>): Likewise.
(mve_vcmpeqq_f<mode>): Likewise.
(mve_vcmpeqq_m_<supf><mode>): Likewise.
(mve_vcmpeqq_m_f<mode>): Likewise.
(mve_vcmpeqq_m_n_<supf><mode>): Likewise.
(mve_vcmpeqq_m_n_f<mode>): Likewise.
(mve_vcmpeqq_n_<mode>): Likewise.
(mve_vcmpeqq_n_f<mode>): Likewise.
(mve_vcmpgeq_<mode>): Likewise.
(mve_vcmpgeq_f<mode>): Likewise.
(mve_vcmpgeq_m_f<mode>): Likewise.
(mve_vcmpgeq_m_n_f<mode>): Likewise.
(mve_vcmpgeq_m_n_s<mode>): Likewise.
(mve_vcmpgeq_m_s<mode>): Likewise.
(mve_vcmpgeq_n_<mode>): Likewise.
(mve_vcmpgeq_n_f<mode>): Likewise.
(mve_vcmpgtq_<mode>): Likewise.
(mve_vcmpgtq_f<mode>): Likewise.
(mve_vcmpgtq_m_f<mode>): Likewise.
(mve_vcmpgtq_m_n_f<mode>): Likewise.
(mve_vcmpgtq_m_n_s<mode>): Likewise.
(mve_vcmpgtq_m_s<mode>): Likewise.
(mve_vcmpgtq_n_<mode>): Likewise.
(mve_vcmpgtq_n_f<mode>): Likewise.
(mve_vcmphiq_<mode>): Likewise.
(mve_vcmphiq_m_n_u<mode>): Likewise.
(mve_vcmphiq_m_u<mode>): Likewise.
(mve_vcmphiq_n_<mode>): Likewise.
(mve_vcmpleq_<mode>): Likewise.
(mve_vcmpleq_f<mode>): Likewise.
(mve_vcmpleq_m_f<mode>): Likewise.
(mve_vcmpleq_m_n_f<mode>): Likewise.
(mve_vcmpleq_m_n_s<mode>): Likewise.
(mve_vcmpleq_m_s<mode>): Likewise.
(mve_vcmpleq_n_<mode>): Likewise.
(mve_vcmpleq_n_f<mode>): Likewise.
(mve_vcmpltq_<mode>): Likewise.
(mve_vcmpltq_f<mode>): Likewise.
(mve_vcmpltq_m_f<mode>): Likewise.
(mve_vcmpltq_m_n_f<mode>): Likewise.
(mve_vcmpltq_m_n_s<mode>): Likewise.
(mve_vcmpltq_m_s<mode>): Likewise.
(mve_vcmpltq_n_<mode>): Likewise.
(mve_vcmpltq_n_f<mode>): Likewise.
(mve_vcmpneq_<mode>): Likewise.
(mve_vcmpneq_f<mode>): Likewise.
(mve_vcmpneq_m_<supf><mode>): Likewise.
(mve_vcmpneq_m_f<mode>): Likewise.
(mve_vcmpneq_m_n_<supf><mode>): Likewise.
(mve_vcmpneq_m_n_f<mode>): Likewise.
(mve_vcmpneq_n_<mode>): Likewise.
(mve_vcmpneq_n_f<mode>): Likewise.
(mve_vcmulq_m_f<mode>): Likewise.
(mve_vcmulq_rot180_m_f<mode>): Likewise.
(mve_vcmulq_rot180<mode>): Likewise.
(mve_vcmulq_rot270_m_f<mode>): Likewise.
(mve_vcmulq_rot270<mode>): Likewise.
(mve_vcmulq_rot90_m_f<mode>): Likewise.
(mve_vcmulq_rot90<mode>): Likewise.
(mve_vcmulq<mode>): Likewise.
(mve_vcmulq<mve_rot><mode>): Likewise.
(mve_vctp<mode1>q_mhi): Likewise.
(mve_vctp<mode1>qhi): Likewise.
(mve_vcvtaq_<supf><mode>): Likewise.
(mve_vcvtaq_m_<supf><mode>): Likewise.
(mve_vcvtbq_f16_f32v8hf): Likewise.
(mve_vcvtbq_f32_f16v4sf): Likewise.
(mve_vcvtbq_m_f16_f32v8hf): Likewise.
(mve_vcvtbq_m_f32_f16v4sf): Likewise.
(mve_vcvtmq_<supf><mode>): Likewise.
(mve_vcvtmq_m_<supf><mode>): Likewise.
(mve_vcvtnq_<supf><mode>): Likewise.
(mve_vcvtnq_m_<supf><mode>): Likewise.
(mve_vcvtpq_<supf><mode>): Likewise.
(mve_vcvtpq_m_<supf><mode>): Likewise.
(mve_vcvtq_from_f_<supf><mode>): Likewise.
(mve_vcvtq_m_from_f_<supf><mode>): Likewise.
(mve_vcvtq_m_n_from_f_<supf><mode>): Likewise.
(mve_vcvtq_m_n_to_f_<supf><mode>): Likewise.
(mve_vcvtq_m_to_f_<supf><mode>): Likewise.
(mve_vcvtq_n_from_f_<supf><mode>): Likewise.
(mve_vcvtq_n_to_f_<supf><mode>): Likewise.
(mve_vcvtq_to_f_<supf><mode>): Likewise.
(mve_vcvttq_f16_f32v8hf): Likewise.
(mve_vcvttq_f32_f16v4sf): Likewise.
(mve_vcvttq_m_f16_f32v8hf): Likewise.
(mve_vcvttq_m_f32_f16v4sf): Likewise.
(mve_vddupq_m_wb_u<mode>_insn): Likewise.
(mve_vddupq_u<mode>_insn): Likewise.
(mve_vdupq_m_n_<supf><mode>): Likewise.
(mve_vdupq_m_n_f<mode>): Likewise.
(mve_vdupq_n_<supf><mode>): Likewise.
(mve_vdupq_n_f<mode>): Likewise.
(mve_vdwdupq_m_wb_u<mode>_insn): Likewise.
(mve_vdwdupq_wb_u<mode>_insn): Likewise.
(mve_veorq_<supf><mode>): Likewise.
(mve_veorq_f<mode>): Likewise.
(mve_veorq_m_<supf><mode>): Likewise.
(mve_veorq_m_f<mode>): Likewise.
(mve_veorq_s<mode>): Likewise.
(mve_veorq_u<mode>): Likewise.
(mve_vfmaq_f<mode>): Likewise.
(mve_vfmaq_m_f<mode>): Likewise.
(mve_vfmaq_m_n_f<mode>): Likewise.
(mve_vfmaq_n_f<mode>): Likewise.
(mve_vfmasq_m_n_f<mode>): Likewise.
(mve_vfmasq_n_f<mode>): Likewise.
(mve_vfmsq_f<mode>): Likewise.
(mve_vfmsq_m_f<mode>): Likewise.
(mve_vhaddq_<supf><mode>): Likewise.
(mve_vhaddq_m_<supf><mode>): Likewise.
(mve_vhaddq_m_n_<supf><mode>): Likewise.
(mve_vhaddq_n_<supf><mode>): Likewise.
(mve_vhcaddq_rot270_m_s<mode>): Likewise.
(mve_vhcaddq_rot270_s<mode>): Likewise.
(mve_vhcaddq_rot90_m_s<mode>): Likewise.
(mve_vhcaddq_rot90_s<mode>): Likewise.
(mve_vhsubq_<supf><mode>): Likewise.
(mve_vhsubq_m_<supf><mode>): Likewise.
(mve_vhsubq_m_n_<supf><mode>): Likewise.
(mve_vhsubq_n_<supf><mode>): Likewise.
(mve_vidupq_m_wb_u<mode>_insn): Likewise.
(mve_vidupq_u<mode>_insn): Likewise.
(mve_viwdupq_m_wb_u<mode>_insn): Likewise.
(mve_viwdupq_wb_u<mode>_insn): Likewise.
(mve_vldrbq_<supf><mode>): Likewise.
(mve_vldrbq_gather_offset_<supf><mode>): Likewise.
(mve_vldrbq_gather_offset_z_<supf><mode>): Likewise.
(mve_vldrbq_z_<supf><mode>): Likewise.
(mve_vldrdq_gather_base_<supf>v2di): Likewise.
(mve_vldrdq_gather_base_wb_<supf>v2di_insn): Likewise.
(mve_vldrdq_gather_base_wb_z_<supf>v2di_insn): Likewise.
(mve_vldrdq_gather_base_z_<supf>v2di): Likewise.
(mve_vldrdq_gather_offset_<supf>v2di): Likewise.
(mve_vldrdq_gather_offset_z_<supf>v2di): Likewise.
(mve_vldrdq_gather_shifted_offset_<supf>v2di): Likewise.
(mve_vldrdq_gather_shifted_offset_z_<supf>v2di): Likewise.
(mve_vldrhq_<supf><mode>): Likewise.
(mve_vldrhq_fv8hf): Likewise.
(mve_vldrhq_gather_offset_<supf><mode>): Likewise.
(mve_vldrhq_gather_offset_fv8hf): Likewise.
(mve_vldrhq_gather_offset_z_<supf><mode>): Likewise.
(mve_vldrhq_gather_offset_z_fv8hf): Likewise.
(mve_vldrhq_gather_shifted_offset_<supf><mode>): Likewise.
(mve_vldrhq_gather_shifted_offset_fv8hf): Likewise.
(mve_vldrhq_gather_shifted_offset_z_<supf><mode>): Likewise.
(mve_vldrhq_gather_shifted_offset_z_fv8hf): Likewise.
(mve_vldrhq_z_<supf><mode>): Likewise.
(mve_vldrhq_z_fv8hf): Likewise.
(mve_vldrwq_<supf>v4si): Likewise.
(mve_vldrwq_fv4sf): Likewise.
(mve_vldrwq_gather_base_<supf>v4si): Likewise.
(mve_vldrwq_gather_base_fv4sf): Likewise.
(mve_vldrwq_gather_base_wb_<supf>v4si_insn): Likewise.
(mve_vldrwq_gather_base_wb_fv4sf_insn): Likewise.
(mve_vldrwq_gather_base_wb_z_<supf>v4si_insn): Likewise.
(mve_vldrwq_gather_base_wb_z_fv4sf_insn): Likewise.
(mve_vldrwq_gather_base_z_<supf>v4si): Likewise.
(mve_vldrwq_gather_base_z_fv4sf): Likewise.
(mve_vldrwq_gather_offset_<supf>v4si): Likewise.
(mve_vldrwq_gather_offset_fv4sf): Likewise.
(mve_vldrwq_gather_offset_z_<supf>v4si): Likewise.
(mve_vldrwq_gather_offset_z_fv4sf): Likewise.
(mve_vldrwq_gather_shifted_offset_<supf>v4si): Likewise.
(mve_vldrwq_gather_shifted_offset_fv4sf): Likewise.
(mve_vldrwq_gather_shifted_offset_z_<supf>v4si): Likewise.
(mve_vldrwq_gather_shifted_offset_z_fv4sf): Likewise.
(mve_vldrwq_z_<supf>v4si): Likewise.
(mve_vldrwq_z_fv4sf): Likewise.
(mve_vmaxaq_m_s<mode>): Likewise.
(mve_vmaxaq_s<mode>): Likewise.
(mve_vmaxavq_p_s<mode>): Likewise.
(mve_vmaxavq_s<mode>): Likewise.
(mve_vmaxnmaq_f<mode>): Likewise.
(mve_vmaxnmaq_m_f<mode>): Likewise.
(mve_vmaxnmavq_f<mode>): Likewise.
(mve_vmaxnmavq_p_f<mode>): Likewise.
(mve_vmaxnmq_f<mode>): Likewise.
(mve_vmaxnmq_m_f<mode>): Likewise.
(mve_vmaxnmvq_f<mode>): Likewise.
(mve_vmaxnmvq_p_f<mode>): Likewise.
(mve_vmaxq_<supf><mode>): Likewise.
(mve_vmaxq_m_<supf><mode>): Likewise.
(mve_vmaxq_s<mode>): Likewise.
(mve_vmaxq_u<mode>): Likewise.
(mve_vmaxvq_<supf><mode>): Likewise.
(mve_vmaxvq_p_<supf><mode>): Likewise.
(mve_vminaq_m_s<mode>): Likewise.
(mve_vminaq_s<mode>): Likewise.
(mve_vminavq_p_s<mode>): Likewise.
(mve_vminavq_s<mode>): Likewise.
(mve_vminnmaq_f<mode>): Likewise.
(mve_vminnmaq_m_f<mode>): Likewise.
(mve_vminnmavq_f<mode>): Likewise.
(mve_vminnmavq_p_f<mode>): Likewise.
(mve_vminnmq_f<mode>): Likewise.
(mve_vminnmq_m_f<mode>): Likewise.
(mve_vminnmvq_f<mode>): Likewise.
(mve_vminnmvq_p_f<mode>): Likewise.
(mve_vminq_<supf><mode>): Likewise.
(mve_vminq_m_<supf><mode>): Likewise.
(mve_vminq_s<mode>): Likewise.
(mve_vminq_u<mode>): Likewise.
(mve_vminvq_<supf><mode>): Likewise.
(mve_vminvq_p_<supf><mode>): Likewise.
(mve_vmladavaq_<supf><mode>): Likewise.
(mve_vmladavaq_p_<supf><mode>): Likewise.
(mve_vmladavaxq_p_s<mode>): Likewise.
(mve_vmladavaxq_s<mode>): Likewise.
(mve_vmladavq_<supf><mode>): Likewise.
(mve_vmladavq_p_<supf><mode>): Likewise.
(mve_vmladavxq_p_s<mode>): Likewise.
(mve_vmladavxq_s<mode>): Likewise.
(mve_vmlaldavaq_<supf><mode>): Likewise.
(mve_vmlaldavaq_p_<supf><mode>): Likewise.
(mve_vmlaldavaxq_<supf><mode>): Likewise.
(mve_vmlaldavaxq_p_<supf><mode>): Likewise.
(mve_vmlaldavaxq_s<mode>): Likewise.
(mve_vmlaldavq_<supf><mode>): Likewise.
(mve_vmlaldavq_p_<supf><mode>): Likewise.
(mve_vmlaldavxq_p_s<mode>): Likewise.
(mve_vmlaldavxq_s<mode>): Likewise.
(mve_vmlaq_m_n_<supf><mode>): Likewise.
(mve_vmlaq_n_<supf><mode>): Likewise.
(mve_vmlasq_m_n_<supf><mode>): Likewise.
(mve_vmlasq_n_<supf><mode>): Likewise.
(mve_vmlsdavaq_p_s<mode>): Likewise.
(mve_vmlsdavaq_s<mode>): Likewise.
(mve_vmlsdavaxq_p_s<mode>): Likewise.
(mve_vmlsdavaxq_s<mode>): Likewise.
(mve_vmlsdavq_p_s<mode>): Likewise.
(mve_vmlsdavq_s<mode>): Likewise.
(mve_vmlsdavxq_p_s<mode>): Likewise.
(mve_vmlsdavxq_s<mode>): Likewise.
(mve_vmlsldavaq_p_s<mode>): Likewise.
(mve_vmlsldavaq_s<mode>): Likewise.
(mve_vmlsldavaxq_p_s<mode>): Likewise.
(mve_vmlsldavaxq_s<mode>): Likewise.
(mve_vmlsldavq_p_s<mode>): Likewise.
(mve_vmlsldavq_s<mode>): Likewise.
(mve_vmlsldavxq_p_s<mode>): Likewise.
(mve_vmlsldavxq_s<mode>): Likewise.
(mve_vmovlbq_<supf><mode>): Likewise.
(mve_vmovlbq_m_<supf><mode>): Likewise.
(mve_vmovltq_<supf><mode>): Likewise.
(mve_vmovltq_m_<supf><mode>): Likewise.
(mve_vmovnbq_<supf><mode>): Likewise.
(mve_vmovnbq_m_<supf><mode>): Likewise.
(mve_vmovntq_<supf><mode>): Likewise.
(mve_vmovntq_m_<supf><mode>): Likewise.
(mve_vmulhq_<supf><mode>): Likewise.
(mve_vmulhq_m_<supf><mode>): Likewise.
(mve_vmullbq_int_<supf><mode>): Likewise.
(mve_vmullbq_int_m_<supf><mode>): Likewise.
(mve_vmullbq_poly_m_p<mode>): Likewise.
(mve_vmullbq_poly_p<mode>): Likewise.
(mve_vmulltq_int_<supf><mode>): Likewise.
(mve_vmulltq_int_m_<supf><mode>): Likewise.
(mve_vmulltq_poly_m_p<mode>): Likewise.
(mve_vmulltq_poly_p<mode>): Likewise.
(mve_vmulq_<supf><mode>): Likewise.
(mve_vmulq_f<mode>): Likewise.
(mve_vmulq_m_<supf><mode>): Likewise.
(mve_vmulq_m_f<mode>): Likewise.
(mve_vmulq_m_n_<supf><mode>): Likewise.
(mve_vmulq_m_n_f<mode>): Likewise.
(mve_vmulq_n_<supf><mode>): Likewise.
(mve_vmulq_n_f<mode>): Likewise.
(mve_vmvnq_<supf><mode>): Likewise.
(mve_vmvnq_m_<supf><mode>): Likewise.
(mve_vmvnq_m_n_<supf><mode>): Likewise.
(mve_vmvnq_n_<supf><mode>): Likewise.
(mve_vmvnq_s<mode>): Likewise.
(mve_vmvnq_u<mode>): Likewise.
(mve_vnegq_f<mode>): Likewise.
(mve_vnegq_m_f<mode>): Likewise.
(mve_vnegq_m_s<mode>): Likewise.
(mve_vnegq_s<mode>): Likewise.
(mve_vornq_<supf><mode>): Likewise.
(mve_vornq_f<mode>): Likewise.
(mve_vornq_m_<supf><mode>): Likewise.
(mve_vornq_m_f<mode>): Likewise.
(mve_vornq_s<mode>): Likewise.
(mve_vornq_u<mode>): Likewise.
(mve_vorrq_<supf><mode>): Likewise.
(mve_vorrq_f<mode>): Likewise.
(mve_vorrq_m_<supf><mode>): Likewise.
(mve_vorrq_m_f<mode>): Likewise.
(mve_vorrq_m_n_<supf><mode>): Likewise.
(mve_vorrq_n_<supf><mode>): Likewise.
(mve_vorrq_s<mode>): Likewise.
(mve_vorrq_s<mode>): Likewise.
(mve_vqabsq_m_s<mode>): Likewise.
(mve_vqabsq_s<mode>): Likewise.
(mve_vqaddq_<supf><mode>): Likewise.
(mve_vqaddq_m_<supf><mode>): Likewise.
(mve_vqaddq_m_n_<supf><mode>): Likewise.
(mve_vqaddq_n_<supf><mode>): Likewise.
(mve_vqdmladhq_m_s<mode>): Likewise.
(mve_vqdmladhq_s<mode>): Likewise.
(mve_vqdmladhxq_m_s<mode>): Likewise.
(mve_vqdmladhxq_s<mode>): Likewise.
(mve_vqdmlahq_m_n_s<mode>): Likewise.
(mve_vqdmlahq_n_<supf><mode>): Likewise.
(mve_vqdmlahq_n_s<mode>): Likewise.
(mve_vqdmlashq_m_n_s<mode>): Likewise.
(mve_vqdmlashq_n_<supf><mode>): Likewise.
(mve_vqdmlashq_n_s<mode>): Likewise.
(mve_vqdmlsdhq_m_s<mode>): Likewise.
(mve_vqdmlsdhq_s<mode>): Likewise.
(mve_vqdmlsdhxq_m_s<mode>): Likewise.
(mve_vqdmlsdhxq_s<mode>): Likewise.
(mve_vqdmulhq_m_n_s<mode>): Likewise.
(mve_vqdmulhq_m_s<mode>): Likewise.
(mve_vqdmulhq_n_s<mode>): Likewise.
(mve_vqdmulhq_s<mode>): Likewise.
(mve_vqdmullbq_m_n_s<mode>): Likewise.
(mve_vqdmullbq_m_s<mode>): Likewise.
(mve_vqdmullbq_n_s<mode>): Likewise.
(mve_vqdmullbq_s<mode>): Likewise.
(mve_vqdmulltq_m_n_s<mode>): Likewise.
(mve_vqdmulltq_m_s<mode>): Likewise.
(mve_vqdmulltq_n_s<mode>): Likewise.
(mve_vqdmulltq_s<mode>): Likewise.
(mve_vqmovnbq_<supf><mode>): Likewise.
(mve_vqmovnbq_m_<supf><mode>): Likewise.
(mve_vqmovntq_<supf><mode>): Likewise.
(mve_vqmovntq_m_<supf><mode>): Likewise.
(mve_vqmovunbq_m_s<mode>): Likewise.
(mve_vqmovunbq_s<mode>): Likewise.
(mve_vqmovuntq_m_s<mode>): Likewise.
(mve_vqmovuntq_s<mode>): Likewise.
(mve_vqnegq_m_s<mode>): Likewise.
(mve_vqnegq_s<mode>): Likewise.
(mve_vqrdmladhq_m_s<mode>): Likewise.
(mve_vqrdmladhq_s<mode>): Likewise.
(mve_vqrdmladhxq_m_s<mode>): Likewise.
(mve_vqrdmladhxq_s<mode>): Likewise.
(mve_vqrdmlahq_m_n_s<mode>): Likewise.
(mve_vqrdmlahq_n_<supf><mode>): Likewise.
(mve_vqrdmlahq_n_s<mode>): Likewise.
(mve_vqrdmlashq_m_n_s<mode>): Likewise.
(mve_vqrdmlashq_n_<supf><mode>): Likewise.
(mve_vqrdmlashq_n_s<mode>): Likewise.
(mve_vqrdmlsdhq_m_s<mode>): Likewise.
(mve_vqrdmlsdhq_s<mode>): Likewise.
(mve_vqrdmlsdhxq_m_s<mode>): Likewise.
(mve_vqrdmlsdhxq_s<mode>): Likewise.
(mve_vqrdmulhq_m_n_s<mode>): Likewise.
(mve_vqrdmulhq_m_s<mode>): Likewise.
(mve_vqrdmulhq_n_s<mode>): Likewise.
(mve_vqrdmulhq_s<mode>): Likewise.
(mve_vqrshlq_<supf><mode>): Likewise.
(mve_vqrshlq_m_<supf><mode>): Likewise.
(mve_vqrshlq_m_n_<supf><mode>): Likewise.
(mve_vqrshlq_n_<supf><mode>): Likewise.
(mve_vqrshrnbq_m_n_<supf><mode>): Likewise.
(mve_vqrshrnbq_n_<supf><mode>): Likewise.
(mve_vqrshrntq_m_n_<supf><mode>): Likewise.
(mve_vqrshrntq_n_<supf><mode>): Likewise.
(mve_vqrshrunbq_m_n_s<mode>): Likewise.
(mve_vqrshrunbq_n_s<mode>): Likewise.
(mve_vqrshruntq_m_n_s<mode>): Likewise.
(mve_vqrshruntq_n_s<mode>): Likewise.
(mve_vqshlq_<supf><mode>): Likewise.
(mve_vqshlq_m_<supf><mode>): Likewise.
(mve_vqshlq_m_n_<supf><mode>): Likewise.
(mve_vqshlq_m_r_<supf><mode>): Likewise.
(mve_vqshlq_n_<supf><mode>): Likewise.
(mve_vqshlq_r_<supf><mode>): Likewise.
(mve_vqshluq_m_n_s<mode>): Likewise.
(mve_vqshluq_n_s<mode>): Likewise.
(mve_vqshrnbq_m_n_<supf><mode>): Likewise.
(mve_vqshrnbq_n_<supf><mode>): Likewise.
(mve_vqshrntq_m_n_<supf><mode>): Likewise.
(mve_vqshrntq_n_<supf><mode>): Likewise.
(mve_vqshrunbq_m_n_s<mode>): Likewise.
(mve_vqshrunbq_n_s<mode>): Likewise.
(mve_vqshruntq_m_n_s<mode>): Likewise.
(mve_vqshruntq_n_s<mode>): Likewise.
(mve_vqsubq_<supf><mode>): Likewise.
(mve_vqsubq_m_<supf><mode>): Likewise.
(mve_vqsubq_m_n_<supf><mode>): Likewise.
(mve_vqsubq_n_<supf><mode>): Likewise.
(mve_vrev16q_<supf>v16qi): Likewise.
(mve_vrev16q_m_<supf>v16qi): Likewise.
(mve_vrev32q_<supf><mode>): Likewise.
(mve_vrev32q_fv8hf): Likewise.
(mve_vrev32q_m_<supf><mode>): Likewise.
(mve_vrev32q_m_fv8hf): Likewise.
(mve_vrev64q_<supf><mode>): Likewise.
(mve_vrev64q_f<mode>): Likewise.
(mve_vrev64q_m_<supf><mode>): Likewise.
(mve_vrev64q_m_f<mode>): Likewise.
(mve_vrhaddq_<supf><mode>): Likewise.
(mve_vrhaddq_m_<supf><mode>): Likewise.
(mve_vrmlaldavhaq_<supf>v4si): Likewise.
(mve_vrmlaldavhaq_p_sv4si): Likewise.
(mve_vrmlaldavhaq_p_uv4si): Likewise.
(mve_vrmlaldavhaq_sv4si): Likewise.
(mve_vrmlaldavhaq_uv4si): Likewise.
(mve_vrmlaldavhaxq_p_sv4si): Likewise.
(mve_vrmlaldavhaxq_sv4si): Likewise.
(mve_vrmlaldavhq_<supf>v4si): Likewise.
(mve_vrmlaldavhq_p_<supf>v4si): Likewise.
(mve_vrmlaldavhxq_p_sv4si): Likewise.
(mve_vrmlaldavhxq_sv4si): Likewise.
(mve_vrmlsldavhaq_p_sv4si): Likewise.
(mve_vrmlsldavhaq_sv4si): Likewise.
(mve_vrmlsldavhaxq_p_sv4si): Likewise.
(mve_vrmlsldavhaxq_sv4si): Likewise.
(mve_vrmlsldavhq_p_sv4si): Likewise.
(mve_vrmlsldavhq_sv4si): Likewise.
(mve_vrmlsldavhxq_p_sv4si): Likewise.
(mve_vrmlsldavhxq_sv4si): Likewise.
(mve_vrmulhq_<supf><mode>): Likewise.
(mve_vrmulhq_m_<supf><mode>): Likewise.
(mve_vrndaq_f<mode>): Likewise.
(mve_vrndaq_m_f<mode>): Likewise.
(mve_vrndmq_f<mode>): Likewise.
(mve_vrndmq_m_f<mode>): Likewise.
(mve_vrndnq_f<mode>): Likewise.
(mve_vrndnq_m_f<mode>): Likewise.
(mve_vrndpq_f<mode>): Likewise.
(mve_vrndpq_m_f<mode>): Likewise.
(mve_vrndq_f<mode>): Likewise.
(mve_vrndq_m_f<mode>): Likewise.
(mve_vrndxq_f<mode>): Likewise.
(mve_vrndxq_m_f<mode>): Likewise.
(mve_vrshlq_<supf><mode>): Likewise.
(mve_vrshlq_m_<supf><mode>): Likewise.
(mve_vrshlq_m_n_<supf><mode>): Likewise.
(mve_vrshlq_n_<supf><mode>): Likewise.
(mve_vrshrnbq_m_n_<supf><mode>): Likewise.
(mve_vrshrnbq_n_<supf><mode>): Likewise.
(mve_vrshrntq_m_n_<supf><mode>): Likewise.
(mve_vrshrntq_n_<supf><mode>): Likewise.
(mve_vrshrq_m_n_<supf><mode>): Likewise.
(mve_vrshrq_n_<supf><mode>): Likewise.
(mve_vsbciq_<supf>v4si): Likewise.
(mve_vsbciq_m_<supf>v4si): Likewise.
(mve_vsbcq_<supf>v4si): Likewise.
(mve_vsbcq_m_<supf>v4si): Likewise.
(mve_vshlcq_<supf><mode>): Likewise.
(mve_vshlcq_m_<supf><mode>): Likewise.
(mve_vshllbq_m_n_<supf><mode>): Likewise.
(mve_vshllbq_n_<supf><mode>): Likewise.
(mve_vshlltq_m_n_<supf><mode>): Likewise.
(mve_vshlltq_n_<supf><mode>): Likewise.
(mve_vshlq_<supf><mode>): Likewise.
(mve_vshlq_<supf><mode>): Likewise.
(mve_vshlq_m_<supf><mode>): Likewise.
(mve_vshlq_m_n_<supf><mode>): Likewise.
(mve_vshlq_m_r_<supf><mode>): Likewise.
(mve_vshlq_n_<supf><mode>): Likewise.
(mve_vshlq_r_<supf><mode>): Likewise.
(mve_vshrnbq_m_n_<supf><mode>): Likewise.
(mve_vshrnbq_n_<supf><mode>): Likewise.
(mve_vshrntq_m_n_<supf><mode>): Likewise.
(mve_vshrntq_n_<supf><mode>): Likewise.
(mve_vshrq_m_n_<supf><mode>): Likewise.
(mve_vshrq_n_<supf><mode>): Likewise.
(mve_vsliq_m_n_<supf><mode>): Likewise.
(mve_vsliq_n_<supf><mode>): Likewise.
(mve_vsriq_m_n_<supf><mode>): Likewise.
(mve_vsriq_n_<supf><mode>): Likewise.
(mve_vstrbq_<supf><mode>): Likewise.
(mve_vstrbq_p_<supf><mode>): Likewise.
(mve_vstrbq_scatter_offset_<supf><mode>_insn): Likewise.
(mve_vstrbq_scatter_offset_p_<supf><mode>_insn): Likewise.
(mve_vstrdq_scatter_base_<supf>v2di): Likewise.
(mve_vstrdq_scatter_base_p_<supf>v2di): Likewise.
(mve_vstrdq_scatter_base_wb_<supf>v2di): Likewise.
(mve_vstrdq_scatter_base_wb_p_<supf>v2di): Likewise.
(mve_vstrdq_scatter_offset_<supf>v2di_insn): Likewise.
(mve_vstrdq_scatter_offset_p_<supf>v2di_insn): Likewise.
(mve_vstrdq_scatter_shifted_offset_<supf>v2di_insn): Likewise.
(mve_vstrdq_scatter_shifted_offset_p_<supf>v2di_insn): Likewise.
(mve_vstrhq_<supf><mode>): Likewise.
(mve_vstrhq_fv8hf): Likewise.
(mve_vstrhq_p_<supf><mode>): Likewise.
(mve_vstrhq_p_fv8hf): Likewise.
(mve_vstrhq_scatter_offset_<supf><mode>_insn): Likewise.
(mve_vstrhq_scatter_offset_fv8hf_insn): Likewise.
(mve_vstrhq_scatter_offset_p_<supf><mode>_insn): Likewise.
(mve_vstrhq_scatter_offset_p_fv8hf_insn): Likewise.
(mve_vstrhq_scatter_shifted_offset_<supf><mode>_insn): Likewise.
(mve_vstrhq_scatter_shifted_offset_fv8hf_insn): Likewise.
(mve_vstrhq_scatter_shifted_offset_p_<supf><mode>_insn): Likewise.
(mve_vstrhq_scatter_shifted_offset_p_fv8hf_insn): Likewise.
(mve_vstrwq_<supf>v4si): Likewise.
(mve_vstrwq_fv4sf): Likewise.
(mve_vstrwq_p_<supf>v4si): Likewise.
(mve_vstrwq_p_fv4sf): Likewise.
(mve_vstrwq_scatter_base_<supf>v4si): Likewise.
(mve_vstrwq_scatter_base_fv4sf): Likewise.
(mve_vstrwq_scatter_base_p_<supf>v4si): Likewise.
(mve_vstrwq_scatter_base_p_fv4sf): Likewise.
(mve_vstrwq_scatter_base_wb_<supf>v4si): Likewise.
(mve_vstrwq_scatter_base_wb_fv4sf): Likewise.
(mve_vstrwq_scatter_base_wb_p_<supf>v4si): Likewise.
(mve_vstrwq_scatter_base_wb_p_fv4sf): Likewise.
(mve_vstrwq_scatter_offset_<supf>v4si_insn): Likewise.
(mve_vstrwq_scatter_offset_fv4sf_insn): Likewise.
(mve_vstrwq_scatter_offset_p_<supf>v4si_insn): Likewise.
(mve_vstrwq_scatter_offset_p_fv4sf_insn): Likewise.
(mve_vstrwq_scatter_shifted_offset_<supf>v4si_insn): Likewise.
(mve_vstrwq_scatter_shifted_offset_fv4sf_insn): Likewise.
(mve_vstrwq_scatter_shifted_offset_p_<supf>v4si_insn): Likewise.
(mve_vstrwq_scatter_shifted_offset_p_fv4sf_insn): Likewise.
(mve_vsubq_<supf><mode>): Likewise.
(mve_vsubq_f<mode>): Likewise.
(mve_vsubq_m_<supf><mode>): Likewise.
(mve_vsubq_m_f<mode>): Likewise.
(mve_vsubq_m_n_<supf><mode>): Likewise.
(mve_vsubq_m_n_f<mode>): Likewise.
(mve_vsubq_n_<supf><mode>): Likewise.
(mve_vsubq_n_f<mode>): Likewise.
diff --git a/gcc/config/arm/arm.h b/gcc/config/arm/arm.h
index 4f54530adcb..f06e5c2cda4 100644
--- a/gcc/config/arm/arm.h
+++ b/gcc/config/arm/arm.h
@@ -2358,6 +2358,21 @@ extern int making_const_table;
else if (TARGET_THUMB1) \
thumb1_final_prescan_insn (INSN)
+/* These defines are useful to refer to the value of the mve_unpredicated_insn
+ insn attribute. Note that, because these use the get_attr_* function, these
+ will change recog_data if (INSN) isn't current_insn. */
+#define MVE_VPT_PREDICABLE_INSN_P(INSN) \
+ (recog_memoized (INSN) >= 0 \
+ && get_attr_mve_unpredicated_insn (INSN) != 0) \
+
+#define MVE_VPT_PREDICATED_INSN_P(INSN) \
+ (MVE_VPT_PREDICABLE_INSN_P (INSN) \
+ && recog_memoized (INSN) != get_attr_mve_unpredicated_insn (INSN)) \
+
+#define MVE_VPT_UNPREDICATED_INSN_P(INSN) \
+ (MVE_VPT_PREDICABLE_INSN_P (INSN) \
+ && recog_memoized (INSN) == get_attr_mve_unpredicated_insn (INSN)) \
+
#define ARM_SIGN_EXTEND(x) ((HOST_WIDE_INT) \
(HOST_BITS_PER_WIDE_INT <= 32 ? (unsigned HOST_WIDE_INT) (x) \
: ((((unsigned HOST_WIDE_INT)(x)) & (unsigned HOST_WIDE_INT) 0xffffffff) |\
diff --git a/gcc/config/arm/arm.md b/gcc/config/arm/arm.md
index 2ac97232ffd..ee931ad6ebd 100644
--- a/gcc/config/arm/arm.md
+++ b/gcc/config/arm/arm.md
@@ -124,6 +124,8 @@
; and not all ARM insns do.
(define_attr "predicated" "yes,no" (const_string "no"))
+(define_attr "mve_unpredicated_insn" "" (const_int 0))
+
; LENGTH of an instruction (in bytes)
(define_attr "length" ""
(const_int 4))
diff --git a/gcc/config/arm/iterators.md b/gcc/config/arm/iterators.md
index 2edd0b06370..71e43539616 100644
--- a/gcc/config/arm/iterators.md
+++ b/gcc/config/arm/iterators.md
@@ -2296,6 +2296,7 @@
(define_int_attr mmla_sfx [(UNSPEC_MATMUL_S "s8") (UNSPEC_MATMUL_U "u8")
(UNSPEC_MATMUL_US "s8")])
+
;;MVE int attribute.
(define_int_attr supf [(VCVTQ_TO_F_S "s") (VCVTQ_TO_F_U "u") (VREV16Q_S "s")
(VREV16Q_U "u") (VMVNQ_N_S "s") (VMVNQ_N_U "u")
diff --git a/gcc/config/arm/mve.md b/gcc/config/arm/mve.md
index 6e4b143affa..87cbf6c1726 100644
--- a/gcc/config/arm/mve.md
+++ b/gcc/config/arm/mve.md
@@ -17,7 +17,7 @@
;; along with GCC; see the file COPYING3. If not see
;; <http://www.gnu.org/licenses/>.
-(define_insn "*mve_mov<mode>"
+(define_insn "mve_mov<mode>"
[(set (match_operand:MVE_types 0 "nonimmediate_operand" "=w,w,r,w , w, r,Ux,w")
(match_operand:MVE_types 1 "general_operand" " w,r,w,DnDm,UxUi,r,w, Ul"))]
"TARGET_HAVE_MVE || TARGET_HAVE_MVE_FLOAT"
@@ -81,18 +81,27 @@
return "";
}
}
- [(set_attr "type" "mve_move,mve_move,mve_move,mve_move,mve_load,multiple,mve_store,mve_load")
+ [(set_attr_alternative "mve_unpredicated_insn" [(symbol_ref "CODE_FOR_mve_mov<mode>")
+ (symbol_ref "CODE_FOR_nothing")
+ (symbol_ref "CODE_FOR_nothing")
+ (symbol_ref "CODE_FOR_mve_mov<mode>")
+ (symbol_ref "CODE_FOR_mve_mov<mode>")
+ (symbol_ref "CODE_FOR_nothing")
+ (symbol_ref "CODE_FOR_mve_mov<mode>")
+ (symbol_ref "CODE_FOR_nothing")])
+ (set_attr "type" "mve_move,mve_move,mve_move,mve_move,mve_load,multiple,mve_store,mve_load")
(set_attr "length" "4,8,8,4,4,8,4,8")
(set_attr "thumb2_pool_range" "*,*,*,*,1018,*,*,*")
(set_attr "neg_pool_range" "*,*,*,*,996,*,*,*")])
-(define_insn "*mve_vdup<mode>"
+(define_insn "mve_vdup<mode>"
[(set (match_operand:MVE_vecs 0 "s_register_operand" "=w")
(vec_duplicate:MVE_vecs
(match_operand:<V_elem> 1 "s_register_operand" "r")))]
"TARGET_HAVE_MVE || TARGET_HAVE_MVE_FLOAT"
"vdup.<V_sz_elem>\t%q0, %1"
- [(set_attr "length" "4")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vdup<mode>"))
+ (set_attr "length" "4")
(set_attr "type" "mve_move")])
;;
@@ -145,7 +154,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_mnemo>.f%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -159,7 +169,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -173,7 +184,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"v<absneg_str>.f%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_v<absneg_str>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -187,7 +199,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.%#<V_sz_elem>\t%q0, %1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -201,7 +214,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
;; [vcvttq_f32_f16])
@@ -214,7 +228,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvtt.f32.f16\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvttq_f32_f16v4sf"))
+ (set_attr "type" "mve_move")
])
;;
@@ -228,7 +243,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvtb.f32.f16\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtbq_f32_f16v4sf"))
+ (set_attr "type" "mve_move")
])
;;
@@ -242,7 +258,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvt.f%#<V_sz_elem>.<supf>%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtq_to_f_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -256,7 +273,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -270,7 +288,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtq_from_f_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -284,7 +303,8 @@
]
"TARGET_HAVE_MVE"
"v<absneg_str>.s%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_v<absneg_str>q_s<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -297,7 +317,8 @@
]
"TARGET_HAVE_MVE"
"vmvn\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmvnq_u<mode>"))
+ (set_attr "type" "mve_move")
])
(define_expand "mve_vmvnq_s<mode>"
[
@@ -318,7 +339,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.%#<V_sz_elem>\t%q0, %1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -331,7 +353,8 @@
]
"TARGET_HAVE_MVE"
"vclz.i%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vclzq_s<mode>"))
+ (set_attr "type" "mve_move")
])
(define_expand "mve_vclzq_u<mode>"
[
@@ -354,7 +377,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -368,7 +392,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -382,7 +407,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -397,7 +423,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -411,7 +438,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvtp.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtpq_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -425,7 +453,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvtn.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtnq_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -439,7 +468,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvtm.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtmq_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -453,7 +483,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvta.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtaq_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -467,7 +498,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.i%#<V_sz_elem>\t%q0, %1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -481,7 +513,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -495,7 +528,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>32\t%Q0, %R0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf>v4si"))
+ (set_attr "type" "mve_move")
])
;;
@@ -509,7 +543,8 @@
]
"TARGET_HAVE_MVE"
"vctp.<MVE_vctp>\t%1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vctp<MVE_vctp>q<MVE_vpred>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -523,7 +558,8 @@
]
"TARGET_HAVE_MVE"
"vpnot"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vpnotv16bi"))
+ (set_attr "type" "mve_move")
])
;;
@@ -538,7 +574,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.<V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -553,7 +590,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvt.f<V_sz_elem>.<supf><V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtq_n_to_f_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;; [vcreateq_f])
@@ -599,7 +637,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf><V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;; Versions that take constant vectors as operand 2 (with all elements
@@ -617,7 +656,8 @@
VALID_NEON_QREG_MODE (<MODE>mode),
true);
}
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vshrq_n_s<mode>_imm"))
+ (set_attr "type" "mve_move")
])
(define_insn "mve_vshrq_n_u<mode>_imm"
[
@@ -632,7 +672,8 @@
VALID_NEON_QREG_MODE (<MODE>mode),
true);
}
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vshrq_n_u<mode>_imm"))
+ (set_attr "type" "mve_move")
])
;;
@@ -647,7 +688,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvt.<supf><V_sz_elem>.f<V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtq_n_from_f_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -662,8 +704,9 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>32\t%Q0, %R0, %q1"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf>v4si"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")])
;;
;; [vcmpneq_, vcmpcsq_, vcmpeqq_, vcmpgeq_, vcmpgtq_, vcmphiq_, vcmpleq_, vcmpltq_])
@@ -676,7 +719,8 @@
]
"TARGET_HAVE_MVE"
"vcmp.<mve_cmp_type>%#<V_sz_elem>\t<mve_cmp_op>, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmp<mve_cmp_op>q_<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -691,7 +735,8 @@
]
"TARGET_HAVE_MVE"
"vcmp.<mve_cmp_type>%#<V_sz_elem> <mve_cmp_op>, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmp<mve_cmp_op>q_n_<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -722,7 +767,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -739,7 +785,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.i%#<V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -754,7 +801,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -769,7 +817,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -789,8 +838,11 @@
"@
vand\t%q0, %q1, %q2
* return neon_output_logic_immediate (\"vand\", &operands[2], <MODE>mode, 1, VALID_NEON_QREG_MODE (<MODE>mode));"
- [(set_attr "type" "mve_move")
+ [(set_attr_alternative "mve_unpredicated_insn" [(symbol_ref "CODE_FOR_mve_vandq_u<mode>")
+ (symbol_ref "CODE_FOR_nothing")])
+ (set_attr "type" "mve_move")
])
+
(define_expand "mve_vandq_s<mode>"
[
(set (match_operand:MVE_2 0 "s_register_operand")
@@ -811,7 +863,8 @@
]
"TARGET_HAVE_MVE"
"vbic\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vbicq_u<mode>"))
+ (set_attr "type" "mve_move")
])
(define_expand "mve_vbicq_s<mode>"
@@ -835,7 +888,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.%#<V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -853,7 +907,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<isu>%#<V_sz_elem>\t%q0, %q1, %q2, #<rot>"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q<mve_rot>_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;; Auto vectorizer pattern for int vcadd
@@ -876,7 +931,8 @@
]
"TARGET_HAVE_MVE"
"veor\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_veorq_u<mode>"))
+ (set_attr "type" "mve_move")
])
(define_expand "mve_veorq_s<mode>"
[
@@ -904,7 +960,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -920,7 +977,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.s%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -935,7 +993,8 @@
]
"TARGET_HAVE_MVE"
"<max_min_su_str>.<max_min_supf>%#<V_sz_elem>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<max_min_su_str>q_<max_min_supf><mode>"))
+ (set_attr "type" "mve_move")
])
@@ -954,7 +1013,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -972,7 +1032,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -987,7 +1048,8 @@
]
"TARGET_HAVE_MVE"
"vmullb.<supf>%#<V_sz_elem>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmullbq_int_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1002,7 +1064,8 @@
]
"TARGET_HAVE_MVE"
"vmullt.<supf>%#<V_sz_elem>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmulltq_int_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1018,7 +1081,8 @@
]
"TARGET_HAVE_MVE"
"<mve_addsubmul>.i%#<V_sz_elem>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_addsubmul>q<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1032,7 +1096,8 @@
]
"TARGET_HAVE_MVE"
"vorn\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vornq_s<mode>"))
+ (set_attr "type" "mve_move")
])
(define_expand "mve_vornq_u<mode>"
@@ -1061,7 +1126,8 @@
"@
vorr\t%q0, %q1, %q2
* return neon_output_logic_immediate (\"vorr\", &operands[2], <MODE>mode, 0, VALID_NEON_QREG_MODE (<MODE>mode));"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vorrq_s<mode>"))
+ (set_attr "type" "mve_move")
])
(define_expand "mve_vorrq_u<mode>"
[
@@ -1085,7 +1151,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1101,7 +1168,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1117,7 +1185,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_r_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1132,7 +1201,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1147,7 +1217,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.f%#<V_sz_elem>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1162,7 +1233,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>32\t%Q0, %R0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf>v4si"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1179,7 +1251,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.f%#<V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1193,7 +1266,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vand\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vandq_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1207,7 +1281,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vbic\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vbicq_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1223,7 +1298,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.f%#<V_sz_elem>\t%q0, %q1, %q2, #<rot>"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q<mve_rot>_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1237,7 +1313,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcmp.f%#<V_sz_elem> <mve_cmp_op>, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmp<mve_cmp_op>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1252,7 +1329,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcmp.f%#<V_sz_elem> <mve_cmp_op>, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmp<mve_cmp_op>q_n_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1267,8 +1345,10 @@
]
"TARGET_HAVE_MVE"
"vpst\;vctpt.<MVE_vctp>\t%1"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vctp<MVE_vctp>q<MVE_vpred>"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")
+])
;;
;; [vcvtbq_f16_f32])
@@ -1282,7 +1362,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvtb.f16.f32\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtbq_f16_f32v8hf"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1297,7 +1378,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvtt.f16.f32\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvttq_f16_f32v8hf"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1311,7 +1393,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"veor\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_veorq_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1327,7 +1410,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.f%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1345,7 +1429,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.f%#<V_sz_elem>\t%0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1360,7 +1445,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<max_min_f_str>.f%#<V_sz_elem> %q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<max_min_f_str>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1378,7 +1464,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%Q0, %R0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1398,7 +1485,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<isu>%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1414,7 +1502,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_addsubmul>.f%#<V_sz_elem>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_addsubmul>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1428,7 +1517,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vorn\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vornq_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1442,7 +1532,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vorr\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vorrq_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1458,7 +1549,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.i%#<V_sz_elem> %q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1474,7 +1566,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.s%#<V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1490,7 +1583,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.s%#<V_sz_elem>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1508,7 +1602,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>32\t%Q0, %R0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf>v4si"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1524,7 +1619,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1539,7 +1635,8 @@
]
"TARGET_HAVE_MVE"
"vmullt.p%#<V_sz_elem>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmulltq_poly_p<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1554,7 +1651,8 @@
]
"TARGET_HAVE_MVE"
"vmullb.p%#<V_sz_elem>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmullbq_poly_p<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1575,8 +1673,10 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmpt.f%#<V_sz_elem>\t<mve_cmp_op1>, %q1, %q2"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmp<mve_cmp_op1>q_f<mode>"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")])
+
;;
;; [vcvtaq_m_u, vcvtaq_m_s])
;;
@@ -1590,8 +1689,10 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtat.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtaq_<supf><mode>"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")])
+
;;
;; [vcvtq_m_to_f_s, vcvtq_m_to_f_u])
;;
@@ -1605,8 +1706,9 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtt.f%#<V_sz_elem>.<supf>%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtq_to_f_<supf><mode>"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")])
;;
;; [vqrshrnbq_n_u, vqrshrnbq_n_s]
@@ -1632,7 +1734,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<isu>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1651,7 +1754,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>32\t%Q0, %R0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf>v4si"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1667,7 +1771,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1713,7 +1818,10 @@
(match_dup 4)]
VSHLCQ))]
"TARGET_HAVE_MVE"
- "vshlc\t%q0, %1, %4")
+ "vshlc\t%q0, %1, %4"
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vshlcq_<supf><mode>"))
+ (set_attr "type" "mve_move")
+])
;;
;; [vabsq_m_s]
@@ -1733,7 +1841,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<isu>%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1749,7 +1858,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1772,7 +1882,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vcmpt.<isu>%#<V_sz_elem>\t<mve_cmp_op1>, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmp<mve_cmp_op1>q_n_<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1795,7 +1906,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vcmpt.<isu>%#<V_sz_elem>\t<mve_cmp_op1>, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmp<mve_cmp_op1>q_<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1811,7 +1923,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.%#<V_sz_elem>\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1828,7 +1941,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.s%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1847,7 +1961,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1866,7 +1981,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1885,7 +2001,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1906,7 +2023,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1922,7 +2040,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1938,7 +2057,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1961,7 +2081,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.s%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1978,7 +2099,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1995,7 +2117,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_r_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2011,7 +2134,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2027,7 +2151,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -2043,7 +2168,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -2066,7 +2192,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_mnemo>t.f%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2082,7 +2209,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>32\t%Q0, %R0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
;; [vcmlaq, vcmlaq_rot90, vcmlaq_rot180, vcmlaq_rot270])
@@ -2100,7 +2228,9 @@
"@
vcmul.f%#<V_sz_elem> %q0, %q2, %q3, #<rot>
vcmla.f%#<V_sz_elem> %q0, %q2, %q3, #<rot>"
- [(set_attr "type" "mve_move")
+ [(set_attr_alternative "mve_unpredicated_insn" [(symbol_ref "CODE_FOR_mve_<mve_insn>q<mve_rot>_f<mode>")
+ (symbol_ref "CODE_FOR_mve_<mve_insn>q<mve_rot>_f<mode>")])
+ (set_attr "type" "mve_move")
])
;;
@@ -2121,7 +2251,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmpt.f%#<V_sz_elem>\t<mve_cmp_op1>, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmp<mve_cmp_op1>q_n_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2137,7 +2268,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtbt.f16.f32\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtbq_f16_f32v8hf"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2153,7 +2285,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtbt.f32.f16\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtbq_f32_f16v4sf"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2169,7 +2302,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvttt.f16.f32\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvttq_f16_f32v8hf"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2185,8 +2319,9 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvttt.f32.f16\t%q0, %q2"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvttq_f32_f16v4sf"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")])
;;
;; [vdupq_m_n_f])
@@ -2201,7 +2336,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t.%#<V_sz_elem>\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2218,7 +2354,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.f%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -2235,7 +2372,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.f%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -2252,7 +2390,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t.f%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2271,7 +2410,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t.f%#<V_sz_elem>\t%0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2290,7 +2430,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%Q0, %R0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -2309,7 +2450,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%Q0, %R0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2326,7 +2468,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2347,7 +2490,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<isu>%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2363,7 +2507,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.i%#<V_sz_elem>\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2380,7 +2525,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.i%#<V_sz_elem>\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2396,7 +2542,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -2412,7 +2559,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t.<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2428,7 +2576,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2444,7 +2593,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t.%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2463,7 +2613,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>32\t%Q0, %R0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2479,7 +2630,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtmt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtmq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2495,7 +2647,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtpt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtpq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2511,7 +2664,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtnt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtnq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2528,7 +2682,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtq_n_from_f_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2544,7 +2699,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2560,8 +2716,9 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtq_from_f_<supf><mode>"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")])
;;
;; [vabavq_p_s, vabavq_p_u])
@@ -2577,7 +2734,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length" "8")])
;;
@@ -2594,8 +2752,9 @@
]
"TARGET_HAVE_MVE"
"vpst\n\t<mve_insn>t.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
- (set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
+ (set_attr "length" "8")])
;;
;; [vsriq_m_n_s, vsriq_m_n_u])
@@ -2611,8 +2770,9 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
- (set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
+ (set_attr "length" "8")])
;;
;; [vcvtq_m_n_to_f_u, vcvtq_m_n_to_f_s])
@@ -2628,7 +2788,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtt.f%#<V_sz_elem>.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtq_n_to_f_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2668,7 +2829,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2687,8 +2849,9 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.i%#<V_sz_elem> %q0, %q2, %3"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")])
;;
;; [vaddq_m_u, vaddq_m_s]
@@ -2706,7 +2869,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.i%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2726,7 +2890,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2743,8 +2908,9 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")])
;;
;; [vcaddq_rot90_m_u, vcaddq_rot90_m_s]
@@ -2763,7 +2929,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<isu>%#<V_sz_elem>\t%q0, %q2, %q3, #<rot>"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q<mve_rot>_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2791,7 +2958,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2812,7 +2980,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2829,7 +2998,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmullbt.<supf>%#<V_sz_elem> %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmullbq_int_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2846,7 +3016,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmulltt.<supf>%#<V_sz_elem> %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmulltq_int_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2863,7 +3034,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vornt\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vornq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2881,7 +3053,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2899,7 +3072,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2916,7 +3090,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2936,7 +3111,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%Q0, %R0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2964,7 +3140,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<isu>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2984,7 +3161,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>32\t%Q0, %R0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3002,7 +3180,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3019,7 +3198,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmullbt.p%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmullbq_poly_p<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3036,7 +3216,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmulltt.p%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmulltq_poly_p<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3054,7 +3235,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.s%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3072,7 +3254,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.s%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3096,7 +3279,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t.f%#<V_sz_elem> %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3117,7 +3301,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t.f%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3137,7 +3322,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3154,7 +3340,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t.%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3176,7 +3363,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t.f%#<V_sz_elem>\t%q0, %q2, %q3, #<rot>"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q<mve_rot>_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3196,7 +3384,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t.f%#<V_sz_elem>\t%q0, %q2, %q3, #<rot>"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q<mve_rot>_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3213,7 +3402,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vornt\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vornq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3233,7 +3423,8 @@
output_asm_insn("vstrb.<V_sz_elem>\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrbq_<supf><mode>"))
+ (set_attr "length" "4")])
;;
;; [vstrbq_scatter_offset_s vstrbq_scatter_offset_u]
@@ -3261,7 +3452,8 @@
VSTRBSOQ))]
"TARGET_HAVE_MVE"
"vstrb.<V_sz_elem>\t%q2, [%0, %q1]"
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrbq_scatter_offset_<supf><mode>_insn"))
+ (set_attr "length" "4")])
;;
;; [vstrwq_scatter_base_s vstrwq_scatter_base_u]
@@ -3283,7 +3475,8 @@
output_asm_insn("vstrw.u32\t%q2, [%q0, %1]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_<supf>v4si"))
+ (set_attr "length" "4")])
;;
;; [vldrbq_gather_offset_s vldrbq_gather_offset_u]
@@ -3306,7 +3499,8 @@
output_asm_insn ("vldrb.<supf><V_sz_elem>\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrbq_gather_offset_<supf><mode>"))
+ (set_attr "length" "4")])
;;
;; [vldrbq_s vldrbq_u]
@@ -3328,7 +3522,8 @@
output_asm_insn ("vldrb.<supf><V_sz_elem>\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrbq_<supf><mode>"))
+ (set_attr "length" "4")])
;;
;; [vldrwq_gather_base_s vldrwq_gather_base_u]
@@ -3348,7 +3543,8 @@
output_asm_insn ("vldrw.u32\t%q0, [%q1, %2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_<supf>v4si"))
+ (set_attr "length" "4")])
;;
;; [vstrbq_scatter_offset_p_s vstrbq_scatter_offset_p_u]
@@ -3380,7 +3576,8 @@
VSTRBSOQ))]
"TARGET_HAVE_MVE"
"vpst\;vstrbt.<V_sz_elem>\t%q2, [%0, %q1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrbq_scatter_offset_<supf><mode>_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_base_p_s vstrwq_scatter_base_p_u]
@@ -3403,7 +3600,8 @@
output_asm_insn ("vpst\n\tvstrwt.u32\t%q2, [%q0, %1]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_<supf>v4si"))
+ (set_attr "length" "8")])
(define_insn "mve_vstrbq_p_<supf><mode>"
[(set (match_operand:<MVE_B_ELEM> 0 "mve_memory_operand" "=Ux")
@@ -3421,7 +3619,8 @@
output_asm_insn ("vpst\;vstrbt.<V_sz_elem>\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrbq_<supf><mode>"))
+ (set_attr "length" "8")])
;;
;; [vldrbq_gather_offset_z_s vldrbq_gather_offset_z_u]
@@ -3446,7 +3645,8 @@
output_asm_insn ("vpst\n\tvldrbt.<supf><V_sz_elem>\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrbq_gather_offset_<supf><mode>"))
+ (set_attr "length" "8")])
;;
;; [vldrbq_z_s vldrbq_z_u]
@@ -3469,7 +3669,8 @@
output_asm_insn ("vpst\;vldrbt.<supf><V_sz_elem>\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrbq_<supf><mode>"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_gather_base_z_s vldrwq_gather_base_z_u]
@@ -3490,7 +3691,8 @@
output_asm_insn ("vpst\n\tvldrwt.u32\t%q0, [%q1, %2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_<supf>v4si"))
+ (set_attr "length" "8")])
;;
;; [vldrhq_f]
@@ -3509,7 +3711,8 @@
output_asm_insn ("vldrh.16\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_fv8hf"))
+ (set_attr "length" "4")])
;;
;; [vldrhq_gather_offset_s vldrhq_gather_offset_u]
@@ -3532,7 +3735,8 @@
output_asm_insn ("vldrh.<supf><V_sz_elem>\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_offset_<supf><mode>"))
+ (set_attr "length" "4")])
;;
;; [vldrhq_gather_offset_z_s vldrhq_gather_offset_z_u]
@@ -3557,7 +3761,8 @@
output_asm_insn ("vpst\n\tvldrht.<supf><V_sz_elem>\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_offset_<supf><mode>"))
+ (set_attr "length" "8")])
;;
;; [vldrhq_gather_shifted_offset_s vldrhq_gather_shifted_offset_u]
@@ -3580,7 +3785,8 @@
output_asm_insn ("vldrh.<supf><V_sz_elem>\t%q0, [%m1, %q2, uxtw #1]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_shifted_offset_<supf><mode>"))
+ (set_attr "length" "4")])
;;
;; [vldrhq_gather_shifted_offset_z_s vldrhq_gather_shited_offset_z_u]
@@ -3605,7 +3811,8 @@
output_asm_insn ("vpst\n\tvldrht.<supf><V_sz_elem>\t%q0, [%m1, %q2, uxtw #1]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_shifted_offset_<supf><mode>"))
+ (set_attr "length" "8")])
;;
;; [vldrhq_s, vldrhq_u]
@@ -3627,7 +3834,8 @@
output_asm_insn ("vldrh.<supf><V_sz_elem>\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_<supf><mode>"))
+ (set_attr "length" "4")])
;;
;; [vldrhq_z_f]
@@ -3647,7 +3855,8 @@
output_asm_insn ("vpst\;vldrht.16\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_fv8hf"))
+ (set_attr "length" "8")])
;;
;; [vldrhq_z_s vldrhq_z_u]
@@ -3670,7 +3879,8 @@
output_asm_insn ("vpst\;vldrht.<supf><V_sz_elem>\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_<supf><mode>"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_f]
@@ -3689,7 +3899,8 @@
output_asm_insn ("vldrw.32\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_fv4sf"))
+ (set_attr "length" "4")])
;;
;; [vldrwq_s vldrwq_u]
@@ -3708,7 +3919,8 @@
output_asm_insn ("vldrw.32\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_<supf>v4si"))
+ (set_attr "length" "4")])
;;
;; [vldrwq_z_f]
@@ -3728,7 +3940,8 @@
output_asm_insn ("vpst\;vldrwt.32\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_fv4sf"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_z_s vldrwq_z_u]
@@ -3748,7 +3961,8 @@
output_asm_insn ("vpst\;vldrwt.32\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_<supf>v4si"))
+ (set_attr "length" "8")])
(define_expand "mve_vld1q_f<mode>"
[(match_operand:MVE_0 0 "s_register_operand")
@@ -3788,7 +4002,8 @@
output_asm_insn ("vldrd.64\t%q0, [%q1, %2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_base_<supf>v2di"))
+ (set_attr "length" "4")])
;;
;; [vldrdq_gather_base_z_s vldrdq_gather_base_z_u]
@@ -3809,7 +4024,8 @@
output_asm_insn ("vpst\n\tvldrdt.u64\t%q0, [%q1, %2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_base_<supf>v2di"))
+ (set_attr "length" "8")])
;;
;; [vldrdq_gather_offset_s vldrdq_gather_offset_u]
@@ -3829,7 +4045,8 @@
output_asm_insn ("vldrd.u64\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_offset_<supf>v2di"))
+ (set_attr "length" "4")])
;;
;; [vldrdq_gather_offset_z_s vldrdq_gather_offset_z_u]
@@ -3850,7 +4067,8 @@
output_asm_insn ("vpst\n\tvldrdt.u64\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_offset_<supf>v2di"))
+ (set_attr "length" "8")])
;;
;; [vldrdq_gather_shifted_offset_s vldrdq_gather_shifted_offset_u]
@@ -3870,7 +4088,8 @@
output_asm_insn ("vldrd.u64\t%q0, [%m1, %q2, uxtw #3]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_shifted_offset_<supf>v2di"))
+ (set_attr "length" "4")])
;;
;; [vldrdq_gather_shifted_offset_z_s vldrdq_gather_shifted_offset_z_u]
@@ -3891,7 +4110,8 @@
output_asm_insn ("vpst\n\tvldrdt.u64\t%q0, [%m1, %q2, uxtw #3]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_shifted_offset_<supf>v2di"))
+ (set_attr "length" "8")])
;;
;; [vldrhq_gather_offset_f]
@@ -3911,7 +4131,8 @@
output_asm_insn ("vldrh.f16\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_offset_fv8hf"))
+ (set_attr "length" "4")])
;;
;; [vldrhq_gather_offset_z_f]
@@ -3933,7 +4154,8 @@
output_asm_insn ("vpst\n\tvldrht.f16\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_offset_fv8hf"))
+ (set_attr "length" "8")])
;;
;; [vldrhq_gather_shifted_offset_f]
@@ -3953,7 +4175,8 @@
output_asm_insn ("vldrh.f16\t%q0, [%m1, %q2, uxtw #1]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_shifted_offset_fv8hf"))
+ (set_attr "length" "4")])
;;
;; [vldrhq_gather_shifted_offset_z_f]
@@ -3975,7 +4198,8 @@
output_asm_insn ("vpst\n\tvldrht.f16\t%q0, [%m1, %q2, uxtw #1]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_shifted_offset_fv8hf"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_gather_base_f]
@@ -3995,7 +4219,8 @@
output_asm_insn ("vldrw.u32\t%q0, [%q1, %2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_fv4sf"))
+ (set_attr "length" "4")])
;;
;; [vldrwq_gather_base_z_f]
@@ -4016,7 +4241,8 @@
output_asm_insn ("vpst\n\tvldrwt.u32\t%q0, [%q1, %2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_fv4sf"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_gather_offset_f]
@@ -4036,7 +4262,8 @@
output_asm_insn ("vldrw.u32\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_offset_fv4sf"))
+ (set_attr "length" "4")])
;;
;; [vldrwq_gather_offset_s vldrwq_gather_offset_u]
@@ -4056,7 +4283,8 @@
output_asm_insn ("vldrw.u32\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_offset_<supf>v4si"))
+ (set_attr "length" "4")])
;;
;; [vldrwq_gather_offset_z_f]
@@ -4078,7 +4306,8 @@
output_asm_insn ("vpst\n\tvldrwt.u32\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_offset_fv4sf"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_gather_offset_z_s vldrwq_gather_offset_z_u]
@@ -4100,7 +4329,8 @@
output_asm_insn ("vpst\n\tvldrwt.u32\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_offset_<supf>v4si"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_gather_shifted_offset_f]
@@ -4120,7 +4350,8 @@
output_asm_insn ("vldrw.u32\t%q0, [%m1, %q2, uxtw #2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_shifted_offset_fv4sf"))
+ (set_attr "length" "4")])
;;
;; [vldrwq_gather_shifted_offset_s vldrwq_gather_shifted_offset_u]
@@ -4140,7 +4371,8 @@
output_asm_insn ("vldrw.u32\t%q0, [%m1, %q2, uxtw #2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_shifted_offset_<supf>v4si"))
+ (set_attr "length" "4")])
;;
;; [vldrwq_gather_shifted_offset_z_f]
@@ -4162,7 +4394,8 @@
output_asm_insn ("vpst\n\tvldrwt.u32\t%q0, [%m1, %q2, uxtw #2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_shifted_offset_fv4sf"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_gather_shifted_offset_z_s vldrwq_gather_shifted_offset_z_u]
@@ -4184,7 +4417,8 @@
output_asm_insn ("vpst\n\tvldrwt.u32\t%q0, [%m1, %q2, uxtw #2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_shifted_offset_<supf>v4si"))
+ (set_attr "length" "8")])
;;
;; [vstrhq_f]
@@ -4203,7 +4437,8 @@
output_asm_insn ("vstrh.16\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_fv8hf"))
+ (set_attr "length" "4")])
;;
;; [vstrhq_p_f]
@@ -4224,7 +4459,8 @@
output_asm_insn ("vpst\;vstrht.16\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_fv8hf"))
+ (set_attr "length" "8")])
;;
;; [vstrhq_p_s vstrhq_p_u]
@@ -4246,7 +4482,8 @@
output_asm_insn ("vpst\;vstrht.<V_sz_elem>\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_<supf><mode>"))
+ (set_attr "length" "8")])
;;
;; [vstrhq_scatter_offset_p_s vstrhq_scatter_offset_p_u]
@@ -4278,7 +4515,8 @@
VSTRHSOQ))]
"TARGET_HAVE_MVE"
"vpst\;vstrht.<V_sz_elem>\t%q2, [%0, %q1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_offset_<supf><mode>_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrhq_scatter_offset_s vstrhq_scatter_offset_u]
@@ -4306,7 +4544,8 @@
VSTRHSOQ))]
"TARGET_HAVE_MVE"
"vstrh.<V_sz_elem>\t%q2, [%0, %q1]"
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_offset_<supf><mode>_insn"))
+ (set_attr "length" "4")])
;;
;; [vstrhq_scatter_shifted_offset_p_s vstrhq_scatter_shifted_offset_p_u]
@@ -4338,7 +4577,8 @@
VSTRHSSOQ))]
"TARGET_HAVE_MVE"
"vpst\;vstrht.<V_sz_elem>\t%q2, [%0, %q1, uxtw #1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_shifted_offset_<supf><mode>_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrhq_scatter_shifted_offset_s vstrhq_scatter_shifted_offset_u]
@@ -4367,7 +4607,8 @@
VSTRHSSOQ))]
"TARGET_HAVE_MVE"
"vstrh.<V_sz_elem>\t%q2, [%0, %q1, uxtw #1]"
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_shifted_offset_<supf><mode>_insn"))
+ (set_attr "length" "4")])
;;
;; [vstrhq_s, vstrhq_u]
@@ -4386,7 +4627,8 @@
output_asm_insn ("vstrh.<V_sz_elem>\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_<supf><mode>"))
+ (set_attr "length" "4")])
;;
;; [vstrwq_f]
@@ -4405,7 +4647,8 @@
output_asm_insn ("vstrw.32\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_fv4sf"))
+ (set_attr "length" "4")])
;;
;; [vstrwq_p_f]
@@ -4426,7 +4669,8 @@
output_asm_insn ("vpst\;vstrwt.32\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_fv4sf"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_p_s vstrwq_p_u]
@@ -4447,7 +4691,8 @@
output_asm_insn ("vpst\;vstrwt.32\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_<supf>v4si"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_s vstrwq_u]
@@ -4466,7 +4711,8 @@
output_asm_insn ("vstrw.32\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_<supf>v4si"))
+ (set_attr "length" "4")])
(define_expand "mve_vst1q_f<mode>"
[(match_operand:<MVE_CNVT> 0 "mve_memory_operand")
@@ -4509,7 +4755,8 @@
output_asm_insn ("vpst\;\tvstrdt.u64\t%q2, [%q0, %1]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_base_<supf>v2di"))
+ (set_attr "length" "8")])
;;
;; [vstrdq_scatter_base_s vstrdq_scatter_base_u]
@@ -4531,7 +4778,8 @@
output_asm_insn ("vstrd.u64\t%q2, [%q0, %1]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_base_<supf>v2di"))
+ (set_attr "length" "4")])
;;
;; [vstrdq_scatter_offset_p_s vstrdq_scatter_offset_p_u]
@@ -4562,7 +4810,8 @@
VSTRDSOQ))]
"TARGET_HAVE_MVE"
"vpst\;vstrdt.64\t%q2, [%0, %q1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_offset_<supf>v2di_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrdq_scatter_offset_s vstrdq_scatter_offset_u]
@@ -4590,7 +4839,8 @@
VSTRDSOQ))]
"TARGET_HAVE_MVE"
"vstrd.64\t%q2, [%0, %q1]"
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_offset_<supf>v2di_insn"))
+ (set_attr "length" "4")])
;;
;; [vstrdq_scatter_shifted_offset_p_s vstrdq_scatter_shifted_offset_p_u]
@@ -4622,7 +4872,8 @@
VSTRDSSOQ))]
"TARGET_HAVE_MVE"
"vpst\;vstrdt.64\t%q2, [%0, %q1, uxtw #3]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_shifted_offset_<supf>v2di_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrdq_scatter_shifted_offset_s vstrdq_scatter_shifted_offset_u]
@@ -4651,7 +4902,8 @@
VSTRDSSOQ))]
"TARGET_HAVE_MVE"
"vstrd.64\t%q2, [%0, %q1, uxtw #3]"
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_shifted_offset_<supf>v2di_insn"))
+ (set_attr "length" "4")])
;;
;; [vstrhq_scatter_offset_f]
@@ -4679,7 +4931,8 @@
VSTRHQSO_F))]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vstrh.16\t%q2, [%0, %q1]"
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_offset_fv8hf_insn"))
+ (set_attr "length" "4")])
;;
;; [vstrhq_scatter_offset_p_f]
@@ -4710,7 +4963,8 @@
VSTRHQSO_F))]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vstrht.16\t%q2, [%0, %q1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_offset_fv8hf_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrhq_scatter_shifted_offset_f]
@@ -4738,7 +4992,8 @@
VSTRHQSSO_F))]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vstrh.16\t%q2, [%0, %q1, uxtw #1]"
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_shifted_offset_fv8hf_insn"))
+ (set_attr "length" "4")])
;;
;; [vstrhq_scatter_shifted_offset_p_f]
@@ -4770,7 +5025,8 @@
VSTRHQSSO_F))]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vstrht.16\t%q2, [%0, %q1, uxtw #1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_shifted_offset_fv8hf_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_base_f]
@@ -4792,7 +5048,8 @@
output_asm_insn ("vstrw.u32\t%q2, [%q0, %1]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_fv4sf"))
+ (set_attr "length" "4")])
;;
;; [vstrwq_scatter_base_p_f]
@@ -4815,7 +5072,8 @@
output_asm_insn ("vpst\n\tvstrwt.u32\t%q2, [%q0, %1]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_fv4sf"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_offset_f]
@@ -4843,7 +5101,8 @@
VSTRWQSO_F))]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vstrw.32\t%q2, [%0, %q1]"
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_offset_fv4sf_insn"))
+ (set_attr "length" "4")])
;;
;; [vstrwq_scatter_offset_p_f]
@@ -4874,7 +5133,8 @@
VSTRWQSO_F))]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vstrwt.32\t%q2, [%0, %q1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_offset_fv4sf_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_offset_s vstrwq_scatter_offset_u]
@@ -4905,7 +5165,8 @@
VSTRWSOQ))]
"TARGET_HAVE_MVE"
"vpst\;vstrwt.32\t%q2, [%0, %q1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_offset_<supf>v4si_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_offset_s vstrwq_scatter_offset_u]
@@ -4933,7 +5194,8 @@
VSTRWSOQ))]
"TARGET_HAVE_MVE"
"vstrw.32\t%q2, [%0, %q1]"
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_offset_<supf>v4si_insn"))
+ (set_attr "length" "4")])
;;
;; [vstrwq_scatter_shifted_offset_f]
@@ -4961,7 +5223,8 @@
VSTRWQSSO_F))]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vstrw.32\t%q2, [%0, %q1, uxtw #2]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_shifted_offset_fv4sf_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_shifted_offset_p_f]
@@ -4993,7 +5256,8 @@
VSTRWQSSO_F))]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vstrwt.32\t%q2, [%0, %q1, uxtw #2]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_shifted_offset_fv4sf_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_shifted_offset_p_s vstrwq_scatter_shifted_offset_p_u]
@@ -5025,7 +5289,8 @@
VSTRWSSOQ))]
"TARGET_HAVE_MVE"
"vpst\;vstrwt.32\t%q2, [%0, %q1, uxtw #2]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_shifted_offset_<supf>v4si_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_shifted_offset_s vstrwq_scatter_shifted_offset_u]
@@ -5054,7 +5319,8 @@
VSTRWSSOQ))]
"TARGET_HAVE_MVE"
"vstrw.32\t%q2, [%0, %q1, uxtw #2]"
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_shifted_offset_<supf>v4si_insn"))
+ (set_attr "length" "4")])
;;
;; [vidupq_n_u])
@@ -5122,7 +5388,8 @@
(match_operand:SI 6 "immediate_operand" "i")))]
"TARGET_HAVE_MVE"
"vpst\;\tvidupt.u%#<V_sz_elem>\t%q0, %2, %4"
- [(set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vidupq_u<mode>_insn"))
+ (set_attr "length""8")])
;;
;; [vddupq_n_u])
@@ -5190,7 +5457,8 @@
(match_operand:SI 6 "immediate_operand" "i")))]
"TARGET_HAVE_MVE"
"vpst\;vddupt.u%#<V_sz_elem>\t%q0, %2, %4"
- [(set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vddupq_u<mode>_insn"))
+ (set_attr "length""8")])
;;
;; [vdwdupq_n_u])
@@ -5306,8 +5574,9 @@
]
"TARGET_HAVE_MVE"
"vpst\;vdwdupt.u%#<V_sz_elem>\t%q2, %3, %R4, %5"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vdwdupq_wb_u<mode>_insn"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")])
;;
;; [viwdupq_n_u])
@@ -5423,7 +5692,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;\tviwdupt.u%#<V_sz_elem>\t%q2, %3, %R4, %5"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_viwdupq_wb_u<mode>_insn"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5449,7 +5719,8 @@
output_asm_insn ("vstrw.u32\t%q2, [%q0, %1]!",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_wb_<supf>v4si"))
+ (set_attr "length" "4")])
;;
;; [vstrwq_scatter_base_wb_p_s vstrwq_scatter_base_wb_p_u]
@@ -5475,7 +5746,8 @@
output_asm_insn ("vpst\;\tvstrwt.u32\t%q2, [%q0, %1]!",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_wb_<supf>v4si"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_base_wb_f]
@@ -5500,7 +5772,8 @@
output_asm_insn ("vstrw.u32\t%q2, [%q0, %1]!",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_wb_fv4sf"))
+ (set_attr "length" "4")])
;;
;; [vstrwq_scatter_base_wb_p_f]
@@ -5526,7 +5799,8 @@
output_asm_insn ("vpst\;vstrwt.u32\t%q2, [%q0, %1]!",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_wb_fv4sf"))
+ (set_attr "length" "8")])
;;
;; [vstrdq_scatter_base_wb_s vstrdq_scatter_base_wb_u]
@@ -5551,7 +5825,8 @@
output_asm_insn ("vstrd.u64\t%q2, [%q0, %1]!",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_base_wb_<supf>v2di"))
+ (set_attr "length" "4")])
;;
;; [vstrdq_scatter_base_wb_p_s vstrdq_scatter_base_wb_p_u]
@@ -5577,7 +5852,8 @@
output_asm_insn ("vpst\;vstrdt.u64\t%q2, [%q0, %1]!",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_base_wb_<supf>v2di"))
+ (set_attr "length" "8")])
(define_expand "mve_vldrwq_gather_base_wb_<supf>v4si"
[(match_operand:V4SI 0 "s_register_operand")
@@ -5629,7 +5905,8 @@
output_asm_insn ("vldrw.u32\t%q0, [%q1, %2]!",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_wb_<supf>v4si_insn"))
+ (set_attr "length" "4")])
(define_expand "mve_vldrwq_gather_base_wb_z_<supf>v4si"
[(match_operand:V4SI 0 "s_register_operand")
@@ -5685,7 +5962,8 @@
output_asm_insn ("vpst\;vldrwt.u32\t%q0, [%q1, %2]!",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_wb_<supf>v4si_insn"))
+ (set_attr "length" "8")])
(define_expand "mve_vldrwq_gather_base_wb_fv4sf"
[(match_operand:V4SI 0 "s_register_operand")
@@ -5737,7 +6015,8 @@
output_asm_insn ("vldrw.u32\t%q0, [%q1, %2]!",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_wb_fv4sf_insn"))
+ (set_attr "length" "4")])
(define_expand "mve_vldrwq_gather_base_wb_z_fv4sf"
[(match_operand:V4SI 0 "s_register_operand")
@@ -5794,7 +6073,8 @@
output_asm_insn ("vpst\;vldrwt.u32\t%q0, [%q1, %2]!",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_wb_fv4sf_insn"))
+ (set_attr "length" "8")])
(define_expand "mve_vldrdq_gather_base_wb_<supf>v2di"
[(match_operand:V2DI 0 "s_register_operand")
@@ -5847,7 +6127,8 @@
output_asm_insn ("vldrd.64\t%q0, [%q1, %2]!",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_base_wb_<supf>v2di_insn"))
+ (set_attr "length" "4")])
(define_expand "mve_vldrdq_gather_base_wb_z_<supf>v2di"
[(match_operand:V2DI 0 "s_register_operand")
@@ -5886,7 +6167,7 @@
(unspec_volatile:SI [(reg:SI VFPCC_REGNUM)] UNSPEC_GET_FPSCR_NZCVQC))]
"TARGET_HAVE_MVE"
"vmrs\\t%0, FPSCR_nzcvqc"
- [(set_attr "type" "mve_move")])
+ [(set_attr "type" "mve_move")])
(define_insn "set_fpscr_nzcvqc"
[(set (reg:SI VFPCC_REGNUM)
@@ -5894,7 +6175,7 @@
VUNSPEC_SET_FPSCR_NZCVQC))]
"TARGET_HAVE_MVE"
"vmsr\\tFPSCR_nzcvqc, %0"
- [(set_attr "type" "mve_move")])
+ [(set_attr "type" "mve_move")])
;;
;; [vldrdq_gather_base_wb_z_s vldrdq_gather_base_wb_z_u]
@@ -5919,7 +6200,8 @@
output_asm_insn ("vpst\;vldrdt.u64\t%q0, [%q1, %2]!",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_base_wb_<supf>v2di_insn"))
+ (set_attr "length" "8")])
;;
;; [vadciq_m_s, vadciq_m_u])
;;
@@ -5936,7 +6218,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vadcit.i32\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vadciq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length" "8")])
;;
@@ -5953,7 +6236,8 @@
]
"TARGET_HAVE_MVE"
"vadci.i32\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vadciq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length" "4")])
;;
@@ -5972,7 +6256,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vadct.i32\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vadcq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length" "8")])
;;
@@ -5989,7 +6274,8 @@
]
"TARGET_HAVE_MVE"
"vadc.i32\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vadcq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length" "4")
(set_attr "conds" "set")])
@@ -6009,7 +6295,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vsbcit.i32\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vsbciq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length" "8")])
;;
@@ -6026,7 +6313,8 @@
]
"TARGET_HAVE_MVE"
"vsbci.i32\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vsbciq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length" "4")])
;;
@@ -6045,7 +6333,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vsbct.i32\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vsbcq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length" "8")])
;;
@@ -6062,7 +6351,8 @@
]
"TARGET_HAVE_MVE"
"vsbc.i32\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vsbcq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length" "4")])
;;
@@ -6091,7 +6381,7 @@
"vst21.<V_sz_elem>\t{%q0, %q1}, %3", ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set_attr "length" "8")])
;;
;; [vld2q])
@@ -6119,7 +6409,7 @@
"vld21.<V_sz_elem>\t{%q0, %q1}, %3", ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set_attr "length" "8")])
;;
;; [vld4q])
@@ -6462,7 +6752,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vshlct\t%q0, %1, %4"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vshlcq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length" "8")])
;; CDE instructions on MVE registers.
@@ -6474,7 +6765,8 @@
UNSPEC_VCDE))]
"TARGET_CDE && TARGET_HAVE_MVE"
"vcx1\\tp%c1, %q0, #%c2"
- [(set_attr "type" "coproc")]
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx1qv16qi"))
+ (set_attr "type" "coproc")]
)
(define_insn "arm_vcx1qav16qi"
@@ -6485,7 +6777,8 @@
UNSPEC_VCDEA))]
"TARGET_CDE && TARGET_HAVE_MVE"
"vcx1a\\tp%c1, %q0, #%c3"
- [(set_attr "type" "coproc")]
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx1qav16qi"))
+ (set_attr "type" "coproc")]
)
(define_insn "arm_vcx2qv16qi"
@@ -6496,7 +6789,8 @@
UNSPEC_VCDE))]
"TARGET_CDE && TARGET_HAVE_MVE"
"vcx2\\tp%c1, %q0, %q2, #%c3"
- [(set_attr "type" "coproc")]
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx2qv16qi"))
+ (set_attr "type" "coproc")]
)
(define_insn "arm_vcx2qav16qi"
@@ -6508,7 +6802,8 @@
UNSPEC_VCDEA))]
"TARGET_CDE && TARGET_HAVE_MVE"
"vcx2a\\tp%c1, %q0, %q3, #%c4"
- [(set_attr "type" "coproc")]
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx2qav16qi"))
+ (set_attr "type" "coproc")]
)
(define_insn "arm_vcx3qv16qi"
@@ -6520,7 +6815,8 @@
UNSPEC_VCDE))]
"TARGET_CDE && TARGET_HAVE_MVE"
"vcx3\\tp%c1, %q0, %q2, %q3, #%c4"
- [(set_attr "type" "coproc")]
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx3qv16qi"))
+ (set_attr "type" "coproc")]
)
(define_insn "arm_vcx3qav16qi"
@@ -6533,7 +6829,8 @@
UNSPEC_VCDEA))]
"TARGET_CDE && TARGET_HAVE_MVE"
"vcx3a\\tp%c1, %q0, %q3, %q4, #%c5"
- [(set_attr "type" "coproc")]
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx3qav16qi"))
+ (set_attr "type" "coproc")]
)
(define_insn "arm_vcx1q<a>_p_v16qi"
@@ -6545,7 +6842,8 @@
CDE_VCX))]
"TARGET_CDE && TARGET_HAVE_MVE"
"vpst\;vcx1<a>t\\tp%c1, %q0, #%c3"
- [(set_attr "type" "coproc")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx1q<a>v16qi"))
+ (set_attr "type" "coproc")
(set_attr "length" "8")]
)
@@ -6559,7 +6857,8 @@
CDE_VCX))]
"TARGET_CDE && TARGET_HAVE_MVE"
"vpst\;vcx2<a>t\\tp%c1, %q0, %q3, #%c4"
- [(set_attr "type" "coproc")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx2q<a>v16qi"))
+ (set_attr "type" "coproc")
(set_attr "length" "8")]
)
@@ -6574,11 +6873,12 @@
CDE_VCX))]
"TARGET_CDE && TARGET_HAVE_MVE"
"vpst\;vcx3<a>t\\tp%c1, %q0, %q3, %q4, #%c5"
- [(set_attr "type" "coproc")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx3q<a>v16qi"))
+ (set_attr "type" "coproc")
(set_attr "length" "8")]
)
-(define_insn "*movmisalign<mode>_mve_store"
+(define_insn "movmisalign<mode>_mve_store"
[(set (match_operand:MVE_VLD_ST 0 "mve_memory_operand" "=Ux")
(unspec:MVE_VLD_ST [(match_operand:MVE_VLD_ST 1 "s_register_operand" " w")]
UNSPEC_MISALIGNED_ACCESS))]
@@ -6586,11 +6886,12 @@
|| (TARGET_HAVE_MVE_FLOAT && VALID_MVE_SF_MODE (<MODE>mode)))
&& !BYTES_BIG_ENDIAN && unaligned_access"
"vstr<V_sz_elem1>.<V_sz_elem>\t%q1, %E0"
- [(set_attr "type" "mve_store")]
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_movmisalign<mode>_mve_store"))
+ (set_attr "type" "mve_store")]
)
-(define_insn "*movmisalign<mode>_mve_load"
+(define_insn "movmisalign<mode>_mve_load"
[(set (match_operand:MVE_VLD_ST 0 "s_register_operand" "=w")
(unspec:MVE_VLD_ST [(match_operand:MVE_VLD_ST 1 "mve_memory_operand" " Ux")]
UNSPEC_MISALIGNED_ACCESS))]
@@ -6598,7 +6899,8 @@
|| (TARGET_HAVE_MVE_FLOAT && VALID_MVE_SF_MODE (<MODE>mode)))
&& !BYTES_BIG_ENDIAN && unaligned_access"
"vldr<V_sz_elem1>.<V_sz_elem>\t%q0, %E1"
- [(set_attr "type" "mve_load")]
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_movmisalign<mode>_mve_load"))
+ (set_attr "type" "mve_load")]
)
;; Expander for VxBI moves
@@ -6680,3 +6982,40 @@
}
}
)
+
+;; Originally expanded by 'predicated_doloop_end'.
+;; In the rare situation where the branch is too far, we do also need to
+;; revert FPSCR.LTPSIZE back to 0x100 after the last iteration.
+(define_insn "*predicated_doloop_end_internal"
+ [(set (pc)
+ (if_then_else
+ (ge (plus:SI (reg:SI LR_REGNUM)
+ (match_operand:SI 0 "const_int_operand" ""))
+ (const_int 0))
+ (label_ref (match_operand 1 "" ""))
+ (pc)))
+ (set (reg:SI LR_REGNUM)
+ (plus:SI (reg:SI LR_REGNUM) (match_dup 0)))
+ (clobber (reg:CC CC_REGNUM))]
+ "TARGET_32BIT && TARGET_HAVE_LOB && TARGET_HAVE_MVE && TARGET_THUMB2"
+ {
+ if (get_attr_length (insn) == 4)
+ return "letp\t%|lr, %l1";
+ else
+ return "subs\t%|lr, #%n0\n\tbgt\t%l1\n\tlctp";
+ }
+ [(set (attr "length")
+ (if_then_else
+ (ltu (minus (pc) (match_dup 1)) (const_int 1024))
+ (const_int 4)
+ (const_int 6)))
+ (set_attr "type" "branch")])
+
+(define_insn "dlstp<mode1>_insn"
+ [
+ (set (reg:SI LR_REGNUM)
+ (unspec:SI [(match_operand:SI 0 "s_register_operand" "r")]
+ DLSTP))
+ ]
+ "TARGET_32BIT && TARGET_HAVE_LOB && TARGET_HAVE_MVE && TARGET_THUMB2"
+ "dlstp.<mode1>\t%|lr, %0")
diff --git a/gcc/config/arm/vec-common.md b/gcc/config/arm/vec-common.md
index 9af8429968d..74871cb984b 100644
--- a/gcc/config/arm/vec-common.md
+++ b/gcc/config/arm/vec-common.md
@@ -366,7 +366,8 @@
"@
<mve_insn>.<supf>%#<V_sz_elem>\t%<V_reg>0, %<V_reg>1, %<V_reg>2
* return neon_output_shift_immediate (\"vshl\", 'i', &operands[2], <MODE>mode, VALID_NEON_QREG_MODE (<MODE>mode), true);"
- [(set_attr "type" "neon_shift_reg<q>, neon_shift_imm<q>")]
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "neon_shift_reg<q>, neon_shift_imm<q>")]
)
(define_expand "vashl<mode>3"
* [PATCH 1/2] arm: Add define_attr to to create a mapping between MVE predicated and unpredicated insns
@ 2023-06-15 11:47 Stamatis Markianos-Wright
0 siblings, 0 replies; 10+ messages in thread
From: Stamatis Markianos-Wright @ 2023-06-15 11:47 UTC (permalink / raw)
To: gcc-patches; +Cc: Kyrylo Tkachov, Richard Earnshaw, ramana.gcc, nickc
[-- Attachment #1: Type: text/plain, Size: 44151 bytes --]
Hi all,
I'd like to submit two patches that add support for Arm's MVE
Tail Predicated Low Overhead Loop feature.
--- Introduction ---
The M-class Arm-ARM:
https://developer.arm.com/documentation/ddi0553/bu/?lang=en
Section B5.5.1 "Loop tail predication" describes the feature
we are adding support for with this patch (although
we only add codegen for DLSTP/LETP instruction loops).
Previously with commit d2ed233cb94 we'd added support for
non-MVE DLS/LE loops through the loop-doloop pass, which, given
a standard MVE loop like:
```
void __attribute__ ((noinline)) test (int16_t *a, int16_t *b,
				      int16_t *c, int n)
{
  while (n > 0)
    {
      mve_pred16_t p = vctp16q (n);
      int16x8_t va = vldrhq_z_s16 (a, p);
      int16x8_t vb = vldrhq_z_s16 (b, p);
      int16x8_t vc = vaddq_x_s16 (va, vb, p);
      vstrhq_p_s16 (c, vc, p);
      c += 8;
      a += 8;
      b += 8;
      n -= 8;
    }
}
```
.. would output:
```
<pre-calculate the number of iterations and place it into lr>
	dls	lr, lr
.L3:
	vctp.16	r3
	vmrs	ip, P0	@ movhi
	sxth	ip, ip
	vmsr	P0, ip	@ movhi
	mov	r4, r0
	vpst
	vldrht.16	q2, [r4]
	mov	r4, r1
	vmov	q3, q0
	vpst
	vldrht.16	q1, [r4]
	mov	r4, r2
	vpst
	vaddt.i16	q3, q2, q1
	subs	r3, r3, #8
	vpst
	vstrht.16	q3, [r4]
	adds	r0, r0, #16
	adds	r1, r1, #16
	adds	r2, r2, #16
	le	lr, .L3
```
where the LE instruction will decrement LR by 1, compare and
branch if needed.
(there are also other inefficiencies in the above code, like the
pointless vmrs/sxth/vmsr sequence on the VPR, the adds not being
merged into the vldrht/vstrht as #16 offsets, and some redundant
movs! But those are separate problems...)
The MVE version is similar, except that:
* Instead of DLS/LE the instructions are DLSTP/LETP.
* Instead of pre-calculating the number of iterations of the
loop, we place the number of elements to be processed by the
loop into LR.
* Instead of decrementing the LR by one, LETP will decrement it
by FPSCR.LTPSIZE, which is the number of elements being
processed in each iteration: 16 for 8-bit elements, 8 for 16-bit
elements, etc.
* On the final iteration, automatic Loop Tail Predication is
performed, as if the instructions within the loop had been VPT
predicated with a VCTP generating the VPR predicate in every
loop iteration.
The dlstp/letp loop now looks like:
```
<place n into r3>
	dlstp.16	lr, r3
.L14:
	mov	r3, r0
	vldrh.16	q3, [r3]
	mov	r3, r1
	vldrh.16	q2, [r3]
	mov	r3, r2
	vadd.i16	q3, q3, q2
	adds	r0, r0, #16
	vstrh.16	q3, [r3]
	adds	r1, r1, #16
	adds	r2, r2, #16
	letp	lr, .L14
```
Since the loop tail predication is automatic, we have eliminated
the VCTP that had been specified by the user in the intrinsic
and converted the VPT-predicated instructions into their
unpredicated equivalents (which also saves us from VPST insns).
The LETP instruction here decrements LR by 8 in each iteration.
--- This 1/2 patch ---
This first patch lays some groundwork by adding an attribute to
md patterns, and then the second patch contains the functional
changes.
One major difficulty in implementing MVE Tail-Predicated Low
Overhead Loops was the need to transform VPT-predicated insns
in the insn chain into their unpredicated equivalents, like:
`mve_vldrbq_z_<supf><mode> -> mve_vldrbq_<supf><mode>`.
This requires us to have a deterministic link between two
different patterns in mve.md -- this _could_ be done by
re-ordering the entirety of mve.md such that the patterns are
at some constant icode proximity (e.g. having the _z immediately
after the unpredicated version would mean that to map from the
former to the latter you could use icode-1), but that is a very
messy solution that would lead to fragile, hidden dependencies
on the ordering of patterns.
This patch provides an alternative way of doing this: using an insn
attribute to encode the icode of the unpredicated instruction.
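To make the mechanism concrete, here is a condensed sketch of the
mapping (pattern and icode names taken from the diff above, with the
RTL bodies elided): the predicated pattern's "mve_unpredicated_insn"
attribute holds the icode of its unpredicated twin, so a later pass
can rewrite one into the other deterministically.

```
;; Unpredicated store:
(define_insn "mve_vstrwq_fv4sf"
  [...]
  [(set (attr "mve_unpredicated_insn")
	(symbol_ref "CODE_FOR_mve_vstrwq_fv4sf"))
   (set_attr "length" "4")])

;; Its VPT-predicated sibling points back at the same icode:
(define_insn "mve_vstrwq_p_fv4sf"
  [...]
  [(set (attr "mve_unpredicated_insn")
	(symbol_ref "CODE_FOR_mve_vstrwq_fv4sf"))
   (set_attr "length" "8")])
```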
No regressions on arm-none-eabi with an MVE target.
Thank you,
Stam Markianos-Wright
gcc/ChangeLog:
* config/arm/arm.md (mve_unpredicated_insn): New attribute.
* config/arm/arm.h (MVE_VPT_PREDICATED_INSN_P): New define.
(MVE_VPT_UNPREDICATED_INSN_P): Likewise.
(MVE_VPT_PREDICABLE_INSN_P): Likewise.
* config/arm/vec-common.md (mve_vshlq_<supf><mode>): Add
attribute.
* config/arm/mve.md (arm_vcx1q<a>_p_v16qi): Add attribute.
(arm_vcx1q<a>v16qi): Likewise.
(arm_vcx1qav16qi): Likewise.
(arm_vcx1qv16qi): Likewise.
(arm_vcx2q<a>_p_v16qi): Likewise.
(arm_vcx2q<a>v16qi): Likewise.
(arm_vcx2qav16qi): Likewise.
(arm_vcx2qv16qi): Likewise.
(arm_vcx3q<a>_p_v16qi): Likewise.
(arm_vcx3q<a>v16qi): Likewise.
(arm_vcx3qav16qi): Likewise.
(arm_vcx3qv16qi): Likewise.
(mve_vabavq_<supf><mode>): Likewise.
(mve_vabavq_p_<supf><mode>): Likewise.
(mve_vabdq_<supf><mode>): Likewise.
(mve_vabdq_f<mode>): Likewise.
(mve_vabdq_m_<supf><mode>): Likewise.
(mve_vabdq_m_f<mode>): Likewise.
(mve_vabsq_f<mode>): Likewise.
(mve_vabsq_m_f<mode>): Likewise.
(mve_vabsq_m_s<mode>): Likewise.
(mve_vabsq_s<mode>): Likewise.
(mve_vadciq_<supf>v4si): Likewise.
(mve_vadciq_m_<supf>v4si): Likewise.
(mve_vadcq_<supf>v4si): Likewise.
(mve_vadcq_m_<supf>v4si): Likewise.
(mve_vaddlvaq_<supf>v4si): Likewise.
(mve_vaddlvaq_p_<supf>v4si): Likewise.
(mve_vaddlvq_<supf>v4si): Likewise.
(mve_vaddlvq_p_<supf>v4si): Likewise.
(mve_vaddq_f<mode>): Likewise.
(mve_vaddq_m_<supf><mode>): Likewise.
(mve_vaddq_m_f<mode>): Likewise.
(mve_vaddq_m_n_<supf><mode>): Likewise.
(mve_vaddq_m_n_f<mode>): Likewise.
(mve_vaddq_n_<supf><mode>): Likewise.
(mve_vaddq_n_f<mode>): Likewise.
(mve_vaddq<mode>): Likewise.
(mve_vaddvaq_<supf><mode>): Likewise.
(mve_vaddvaq_p_<supf><mode>): Likewise.
(mve_vaddvq_<supf><mode>): Likewise.
(mve_vaddvq_p_<supf><mode>): Likewise.
(mve_vandq_<supf><mode>): Likewise.
(mve_vandq_f<mode>): Likewise.
(mve_vandq_m_<supf><mode>): Likewise.
(mve_vandq_m_f<mode>): Likewise.
(mve_vandq_s<mode>): Likewise.
(mve_vandq_u<mode>): Likewise.
(mve_vbicq_<supf><mode>): Likewise.
(mve_vbicq_f<mode>): Likewise.
(mve_vbicq_m_<supf><mode>): Likewise.
(mve_vbicq_m_f<mode>): Likewise.
(mve_vbicq_m_n_<supf><mode>): Likewise.
(mve_vbicq_n_<supf><mode>): Likewise.
(mve_vbicq_s<mode>): Likewise.
(mve_vbicq_u<mode>): Likewise.
(mve_vbrsrq_m_n_<supf><mode>): Likewise.
(mve_vbrsrq_m_n_f<mode>): Likewise.
(mve_vbrsrq_n_<supf><mode>): Likewise.
(mve_vbrsrq_n_f<mode>): Likewise.
(mve_vcaddq_rot270_m_<supf><mode>): Likewise.
(mve_vcaddq_rot270_m_f<mode>): Likewise.
(mve_vcaddq_rot270<mode>): Likewise.
(mve_vcaddq_rot270<mode>): Likewise.
(mve_vcaddq_rot90_m_<supf><mode>): Likewise.
(mve_vcaddq_rot90_m_f<mode>): Likewise.
(mve_vcaddq_rot90<mode>): Likewise.
(mve_vcaddq_rot90<mode>): Likewise.
(mve_vcaddq<mve_rot><mode>): Likewise.
(mve_vcaddq<mve_rot><mode>): Likewise.
(mve_vclsq_m_s<mode>): Likewise.
(mve_vclsq_s<mode>): Likewise.
(mve_vclzq_<supf><mode>): Likewise.
(mve_vclzq_m_<supf><mode>): Likewise.
(mve_vclzq_s<mode>): Likewise.
(mve_vclzq_u<mode>): Likewise.
(mve_vcmlaq_m_f<mode>): Likewise.
(mve_vcmlaq_rot180_m_f<mode>): Likewise.
(mve_vcmlaq_rot180<mode>): Likewise.
(mve_vcmlaq_rot270_m_f<mode>): Likewise.
(mve_vcmlaq_rot270<mode>): Likewise.
(mve_vcmlaq_rot90_m_f<mode>): Likewise.
(mve_vcmlaq_rot90<mode>): Likewise.
(mve_vcmlaq<mode>): Likewise.
(mve_vcmlaq<mve_rot><mode>): Likewise.
(mve_vcmp<mve_cmp_op>q_<mode>): Likewise.
(mve_vcmp<mve_cmp_op>q_f<mode>): Likewise.
(mve_vcmp<mve_cmp_op>q_n_<mode>): Likewise.
(mve_vcmp<mve_cmp_op>q_n_f<mode>): Likewise.
(mve_vcmpcsq_<mode>): Likewise.
(mve_vcmpcsq_m_n_u<mode>): Likewise.
(mve_vcmpcsq_m_u<mode>): Likewise.
(mve_vcmpcsq_n_<mode>): Likewise.
(mve_vcmpeqq_<mode>): Likewise.
(mve_vcmpeqq_f<mode>): Likewise.
(mve_vcmpeqq_m_<supf><mode>): Likewise.
(mve_vcmpeqq_m_f<mode>): Likewise.
(mve_vcmpeqq_m_n_<supf><mode>): Likewise.
(mve_vcmpeqq_m_n_f<mode>): Likewise.
(mve_vcmpeqq_n_<mode>): Likewise.
(mve_vcmpeqq_n_f<mode>): Likewise.
(mve_vcmpgeq_<mode>): Likewise.
(mve_vcmpgeq_f<mode>): Likewise.
(mve_vcmpgeq_m_f<mode>): Likewise.
(mve_vcmpgeq_m_n_f<mode>): Likewise.
(mve_vcmpgeq_m_n_s<mode>): Likewise.
(mve_vcmpgeq_m_s<mode>): Likewise.
(mve_vcmpgeq_n_<mode>): Likewise.
(mve_vcmpgeq_n_f<mode>): Likewise.
(mve_vcmpgtq_<mode>): Likewise.
(mve_vcmpgtq_f<mode>): Likewise.
(mve_vcmpgtq_m_f<mode>): Likewise.
(mve_vcmpgtq_m_n_f<mode>): Likewise.
(mve_vcmpgtq_m_n_s<mode>): Likewise.
(mve_vcmpgtq_m_s<mode>): Likewise.
(mve_vcmpgtq_n_<mode>): Likewise.
(mve_vcmpgtq_n_f<mode>): Likewise.
(mve_vcmphiq_<mode>): Likewise.
(mve_vcmphiq_m_n_u<mode>): Likewise.
(mve_vcmphiq_m_u<mode>): Likewise.
(mve_vcmphiq_n_<mode>): Likewise.
(mve_vcmpleq_<mode>): Likewise.
(mve_vcmpleq_f<mode>): Likewise.
(mve_vcmpleq_m_f<mode>): Likewise.
(mve_vcmpleq_m_n_f<mode>): Likewise.
(mve_vcmpleq_m_n_s<mode>): Likewise.
(mve_vcmpleq_m_s<mode>): Likewise.
(mve_vcmpleq_n_<mode>): Likewise.
(mve_vcmpleq_n_f<mode>): Likewise.
(mve_vcmpltq_<mode>): Likewise.
(mve_vcmpltq_f<mode>): Likewise.
(mve_vcmpltq_m_f<mode>): Likewise.
(mve_vcmpltq_m_n_f<mode>): Likewise.
(mve_vcmpltq_m_n_s<mode>): Likewise.
(mve_vcmpltq_m_s<mode>): Likewise.
(mve_vcmpltq_n_<mode>): Likewise.
(mve_vcmpltq_n_f<mode>): Likewise.
(mve_vcmpneq_<mode>): Likewise.
(mve_vcmpneq_f<mode>): Likewise.
(mve_vcmpneq_m_<supf><mode>): Likewise.
(mve_vcmpneq_m_f<mode>): Likewise.
(mve_vcmpneq_m_n_<supf><mode>): Likewise.
(mve_vcmpneq_m_n_f<mode>): Likewise.
(mve_vcmpneq_n_<mode>): Likewise.
(mve_vcmpneq_n_f<mode>): Likewise.
(mve_vcmulq_m_f<mode>): Likewise.
(mve_vcmulq_rot180_m_f<mode>): Likewise.
(mve_vcmulq_rot180<mode>): Likewise.
(mve_vcmulq_rot270_m_f<mode>): Likewise.
(mve_vcmulq_rot270<mode>): Likewise.
(mve_vcmulq_rot90_m_f<mode>): Likewise.
(mve_vcmulq_rot90<mode>): Likewise.
(mve_vcmulq<mode>): Likewise.
(mve_vcmulq<mve_rot><mode>): Likewise.
(mve_vctp<mode1>q_mhi): Likewise.
(mve_vctp<mode1>qhi): Likewise.
(mve_vcvtaq_<supf><mode>): Likewise.
(mve_vcvtaq_m_<supf><mode>): Likewise.
(mve_vcvtbq_f16_f32v8hf): Likewise.
(mve_vcvtbq_f32_f16v4sf): Likewise.
(mve_vcvtbq_m_f16_f32v8hf): Likewise.
(mve_vcvtbq_m_f32_f16v4sf): Likewise.
(mve_vcvtmq_<supf><mode>): Likewise.
(mve_vcvtmq_m_<supf><mode>): Likewise.
(mve_vcvtnq_<supf><mode>): Likewise.
(mve_vcvtnq_m_<supf><mode>): Likewise.
(mve_vcvtpq_<supf><mode>): Likewise.
(mve_vcvtpq_m_<supf><mode>): Likewise.
(mve_vcvtq_from_f_<supf><mode>): Likewise.
(mve_vcvtq_m_from_f_<supf><mode>): Likewise.
(mve_vcvtq_m_n_from_f_<supf><mode>): Likewise.
(mve_vcvtq_m_n_to_f_<supf><mode>): Likewise.
(mve_vcvtq_m_to_f_<supf><mode>): Likewise.
(mve_vcvtq_n_from_f_<supf><mode>): Likewise.
(mve_vcvtq_n_to_f_<supf><mode>): Likewise.
(mve_vcvtq_to_f_<supf><mode>): Likewise.
(mve_vcvttq_f16_f32v8hf): Likewise.
(mve_vcvttq_f32_f16v4sf): Likewise.
(mve_vcvttq_m_f16_f32v8hf): Likewise.
(mve_vcvttq_m_f32_f16v4sf): Likewise.
(mve_vddupq_m_wb_u<mode>_insn): Likewise.
(mve_vddupq_u<mode>_insn): Likewise.
(mve_vdupq_m_n_<supf><mode>): Likewise.
(mve_vdupq_m_n_f<mode>): Likewise.
(mve_vdupq_n_<supf><mode>): Likewise.
(mve_vdupq_n_f<mode>): Likewise.
(mve_vdwdupq_m_wb_u<mode>_insn): Likewise.
(mve_vdwdupq_wb_u<mode>_insn): Likewise.
(mve_veorq_<supf><mode>): Likewise.
(mve_veorq_f<mode>): Likewise.
(mve_veorq_m_<supf><mode>): Likewise.
(mve_veorq_m_f<mode>): Likewise.
(mve_veorq_s<mode>): Likewise.
(mve_veorq_u<mode>): Likewise.
(mve_vfmaq_f<mode>): Likewise.
(mve_vfmaq_m_f<mode>): Likewise.
(mve_vfmaq_m_n_f<mode>): Likewise.
(mve_vfmaq_n_f<mode>): Likewise.
(mve_vfmasq_m_n_f<mode>): Likewise.
(mve_vfmasq_n_f<mode>): Likewise.
(mve_vfmsq_f<mode>): Likewise.
(mve_vfmsq_m_f<mode>): Likewise.
(mve_vhaddq_<supf><mode>): Likewise.
(mve_vhaddq_m_<supf><mode>): Likewise.
(mve_vhaddq_m_n_<supf><mode>): Likewise.
(mve_vhaddq_n_<supf><mode>): Likewise.
(mve_vhcaddq_rot270_m_s<mode>): Likewise.
(mve_vhcaddq_rot270_s<mode>): Likewise.
(mve_vhcaddq_rot90_m_s<mode>): Likewise.
(mve_vhcaddq_rot90_s<mode>): Likewise.
(mve_vhsubq_<supf><mode>): Likewise.
(mve_vhsubq_m_<supf><mode>): Likewise.
(mve_vhsubq_m_n_<supf><mode>): Likewise.
(mve_vhsubq_n_<supf><mode>): Likewise.
(mve_vidupq_m_wb_u<mode>_insn): Likewise.
(mve_vidupq_u<mode>_insn): Likewise.
(mve_viwdupq_m_wb_u<mode>_insn): Likewise.
(mve_viwdupq_wb_u<mode>_insn): Likewise.
(mve_vldrbq_<supf><mode>): Likewise.
(mve_vldrbq_gather_offset_<supf><mode>): Likewise.
(mve_vldrbq_gather_offset_z_<supf><mode>): Likewise.
(mve_vldrbq_z_<supf><mode>): Likewise.
(mve_vldrdq_gather_base_<supf>v2di): Likewise.
(mve_vldrdq_gather_base_wb_<supf>v2di_insn): Likewise.
(mve_vldrdq_gather_base_wb_z_<supf>v2di_insn): Likewise.
(mve_vldrdq_gather_base_z_<supf>v2di): Likewise.
(mve_vldrdq_gather_offset_<supf>v2di): Likewise.
(mve_vldrdq_gather_offset_z_<supf>v2di): Likewise.
(mve_vldrdq_gather_shifted_offset_<supf>v2di): Likewise.
(mve_vldrdq_gather_shifted_offset_z_<supf>v2di): Likewise.
(mve_vldrhq_<supf><mode>): Likewise.
(mve_vldrhq_fv8hf): Likewise.
(mve_vldrhq_gather_offset_<supf><mode>): Likewise.
(mve_vldrhq_gather_offset_fv8hf): Likewise.
(mve_vldrhq_gather_offset_z_<supf><mode>): Likewise.
(mve_vldrhq_gather_offset_z_fv8hf): Likewise.
(mve_vldrhq_gather_shifted_offset_<supf><mode>): Likewise.
(mve_vldrhq_gather_shifted_offset_fv8hf): Likewise.
(mve_vldrhq_gather_shifted_offset_z_<supf><mode>): Likewise.
(mve_vldrhq_gather_shifted_offset_z_fv8hf): Likewise.
(mve_vldrhq_z_<supf><mode>): Likewise.
(mve_vldrhq_z_fv8hf): Likewise.
(mve_vldrwq_<supf>v4si): Likewise.
(mve_vldrwq_fv4sf): Likewise.
(mve_vldrwq_gather_base_<supf>v4si): Likewise.
(mve_vldrwq_gather_base_fv4sf): Likewise.
(mve_vldrwq_gather_base_wb_<supf>v4si_insn): Likewise.
(mve_vldrwq_gather_base_wb_fv4sf_insn): Likewise.
(mve_vldrwq_gather_base_wb_z_<supf>v4si_insn): Likewise.
(mve_vldrwq_gather_base_wb_z_fv4sf_insn): Likewise.
(mve_vldrwq_gather_base_z_<supf>v4si): Likewise.
(mve_vldrwq_gather_base_z_fv4sf): Likewise.
(mve_vldrwq_gather_offset_<supf>v4si): Likewise.
(mve_vldrwq_gather_offset_fv4sf): Likewise.
(mve_vldrwq_gather_offset_z_<supf>v4si): Likewise.
(mve_vldrwq_gather_offset_z_fv4sf): Likewise.
(mve_vldrwq_gather_shifted_offset_<supf>v4si): Likewise.
(mve_vldrwq_gather_shifted_offset_fv4sf): Likewise.
(mve_vldrwq_gather_shifted_offset_z_<supf>v4si): Likewise.
(mve_vldrwq_gather_shifted_offset_z_fv4sf): Likewise.
(mve_vldrwq_z_<supf>v4si): Likewise.
(mve_vldrwq_z_fv4sf): Likewise.
(mve_vmaxaq_m_s<mode>): Likewise.
(mve_vmaxaq_s<mode>): Likewise.
(mve_vmaxavq_p_s<mode>): Likewise.
(mve_vmaxavq_s<mode>): Likewise.
(mve_vmaxnmaq_f<mode>): Likewise.
(mve_vmaxnmaq_m_f<mode>): Likewise.
(mve_vmaxnmavq_f<mode>): Likewise.
(mve_vmaxnmavq_p_f<mode>): Likewise.
(mve_vmaxnmq_f<mode>): Likewise.
(mve_vmaxnmq_m_f<mode>): Likewise.
(mve_vmaxnmvq_f<mode>): Likewise.
(mve_vmaxnmvq_p_f<mode>): Likewise.
(mve_vmaxq_<supf><mode>): Likewise.
(mve_vmaxq_m_<supf><mode>): Likewise.
(mve_vmaxq_s<mode>): Likewise.
(mve_vmaxq_u<mode>): Likewise.
(mve_vmaxvq_<supf><mode>): Likewise.
(mve_vmaxvq_p_<supf><mode>): Likewise.
(mve_vminaq_m_s<mode>): Likewise.
(mve_vminaq_s<mode>): Likewise.
(mve_vminavq_p_s<mode>): Likewise.
(mve_vminavq_s<mode>): Likewise.
(mve_vminnmaq_f<mode>): Likewise.
(mve_vminnmaq_m_f<mode>): Likewise.
(mve_vminnmavq_f<mode>): Likewise.
(mve_vminnmavq_p_f<mode>): Likewise.
(mve_vminnmq_f<mode>): Likewise.
(mve_vminnmq_m_f<mode>): Likewise.
(mve_vminnmvq_f<mode>): Likewise.
(mve_vminnmvq_p_f<mode>): Likewise.
(mve_vminq_<supf><mode>): Likewise.
(mve_vminq_m_<supf><mode>): Likewise.
(mve_vminq_s<mode>): Likewise.
(mve_vminq_u<mode>): Likewise.
(mve_vminvq_<supf><mode>): Likewise.
(mve_vminvq_p_<supf><mode>): Likewise.
(mve_vmladavaq_<supf><mode>): Likewise.
(mve_vmladavaq_p_<supf><mode>): Likewise.
(mve_vmladavaxq_p_s<mode>): Likewise.
(mve_vmladavaxq_s<mode>): Likewise.
(mve_vmladavq_<supf><mode>): Likewise.
(mve_vmladavq_p_<supf><mode>): Likewise.
(mve_vmladavxq_p_s<mode>): Likewise.
(mve_vmladavxq_s<mode>): Likewise.
(mve_vmlaldavaq_<supf><mode>): Likewise.
(mve_vmlaldavaq_p_<supf><mode>): Likewise.
(mve_vmlaldavaxq_<supf><mode>): Likewise.
(mve_vmlaldavaxq_p_<supf><mode>): Likewise.
(mve_vmlaldavaxq_s<mode>): Likewise.
(mve_vmlaldavq_<supf><mode>): Likewise.
(mve_vmlaldavq_p_<supf><mode>): Likewise.
(mve_vmlaldavxq_p_s<mode>): Likewise.
(mve_vmlaldavxq_s<mode>): Likewise.
(mve_vmlaq_m_n_<supf><mode>): Likewise.
(mve_vmlaq_n_<supf><mode>): Likewise.
(mve_vmlasq_m_n_<supf><mode>): Likewise.
(mve_vmlasq_n_<supf><mode>): Likewise.
(mve_vmlsdavaq_p_s<mode>): Likewise.
(mve_vmlsdavaq_s<mode>): Likewise.
(mve_vmlsdavaxq_p_s<mode>): Likewise.
(mve_vmlsdavaxq_s<mode>): Likewise.
(mve_vmlsdavq_p_s<mode>): Likewise.
(mve_vmlsdavq_s<mode>): Likewise.
(mve_vmlsdavxq_p_s<mode>): Likewise.
(mve_vmlsdavxq_s<mode>): Likewise.
(mve_vmlsldavaq_p_s<mode>): Likewise.
(mve_vmlsldavaq_s<mode>): Likewise.
(mve_vmlsldavaxq_p_s<mode>): Likewise.
(mve_vmlsldavaxq_s<mode>): Likewise.
(mve_vmlsldavq_p_s<mode>): Likewise.
(mve_vmlsldavq_s<mode>): Likewise.
(mve_vmlsldavxq_p_s<mode>): Likewise.
(mve_vmlsldavxq_s<mode>): Likewise.
(mve_vmovlbq_<supf><mode>): Likewise.
(mve_vmovlbq_m_<supf><mode>): Likewise.
(mve_vmovltq_<supf><mode>): Likewise.
(mve_vmovltq_m_<supf><mode>): Likewise.
(mve_vmovnbq_<supf><mode>): Likewise.
(mve_vmovnbq_m_<supf><mode>): Likewise.
(mve_vmovntq_<supf><mode>): Likewise.
(mve_vmovntq_m_<supf><mode>): Likewise.
(mve_vmulhq_<supf><mode>): Likewise.
(mve_vmulhq_m_<supf><mode>): Likewise.
(mve_vmullbq_int_<supf><mode>): Likewise.
(mve_vmullbq_int_m_<supf><mode>): Likewise.
(mve_vmullbq_poly_m_p<mode>): Likewise.
(mve_vmullbq_poly_p<mode>): Likewise.
(mve_vmulltq_int_<supf><mode>): Likewise.
(mve_vmulltq_int_m_<supf><mode>): Likewise.
(mve_vmulltq_poly_m_p<mode>): Likewise.
(mve_vmulltq_poly_p<mode>): Likewise.
(mve_vmulq_<supf><mode>): Likewise.
(mve_vmulq_f<mode>): Likewise.
(mve_vmulq_m_<supf><mode>): Likewise.
(mve_vmulq_m_f<mode>): Likewise.
(mve_vmulq_m_n_<supf><mode>): Likewise.
(mve_vmulq_m_n_f<mode>): Likewise.
(mve_vmulq_n_<supf><mode>): Likewise.
(mve_vmulq_n_f<mode>): Likewise.
(mve_vmvnq_<supf><mode>): Likewise.
(mve_vmvnq_m_<supf><mode>): Likewise.
(mve_vmvnq_m_n_<supf><mode>): Likewise.
(mve_vmvnq_n_<supf><mode>): Likewise.
(mve_vmvnq_s<mode>): Likewise.
(mve_vmvnq_u<mode>): Likewise.
(mve_vnegq_f<mode>): Likewise.
(mve_vnegq_m_f<mode>): Likewise.
(mve_vnegq_m_s<mode>): Likewise.
(mve_vnegq_s<mode>): Likewise.
(mve_vornq_<supf><mode>): Likewise.
(mve_vornq_f<mode>): Likewise.
(mve_vornq_m_<supf><mode>): Likewise.
(mve_vornq_m_f<mode>): Likewise.
(mve_vornq_s<mode>): Likewise.
(mve_vornq_u<mode>): Likewise.
(mve_vorrq_<supf><mode>): Likewise.
(mve_vorrq_f<mode>): Likewise.
(mve_vorrq_m_<supf><mode>): Likewise.
(mve_vorrq_m_f<mode>): Likewise.
(mve_vorrq_m_n_<supf><mode>): Likewise.
(mve_vorrq_n_<supf><mode>): Likewise.
(mve_vorrq_s<mode>): Likewise.
(mve_vorrq_s<mode>): Likewise.
(mve_vqabsq_m_s<mode>): Likewise.
(mve_vqabsq_s<mode>): Likewise.
(mve_vqaddq_<supf><mode>): Likewise.
(mve_vqaddq_m_<supf><mode>): Likewise.
(mve_vqaddq_m_n_<supf><mode>): Likewise.
(mve_vqaddq_n_<supf><mode>): Likewise.
(mve_vqdmladhq_m_s<mode>): Likewise.
(mve_vqdmladhq_s<mode>): Likewise.
(mve_vqdmladhxq_m_s<mode>): Likewise.
(mve_vqdmladhxq_s<mode>): Likewise.
(mve_vqdmlahq_m_n_s<mode>): Likewise.
(mve_vqdmlahq_n_<supf><mode>): Likewise.
(mve_vqdmlahq_n_s<mode>): Likewise.
(mve_vqdmlashq_m_n_s<mode>): Likewise.
(mve_vqdmlashq_n_<supf><mode>): Likewise.
(mve_vqdmlashq_n_s<mode>): Likewise.
(mve_vqdmlsdhq_m_s<mode>): Likewise.
(mve_vqdmlsdhq_s<mode>): Likewise.
(mve_vqdmlsdhxq_m_s<mode>): Likewise.
(mve_vqdmlsdhxq_s<mode>): Likewise.
(mve_vqdmulhq_m_n_s<mode>): Likewise.
(mve_vqdmulhq_m_s<mode>): Likewise.
(mve_vqdmulhq_n_s<mode>): Likewise.
(mve_vqdmulhq_s<mode>): Likewise.
(mve_vqdmullbq_m_n_s<mode>): Likewise.
(mve_vqdmullbq_m_s<mode>): Likewise.
(mve_vqdmullbq_n_s<mode>): Likewise.
(mve_vqdmullbq_s<mode>): Likewise.
(mve_vqdmulltq_m_n_s<mode>): Likewise.
(mve_vqdmulltq_m_s<mode>): Likewise.
(mve_vqdmulltq_n_s<mode>): Likewise.
(mve_vqdmulltq_s<mode>): Likewise.
(mve_vqmovnbq_<supf><mode>): Likewise.
(mve_vqmovnbq_m_<supf><mode>): Likewise.
(mve_vqmovntq_<supf><mode>): Likewise.
(mve_vqmovntq_m_<supf><mode>): Likewise.
(mve_vqmovunbq_m_s<mode>): Likewise.
(mve_vqmovunbq_s<mode>): Likewise.
(mve_vqmovuntq_m_s<mode>): Likewise.
(mve_vqmovuntq_s<mode>): Likewise.
(mve_vqnegq_m_s<mode>): Likewise.
(mve_vqnegq_s<mode>): Likewise.
(mve_vqrdmladhq_m_s<mode>): Likewise.
(mve_vqrdmladhq_s<mode>): Likewise.
(mve_vqrdmladhxq_m_s<mode>): Likewise.
(mve_vqrdmladhxq_s<mode>): Likewise.
(mve_vqrdmlahq_m_n_s<mode>): Likewise.
(mve_vqrdmlahq_n_<supf><mode>): Likewise.
(mve_vqrdmlahq_n_s<mode>): Likewise.
(mve_vqrdmlashq_m_n_s<mode>): Likewise.
(mve_vqrdmlashq_n_<supf><mode>): Likewise.
(mve_vqrdmlashq_n_s<mode>): Likewise.
(mve_vqrdmlsdhq_m_s<mode>): Likewise.
(mve_vqrdmlsdhq_s<mode>): Likewise.
(mve_vqrdmlsdhxq_m_s<mode>): Likewise.
(mve_vqrdmlsdhxq_s<mode>): Likewise.
(mve_vqrdmulhq_m_n_s<mode>): Likewise.
(mve_vqrdmulhq_m_s<mode>): Likewise.
(mve_vqrdmulhq_n_s<mode>): Likewise.
(mve_vqrdmulhq_s<mode>): Likewise.
(mve_vqrshlq_<supf><mode>): Likewise.
(mve_vqrshlq_m_<supf><mode>): Likewise.
(mve_vqrshlq_m_n_<supf><mode>): Likewise.
(mve_vqrshlq_n_<supf><mode>): Likewise.
(mve_vqrshrnbq_m_n_<supf><mode>): Likewise.
(mve_vqrshrnbq_n_<supf><mode>): Likewise.
(mve_vqrshrntq_m_n_<supf><mode>): Likewise.
(mve_vqrshrntq_n_<supf><mode>): Likewise.
(mve_vqrshrunbq_m_n_s<mode>): Likewise.
(mve_vqrshrunbq_n_s<mode>): Likewise.
(mve_vqrshruntq_m_n_s<mode>): Likewise.
(mve_vqrshruntq_n_s<mode>): Likewise.
(mve_vqshlq_<supf><mode>): Likewise.
(mve_vqshlq_m_<supf><mode>): Likewise.
(mve_vqshlq_m_n_<supf><mode>): Likewise.
(mve_vqshlq_m_r_<supf><mode>): Likewise.
(mve_vqshlq_n_<supf><mode>): Likewise.
(mve_vqshlq_r_<supf><mode>): Likewise.
(mve_vqshluq_m_n_s<mode>): Likewise.
(mve_vqshluq_n_s<mode>): Likewise.
(mve_vqshrnbq_m_n_<supf><mode>): Likewise.
(mve_vqshrnbq_n_<supf><mode>): Likewise.
(mve_vqshrntq_m_n_<supf><mode>): Likewise.
(mve_vqshrntq_n_<supf><mode>): Likewise.
(mve_vqshrunbq_m_n_s<mode>): Likewise.
(mve_vqshrunbq_n_s<mode>): Likewise.
(mve_vqshruntq_m_n_s<mode>): Likewise.
(mve_vqshruntq_n_s<mode>): Likewise.
(mve_vqsubq_<supf><mode>): Likewise.
(mve_vqsubq_m_<supf><mode>): Likewise.
(mve_vqsubq_m_n_<supf><mode>): Likewise.
(mve_vqsubq_n_<supf><mode>): Likewise.
(mve_vrev16q_<supf>v16qi): Likewise.
(mve_vrev16q_m_<supf>v16qi): Likewise.
(mve_vrev32q_<supf><mode>): Likewise.
(mve_vrev32q_fv8hf): Likewise.
(mve_vrev32q_m_<supf><mode>): Likewise.
(mve_vrev32q_m_fv8hf): Likewise.
(mve_vrev64q_<supf><mode>): Likewise.
(mve_vrev64q_f<mode>): Likewise.
(mve_vrev64q_m_<supf><mode>): Likewise.
(mve_vrev64q_m_f<mode>): Likewise.
(mve_vrhaddq_<supf><mode>): Likewise.
(mve_vrhaddq_m_<supf><mode>): Likewise.
(mve_vrmlaldavhaq_<supf>v4si): Likewise.
(mve_vrmlaldavhaq_p_sv4si): Likewise.
(mve_vrmlaldavhaq_p_uv4si): Likewise.
(mve_vrmlaldavhaq_sv4si): Likewise.
(mve_vrmlaldavhaq_uv4si): Likewise.
(mve_vrmlaldavhaxq_p_sv4si): Likewise.
(mve_vrmlaldavhaxq_sv4si): Likewise.
(mve_vrmlaldavhq_<supf>v4si): Likewise.
(mve_vrmlaldavhq_p_<supf>v4si): Likewise.
(mve_vrmlaldavhxq_p_sv4si): Likewise.
(mve_vrmlaldavhxq_sv4si): Likewise.
(mve_vrmlsldavhaq_p_sv4si): Likewise.
(mve_vrmlsldavhaq_sv4si): Likewise.
(mve_vrmlsldavhaxq_p_sv4si): Likewise.
(mve_vrmlsldavhaxq_sv4si): Likewise.
(mve_vrmlsldavhq_p_sv4si): Likewise.
(mve_vrmlsldavhq_sv4si): Likewise.
(mve_vrmlsldavhxq_p_sv4si): Likewise.
(mve_vrmlsldavhxq_sv4si): Likewise.
(mve_vrmulhq_<supf><mode>): Likewise.
(mve_vrmulhq_m_<supf><mode>): Likewise.
(mve_vrndaq_f<mode>): Likewise.
(mve_vrndaq_m_f<mode>): Likewise.
(mve_vrndmq_f<mode>): Likewise.
(mve_vrndmq_m_f<mode>): Likewise.
(mve_vrndnq_f<mode>): Likewise.
(mve_vrndnq_m_f<mode>): Likewise.
(mve_vrndpq_f<mode>): Likewise.
(mve_vrndpq_m_f<mode>): Likewise.
(mve_vrndq_f<mode>): Likewise.
(mve_vrndq_m_f<mode>): Likewise.
(mve_vrndxq_f<mode>): Likewise.
(mve_vrndxq_m_f<mode>): Likewise.
(mve_vrshlq_<supf><mode>): Likewise.
(mve_vrshlq_m_<supf><mode>): Likewise.
(mve_vrshlq_m_n_<supf><mode>): Likewise.
(mve_vrshlq_n_<supf><mode>): Likewise.
(mve_vrshrnbq_m_n_<supf><mode>): Likewise.
(mve_vrshrnbq_n_<supf><mode>): Likewise.
(mve_vrshrntq_m_n_<supf><mode>): Likewise.
(mve_vrshrntq_n_<supf><mode>): Likewise.
(mve_vrshrq_m_n_<supf><mode>): Likewise.
(mve_vrshrq_n_<supf><mode>): Likewise.
(mve_vsbciq_<supf>v4si): Likewise.
(mve_vsbciq_m_<supf>v4si): Likewise.
(mve_vsbcq_<supf>v4si): Likewise.
(mve_vsbcq_m_<supf>v4si): Likewise.
(mve_vshlcq_<supf><mode>): Likewise.
(mve_vshlcq_m_<supf><mode>): Likewise.
(mve_vshllbq_m_n_<supf><mode>): Likewise.
(mve_vshllbq_n_<supf><mode>): Likewise.
(mve_vshlltq_m_n_<supf><mode>): Likewise.
(mve_vshlltq_n_<supf><mode>): Likewise.
(mve_vshlq_<supf><mode>): Likewise.
(mve_vshlq_<supf><mode>): Likewise.
(mve_vshlq_m_<supf><mode>): Likewise.
(mve_vshlq_m_n_<supf><mode>): Likewise.
(mve_vshlq_m_r_<supf><mode>): Likewise.
(mve_vshlq_n_<supf><mode>): Likewise.
(mve_vshlq_r_<supf><mode>): Likewise.
(mve_vshrnbq_m_n_<supf><mode>): Likewise.
(mve_vshrnbq_n_<supf><mode>): Likewise.
(mve_vshrntq_m_n_<supf><mode>): Likewise.
(mve_vshrntq_n_<supf><mode>): Likewise.
(mve_vshrq_m_n_<supf><mode>): Likewise.
(mve_vshrq_n_<supf><mode>): Likewise.
(mve_vsliq_m_n_<supf><mode>): Likewise.
(mve_vsliq_n_<supf><mode>): Likewise.
(mve_vsriq_m_n_<supf><mode>): Likewise.
(mve_vsriq_n_<supf><mode>): Likewise.
(mve_vstrbq_<supf><mode>): Likewise.
(mve_vstrbq_p_<supf><mode>): Likewise.
(mve_vstrbq_scatter_offset_<supf><mode>_insn): Likewise.
(mve_vstrbq_scatter_offset_p_<supf><mode>_insn): Likewise.
(mve_vstrdq_scatter_base_<supf>v2di): Likewise.
(mve_vstrdq_scatter_base_p_<supf>v2di): Likewise.
(mve_vstrdq_scatter_base_wb_<supf>v2di): Likewise.
(mve_vstrdq_scatter_base_wb_p_<supf>v2di): Likewise.
(mve_vstrdq_scatter_offset_<supf>v2di_insn): Likewise.
(mve_vstrdq_scatter_offset_p_<supf>v2di_insn): Likewise.
(mve_vstrdq_scatter_shifted_offset_<supf>v2di_insn): Likewise.
(mve_vstrdq_scatter_shifted_offset_p_<supf>v2di_insn): Likewise.
(mve_vstrhq_<supf><mode>): Likewise.
(mve_vstrhq_fv8hf): Likewise.
(mve_vstrhq_p_<supf><mode>): Likewise.
(mve_vstrhq_p_fv8hf): Likewise.
(mve_vstrhq_scatter_offset_<supf><mode>_insn): Likewise.
(mve_vstrhq_scatter_offset_fv8hf_insn): Likewise.
(mve_vstrhq_scatter_offset_p_<supf><mode>_insn): Likewise.
(mve_vstrhq_scatter_offset_p_fv8hf_insn): Likewise.
(mve_vstrhq_scatter_shifted_offset_<supf><mode>_insn): Likewise.
(mve_vstrhq_scatter_shifted_offset_fv8hf_insn): Likewise.
(mve_vstrhq_scatter_shifted_offset_p_<supf><mode>_insn): Likewise.
(mve_vstrhq_scatter_shifted_offset_p_fv8hf_insn): Likewise.
(mve_vstrwq_<supf>v4si): Likewise.
(mve_vstrwq_fv4sf): Likewise.
(mve_vstrwq_p_<supf>v4si): Likewise.
(mve_vstrwq_p_fv4sf): Likewise.
(mve_vstrwq_scatter_base_<supf>v4si): Likewise.
(mve_vstrwq_scatter_base_fv4sf): Likewise.
(mve_vstrwq_scatter_base_p_<supf>v4si): Likewise.
(mve_vstrwq_scatter_base_p_fv4sf): Likewise.
(mve_vstrwq_scatter_base_wb_<supf>v4si): Likewise.
(mve_vstrwq_scatter_base_wb_fv4sf): Likewise.
(mve_vstrwq_scatter_base_wb_p_<supf>v4si): Likewise.
(mve_vstrwq_scatter_base_wb_p_fv4sf): Likewise.
(mve_vstrwq_scatter_offset_<supf>v4si_insn): Likewise.
(mve_vstrwq_scatter_offset_fv4sf_insn): Likewise.
(mve_vstrwq_scatter_offset_p_<supf>v4si_insn): Likewise.
(mve_vstrwq_scatter_offset_p_fv4sf_insn): Likewise.
(mve_vstrwq_scatter_shifted_offset_<supf>v4si_insn): Likewise.
(mve_vstrwq_scatter_shifted_offset_fv4sf_insn): Likewise.
(mve_vstrwq_scatter_shifted_offset_p_<supf>v4si_insn): Likewise.
(mve_vstrwq_scatter_shifted_offset_p_fv4sf_insn): Likewise.
(mve_vsubq_<supf><mode>): Likewise.
(mve_vsubq_f<mode>): Likewise.
(mve_vsubq_m_<supf><mode>): Likewise.
(mve_vsubq_m_f<mode>): Likewise.
(mve_vsubq_m_n_<supf><mode>): Likewise.
(mve_vsubq_m_n_f<mode>): Likewise.
(mve_vsubq_n_<supf><mode>): Likewise.
(mve_vsubq_n_f<mode>): Likewise.
[-- Attachment #2: 1.patch --]
[-- Type: text/x-patch, Size: 126482 bytes --]
commit 739b52501f95fe5073967009214e55f0dba0eda2
Author: Stam Markianos-Wright <stam.markianos-wright@arm.com>
Date: Tue Oct 18 17:42:56 2022 +0100
arm: Add define_attr to create a mapping between MVE predicated and unpredicated insns
I'd like to submit two patches that add support for Arm's MVE
Tail Predicated Low Overhead Loop feature.
--- Introduction ---
The M-class Arm-ARM:
https://developer.arm.com/documentation/ddi0553/bu/?lang=en
Section B5.5.1 "Loop tail predication" describes the feature
we are adding support for with this patch (although
we only add codegen for DLSTP/LETP instruction loops).
Previously, commit d2ed233cb94 added support for
non-MVE DLS/LE loops through the loop-doloop pass, which, given
a standard MVE loop like:
```
void __attribute__ ((noinline)) test (int16_t *a, int16_t *b, int16_t *c, int n)
{
  while (n > 0)
    {
      mve_pred16_t p = vctp16q (n);
      int16x8_t va = vldrhq_z_s16 (a, p);
      int16x8_t vb = vldrhq_z_s16 (b, p);
      int16x8_t vc = vaddq_x_s16 (va, vb, p);
      vstrhq_p_s16 (c, vc, p);
      c += 8;
      a += 8;
      b += 8;
      n -= 8;
    }
}
```
... would output:
```
<pre-calculate the number of iterations and place it into lr>
dls lr, lr
.L3:
vctp.16 r3
vmrs ip, P0 @ movhi
sxth ip, ip
vmsr P0, ip @ movhi
mov r4, r0
vpst
vldrht.16 q2, [r4]
mov r4, r1
vmov q3, q0
vpst
vldrht.16 q1, [r4]
mov r4, r2
vpst
vaddt.i16 q3, q2, q1
subs r3, r3, #8
vpst
vstrht.16 q3, [r4]
adds r0, r0, #16
adds r1, r1, #16
adds r2, r2, #16
le lr, .L3
```
where the LE instruction will decrement LR by 1, compare and
branch if needed.
(There are also other inefficiencies in the above code, such as the
pointless vmrs/sxth/vmsr round-trip on the VPR, the adds not being
merged into the vldrht/vstrht as #16 offsets, and some redundant
movs, but those are separate problems.)
The MVE version is similar, except that:
* Instead of DLS/LE the instructions are DLSTP/LETP.
* Instead of pre-calculating the number of iterations of the
loop, we place the number of elements to be processed by the
loop into LR.
* Instead of decrementing the LR by one, LETP will decrement it
by FPSCR.LTPSIZE, which is the number of elements being
processed in each iteration: 16 for 8-bit elements, 8 for 16-bit
elements, etc.
* On the final iteration, automatic Loop Tail Predication is
performed, as if the instructions within the loop had been VPT
predicated with a VCTP generating the VPR predicate in every
loop iteration.
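The per-iteration behaviour described above can be modelled in plain
scalar C. This model is illustrative only and is not part of the patch;
the function name and the 8-element block size for 16-bit data are
assumptions matching the dlstp.16/letp.16 case:

```c
#include <stdint.h>

/* Illustrative scalar model of a dlstp.16/letp loop: each iteration
   handles up to 8 int16_t elements, and the final iteration is
   automatically tail-predicated so only the remaining n lanes are
   loaded, computed, and stored.  */
void test_model (int16_t *a, int16_t *b, int16_t *c, int n)
{
  while (n > 0)
    {
      int active = n < 8 ? n : 8;   /* lanes enabled by tail predication */
      for (int i = 0; i < active; i++)
        c[i] = a[i] + b[i];
      a += 8;
      b += 8;
      c += 8;
      n -= 8;                       /* letp decrements LR by 8 elements */
    }
}
```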
The dlstp/letp loop now looks like:
```
<place n into r3>
dlstp.16 lr, r3
.L14:
mov r3, r0
vldrh.16 q3, [r3]
mov r3, r1
vldrh.16 q2, [r3]
mov r3, r2
vadd.i16 q3, q3, q2
adds r0, r0, #16
vstrh.16 q3, [r3]
adds r1, r1, #16
adds r2, r2, #16
letp lr, .L14
```
Since the loop tail predication is automatic, we have eliminated
the VCTP that had been specified by the user in the intrinsic
and converted the VPT-predicated instructions into their
unpredicated equivalents (which also saves us from VPST insns).
The LETP instruction here decrements LR by 8 in each iteration.
--- This 1/2 patch ---
This first patch lays some groundwork by adding an attribute to
md patterns, and then the second patch contains the functional
changes.
One major difficulty in implementing MVE Tail-Predicated Low
Overhead Loops was the need to transform VPT-predicated insns
in the insn chain into their unpredicated equivalents, like:
`mve_vldrbq_z_<supf><mode> -> mve_vldrbq_<supf><mode>`.
This requires us to have a deterministic link between two
different patterns in mve.md -- this _could_ be done by
re-ordering the entirety of mve.md such that the patterns are
at some constant icode proximity (e.g. having the _z immediately
after the unpredicated version would mean that to map from the
former to the latter you could use icode-1), but that is a very
messy solution that would lead to fragile, hard-to-track dependencies
on the ordering of patterns.
This patch provides an alternative way of doing that: using an insn
attribute to encode the icode of the unpredicated instruction.
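As a rough illustration (the attribute and value names here are
assumptions based on the description above, not copied verbatim from
the patch), such a mapping can be expressed in the machine description
along these lines:

```lisp
;; Sketch only: names are illustrative.
;; A per-insn attribute whose default means "no unpredicated
;; equivalent exists".
(define_attr "mve_unpredicated_insn" ""
  (symbol_ref "CODE_FOR_nothing"))

;; A predicated pattern can then record the icode of its unpredicated
;; twin, giving a deterministic link that is independent of pattern
;; ordering in mve.md, e.g. inside the _z load pattern:
;;   (set (attr "mve_unpredicated_insn")
;;        (symbol_ref "CODE_FOR_mve_vldrbq_<supf><mode>"))
```

A later pass can then query the attribute on a recognized insn and, if
it is not CODE_FOR_nothing, rewrite the insn using the unpredicated
pattern.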
No regressions on arm-none-eabi with an MVE target.
Thank you,
Stam Markianos-Wright
gcc/ChangeLog:
* config/arm/arm.md (mve_unpredicated_insn): New attribute.
* config/arm/arm.h (MVE_VPT_PREDICATED_INSN_P): New define.
(MVE_VPT_UNPREDICATED_INSN_P): Likewise.
(MVE_VPT_PREDICABLE_INSN_P): Likewise.
* config/arm/vec-common.md (mve_vshlq_<supf><mode>): Add attribute.
* config/arm/mve.md (arm_vcx1q<a>_p_v16qi): Add attribute.
(arm_vcx1q<a>v16qi): Likewise.
(arm_vcx1qav16qi): Likewise.
(arm_vcx1qv16qi): Likewise.
(arm_vcx2q<a>_p_v16qi): Likewise.
(arm_vcx2q<a>v16qi): Likewise.
(arm_vcx2qav16qi): Likewise.
(arm_vcx2qv16qi): Likewise.
(arm_vcx3q<a>_p_v16qi): Likewise.
(arm_vcx3q<a>v16qi): Likewise.
(arm_vcx3qav16qi): Likewise.
(arm_vcx3qv16qi): Likewise.
(mve_vabavq_<supf><mode>): Likewise.
(mve_vabavq_p_<supf><mode>): Likewise.
(mve_vabdq_<supf><mode>): Likewise.
(mve_vabdq_f<mode>): Likewise.
(mve_vabdq_m_<supf><mode>): Likewise.
(mve_vabdq_m_f<mode>): Likewise.
(mve_vabsq_f<mode>): Likewise.
(mve_vabsq_m_f<mode>): Likewise.
(mve_vabsq_m_s<mode>): Likewise.
(mve_vabsq_s<mode>): Likewise.
(mve_vadciq_<supf>v4si): Likewise.
(mve_vadciq_m_<supf>v4si): Likewise.
(mve_vadcq_<supf>v4si): Likewise.
(mve_vadcq_m_<supf>v4si): Likewise.
(mve_vaddlvaq_<supf>v4si): Likewise.
(mve_vaddlvaq_p_<supf>v4si): Likewise.
(mve_vaddlvq_<supf>v4si): Likewise.
(mve_vaddlvq_p_<supf>v4si): Likewise.
(mve_vaddq_f<mode>): Likewise.
(mve_vaddq_m_<supf><mode>): Likewise.
(mve_vaddq_m_f<mode>): Likewise.
(mve_vaddq_m_n_<supf><mode>): Likewise.
(mve_vaddq_m_n_f<mode>): Likewise.
(mve_vaddq_n_<supf><mode>): Likewise.
(mve_vaddq_n_f<mode>): Likewise.
(mve_vaddq<mode>): Likewise.
(mve_vaddvaq_<supf><mode>): Likewise.
(mve_vaddvaq_p_<supf><mode>): Likewise.
(mve_vaddvq_<supf><mode>): Likewise.
(mve_vaddvq_p_<supf><mode>): Likewise.
(mve_vandq_<supf><mode>): Likewise.
(mve_vandq_f<mode>): Likewise.
(mve_vandq_m_<supf><mode>): Likewise.
(mve_vandq_m_f<mode>): Likewise.
(mve_vandq_s<mode>): Likewise.
(mve_vandq_u<mode>): Likewise.
(mve_vbicq_<supf><mode>): Likewise.
(mve_vbicq_f<mode>): Likewise.
(mve_vbicq_m_<supf><mode>): Likewise.
(mve_vbicq_m_f<mode>): Likewise.
(mve_vbicq_m_n_<supf><mode>): Likewise.
(mve_vbicq_n_<supf><mode>): Likewise.
(mve_vbicq_s<mode>): Likewise.
(mve_vbicq_u<mode>): Likewise.
(mve_vbrsrq_m_n_<supf><mode>): Likewise.
(mve_vbrsrq_m_n_f<mode>): Likewise.
(mve_vbrsrq_n_<supf><mode>): Likewise.
(mve_vbrsrq_n_f<mode>): Likewise.
(mve_vcaddq_rot270_m_<supf><mode>): Likewise.
(mve_vcaddq_rot270_m_f<mode>): Likewise.
(mve_vcaddq_rot270<mode>): Likewise.
(mve_vcaddq_rot270<mode>): Likewise.
(mve_vcaddq_rot90_m_<supf><mode>): Likewise.
(mve_vcaddq_rot90_m_f<mode>): Likewise.
(mve_vcaddq_rot90<mode>): Likewise.
(mve_vcaddq_rot90<mode>): Likewise.
(mve_vcaddq<mve_rot><mode>): Likewise.
(mve_vcaddq<mve_rot><mode>): Likewise.
(mve_vclsq_m_s<mode>): Likewise.
(mve_vclsq_s<mode>): Likewise.
(mve_vclzq_<supf><mode>): Likewise.
(mve_vclzq_m_<supf><mode>): Likewise.
(mve_vclzq_s<mode>): Likewise.
(mve_vclzq_u<mode>): Likewise.
(mve_vcmlaq_m_f<mode>): Likewise.
(mve_vcmlaq_rot180_m_f<mode>): Likewise.
(mve_vcmlaq_rot180<mode>): Likewise.
(mve_vcmlaq_rot270_m_f<mode>): Likewise.
(mve_vcmlaq_rot270<mode>): Likewise.
(mve_vcmlaq_rot90_m_f<mode>): Likewise.
(mve_vcmlaq_rot90<mode>): Likewise.
(mve_vcmlaq<mode>): Likewise.
(mve_vcmlaq<mve_rot><mode>): Likewise.
(mve_vcmp<mve_cmp_op>q_<mode>): Likewise.
(mve_vcmp<mve_cmp_op>q_f<mode>): Likewise.
(mve_vcmp<mve_cmp_op>q_n_<mode>): Likewise.
(mve_vcmp<mve_cmp_op>q_n_f<mode>): Likewise.
(mve_vcmpcsq_<mode>): Likewise.
(mve_vcmpcsq_m_n_u<mode>): Likewise.
(mve_vcmpcsq_m_u<mode>): Likewise.
(mve_vcmpcsq_n_<mode>): Likewise.
(mve_vcmpeqq_<mode>): Likewise.
(mve_vcmpeqq_f<mode>): Likewise.
(mve_vcmpeqq_m_<supf><mode>): Likewise.
(mve_vcmpeqq_m_f<mode>): Likewise.
(mve_vcmpeqq_m_n_<supf><mode>): Likewise.
(mve_vcmpeqq_m_n_f<mode>): Likewise.
(mve_vcmpeqq_n_<mode>): Likewise.
(mve_vcmpeqq_n_f<mode>): Likewise.
(mve_vcmpgeq_<mode>): Likewise.
(mve_vcmpgeq_f<mode>): Likewise.
(mve_vcmpgeq_m_f<mode>): Likewise.
(mve_vcmpgeq_m_n_f<mode>): Likewise.
(mve_vcmpgeq_m_n_s<mode>): Likewise.
(mve_vcmpgeq_m_s<mode>): Likewise.
(mve_vcmpgeq_n_<mode>): Likewise.
(mve_vcmpgeq_n_f<mode>): Likewise.
(mve_vcmpgtq_<mode>): Likewise.
(mve_vcmpgtq_f<mode>): Likewise.
(mve_vcmpgtq_m_f<mode>): Likewise.
(mve_vcmpgtq_m_n_f<mode>): Likewise.
(mve_vcmpgtq_m_n_s<mode>): Likewise.
(mve_vcmpgtq_m_s<mode>): Likewise.
(mve_vcmpgtq_n_<mode>): Likewise.
(mve_vcmpgtq_n_f<mode>): Likewise.
(mve_vcmphiq_<mode>): Likewise.
(mve_vcmphiq_m_n_u<mode>): Likewise.
(mve_vcmphiq_m_u<mode>): Likewise.
(mve_vcmphiq_n_<mode>): Likewise.
(mve_vcmpleq_<mode>): Likewise.
(mve_vcmpleq_f<mode>): Likewise.
(mve_vcmpleq_m_f<mode>): Likewise.
(mve_vcmpleq_m_n_f<mode>): Likewise.
(mve_vcmpleq_m_n_s<mode>): Likewise.
(mve_vcmpleq_m_s<mode>): Likewise.
(mve_vcmpleq_n_<mode>): Likewise.
(mve_vcmpleq_n_f<mode>): Likewise.
(mve_vcmpltq_<mode>): Likewise.
(mve_vcmpltq_f<mode>): Likewise.
(mve_vcmpltq_m_f<mode>): Likewise.
(mve_vcmpltq_m_n_f<mode>): Likewise.
(mve_vcmpltq_m_n_s<mode>): Likewise.
(mve_vcmpltq_m_s<mode>): Likewise.
(mve_vcmpltq_n_<mode>): Likewise.
(mve_vcmpltq_n_f<mode>): Likewise.
(mve_vcmpneq_<mode>): Likewise.
(mve_vcmpneq_f<mode>): Likewise.
(mve_vcmpneq_m_<supf><mode>): Likewise.
(mve_vcmpneq_m_f<mode>): Likewise.
(mve_vcmpneq_m_n_<supf><mode>): Likewise.
(mve_vcmpneq_m_n_f<mode>): Likewise.
(mve_vcmpneq_n_<mode>): Likewise.
(mve_vcmpneq_n_f<mode>): Likewise.
(mve_vcmulq_m_f<mode>): Likewise.
(mve_vcmulq_rot180_m_f<mode>): Likewise.
(mve_vcmulq_rot180<mode>): Likewise.
(mve_vcmulq_rot270_m_f<mode>): Likewise.
(mve_vcmulq_rot270<mode>): Likewise.
(mve_vcmulq_rot90_m_f<mode>): Likewise.
(mve_vcmulq_rot90<mode>): Likewise.
(mve_vcmulq<mode>): Likewise.
(mve_vcmulq<mve_rot><mode>): Likewise.
(mve_vctp<mode1>q_mhi): Likewise.
(mve_vctp<mode1>qhi): Likewise.
(mve_vcvtaq_<supf><mode>): Likewise.
(mve_vcvtaq_m_<supf><mode>): Likewise.
(mve_vcvtbq_f16_f32v8hf): Likewise.
(mve_vcvtbq_f32_f16v4sf): Likewise.
(mve_vcvtbq_m_f16_f32v8hf): Likewise.
(mve_vcvtbq_m_f32_f16v4sf): Likewise.
(mve_vcvtmq_<supf><mode>): Likewise.
(mve_vcvtmq_m_<supf><mode>): Likewise.
(mve_vcvtnq_<supf><mode>): Likewise.
(mve_vcvtnq_m_<supf><mode>): Likewise.
(mve_vcvtpq_<supf><mode>): Likewise.
(mve_vcvtpq_m_<supf><mode>): Likewise.
(mve_vcvtq_from_f_<supf><mode>): Likewise.
(mve_vcvtq_m_from_f_<supf><mode>): Likewise.
(mve_vcvtq_m_n_from_f_<supf><mode>): Likewise.
(mve_vcvtq_m_n_to_f_<supf><mode>): Likewise.
(mve_vcvtq_m_to_f_<supf><mode>): Likewise.
(mve_vcvtq_n_from_f_<supf><mode>): Likewise.
(mve_vcvtq_n_to_f_<supf><mode>): Likewise.
(mve_vcvtq_to_f_<supf><mode>): Likewise.
(mve_vcvttq_f16_f32v8hf): Likewise.
(mve_vcvttq_f32_f16v4sf): Likewise.
(mve_vcvttq_m_f16_f32v8hf): Likewise.
(mve_vcvttq_m_f32_f16v4sf): Likewise.
(mve_vddupq_m_wb_u<mode>_insn): Likewise.
(mve_vddupq_u<mode>_insn): Likewise.
(mve_vdupq_m_n_<supf><mode>): Likewise.
(mve_vdupq_m_n_f<mode>): Likewise.
(mve_vdupq_n_<supf><mode>): Likewise.
(mve_vdupq_n_f<mode>): Likewise.
(mve_vdwdupq_m_wb_u<mode>_insn): Likewise.
(mve_vdwdupq_wb_u<mode>_insn): Likewise.
(mve_veorq_<supf><mode>): Likewise.
(mve_veorq_f<mode>): Likewise.
(mve_veorq_m_<supf><mode>): Likewise.
(mve_veorq_m_f<mode>): Likewise.
(mve_veorq_s<mode>): Likewise.
(mve_veorq_u<mode>): Likewise.
(mve_vfmaq_f<mode>): Likewise.
(mve_vfmaq_m_f<mode>): Likewise.
(mve_vfmaq_m_n_f<mode>): Likewise.
(mve_vfmaq_n_f<mode>): Likewise.
(mve_vfmasq_m_n_f<mode>): Likewise.
(mve_vfmasq_n_f<mode>): Likewise.
(mve_vfmsq_f<mode>): Likewise.
(mve_vfmsq_m_f<mode>): Likewise.
(mve_vhaddq_<supf><mode>): Likewise.
(mve_vhaddq_m_<supf><mode>): Likewise.
(mve_vhaddq_m_n_<supf><mode>): Likewise.
(mve_vhaddq_n_<supf><mode>): Likewise.
(mve_vhcaddq_rot270_m_s<mode>): Likewise.
(mve_vhcaddq_rot270_s<mode>): Likewise.
(mve_vhcaddq_rot90_m_s<mode>): Likewise.
(mve_vhcaddq_rot90_s<mode>): Likewise.
(mve_vhsubq_<supf><mode>): Likewise.
(mve_vhsubq_m_<supf><mode>): Likewise.
(mve_vhsubq_m_n_<supf><mode>): Likewise.
(mve_vhsubq_n_<supf><mode>): Likewise.
(mve_vidupq_m_wb_u<mode>_insn): Likewise.
(mve_vidupq_u<mode>_insn): Likewise.
(mve_viwdupq_m_wb_u<mode>_insn): Likewise.
(mve_viwdupq_wb_u<mode>_insn): Likewise.
(mve_vldrbq_<supf><mode>): Likewise.
(mve_vldrbq_gather_offset_<supf><mode>): Likewise.
(mve_vldrbq_gather_offset_z_<supf><mode>): Likewise.
(mve_vldrbq_z_<supf><mode>): Likewise.
(mve_vldrdq_gather_base_<supf>v2di): Likewise.
(mve_vldrdq_gather_base_wb_<supf>v2di_insn): Likewise.
(mve_vldrdq_gather_base_wb_z_<supf>v2di_insn): Likewise.
(mve_vldrdq_gather_base_z_<supf>v2di): Likewise.
(mve_vldrdq_gather_offset_<supf>v2di): Likewise.
(mve_vldrdq_gather_offset_z_<supf>v2di): Likewise.
(mve_vldrdq_gather_shifted_offset_<supf>v2di): Likewise.
(mve_vldrdq_gather_shifted_offset_z_<supf>v2di): Likewise.
(mve_vldrhq_<supf><mode>): Likewise.
(mve_vldrhq_fv8hf): Likewise.
(mve_vldrhq_gather_offset_<supf><mode>): Likewise.
(mve_vldrhq_gather_offset_fv8hf): Likewise.
(mve_vldrhq_gather_offset_z_<supf><mode>): Likewise.
(mve_vldrhq_gather_offset_z_fv8hf): Likewise.
(mve_vldrhq_gather_shifted_offset_<supf><mode>): Likewise.
(mve_vldrhq_gather_shifted_offset_fv8hf): Likewise.
(mve_vldrhq_gather_shifted_offset_z_<supf><mode>): Likewise.
(mve_vldrhq_gather_shifted_offset_z_fv8hf): Likewise.
(mve_vldrhq_z_<supf><mode>): Likewise.
(mve_vldrhq_z_fv8hf): Likewise.
(mve_vldrwq_<supf>v4si): Likewise.
(mve_vldrwq_fv4sf): Likewise.
(mve_vldrwq_gather_base_<supf>v4si): Likewise.
(mve_vldrwq_gather_base_fv4sf): Likewise.
(mve_vldrwq_gather_base_wb_<supf>v4si_insn): Likewise.
(mve_vldrwq_gather_base_wb_fv4sf_insn): Likewise.
(mve_vldrwq_gather_base_wb_z_<supf>v4si_insn): Likewise.
(mve_vldrwq_gather_base_wb_z_fv4sf_insn): Likewise.
(mve_vldrwq_gather_base_z_<supf>v4si): Likewise.
(mve_vldrwq_gather_base_z_fv4sf): Likewise.
(mve_vldrwq_gather_offset_<supf>v4si): Likewise.
(mve_vldrwq_gather_offset_fv4sf): Likewise.
(mve_vldrwq_gather_offset_z_<supf>v4si): Likewise.
(mve_vldrwq_gather_offset_z_fv4sf): Likewise.
(mve_vldrwq_gather_shifted_offset_<supf>v4si): Likewise.
(mve_vldrwq_gather_shifted_offset_fv4sf): Likewise.
(mve_vldrwq_gather_shifted_offset_z_<supf>v4si): Likewise.
(mve_vldrwq_gather_shifted_offset_z_fv4sf): Likewise.
(mve_vldrwq_z_<supf>v4si): Likewise.
(mve_vldrwq_z_fv4sf): Likewise.
(mve_vmaxaq_m_s<mode>): Likewise.
(mve_vmaxaq_s<mode>): Likewise.
(mve_vmaxavq_p_s<mode>): Likewise.
(mve_vmaxavq_s<mode>): Likewise.
(mve_vmaxnmaq_f<mode>): Likewise.
(mve_vmaxnmaq_m_f<mode>): Likewise.
(mve_vmaxnmavq_f<mode>): Likewise.
(mve_vmaxnmavq_p_f<mode>): Likewise.
(mve_vmaxnmq_f<mode>): Likewise.
(mve_vmaxnmq_m_f<mode>): Likewise.
(mve_vmaxnmvq_f<mode>): Likewise.
(mve_vmaxnmvq_p_f<mode>): Likewise.
(mve_vmaxq_<supf><mode>): Likewise.
(mve_vmaxq_m_<supf><mode>): Likewise.
(mve_vmaxq_s<mode>): Likewise.
(mve_vmaxq_u<mode>): Likewise.
(mve_vmaxvq_<supf><mode>): Likewise.
(mve_vmaxvq_p_<supf><mode>): Likewise.
(mve_vminaq_m_s<mode>): Likewise.
(mve_vminaq_s<mode>): Likewise.
(mve_vminavq_p_s<mode>): Likewise.
(mve_vminavq_s<mode>): Likewise.
(mve_vminnmaq_f<mode>): Likewise.
(mve_vminnmaq_m_f<mode>): Likewise.
(mve_vminnmavq_f<mode>): Likewise.
(mve_vminnmavq_p_f<mode>): Likewise.
(mve_vminnmq_f<mode>): Likewise.
(mve_vminnmq_m_f<mode>): Likewise.
(mve_vminnmvq_f<mode>): Likewise.
(mve_vminnmvq_p_f<mode>): Likewise.
(mve_vminq_<supf><mode>): Likewise.
(mve_vminq_m_<supf><mode>): Likewise.
(mve_vminq_s<mode>): Likewise.
(mve_vminq_u<mode>): Likewise.
(mve_vminvq_<supf><mode>): Likewise.
(mve_vminvq_p_<supf><mode>): Likewise.
(mve_vmladavaq_<supf><mode>): Likewise.
(mve_vmladavaq_p_<supf><mode>): Likewise.
(mve_vmladavaxq_p_s<mode>): Likewise.
(mve_vmladavaxq_s<mode>): Likewise.
(mve_vmladavq_<supf><mode>): Likewise.
(mve_vmladavq_p_<supf><mode>): Likewise.
(mve_vmladavxq_p_s<mode>): Likewise.
(mve_vmladavxq_s<mode>): Likewise.
(mve_vmlaldavaq_<supf><mode>): Likewise.
(mve_vmlaldavaq_p_<supf><mode>): Likewise.
(mve_vmlaldavaxq_<supf><mode>): Likewise.
(mve_vmlaldavaxq_p_<supf><mode>): Likewise.
(mve_vmlaldavaxq_s<mode>): Likewise.
(mve_vmlaldavq_<supf><mode>): Likewise.
(mve_vmlaldavq_p_<supf><mode>): Likewise.
(mve_vmlaldavxq_p_s<mode>): Likewise.
(mve_vmlaldavxq_s<mode>): Likewise.
(mve_vmlaq_m_n_<supf><mode>): Likewise.
(mve_vmlaq_n_<supf><mode>): Likewise.
(mve_vmlasq_m_n_<supf><mode>): Likewise.
(mve_vmlasq_n_<supf><mode>): Likewise.
(mve_vmlsdavaq_p_s<mode>): Likewise.
(mve_vmlsdavaq_s<mode>): Likewise.
(mve_vmlsdavaxq_p_s<mode>): Likewise.
(mve_vmlsdavaxq_s<mode>): Likewise.
(mve_vmlsdavq_p_s<mode>): Likewise.
(mve_vmlsdavq_s<mode>): Likewise.
(mve_vmlsdavxq_p_s<mode>): Likewise.
(mve_vmlsdavxq_s<mode>): Likewise.
(mve_vmlsldavaq_p_s<mode>): Likewise.
(mve_vmlsldavaq_s<mode>): Likewise.
(mve_vmlsldavaxq_p_s<mode>): Likewise.
(mve_vmlsldavaxq_s<mode>): Likewise.
(mve_vmlsldavq_p_s<mode>): Likewise.
(mve_vmlsldavq_s<mode>): Likewise.
(mve_vmlsldavxq_p_s<mode>): Likewise.
(mve_vmlsldavxq_s<mode>): Likewise.
(mve_vmovlbq_<supf><mode>): Likewise.
(mve_vmovlbq_m_<supf><mode>): Likewise.
(mve_vmovltq_<supf><mode>): Likewise.
(mve_vmovltq_m_<supf><mode>): Likewise.
(mve_vmovnbq_<supf><mode>): Likewise.
(mve_vmovnbq_m_<supf><mode>): Likewise.
(mve_vmovntq_<supf><mode>): Likewise.
(mve_vmovntq_m_<supf><mode>): Likewise.
(mve_vmulhq_<supf><mode>): Likewise.
(mve_vmulhq_m_<supf><mode>): Likewise.
(mve_vmullbq_int_<supf><mode>): Likewise.
(mve_vmullbq_int_m_<supf><mode>): Likewise.
(mve_vmullbq_poly_m_p<mode>): Likewise.
(mve_vmullbq_poly_p<mode>): Likewise.
(mve_vmulltq_int_<supf><mode>): Likewise.
(mve_vmulltq_int_m_<supf><mode>): Likewise.
(mve_vmulltq_poly_m_p<mode>): Likewise.
(mve_vmulltq_poly_p<mode>): Likewise.
(mve_vmulq_<supf><mode>): Likewise.
(mve_vmulq_f<mode>): Likewise.
(mve_vmulq_m_<supf><mode>): Likewise.
(mve_vmulq_m_f<mode>): Likewise.
(mve_vmulq_m_n_<supf><mode>): Likewise.
(mve_vmulq_m_n_f<mode>): Likewise.
(mve_vmulq_n_<supf><mode>): Likewise.
(mve_vmulq_n_f<mode>): Likewise.
(mve_vmvnq_<supf><mode>): Likewise.
(mve_vmvnq_m_<supf><mode>): Likewise.
(mve_vmvnq_m_n_<supf><mode>): Likewise.
(mve_vmvnq_n_<supf><mode>): Likewise.
(mve_vmvnq_s<mode>): Likewise.
(mve_vmvnq_u<mode>): Likewise.
(mve_vnegq_f<mode>): Likewise.
(mve_vnegq_m_f<mode>): Likewise.
(mve_vnegq_m_s<mode>): Likewise.
(mve_vnegq_s<mode>): Likewise.
(mve_vornq_<supf><mode>): Likewise.
(mve_vornq_f<mode>): Likewise.
(mve_vornq_m_<supf><mode>): Likewise.
(mve_vornq_m_f<mode>): Likewise.
(mve_vornq_s<mode>): Likewise.
(mve_vornq_u<mode>): Likewise.
(mve_vorrq_<supf><mode>): Likewise.
(mve_vorrq_f<mode>): Likewise.
(mve_vorrq_m_<supf><mode>): Likewise.
(mve_vorrq_m_f<mode>): Likewise.
(mve_vorrq_m_n_<supf><mode>): Likewise.
(mve_vorrq_n_<supf><mode>): Likewise.
(mve_vorrq_s<mode>): Likewise.
(mve_vqabsq_m_s<mode>): Likewise.
(mve_vqabsq_s<mode>): Likewise.
(mve_vqaddq_<supf><mode>): Likewise.
(mve_vqaddq_m_<supf><mode>): Likewise.
(mve_vqaddq_m_n_<supf><mode>): Likewise.
(mve_vqaddq_n_<supf><mode>): Likewise.
(mve_vqdmladhq_m_s<mode>): Likewise.
(mve_vqdmladhq_s<mode>): Likewise.
(mve_vqdmladhxq_m_s<mode>): Likewise.
(mve_vqdmladhxq_s<mode>): Likewise.
(mve_vqdmlahq_m_n_s<mode>): Likewise.
(mve_vqdmlahq_n_<supf><mode>): Likewise.
(mve_vqdmlahq_n_s<mode>): Likewise.
(mve_vqdmlashq_m_n_s<mode>): Likewise.
(mve_vqdmlashq_n_<supf><mode>): Likewise.
(mve_vqdmlashq_n_s<mode>): Likewise.
(mve_vqdmlsdhq_m_s<mode>): Likewise.
(mve_vqdmlsdhq_s<mode>): Likewise.
(mve_vqdmlsdhxq_m_s<mode>): Likewise.
(mve_vqdmlsdhxq_s<mode>): Likewise.
(mve_vqdmulhq_m_n_s<mode>): Likewise.
(mve_vqdmulhq_m_s<mode>): Likewise.
(mve_vqdmulhq_n_s<mode>): Likewise.
(mve_vqdmulhq_s<mode>): Likewise.
(mve_vqdmullbq_m_n_s<mode>): Likewise.
(mve_vqdmullbq_m_s<mode>): Likewise.
(mve_vqdmullbq_n_s<mode>): Likewise.
(mve_vqdmullbq_s<mode>): Likewise.
(mve_vqdmulltq_m_n_s<mode>): Likewise.
(mve_vqdmulltq_m_s<mode>): Likewise.
(mve_vqdmulltq_n_s<mode>): Likewise.
(mve_vqdmulltq_s<mode>): Likewise.
(mve_vqmovnbq_<supf><mode>): Likewise.
(mve_vqmovnbq_m_<supf><mode>): Likewise.
(mve_vqmovntq_<supf><mode>): Likewise.
(mve_vqmovntq_m_<supf><mode>): Likewise.
(mve_vqmovunbq_m_s<mode>): Likewise.
(mve_vqmovunbq_s<mode>): Likewise.
(mve_vqmovuntq_m_s<mode>): Likewise.
(mve_vqmovuntq_s<mode>): Likewise.
(mve_vqnegq_m_s<mode>): Likewise.
(mve_vqnegq_s<mode>): Likewise.
(mve_vqrdmladhq_m_s<mode>): Likewise.
(mve_vqrdmladhq_s<mode>): Likewise.
(mve_vqrdmladhxq_m_s<mode>): Likewise.
(mve_vqrdmladhxq_s<mode>): Likewise.
(mve_vqrdmlahq_m_n_s<mode>): Likewise.
(mve_vqrdmlahq_n_<supf><mode>): Likewise.
(mve_vqrdmlahq_n_s<mode>): Likewise.
(mve_vqrdmlashq_m_n_s<mode>): Likewise.
(mve_vqrdmlashq_n_<supf><mode>): Likewise.
(mve_vqrdmlashq_n_s<mode>): Likewise.
(mve_vqrdmlsdhq_m_s<mode>): Likewise.
(mve_vqrdmlsdhq_s<mode>): Likewise.
(mve_vqrdmlsdhxq_m_s<mode>): Likewise.
(mve_vqrdmlsdhxq_s<mode>): Likewise.
(mve_vqrdmulhq_m_n_s<mode>): Likewise.
(mve_vqrdmulhq_m_s<mode>): Likewise.
(mve_vqrdmulhq_n_s<mode>): Likewise.
(mve_vqrdmulhq_s<mode>): Likewise.
(mve_vqrshlq_<supf><mode>): Likewise.
(mve_vqrshlq_m_<supf><mode>): Likewise.
(mve_vqrshlq_m_n_<supf><mode>): Likewise.
(mve_vqrshlq_n_<supf><mode>): Likewise.
(mve_vqrshrnbq_m_n_<supf><mode>): Likewise.
(mve_vqrshrnbq_n_<supf><mode>): Likewise.
(mve_vqrshrntq_m_n_<supf><mode>): Likewise.
(mve_vqrshrntq_n_<supf><mode>): Likewise.
(mve_vqrshrunbq_m_n_s<mode>): Likewise.
(mve_vqrshrunbq_n_s<mode>): Likewise.
(mve_vqrshruntq_m_n_s<mode>): Likewise.
(mve_vqrshruntq_n_s<mode>): Likewise.
(mve_vqshlq_<supf><mode>): Likewise.
(mve_vqshlq_m_<supf><mode>): Likewise.
(mve_vqshlq_m_n_<supf><mode>): Likewise.
(mve_vqshlq_m_r_<supf><mode>): Likewise.
(mve_vqshlq_n_<supf><mode>): Likewise.
(mve_vqshlq_r_<supf><mode>): Likewise.
(mve_vqshluq_m_n_s<mode>): Likewise.
(mve_vqshluq_n_s<mode>): Likewise.
(mve_vqshrnbq_m_n_<supf><mode>): Likewise.
(mve_vqshrnbq_n_<supf><mode>): Likewise.
(mve_vqshrntq_m_n_<supf><mode>): Likewise.
(mve_vqshrntq_n_<supf><mode>): Likewise.
(mve_vqshrunbq_m_n_s<mode>): Likewise.
(mve_vqshrunbq_n_s<mode>): Likewise.
(mve_vqshruntq_m_n_s<mode>): Likewise.
(mve_vqshruntq_n_s<mode>): Likewise.
(mve_vqsubq_<supf><mode>): Likewise.
(mve_vqsubq_m_<supf><mode>): Likewise.
(mve_vqsubq_m_n_<supf><mode>): Likewise.
(mve_vqsubq_n_<supf><mode>): Likewise.
(mve_vrev16q_<supf>v16qi): Likewise.
(mve_vrev16q_m_<supf>v16qi): Likewise.
(mve_vrev32q_<supf><mode>): Likewise.
(mve_vrev32q_fv8hf): Likewise.
(mve_vrev32q_m_<supf><mode>): Likewise.
(mve_vrev32q_m_fv8hf): Likewise.
(mve_vrev64q_<supf><mode>): Likewise.
(mve_vrev64q_f<mode>): Likewise.
(mve_vrev64q_m_<supf><mode>): Likewise.
(mve_vrev64q_m_f<mode>): Likewise.
(mve_vrhaddq_<supf><mode>): Likewise.
(mve_vrhaddq_m_<supf><mode>): Likewise.
(mve_vrmlaldavhaq_<supf>v4si): Likewise.
(mve_vrmlaldavhaq_p_sv4si): Likewise.
(mve_vrmlaldavhaq_p_uv4si): Likewise.
(mve_vrmlaldavhaq_sv4si): Likewise.
(mve_vrmlaldavhaq_uv4si): Likewise.
(mve_vrmlaldavhaxq_p_sv4si): Likewise.
(mve_vrmlaldavhaxq_sv4si): Likewise.
(mve_vrmlaldavhq_<supf>v4si): Likewise.
(mve_vrmlaldavhq_p_<supf>v4si): Likewise.
(mve_vrmlaldavhxq_p_sv4si): Likewise.
(mve_vrmlaldavhxq_sv4si): Likewise.
(mve_vrmlsldavhaq_p_sv4si): Likewise.
(mve_vrmlsldavhaq_sv4si): Likewise.
(mve_vrmlsldavhaxq_p_sv4si): Likewise.
(mve_vrmlsldavhaxq_sv4si): Likewise.
(mve_vrmlsldavhq_p_sv4si): Likewise.
(mve_vrmlsldavhq_sv4si): Likewise.
(mve_vrmlsldavhxq_p_sv4si): Likewise.
(mve_vrmlsldavhxq_sv4si): Likewise.
(mve_vrmulhq_<supf><mode>): Likewise.
(mve_vrmulhq_m_<supf><mode>): Likewise.
(mve_vrndaq_f<mode>): Likewise.
(mve_vrndaq_m_f<mode>): Likewise.
(mve_vrndmq_f<mode>): Likewise.
(mve_vrndmq_m_f<mode>): Likewise.
(mve_vrndnq_f<mode>): Likewise.
(mve_vrndnq_m_f<mode>): Likewise.
(mve_vrndpq_f<mode>): Likewise.
(mve_vrndpq_m_f<mode>): Likewise.
(mve_vrndq_f<mode>): Likewise.
(mve_vrndq_m_f<mode>): Likewise.
(mve_vrndxq_f<mode>): Likewise.
(mve_vrndxq_m_f<mode>): Likewise.
(mve_vrshlq_<supf><mode>): Likewise.
(mve_vrshlq_m_<supf><mode>): Likewise.
(mve_vrshlq_m_n_<supf><mode>): Likewise.
(mve_vrshlq_n_<supf><mode>): Likewise.
(mve_vrshrnbq_m_n_<supf><mode>): Likewise.
(mve_vrshrnbq_n_<supf><mode>): Likewise.
(mve_vrshrntq_m_n_<supf><mode>): Likewise.
(mve_vrshrntq_n_<supf><mode>): Likewise.
(mve_vrshrq_m_n_<supf><mode>): Likewise.
(mve_vrshrq_n_<supf><mode>): Likewise.
(mve_vsbciq_<supf>v4si): Likewise.
(mve_vsbciq_m_<supf>v4si): Likewise.
(mve_vsbcq_<supf>v4si): Likewise.
(mve_vsbcq_m_<supf>v4si): Likewise.
(mve_vshlcq_<supf><mode>): Likewise.
(mve_vshlcq_m_<supf><mode>): Likewise.
(mve_vshllbq_m_n_<supf><mode>): Likewise.
(mve_vshllbq_n_<supf><mode>): Likewise.
(mve_vshlltq_m_n_<supf><mode>): Likewise.
(mve_vshlltq_n_<supf><mode>): Likewise.
(mve_vshlq_<supf><mode>): Likewise.
(mve_vshlq_m_<supf><mode>): Likewise.
(mve_vshlq_m_n_<supf><mode>): Likewise.
(mve_vshlq_m_r_<supf><mode>): Likewise.
(mve_vshlq_n_<supf><mode>): Likewise.
(mve_vshlq_r_<supf><mode>): Likewise.
(mve_vshrnbq_m_n_<supf><mode>): Likewise.
(mve_vshrnbq_n_<supf><mode>): Likewise.
(mve_vshrntq_m_n_<supf><mode>): Likewise.
(mve_vshrntq_n_<supf><mode>): Likewise.
(mve_vshrq_m_n_<supf><mode>): Likewise.
(mve_vshrq_n_<supf><mode>): Likewise.
(mve_vsliq_m_n_<supf><mode>): Likewise.
(mve_vsliq_n_<supf><mode>): Likewise.
(mve_vsriq_m_n_<supf><mode>): Likewise.
(mve_vsriq_n_<supf><mode>): Likewise.
(mve_vstrbq_<supf><mode>): Likewise.
(mve_vstrbq_p_<supf><mode>): Likewise.
(mve_vstrbq_scatter_offset_<supf><mode>_insn): Likewise.
(mve_vstrbq_scatter_offset_p_<supf><mode>_insn): Likewise.
(mve_vstrdq_scatter_base_<supf>v2di): Likewise.
(mve_vstrdq_scatter_base_p_<supf>v2di): Likewise.
(mve_vstrdq_scatter_base_wb_<supf>v2di): Likewise.
(mve_vstrdq_scatter_base_wb_p_<supf>v2di): Likewise.
(mve_vstrdq_scatter_offset_<supf>v2di_insn): Likewise.
(mve_vstrdq_scatter_offset_p_<supf>v2di_insn): Likewise.
(mve_vstrdq_scatter_shifted_offset_<supf>v2di_insn): Likewise.
(mve_vstrdq_scatter_shifted_offset_p_<supf>v2di_insn): Likewise.
(mve_vstrhq_<supf><mode>): Likewise.
(mve_vstrhq_fv8hf): Likewise.
(mve_vstrhq_p_<supf><mode>): Likewise.
(mve_vstrhq_p_fv8hf): Likewise.
(mve_vstrhq_scatter_offset_<supf><mode>_insn): Likewise.
(mve_vstrhq_scatter_offset_fv8hf_insn): Likewise.
(mve_vstrhq_scatter_offset_p_<supf><mode>_insn): Likewise.
(mve_vstrhq_scatter_offset_p_fv8hf_insn): Likewise.
(mve_vstrhq_scatter_shifted_offset_<supf><mode>_insn): Likewise.
(mve_vstrhq_scatter_shifted_offset_fv8hf_insn): Likewise.
(mve_vstrhq_scatter_shifted_offset_p_<supf><mode>_insn): Likewise.
(mve_vstrhq_scatter_shifted_offset_p_fv8hf_insn): Likewise.
(mve_vstrwq_<supf>v4si): Likewise.
(mve_vstrwq_fv4sf): Likewise.
(mve_vstrwq_p_<supf>v4si): Likewise.
(mve_vstrwq_p_fv4sf): Likewise.
(mve_vstrwq_scatter_base_<supf>v4si): Likewise.
(mve_vstrwq_scatter_base_fv4sf): Likewise.
(mve_vstrwq_scatter_base_p_<supf>v4si): Likewise.
(mve_vstrwq_scatter_base_p_fv4sf): Likewise.
(mve_vstrwq_scatter_base_wb_<supf>v4si): Likewise.
(mve_vstrwq_scatter_base_wb_fv4sf): Likewise.
(mve_vstrwq_scatter_base_wb_p_<supf>v4si): Likewise.
(mve_vstrwq_scatter_base_wb_p_fv4sf): Likewise.
(mve_vstrwq_scatter_offset_<supf>v4si_insn): Likewise.
(mve_vstrwq_scatter_offset_fv4sf_insn): Likewise.
(mve_vstrwq_scatter_offset_p_<supf>v4si_insn): Likewise.
(mve_vstrwq_scatter_offset_p_fv4sf_insn): Likewise.
(mve_vstrwq_scatter_shifted_offset_<supf>v4si_insn): Likewise.
(mve_vstrwq_scatter_shifted_offset_fv4sf_insn): Likewise.
(mve_vstrwq_scatter_shifted_offset_p_<supf>v4si_insn): Likewise.
(mve_vstrwq_scatter_shifted_offset_p_fv4sf_insn): Likewise.
(mve_vsubq_<supf><mode>): Likewise.
(mve_vsubq_f<mode>): Likewise.
(mve_vsubq_m_<supf><mode>): Likewise.
(mve_vsubq_m_f<mode>): Likewise.
(mve_vsubq_m_n_<supf><mode>): Likewise.
(mve_vsubq_m_n_f<mode>): Likewise.
(mve_vsubq_n_<supf><mode>): Likewise.
(mve_vsubq_n_f<mode>): Likewise.
diff --git a/gcc/config/arm/arm.h b/gcc/config/arm/arm.h
index 7d40b8b7e00..40972c24ba1 100644
--- a/gcc/config/arm/arm.h
+++ b/gcc/config/arm/arm.h
@@ -2358,6 +2358,21 @@ extern int making_const_table;
else if (TARGET_THUMB1) \
thumb1_final_prescan_insn (INSN)
+/* These macros are useful for querying the value of the mve_unpredicated_insn
+   insn attribute.  Note that, because they use the get_attr_* functions, they
+   will change recog_data if (INSN) is not current_insn.  */
+#define MVE_VPT_PREDICABLE_INSN_P(INSN) \
+ (recog_memoized (INSN) >= 0 \
+ && get_attr_mve_unpredicated_insn (INSN) != 0) \
+
+#define MVE_VPT_PREDICATED_INSN_P(INSN) \
+ (MVE_VPT_PREDICABLE_INSN_P (INSN) \
+ && recog_memoized (INSN) != get_attr_mve_unpredicated_insn (INSN)) \
+
+#define MVE_VPT_UNPREDICATED_INSN_P(INSN) \
+ (MVE_VPT_PREDICABLE_INSN_P (INSN) \
+ && recog_memoized (INSN) == get_attr_mve_unpredicated_insn (INSN)) \
+
#define ARM_SIGN_EXTEND(x) ((HOST_WIDE_INT) \
(HOST_BITS_PER_WIDE_INT <= 32 ? (unsigned HOST_WIDE_INT) (x) \
: ((((unsigned HOST_WIDE_INT)(x)) & (unsigned HOST_WIDE_INT) 0xffffffff) |\
diff --git a/gcc/config/arm/arm.md b/gcc/config/arm/arm.md
index cbfc4543531..e9794375187 100644
--- a/gcc/config/arm/arm.md
+++ b/gcc/config/arm/arm.md
@@ -124,6 +124,8 @@
; and not all ARM insns do.
(define_attr "predicated" "yes,no" (const_string "no"))
+(define_attr "mve_unpredicated_insn" "" (const_int 0))
+
; LENGTH of an instruction (in bytes)
(define_attr "length" ""
(const_int 4))
diff --git a/gcc/config/arm/mve.md b/gcc/config/arm/mve.md
index 9e3570c5264..74b8af8d57e 100644
--- a/gcc/config/arm/mve.md
+++ b/gcc/config/arm/mve.md
@@ -145,7 +145,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_mnemo>.f%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -159,7 +160,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -173,7 +175,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"v<absneg_str>.f%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_v<absneg_str>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -187,7 +190,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.%#<V_sz_elem>\t%q0, %1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -201,7 +205,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
;; [vcvttq_f32_f16])
@@ -214,7 +219,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvtt.f32.f16\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvttq_f32_f16v4sf"))
+ (set_attr "type" "mve_move")
])
;;
@@ -228,7 +234,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvtb.f32.f16\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtbq_f32_f16v4sf"))
+ (set_attr "type" "mve_move")
])
;;
@@ -242,7 +249,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvt.f%#<V_sz_elem>.<supf>%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtq_to_f_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -256,7 +264,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -270,7 +279,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtq_from_f_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -284,7 +294,8 @@
]
"TARGET_HAVE_MVE"
"v<absneg_str>.s%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_v<absneg_str>q_s<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -297,7 +308,8 @@
]
"TARGET_HAVE_MVE"
"vmvn\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmvnq_u<mode>"))
+ (set_attr "type" "mve_move")
])
(define_expand "mve_vmvnq_s<mode>"
[
@@ -318,7 +330,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.%#<V_sz_elem>\t%q0, %1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -331,7 +344,8 @@
]
"TARGET_HAVE_MVE"
"vclz.i%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vclzq_s<mode>"))
+ (set_attr "type" "mve_move")
])
(define_expand "mve_vclzq_u<mode>"
[
@@ -354,7 +368,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -368,7 +383,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -382,7 +398,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -397,7 +414,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -411,7 +429,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvtp.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtpq_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -425,7 +444,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvtn.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtnq_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -439,7 +459,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvtm.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtmq_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -453,7 +474,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvta.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtaq_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -467,7 +489,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.i%#<V_sz_elem>\t%q0, %1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -481,7 +504,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<V_sz_elem>\t%q0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -495,7 +519,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>32\t%Q0, %R0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf>v4si"))
+ (set_attr "type" "mve_move")
])
;;
@@ -509,7 +534,8 @@
]
"TARGET_HAVE_MVE"
"vctp.<MVE_vctp>\t%1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vctp<MVE_vctp>q<MVE_vpred>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -538,7 +564,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.<V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -553,7 +580,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvt.f<V_sz_elem>.<supf><V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtq_n_to_f_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;; [vcreateq_f])
@@ -599,7 +627,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf><V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;; Versions that take constant vectors as operand 2 (with all elements
@@ -647,7 +676,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvt.<supf><V_sz_elem>.f<V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtq_n_from_f_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -662,8 +692,9 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>32\t%Q0, %R0, %q1"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf>v4si"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")])
;;
;; [vcmpneq_, vcmpcsq_, vcmpeqq_, vcmpgeq_, vcmpgtq_, vcmphiq_, vcmpleq_, vcmpltq_])
@@ -676,7 +707,8 @@
]
"TARGET_HAVE_MVE"
"vcmp.<mve_cmp_type>%#<V_sz_elem>\t<mve_cmp_op>, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmp<mve_cmp_op>q_<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -691,7 +723,8 @@
]
"TARGET_HAVE_MVE"
"vcmp.<mve_cmp_type>%#<V_sz_elem> <mve_cmp_op>, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmp<mve_cmp_op>q_n_<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -722,7 +755,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -739,7 +773,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.i%#<V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -754,7 +789,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -769,7 +805,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -789,7 +826,8 @@
"@
vand\t%q0, %q1, %q2
* return neon_output_logic_immediate (\"vand\", &operands[2], <MODE>mode, 1, VALID_NEON_QREG_MODE (<MODE>mode));"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vandq_u<mode>"))
+ (set_attr "type" "mve_move")
])
(define_expand "mve_vandq_s<mode>"
[
@@ -811,7 +849,8 @@
]
"TARGET_HAVE_MVE"
"vbic\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vbicq_u<mode>"))
+ (set_attr "type" "mve_move")
])
(define_expand "mve_vbicq_s<mode>"
@@ -835,7 +874,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.%#<V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -850,7 +890,8 @@
]
"TARGET_HAVE_MVE"
"vcadd.i%#<V_sz_elem> %q0, %q1, %q2, #<rot>"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcaddq<mve_rot><mode>"))
+ (set_attr "type" "mve_move")
])
;; Auto vectorizer pattern for int vcadd
@@ -873,7 +914,8 @@
]
"TARGET_HAVE_MVE"
"veor\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_veorq_u<mode>"))
+ (set_attr "type" "mve_move")
])
(define_expand "mve_veorq_s<mode>"
[
@@ -901,7 +943,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -916,7 +959,8 @@
]
"TARGET_HAVE_MVE"
"vhcadd.s%#<V_sz_elem>\t%q0, %q1, %q2, #270"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vhcaddq_rot270_s<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -931,7 +975,8 @@
]
"TARGET_HAVE_MVE"
"vhcadd.s%#<V_sz_elem>\t%q0, %q1, %q2, #90"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vhcaddq_rot90_s<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -947,7 +992,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.s%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -962,7 +1008,8 @@
]
"TARGET_HAVE_MVE"
"<max_min_su_str>.<max_min_supf>%#<V_sz_elem>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<max_min_su_str>q_<max_min_supf><mode>"))
+ (set_attr "type" "mve_move")
])
@@ -981,7 +1028,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -999,7 +1047,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1014,7 +1063,8 @@
]
"TARGET_HAVE_MVE"
"vmullb.<supf>%#<V_sz_elem>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmullbq_int_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1029,7 +1079,8 @@
]
"TARGET_HAVE_MVE"
"vmullt.<supf>%#<V_sz_elem>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmulltq_int_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1045,7 +1096,8 @@
]
"TARGET_HAVE_MVE"
"<mve_addsubmul>.i%#<V_sz_elem>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_addsubmul>q<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1059,7 +1111,8 @@
]
"TARGET_HAVE_MVE"
"vorn\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vornq_s<mode>"))
+ (set_attr "type" "mve_move")
])
(define_expand "mve_vornq_u<mode>"
@@ -1088,8 +1141,10 @@
"@
vorr\t%q0, %q1, %q2
* return neon_output_logic_immediate (\"vorr\", &operands[2], <MODE>mode, 0, VALID_NEON_QREG_MODE (<MODE>mode));"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vorrq_s<mode>"))
+ (set_attr "type" "mve_move")
])
+
(define_expand "mve_vorrq_u<mode>"
[
(set (match_operand:MVE_2 0 "s_register_operand")
@@ -1112,7 +1167,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1128,7 +1184,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1144,7 +1201,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_r_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1159,7 +1217,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1174,7 +1233,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.f%#<V_sz_elem>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1189,7 +1249,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>32\t%Q0, %R0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf>v4si"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1206,7 +1267,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.f%#<V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1220,7 +1282,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vand\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vandq_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1234,7 +1297,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vbic\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vbicq_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1249,7 +1313,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcadd.f%#<V_sz_elem> %q0, %q1, %q2, #<rot>"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcaddq<mve_rot><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1263,7 +1328,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcmp.f%#<V_sz_elem> <mve_cmp_op>, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmp<mve_cmp_op>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1278,7 +1344,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcmp.f%#<V_sz_elem> <mve_cmp_op>, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmp<mve_cmp_op>q_n_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1293,7 +1360,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcmul.f%#<V_sz_elem> %q0, %q1, %q2, #<rot>"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmulq<mve_rot><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1308,8 +1376,10 @@
]
"TARGET_HAVE_MVE"
"vpst\;vctpt.<MVE_vctp>\t%1"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vctp<MVE_vctp>q<MVE_vpred>"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")
+])
;;
;; [vcvtbq_f16_f32])
@@ -1323,7 +1393,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvtb.f16.f32\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtbq_f16_f32v8hf"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1338,7 +1409,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vcvtt.f16.f32\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvttq_f16_f32v8hf"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1386,7 +1458,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.f%#<V_sz_elem>\t%0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1401,7 +1474,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<max_min_f_str>.f%#<V_sz_elem> %q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<max_min_f_str>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1419,7 +1493,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%Q0, %R0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1439,7 +1514,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<isu>%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1455,7 +1531,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_addsubmul>.f%#<V_sz_elem>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_addsubmul>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1469,7 +1546,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vorn\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vornq_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1483,7 +1561,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vorr\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vorrq_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1499,7 +1578,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.i%#<V_sz_elem> %q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1515,7 +1595,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.s%#<V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1531,7 +1612,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.s%#<V_sz_elem>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1549,7 +1631,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>32\t%Q0, %R0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf>v4si"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1565,7 +1648,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1580,7 +1664,8 @@
]
"TARGET_HAVE_MVE"
"vmullt.p%#<V_sz_elem>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmulltq_poly_p<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1595,7 +1680,8 @@
]
"TARGET_HAVE_MVE"
"vmullb.p%#<V_sz_elem>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmullbq_poly_p<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1616,8 +1702,9 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmpt.f%#<V_sz_elem>\t<mve_cmp_op1>, %q1, %q2"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmp<mve_cmp_op1>q_f<mode>"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")])
;;
;; [vcvtaq_m_u, vcvtaq_m_s])
;;
@@ -1631,8 +1718,10 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtat.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtaq_<supf><mode>"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")])
+
;;
;; [vcvtq_m_to_f_s, vcvtq_m_to_f_u])
;;
@@ -1646,8 +1735,9 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtt.f%#<V_sz_elem>.<supf>%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtq_to_f_<supf><mode>"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")])
;;
;; [vqrshrnbq_n_u, vqrshrnbq_n_s]
@@ -1673,7 +1763,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<isu>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1692,7 +1783,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>32\t%Q0, %R0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf>v4si"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1708,7 +1800,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1754,7 +1847,10 @@
(match_dup 4)]
VSHLCQ))]
"TARGET_HAVE_MVE"
- "vshlc\t%q0, %1, %4")
+ "vshlc\t%q0, %1, %4"
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vshlcq_<supf><mode>"))
+ (set_attr "type" "mve_move")
+])
;;
;; [vabsq_m_s]
@@ -1774,7 +1870,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<isu>%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1790,7 +1887,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1813,7 +1911,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vcmpt.<isu>%#<V_sz_elem>\t<mve_cmp_op1>, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmp<mve_cmp_op1>q_n_<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1836,7 +1935,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vcmpt.<isu>%#<V_sz_elem>\t<mve_cmp_op1>, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmp<mve_cmp_op1>q_<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1852,7 +1952,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.%#<V_sz_elem>\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1869,7 +1970,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.s%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1888,7 +1990,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1907,7 +2010,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1926,7 +2030,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1947,7 +2052,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -1963,7 +2069,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -1979,7 +2086,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -2002,7 +2110,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.s%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -2019,7 +2128,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2036,7 +2146,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_r_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2052,7 +2163,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2068,7 +2180,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -2084,7 +2197,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -2107,7 +2221,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_mnemo>t.f%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2123,7 +2238,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>32\t%Q0, %R0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
;; [vcmlaq, vcmlaq_rot90, vcmlaq_rot180, vcmlaq_rot270])
@@ -2141,7 +2257,8 @@
"@
vcmul.f%#<V_sz_elem> %q0, %q2, %q3, #<rot>
vcmla.f%#<V_sz_elem> %q0, %q2, %q3, #<rot>"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmlaq<mve_rot><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -2162,7 +2279,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmpt.f%#<V_sz_elem>\t<mve_cmp_op1>, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmp<mve_cmp_op1>q_n_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2178,7 +2296,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtbt.f16.f32\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtbq_f16_f32v8hf"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2194,7 +2313,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtbt.f32.f16\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtbq_f32_f16v4sf"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2210,7 +2330,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvttt.f16.f32\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvttq_f16_f32v8hf"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2226,8 +2347,9 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvttt.f32.f16\t%q0, %q2"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvttq_f32_f16v4sf"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")])
;;
;; [vdupq_m_n_f])
@@ -2242,7 +2364,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t.%#<V_sz_elem>\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2259,7 +2382,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.f%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -2276,7 +2400,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>.f%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -2293,7 +2418,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t.f%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2312,7 +2438,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t.f%#<V_sz_elem>\t%0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2331,7 +2458,8 @@
]
"TARGET_HAVE_MVE"
"<mve_insn>.<supf>%#<V_sz_elem>\t%Q0, %R0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -2350,7 +2478,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%Q0, %R0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2367,7 +2496,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2388,7 +2518,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<isu>%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2404,7 +2535,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.i%#<V_sz_elem>\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2421,7 +2553,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.i%#<V_sz_elem>\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2437,7 +2570,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"<mve_insn>\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -2453,7 +2587,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t.<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2469,7 +2604,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2485,7 +2621,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t.%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2504,7 +2641,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>32\t%Q0, %R0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2520,7 +2658,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtmt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtmq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2536,7 +2675,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtpt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtpq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2552,7 +2692,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtnt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtnq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2569,7 +2710,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtq_n_from_f_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2585,7 +2727,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2601,8 +2744,9 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtq_from_f_<supf><mode>"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")])
;;
;; [vabavq_p_s, vabavq_p_u])
@@ -2618,7 +2762,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length" "8")])
;;
@@ -2635,8 +2780,9 @@
]
"TARGET_HAVE_MVE"
"vpst\n\t<mve_insn>t.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
- (set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
+ (set_attr "length" "8")])
;;
;; [vsriq_m_n_s, vsriq_m_n_u])
@@ -2652,8 +2798,9 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
- (set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
+ (set_attr "length" "8")])
;;
;; [vcvtq_m_n_to_f_u, vcvtq_m_n_to_f_s])
@@ -2669,7 +2816,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtt.f%#<V_sz_elem>.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtq_n_to_f_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2709,7 +2857,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2728,8 +2877,9 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.i%#<V_sz_elem> %q0, %q2, %3"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")])
;;
;; [vaddq_m_u, vaddq_m_s]
@@ -2747,7 +2897,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.i%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2767,7 +2918,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2784,8 +2936,9 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")])
;;
;; [vcaddq_rot270_m_u, vcaddq_rot270_m_s])
@@ -2801,7 +2954,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vcaddt.i%#<V_sz_elem> %q0, %q2, %q3, #270"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcaddq_rot270<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2818,7 +2972,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vcaddt.i%#<V_sz_elem> %q0, %q2, %q3, #90"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcaddq_rot90<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2846,7 +3001,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2867,7 +3023,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2884,7 +3041,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmullbt.<supf>%#<V_sz_elem> %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmullbq_int_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2901,7 +3059,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmulltt.<supf>%#<V_sz_elem> %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmulltq_int_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2918,7 +3077,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vornt\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vornq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2936,7 +3096,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2954,7 +3115,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2971,7 +3133,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2988,7 +3151,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vhcaddt.s%#<V_sz_elem>\t%q0, %q2, %q3, #270"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vhcaddq_rot270_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3005,7 +3169,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vhcaddt.s%#<V_sz_elem>\t%q0, %q2, %q3, #90"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vhcaddq_rot90_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3025,7 +3190,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%Q0, %R0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3053,7 +3219,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<isu>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3073,7 +3240,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>32\t%Q0, %R0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3091,7 +3259,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3125,7 +3294,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmulltt.p%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmulltq_poly_p<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3143,7 +3313,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.s%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3161,7 +3332,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;<mve_insn>t.s%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3185,7 +3357,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t.f%#<V_sz_elem> %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3206,7 +3379,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t.f%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3226,7 +3400,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3243,7 +3418,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;<mve_insn>t.%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_<mve_insn>q_n_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3260,7 +3436,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcaddt.f%#<V_sz_elem> %q0, %q2, %q3, #270"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcaddq_rot270<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3277,7 +3454,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcaddt.f%#<V_sz_elem> %q0, %q2, %q3, #90"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcaddq_rot90<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3294,7 +3472,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmlat.f%#<V_sz_elem> %q0, %q2, %q3, #0"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmlaq<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3311,7 +3490,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmlat.f%#<V_sz_elem> %q0, %q2, %q3, #180"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmlaq_rot180<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3328,7 +3508,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmlat.f%#<V_sz_elem> %q0, %q2, %q3, #270"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmlaq_rot270<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3345,7 +3526,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmlat.f%#<V_sz_elem> %q0, %q2, %q3, #90"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmlaq_rot90<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3362,7 +3544,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmult.f%#<V_sz_elem> %q0, %q2, %q3, #0"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmulq<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3379,7 +3562,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmult.f%#<V_sz_elem> %q0, %q2, %q3, #180"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmulq_rot180<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3396,7 +3580,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmult.f%#<V_sz_elem> %q0, %q2, %q3, #270"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmulq_rot270<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3413,7 +3598,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmult.f%#<V_sz_elem> %q0, %q2, %q3, #90"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmulq_rot90<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3430,7 +3616,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vornt\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vornq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3450,7 +3637,8 @@
output_asm_insn("vstrb.<V_sz_elem>\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrbq_<supf><mode>"))
+ (set_attr "length" "4")])
;;
;; [vstrbq_scatter_offset_s vstrbq_scatter_offset_u]
@@ -3478,7 +3666,8 @@
VSTRBSOQ))]
"TARGET_HAVE_MVE"
"vstrb.<V_sz_elem>\t%q2, [%0, %q1]"
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrbq_scatter_offset_<supf><mode>_insn"))
+ (set_attr "length" "4")])
;;
;; [vstrwq_scatter_base_s vstrwq_scatter_base_u]
@@ -3500,7 +3689,8 @@
output_asm_insn("vstrw.u32\t%q2, [%q0, %1]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_<supf>v4si"))
+ (set_attr "length" "4")])
;;
;; [vldrbq_gather_offset_s vldrbq_gather_offset_u]
@@ -3523,7 +3713,8 @@
output_asm_insn ("vldrb.<supf><V_sz_elem>\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrbq_gather_offset_<supf><mode>"))
+ (set_attr "length" "4")])
;;
;; [vldrbq_s vldrbq_u]
@@ -3545,7 +3736,8 @@
output_asm_insn ("vldrb.<supf><V_sz_elem>\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrbq_<supf><mode>"))
+ (set_attr "length" "4")])
;;
;; [vldrwq_gather_base_s vldrwq_gather_base_u]
@@ -3565,7 +3757,8 @@
output_asm_insn ("vldrw.u32\t%q0, [%q1, %2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_<supf>v4si"))
+ (set_attr "length" "4")])
;;
;; [vstrbq_scatter_offset_p_s vstrbq_scatter_offset_p_u]
@@ -3597,7 +3790,8 @@
VSTRBSOQ))]
"TARGET_HAVE_MVE"
"vpst\;vstrbt.<V_sz_elem>\t%q2, [%0, %q1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrbq_scatter_offset_<supf><mode>_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_base_p_s vstrwq_scatter_base_p_u]
@@ -3620,7 +3814,8 @@
output_asm_insn ("vpst\n\tvstrwt.u32\t%q2, [%q0, %1]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_<supf>v4si"))
+ (set_attr "length" "8")])
(define_insn "mve_vstrbq_p_<supf><mode>"
[(set (match_operand:<MVE_B_ELEM> 0 "mve_memory_operand" "=Ux")
@@ -3638,7 +3833,8 @@
output_asm_insn ("vpst\;vstrbt.<V_sz_elem>\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrbq_<supf><mode>"))
+ (set_attr "length" "8")])
;;
;; [vldrbq_gather_offset_z_s vldrbq_gather_offset_z_u]
@@ -3663,7 +3859,8 @@
output_asm_insn ("vpst\n\tvldrbt.<supf><V_sz_elem>\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrbq_gather_offset_<supf><mode>"))
+ (set_attr "length" "8")])
;;
;; [vldrbq_z_s vldrbq_z_u]
@@ -3686,7 +3883,8 @@
output_asm_insn ("vpst\;vldrbt.<supf><V_sz_elem>\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrbq_<supf><mode>"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_gather_base_z_s vldrwq_gather_base_z_u]
@@ -3707,7 +3905,8 @@
output_asm_insn ("vpst\n\tvldrwt.u32\t%q0, [%q1, %2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_<supf>v4si"))
+ (set_attr "length" "8")])
;;
;; [vldrhq_f]
@@ -3726,7 +3925,8 @@
output_asm_insn ("vldrh.16\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_fv8hf"))
+ (set_attr "length" "4")])
;;
;; [vldrhq_gather_offset_s vldrhq_gather_offset_u]
@@ -3749,7 +3949,8 @@
output_asm_insn ("vldrh.<supf><V_sz_elem>\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_offset_<supf><mode>"))
+ (set_attr "length" "4")])
;;
;; [vldrhq_gather_offset_z_s vldrhq_gather_offset_z_u]
@@ -3774,7 +3975,8 @@
output_asm_insn ("vpst\n\tvldrht.<supf><V_sz_elem>\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_offset_<supf><mode>"))
+ (set_attr "length" "8")])
;;
;; [vldrhq_gather_shifted_offset_s vldrhq_gather_shifted_offset_u]
@@ -3797,7 +3999,8 @@
output_asm_insn ("vldrh.<supf><V_sz_elem>\t%q0, [%m1, %q2, uxtw #1]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_shifted_offset_<supf><mode>"))
+ (set_attr "length" "4")])
;;
;; [vldrhq_gather_shifted_offset_z_s vldrhq_gather_shited_offset_z_u]
@@ -3822,7 +4025,8 @@
output_asm_insn ("vpst\n\tvldrht.<supf><V_sz_elem>\t%q0, [%m1, %q2, uxtw #1]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_shifted_offset_<supf><mode>"))
+ (set_attr "length" "8")])
;;
;; [vldrhq_s, vldrhq_u]
@@ -3844,7 +4048,8 @@
output_asm_insn ("vldrh.<supf><V_sz_elem>\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_<supf><mode>"))
+ (set_attr "length" "4")])
;;
;; [vldrhq_z_f]
@@ -3864,7 +4069,8 @@
output_asm_insn ("vpst\;vldrht.16\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_fv8hf"))
+ (set_attr "length" "8")])
;;
;; [vldrhq_z_s vldrhq_z_u]
@@ -3887,7 +4093,8 @@
output_asm_insn ("vpst\;vldrht.<supf><V_sz_elem>\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_<supf><mode>"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_f]
@@ -3906,7 +4113,8 @@
output_asm_insn ("vldrw.32\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_fv4sf"))
+ (set_attr "length" "4")])
;;
;; [vldrwq_s vldrwq_u]
@@ -3925,7 +4133,8 @@
output_asm_insn ("vldrw.32\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_<supf>v4si"))
+ (set_attr "length" "4")])
;;
;; [vldrwq_z_f]
@@ -3945,7 +4154,8 @@
output_asm_insn ("vpst\;vldrwt.32\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_fv4sf"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_z_s vldrwq_z_u]
@@ -3965,7 +4175,8 @@
output_asm_insn ("vpst\;vldrwt.32\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_<supf>v4si"))
+ (set_attr "length" "8")])
(define_expand "mve_vld1q_f<mode>"
[(match_operand:MVE_0 0 "s_register_operand")
@@ -4005,7 +4216,8 @@
output_asm_insn ("vldrd.64\t%q0, [%q1, %2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_base_<supf>v2di"))
+ (set_attr "length" "4")])
;;
;; [vldrdq_gather_base_z_s vldrdq_gather_base_z_u]
@@ -4026,7 +4238,8 @@
output_asm_insn ("vpst\n\tvldrdt.u64\t%q0, [%q1, %2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_base_<supf>v2di"))
+ (set_attr "length" "8")])
;;
;; [vldrdq_gather_offset_s vldrdq_gather_offset_u]
@@ -4046,7 +4259,8 @@
output_asm_insn ("vldrd.u64\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_offset_<supf>v2di"))
+ (set_attr "length" "4")])
;;
;; [vldrdq_gather_offset_z_s vldrdq_gather_offset_z_u]
@@ -4067,7 +4281,8 @@
output_asm_insn ("vpst\n\tvldrdt.u64\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_offset_<supf>v2di"))
+ (set_attr "length" "8")])
;;
;; [vldrdq_gather_shifted_offset_s vldrdq_gather_shifted_offset_u]
@@ -4087,7 +4302,8 @@
output_asm_insn ("vldrd.u64\t%q0, [%m1, %q2, uxtw #3]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_shifted_offset_<supf>v2di"))
+ (set_attr "length" "4")])
;;
;; [vldrdq_gather_shifted_offset_z_s vldrdq_gather_shifted_offset_z_u]
@@ -4108,7 +4324,8 @@
output_asm_insn ("vpst\n\tvldrdt.u64\t%q0, [%m1, %q2, uxtw #3]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_shifted_offset_<supf>v2di"))
+ (set_attr "length" "8")])
;;
;; [vldrhq_gather_offset_f]
@@ -4128,7 +4345,8 @@
output_asm_insn ("vldrh.f16\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_offset_fv8hf"))
+ (set_attr "length" "4")])
;;
;; [vldrhq_gather_offset_z_f]
@@ -4150,7 +4368,8 @@
output_asm_insn ("vpst\n\tvldrht.f16\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_offset_fv8hf"))
+ (set_attr "length" "8")])
;;
;; [vldrhq_gather_shifted_offset_f]
@@ -4170,7 +4389,8 @@
output_asm_insn ("vldrh.f16\t%q0, [%m1, %q2, uxtw #1]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_shifted_offset_fv8hf"))
+ (set_attr "length" "4")])
;;
;; [vldrhq_gather_shifted_offset_z_f]
@@ -4192,7 +4412,8 @@
output_asm_insn ("vpst\n\tvldrht.f16\t%q0, [%m1, %q2, uxtw #1]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_shifted_offset_fv8hf"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_gather_base_f]
@@ -4212,7 +4433,8 @@
output_asm_insn ("vldrw.u32\t%q0, [%q1, %2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_fv4sf"))
+ (set_attr "length" "4")])
;;
;; [vldrwq_gather_base_z_f]
@@ -4233,7 +4455,8 @@
output_asm_insn ("vpst\n\tvldrwt.u32\t%q0, [%q1, %2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_fv4sf"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_gather_offset_f]
@@ -4253,7 +4476,8 @@
output_asm_insn ("vldrw.u32\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_offset_fv4sf"))
+ (set_attr "length" "4")])
;;
;; [vldrwq_gather_offset_s vldrwq_gather_offset_u]
@@ -4273,7 +4497,8 @@
output_asm_insn ("vldrw.u32\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_offset_<supf>v4si"))
+ (set_attr "length" "4")])
;;
;; [vldrwq_gather_offset_z_f]
@@ -4295,7 +4520,8 @@
output_asm_insn ("vpst\n\tvldrwt.u32\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_offset_fv4sf"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_gather_offset_z_s vldrwq_gather_offset_z_u]
@@ -4317,7 +4543,8 @@
output_asm_insn ("vpst\n\tvldrwt.u32\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_offset_<supf>v4si"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_gather_shifted_offset_f]
@@ -4337,7 +4564,8 @@
output_asm_insn ("vldrw.u32\t%q0, [%m1, %q2, uxtw #2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_shifted_offset_fv4sf"))
+ (set_attr "length" "4")])
;;
;; [vldrwq_gather_shifted_offset_s vldrwq_gather_shifted_offset_u]
@@ -4357,7 +4585,8 @@
output_asm_insn ("vldrw.u32\t%q0, [%m1, %q2, uxtw #2]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_shifted_offset_<supf>v4si"))
+ (set_attr "length" "4")])
;;
;; [vldrwq_gather_shifted_offset_z_f]
@@ -4379,7 +4608,8 @@
output_asm_insn ("vpst\n\tvldrwt.u32\t%q0, [%m1, %q2, uxtw #2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_shifted_offset_fv4sf"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_gather_shifted_offset_z_s vldrwq_gather_shifted_offset_z_u]
@@ -4401,7 +4631,8 @@
output_asm_insn ("vpst\n\tvldrwt.u32\t%q0, [%m1, %q2, uxtw #2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_shifted_offset_<supf>v4si"))
+ (set_attr "length" "8")])
;;
;; [vstrhq_f]
@@ -4420,7 +4651,8 @@
output_asm_insn ("vstrh.16\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_fv8hf"))
+ (set_attr "length" "4")])
;;
;; [vstrhq_p_f]
@@ -4441,7 +4673,8 @@
output_asm_insn ("vpst\;vstrht.16\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_fv8hf"))
+ (set_attr "length" "8")])
;;
;; [vstrhq_p_s vstrhq_p_u]
@@ -4463,7 +4696,8 @@
output_asm_insn ("vpst\;vstrht.<V_sz_elem>\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_<supf><mode>"))
+ (set_attr "length" "8")])
;;
;; [vstrhq_scatter_offset_p_s vstrhq_scatter_offset_p_u]
@@ -4495,7 +4729,8 @@
VSTRHSOQ))]
"TARGET_HAVE_MVE"
"vpst\;vstrht.<V_sz_elem>\t%q2, [%0, %q1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_offset_<supf><mode>_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrhq_scatter_offset_s vstrhq_scatter_offset_u]
@@ -4523,7 +4758,8 @@
VSTRHSOQ))]
"TARGET_HAVE_MVE"
"vstrh.<V_sz_elem>\t%q2, [%0, %q1]"
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_offset_<supf><mode>_insn"))
+ (set_attr "length" "4")])
;;
;; [vstrhq_scatter_shifted_offset_p_s vstrhq_scatter_shifted_offset_p_u]
@@ -4555,7 +4791,8 @@
VSTRHSSOQ))]
"TARGET_HAVE_MVE"
"vpst\;vstrht.<V_sz_elem>\t%q2, [%0, %q1, uxtw #1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_shifted_offset_<supf><mode>_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrhq_scatter_shifted_offset_s vstrhq_scatter_shifted_offset_u]
@@ -4584,7 +4821,8 @@
VSTRHSSOQ))]
"TARGET_HAVE_MVE"
"vstrh.<V_sz_elem>\t%q2, [%0, %q1, uxtw #1]"
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_shifted_offset_<supf><mode>_insn"))
+ (set_attr "length" "4")])
;;
;; [vstrhq_s, vstrhq_u]
@@ -4603,7 +4841,8 @@
output_asm_insn ("vstrh.<V_sz_elem>\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_<supf><mode>"))
+ (set_attr "length" "4")])
;;
;; [vstrwq_f]
@@ -4622,7 +4861,8 @@
output_asm_insn ("vstrw.32\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_fv4sf"))
+ (set_attr "length" "4")])
;;
;; [vstrwq_p_f]
@@ -4643,7 +4883,8 @@
output_asm_insn ("vpst\;vstrwt.32\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_fv4sf"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_p_s vstrwq_p_u]
@@ -4664,7 +4905,8 @@
output_asm_insn ("vpst\;vstrwt.32\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_<supf>v4si"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_s vstrwq_u]
@@ -4683,7 +4925,8 @@
output_asm_insn ("vstrw.32\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_<supf>v4si"))
+ (set_attr "length" "4")])
(define_expand "mve_vst1q_f<mode>"
[(match_operand:<MVE_CNVT> 0 "mve_memory_operand")
@@ -4726,7 +4969,8 @@
output_asm_insn ("vpst\;\tvstrdt.u64\t%q2, [%q0, %1]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_base_<supf>v2di"))
+ (set_attr "length" "8")])
;;
;; [vstrdq_scatter_base_s vstrdq_scatter_base_u]
@@ -4748,7 +4992,8 @@
output_asm_insn ("vstrd.u64\t%q2, [%q0, %1]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_base_<supf>v2di"))
+ (set_attr "length" "4")])
;;
;; [vstrdq_scatter_offset_p_s vstrdq_scatter_offset_p_u]
@@ -4779,7 +5024,8 @@
VSTRDSOQ))]
"TARGET_HAVE_MVE"
"vpst\;vstrdt.64\t%q2, [%0, %q1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_offset_<supf>v2di_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrdq_scatter_offset_s vstrdq_scatter_offset_u]
@@ -4807,7 +5053,8 @@
VSTRDSOQ))]
"TARGET_HAVE_MVE"
"vstrd.64\t%q2, [%0, %q1]"
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_offset_<supf>v2di_insn"))
+ (set_attr "length" "4")])
;;
;; [vstrdq_scatter_shifted_offset_p_s vstrdq_scatter_shifted_offset_p_u]
@@ -4839,7 +5086,8 @@
VSTRDSSOQ))]
"TARGET_HAVE_MVE"
"vpst\;vstrdt.64\t%q2, [%0, %q1, uxtw #3]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_shifted_offset_<supf>v2di_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrdq_scatter_shifted_offset_s vstrdq_scatter_shifted_offset_u]
@@ -4868,7 +5116,8 @@
VSTRDSSOQ))]
"TARGET_HAVE_MVE"
"vstrd.64\t%q2, [%0, %q1, uxtw #3]"
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_shifted_offset_<supf>v2di_insn"))
+ (set_attr "length" "4")])
;;
;; [vstrhq_scatter_offset_f]
@@ -4896,7 +5145,8 @@
VSTRHQSO_F))]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vstrh.16\t%q2, [%0, %q1]"
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_offset_fv8hf_insn"))
+ (set_attr "length" "4")])
;;
;; [vstrhq_scatter_offset_p_f]
@@ -4927,7 +5177,8 @@
VSTRHQSO_F))]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vstrht.16\t%q2, [%0, %q1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_offset_fv8hf_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrhq_scatter_shifted_offset_f]
@@ -4955,7 +5206,8 @@
VSTRHQSSO_F))]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vstrh.16\t%q2, [%0, %q1, uxtw #1]"
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_shifted_offset_fv8hf_insn"))
+ (set_attr "length" "4")])
;;
;; [vstrhq_scatter_shifted_offset_p_f]
@@ -4987,7 +5239,8 @@
VSTRHQSSO_F))]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vstrht.16\t%q2, [%0, %q1, uxtw #1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_shifted_offset_fv8hf_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_base_f]
@@ -5009,7 +5262,8 @@
output_asm_insn ("vstrw.u32\t%q2, [%q0, %1]",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_fv4sf"))
+ (set_attr "length" "4")])
;;
;; [vstrwq_scatter_base_p_f]
@@ -5032,7 +5286,8 @@
output_asm_insn ("vpst\n\tvstrwt.u32\t%q2, [%q0, %1]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_fv4sf"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_offset_f]
@@ -5060,7 +5315,8 @@
VSTRWQSO_F))]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vstrw.32\t%q2, [%0, %q1]"
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_offset_fv4sf_insn"))
+ (set_attr "length" "4")])
;;
;; [vstrwq_scatter_offset_p_f]
@@ -5091,7 +5347,8 @@
VSTRWQSO_F))]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vstrwt.32\t%q2, [%0, %q1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_offset_fv4sf_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_offset_s vstrwq_scatter_offset_u]
@@ -5122,7 +5379,8 @@
VSTRWSOQ))]
"TARGET_HAVE_MVE"
"vpst\;vstrwt.32\t%q2, [%0, %q1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_offset_<supf>v4si_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_offset_s vstrwq_scatter_offset_u]
@@ -5150,7 +5408,8 @@
VSTRWSOQ))]
"TARGET_HAVE_MVE"
"vstrw.32\t%q2, [%0, %q1]"
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_offset_<supf>v4si_insn"))
+ (set_attr "length" "4")])
;;
;; [vstrwq_scatter_shifted_offset_f]
@@ -5178,7 +5437,8 @@
VSTRWQSSO_F))]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vstrw.32\t%q2, [%0, %q1, uxtw #2]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_shifted_offset_fv4sf_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_shifted_offset_p_f]
@@ -5210,7 +5470,8 @@
VSTRWQSSO_F))]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vstrwt.32\t%q2, [%0, %q1, uxtw #2]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_shifted_offset_fv4sf_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_shifted_offset_p_s vstrwq_scatter_shifted_offset_p_u]
@@ -5242,7 +5503,8 @@
VSTRWSSOQ))]
"TARGET_HAVE_MVE"
"vpst\;vstrwt.32\t%q2, [%0, %q1, uxtw #2]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_shifted_offset_<supf>v4si_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_shifted_offset_s vstrwq_scatter_shifted_offset_u]
@@ -5271,7 +5533,8 @@
VSTRWSSOQ))]
"TARGET_HAVE_MVE"
"vstrw.32\t%q2, [%0, %q1, uxtw #2]"
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_shifted_offset_<supf>v4si_insn"))
+ (set_attr "length" "4")])
;;
;; [vidupq_n_u])
@@ -5339,7 +5602,8 @@
(match_operand:SI 6 "immediate_operand" "i")))]
"TARGET_HAVE_MVE"
"vpst\;\tvidupt.u%#<V_sz_elem>\t%q0, %2, %4"
- [(set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vidupq_u<mode>_insn"))
+ (set_attr "length""8")])
;;
;; [vddupq_n_u])
@@ -5407,7 +5671,8 @@
(match_operand:SI 6 "immediate_operand" "i")))]
"TARGET_HAVE_MVE"
"vpst\;vddupt.u%#<V_sz_elem>\t%q0, %2, %4"
- [(set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vddupq_u<mode>_insn"))
+ (set_attr "length""8")])
;;
;; [vdwdupq_n_u])
@@ -5523,8 +5788,9 @@
]
"TARGET_HAVE_MVE"
"vpst\;vdwdupt.u%#<V_sz_elem>\t%q2, %3, %R4, %5"
- [(set_attr "type" "mve_move")
- (set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vdwdupq_wb_u<mode>_insn"))
+ (set_attr "type" "mve_move")
+ (set_attr "length""8")])
;;
;; [viwdupq_n_u])
@@ -5640,7 +5906,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;\tviwdupt.u%#<V_sz_elem>\t%q2, %3, %R4, %5"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_viwdupq_wb_u<mode>_insn"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5666,7 +5933,8 @@
output_asm_insn ("vstrw.u32\t%q2, [%q0, %1]!",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_wb_<supf>v4si"))
+ (set_attr "length" "4")])
;;
;; [vstrwq_scatter_base_wb_p_s vstrwq_scatter_base_wb_p_u]
@@ -5692,7 +5960,8 @@
output_asm_insn ("vpst\;\tvstrwt.u32\t%q2, [%q0, %1]!",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_wb_<supf>v4si"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_base_wb_f]
@@ -5717,7 +5986,8 @@
output_asm_insn ("vstrw.u32\t%q2, [%q0, %1]!",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_wb_fv4sf"))
+ (set_attr "length" "4")])
;;
;; [vstrwq_scatter_base_wb_p_f]
@@ -5743,7 +6013,8 @@
output_asm_insn ("vpst\;vstrwt.u32\t%q2, [%q0, %1]!",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_wb_fv4sf"))
+ (set_attr "length" "8")])
;;
;; [vstrdq_scatter_base_wb_s vstrdq_scatter_base_wb_u]
@@ -5768,7 +6039,8 @@
output_asm_insn ("vstrd.u64\t%q2, [%q0, %1]!",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_base_wb_<supf>v2di"))
+ (set_attr "length" "4")])
;;
;; [vstrdq_scatter_base_wb_p_s vstrdq_scatter_base_wb_p_u]
@@ -5794,7 +6066,8 @@
output_asm_insn ("vpst\;vstrdt.u64\t%q2, [%q0, %1]!",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_base_wb_<supf>v2di"))
+ (set_attr "length" "8")])
(define_expand "mve_vldrwq_gather_base_wb_<supf>v4si"
[(match_operand:V4SI 0 "s_register_operand")
@@ -5846,7 +6119,8 @@
output_asm_insn ("vldrw.u32\t%q0, [%q1, %2]!",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_wb_<supf>v4si_insn"))
+ (set_attr "length" "4")])
(define_expand "mve_vldrwq_gather_base_wb_z_<supf>v4si"
[(match_operand:V4SI 0 "s_register_operand")
@@ -5902,7 +6176,8 @@
output_asm_insn ("vpst\;vldrwt.u32\t%q0, [%q1, %2]!",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_wb_<supf>v4si_insn"))
+ (set_attr "length" "8")])
(define_expand "mve_vldrwq_gather_base_wb_fv4sf"
[(match_operand:V4SI 0 "s_register_operand")
@@ -5954,7 +6229,8 @@
output_asm_insn ("vldrw.u32\t%q0, [%q1, %2]!",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_wb_fv4sf_insn"))
+ (set_attr "length" "4")])
(define_expand "mve_vldrwq_gather_base_wb_z_fv4sf"
[(match_operand:V4SI 0 "s_register_operand")
@@ -6011,7 +6287,8 @@
output_asm_insn ("vpst\;vldrwt.u32\t%q0, [%q1, %2]!",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_wb_fv4sf_insn"))
+ (set_attr "length" "8")])
(define_expand "mve_vldrdq_gather_base_wb_<supf>v2di"
[(match_operand:V2DI 0 "s_register_operand")
@@ -6064,7 +6341,8 @@
output_asm_insn ("vldrd.64\t%q0, [%q1, %2]!",ops);
return "";
}
- [(set_attr "length" "4")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_base_wb_<supf>v2di_insn"))
+ (set_attr "length" "4")])
(define_expand "mve_vldrdq_gather_base_wb_z_<supf>v2di"
[(match_operand:V2DI 0 "s_register_operand")
@@ -6103,7 +6381,7 @@
(unspec_volatile:SI [(reg:SI VFPCC_REGNUM)] UNSPEC_GET_FPSCR_NZCVQC))]
"TARGET_HAVE_MVE"
"vmrs\\t%0, FPSCR_nzcvqc"
- [(set_attr "type" "mve_move")])
+ [(set_attr "type" "mve_move")])
(define_insn "set_fpscr_nzcvqc"
[(set (reg:SI VFPCC_REGNUM)
@@ -6111,7 +6389,7 @@
VUNSPEC_SET_FPSCR_NZCVQC))]
"TARGET_HAVE_MVE"
"vmsr\\tFPSCR_nzcvqc, %0"
- [(set_attr "type" "mve_move")])
+ [(set_attr "type" "mve_move")])
;;
;; [vldrdq_gather_base_wb_z_s vldrdq_gather_base_wb_z_u]
@@ -6136,7 +6414,8 @@
output_asm_insn ("vpst\;vldrdt.u64\t%q0, [%q1, %2]!",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_base_wb_<supf>v2di_insn"))
+ (set_attr "length" "8")])
;;
;; [vadciq_m_s, vadciq_m_u])
;;
@@ -6153,7 +6432,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vadcit.i32\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vadciq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length" "8")])
;;
@@ -6170,7 +6450,8 @@
]
"TARGET_HAVE_MVE"
"vadci.i32\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vadciq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length" "4")])
;;
@@ -6189,7 +6470,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vadct.i32\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vadcq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length" "8")])
;;
@@ -6206,7 +6488,8 @@
]
"TARGET_HAVE_MVE"
"vadc.i32\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vadcq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length" "4")
(set_attr "conds" "set")])
@@ -6226,7 +6509,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vsbcit.i32\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vsbciq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length" "8")])
;;
@@ -6243,7 +6527,8 @@
]
"TARGET_HAVE_MVE"
"vsbci.i32\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vsbciq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length" "4")])
;;
@@ -6262,7 +6547,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vsbct.i32\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vsbcq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length" "8")])
;;
@@ -6279,7 +6565,8 @@
]
"TARGET_HAVE_MVE"
"vsbc.i32\t%q0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vsbcq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length" "4")])
;;
@@ -6308,7 +6595,7 @@
"vst21.<V_sz_elem>\t{%q0, %q1}, %3", ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set_attr "length" "8")])
;;
;; [vld2q])
@@ -6336,7 +6623,7 @@
"vld21.<V_sz_elem>\t{%q0, %q1}, %3", ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set_attr "length" "8")])
;;
;; [vld4q])
@@ -6679,7 +6966,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vshlct\t%q0, %1, %4"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vshlcq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length" "8")])
;; CDE instructions on MVE registers.
@@ -6691,7 +6979,8 @@
UNSPEC_VCDE))]
"TARGET_CDE && TARGET_HAVE_MVE"
"vcx1\\tp%c1, %q0, #%c2"
- [(set_attr "type" "coproc")]
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx1qv16qi"))
+ (set_attr "type" "coproc")]
)
(define_insn "arm_vcx1qav16qi"
@@ -6702,7 +6991,8 @@
UNSPEC_VCDEA))]
"TARGET_CDE && TARGET_HAVE_MVE"
"vcx1a\\tp%c1, %q0, #%c3"
- [(set_attr "type" "coproc")]
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx1qav16qi"))
+ (set_attr "type" "coproc")]
)
(define_insn "arm_vcx2qv16qi"
@@ -6713,7 +7003,8 @@
UNSPEC_VCDE))]
"TARGET_CDE && TARGET_HAVE_MVE"
"vcx2\\tp%c1, %q0, %q2, #%c3"
- [(set_attr "type" "coproc")]
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx2qv16qi"))
+ (set_attr "type" "coproc")]
)
(define_insn "arm_vcx2qav16qi"
@@ -6725,7 +7016,8 @@
UNSPEC_VCDEA))]
"TARGET_CDE && TARGET_HAVE_MVE"
"vcx2a\\tp%c1, %q0, %q3, #%c4"
- [(set_attr "type" "coproc")]
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx2qav16qi"))
+ (set_attr "type" "coproc")]
)
(define_insn "arm_vcx3qv16qi"
@@ -6737,7 +7029,8 @@
UNSPEC_VCDE))]
"TARGET_CDE && TARGET_HAVE_MVE"
"vcx3\\tp%c1, %q0, %q2, %q3, #%c4"
- [(set_attr "type" "coproc")]
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx3qv16qi"))
+ (set_attr "type" "coproc")]
)
(define_insn "arm_vcx3qav16qi"
@@ -6750,7 +7043,8 @@
UNSPEC_VCDEA))]
"TARGET_CDE && TARGET_HAVE_MVE"
"vcx3a\\tp%c1, %q0, %q3, %q4, #%c5"
- [(set_attr "type" "coproc")]
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx3qav16qi"))
+ (set_attr "type" "coproc")]
)
(define_insn "arm_vcx1q<a>_p_v16qi"
@@ -6762,7 +7056,8 @@
CDE_VCX))]
"TARGET_CDE && TARGET_HAVE_MVE"
"vpst\;vcx1<a>t\\tp%c1, %q0, #%c3"
- [(set_attr "type" "coproc")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx1q<a>v16qi"))
+ (set_attr "type" "coproc")
(set_attr "length" "8")]
)
@@ -6776,7 +7071,8 @@
CDE_VCX))]
"TARGET_CDE && TARGET_HAVE_MVE"
"vpst\;vcx2<a>t\\tp%c1, %q0, %q3, #%c4"
- [(set_attr "type" "coproc")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx2q<a>v16qi"))
+ (set_attr "type" "coproc")
(set_attr "length" "8")]
)
@@ -6791,7 +7087,8 @@
CDE_VCX))]
"TARGET_CDE && TARGET_HAVE_MVE"
"vpst\;vcx3<a>t\\tp%c1, %q0, %q3, %q4, #%c5"
- [(set_attr "type" "coproc")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx3q<a>v16qi"))
+ (set_attr "type" "coproc")
(set_attr "length" "8")]
)
diff --git a/gcc/config/arm/vec-common.md b/gcc/config/arm/vec-common.md
index 9af8429968d..45b6735b15c 100644
--- a/gcc/config/arm/vec-common.md
+++ b/gcc/config/arm/vec-common.md
@@ -366,7 +366,8 @@
"@
<mve_insn>.<supf>%#<V_sz_elem>\t%<V_reg>0, %<V_reg>1, %<V_reg>2
* return neon_output_shift_immediate (\"vshl\", 'i', &operands[2], <MODE>mode, VALID_NEON_QREG_MODE (<MODE>mode), true);"
- [(set_attr "type" "neon_shift_reg<q>, neon_shift_imm<q>")]
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vshlq_<supf><mode>"))
+ (set_attr "type" "neon_shift_reg<q>, neon_shift_imm<q>")]
)
(define_expand "vashl<mode>3"
^ permalink raw reply [flat|nested] 10+ messages in thread
* [PATCH 1/2] arm: Add define_attr to to create a mapping between MVE predicated and unpredicated insns
@ 2022-11-11 17:39 Stam Markianos-Wright
0 siblings, 0 replies; 10+ messages in thread
From: Stam Markianos-Wright @ 2022-11-11 17:39 UTC (permalink / raw)
To: gcc-patches
[-- Attachment #1: Type: text/plain, Size: 118267 bytes --]
Hi all,
I'd like to submit two patches that add support for Arm's MVE
Tail Predicated Low Overhead Loop feature.
--- Introduction ---
The M-class Arm-ARM:
https://developer.arm.com/documentation/ddi0553/bu/?lang=en
Section B5.5.1 "Loop tail predication" describes the feature
we are adding support for with this patch (although
we only add codegen for DLSTP/LETP instruction loops).
Previously with commit d2ed233cb94 we'd added support for
non-MVE DLS/LE loops through the loop-doloop pass, which, given
a standard MVE loop like:
```
void __attribute__ ((noinline)) test (int16_t *a, int16_t *b, int16_t
*c, int n)
{
while (n > 0)
{
mve_pred16_t p = vctp16q (n);
int16x8_t va = vldrhq_z_s16 (a, p);
int16x8_t vb = vldrhq_z_s16 (b, p);
int16x8_t vc = vaddq_x_s16 (va, vb, p);
vstrhq_p_s16 (c, vc, p);
c+=8;
a+=8;
b+=8;
n-=8;
}
}
```
.. would output:
```
<pre-calculate the number of iterations and place it into lr>
dls lr, lr
.L3:
vctp.16 r3
vmrs ip, P0 @ movhi
sxth ip, ip
vmsr P0, ip @ movhi
mov r4, r0
vpst
vldrht.16 q2, [r4]
mov r4, r1
vmov q3, q0
vpst
vldrht.16 q1, [r4]
mov r4, r2
vpst
vaddt.i16 q3, q2, q1
subs r3, r3, #8
vpst
vstrht.16 q3, [r4]
adds r0, r0, #16
adds r1, r1, #16
adds r2, r2, #16
le lr, .L3
```
where the LE instruction will decrement LR by 1, compare and
branch if needed.
(There are also other inefficiencies in the above code, like the
pointless vmrs/sxth/vmsr round-trip on the VPR, the adds not being
merged into the vldrht/vstrht as #16 offsets, and some redundant
movs! But those are different problems...)
The MVE version is similar, except that:
* Instead of DLS/LE the instructions are DLSTP/LETP.
* Instead of pre-calculating the number of iterations of the
loop, we place the number of elements to be processed by the
loop into LR.
* Instead of decrementing the LR by one, LETP will decrement it
by FPSCR.LTPSIZE, which is the number of elements being
processed in each iteration: 16 for 8-bit elements, 8 for 16-bit
elements, 4 for 32-bit elements, etc.
* On the final iteration, automatic Loop Tail Predication is
performed, as if the instructions within the loop had been VPT
predicated with a VCTP generating the VPR predicate in every
loop iteration.
The dlstp/letp loop now looks like:
```
<place n into r3>
dlstp.16 lr, r3
.L14:
mov r3, r0
vldrh.16 q3, [r3]
mov r3, r1
vldrh.16 q2, [r3]
mov r3, r2
vadd.i16 q3, q3, q2
adds r0, r0, #16
vstrh.16 q3, [r3]
adds r1, r1, #16
adds r2, r2, #16
letp lr, .L14
```
Since the loop tail predication is automatic, we have eliminated
the VCTP that had been specified by the user in the intrinsic
and converted the VPT-predicated instructions into their
unpredicated equivalents (which also saves us from VPST insns).
The LETP instruction here decrements LR by 8 in each iteration.
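As a rough model of that behaviour (plain Python, purely illustrative,
not GCC code), the per-iteration element counts under tail predication
look like this:
```python
# Rough model of LETP-style tail predication (illustrative only):
# process n elements with `lanes` vector lanes per loop iteration.
def tail_predicated_lanes(n, lanes):
    """Return the number of active lanes in each loop iteration."""
    active = []
    while n > 0:
        active.append(min(n, lanes))  # final iteration may be partial
        n -= lanes                    # LETP decrements LR by LTPSIZE
    return active

# For the int16x8_t loop above: e.g. n = 20 halfwords, 8 lanes/vector.
print(tail_predicated_lanes(20, 8))  # prints [8, 8, 4]
```
The last entry shows the automatically tail-predicated final iteration,
where only 4 of the 8 lanes are active.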
--- This 1/2 patch ---
This first patch lays some groundwork by adding an attribute to
md patterns, and then the second patch contains the functional
changes.
One major difficulty in implementing MVE Tail-Predicated Low
Overhead Loops was the need to transform VPT-predicated insns
in the insn chain into their unpredicated equivalents, like:
`mve_vldrbq_z_<supf><mode> -> mve_vldrbq_<supf><mode>`.
This requires us to have a deterministic link between two
different patterns in mve.md -- this _could_ be done by
re-ordering the entirety of mve.md such that the patterns are
at some constant icode proximity (e.g. having the _z immediately
after the unpredicated version would mean that to map from the
former to the latter you could use icode-1), but that is a very
messy solution that would lead to complex unknown dependencies
between patterns.
This patch provides an alternative way of doing that: using an insn
attribute to encode the icode of the unpredicated instruction.
This was implemented by doing a find-and-replace across mve.md
using the following patterns:
```
define_insn "(.*)_p_(.*)"((.|\n)*?)\n( )*\[\(set_attr
  -> define_insn "$1_p_$2"$3\n$5[(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_$1_$2"))\n$5 (set_attr

define_insn "(.*)_m_(.*)"((.|\n)*?)\n( )*\[\(set_attr
  -> define_insn "$1_m_$2"$3\n$5[(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_$1_$2"))\n$5 (set_attr

define_insn "(.*)_z_(.*)"((.|\n)*?)\n( )*\[\(set_attr
  -> define_insn "$1_z_$2"$3\n$5[(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_$1_$2"))\n$5 (set_attr
```
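For illustration only (a hedged sketch, not part of the patch: the
sample define_insn body is abridged, and a non-capturing group is used
so the indentation backreference is \4 rather than the editor's $5),
the _p_ substitution above can be reproduced with Python's re module:
```python
import re

# Sketch of the _p_ find-and-replace: record the unpredicated
# counterpart's icode in the predicated pattern's attribute list.
pattern = re.compile(
    r'define_insn "(.*)_p_(.*)"((?:.|\n)*?)\n( *)\[\(set_attr')
replacement = (
    r'define_insn "\1_p_\2"\3'
    '\n'
    r'\4[(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_\1_\2"))'
    '\n'
    r'\4 (set_attr')

# Abridged stand-in for a real mve.md predicated pattern.
sample = ('(define_insn "mve_vaddvq_p_<supf><mode>"\n'
          '  ;; pattern body abridged for this sketch\n'
          '  [(set_attr "type" "mve_move")\n'
          '   (set_attr "length" "8")])')

result = pattern.sub(replacement, sample)
print(result)
```
Running this shows the attribute list gaining the
`CODE_FOR_mve_vaddvq_<supf><mode>` symbol_ref (the _p_ is dropped from
the icode name), with the original set_attr entries preserved below it.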
and then a number of manual fixes were needed for the md patterns
that did not conform to the above. Those changes were:
Dropped the type suffix _s/_u/_f:
CODE_FOR_mve_vcmpcsq_n_<mode>
CODE_FOR_mve_vcmpcsq_<mode>
CODE_FOR_mve_vcmpeqq_n_<mode>
CODE_FOR_mve_vcmpeqq_<mode>
CODE_FOR_mve_vcmpgeq_n_<mode>
CODE_FOR_mve_vcmpgeq_<mode>
CODE_FOR_mve_vcmpgtq_n_<mode>
CODE_FOR_mve_vcmpgtq_<mode>
CODE_FOR_mve_vcmphiq_n_<mode>
CODE_FOR_mve_vcmphiq_<mode>
CODE_FOR_mve_vcmpleq_n_<mode>
CODE_FOR_mve_vcmpleq_<mode>
CODE_FOR_mve_vcmpltq_n_<mode>
CODE_FOR_mve_vcmpltq_<mode>
CODE_FOR_mve_vcmpneq_n_<mode>
CODE_FOR_mve_vcmpneq_<mode>
CODE_FOR_mve_vaddq<mode>
CODE_FOR_mve_vcaddq_rot270<mode>
CODE_FOR_mve_vcaddq_rot90<mode>
CODE_FOR_mve_vcaddq_rot270<mode>
CODE_FOR_mve_vcaddq_rot90<mode>
CODE_FOR_mve_vcmlaq<mode>
CODE_FOR_mve_vcmlaq_rot180<mode>
CODE_FOR_mve_vcmlaq_rot270<mode>
CODE_FOR_mve_vcmlaq_rot90<mode>
CODE_FOR_mve_vcmulq<mode>
CODE_FOR_mve_vcmulq_rot180<mode>
CODE_FOR_mve_vcmulq_rot270<mode>
CODE_FOR_mve_vcmulq_rot90<mode>
Dropped _wb_:
CODE_FOR_mve_vidupq_u<mode>_insn
CODE_FOR_mve_vddupq_u<mode>_insn
Dropped one underscore character:
CODE_FOR_arm_vcx1q<a>v16qi
CODE_FOR_arm_vcx2q<a>v16qi
CODE_FOR_arm_vcx3q<a>v16qi
No regressions on arm-none-eabi with an MVE target.
Thank you,
Stam Markianos-Wright
gcc/ChangeLog:
* config/arm/arm.md (mve_unpredicated_insn): New attribute.
* config/arm/mve.md (mve_vrndq_m_f<mode>): Add attribute.
(mve_vaddlvq_p_<supf>v4si): Likewise.
(mve_vaddvq_p_<supf><mode>): Likewise.
(mve_vbicq_m_n_<supf><mode>): Likewise.
(mve_vcmpeqq_m_f<mode>): Likewise.
(mve_vcvtaq_m_<supf><mode>): Likewise.
(mve_vcvtq_m_to_f_<supf><mode>): Likewise.
(mve_vabsq_m_s<mode>): Likewise.
(mve_vaddvaq_p_<supf><mode>): Likewise.
(mve_vclsq_m_s<mode>): Likewise.
(mve_vclzq_m_<supf><mode>): Likewise.
(mve_vcmpcsq_m_n_u<mode>): Likewise.
(mve_vcmpcsq_m_u<mode>): Likewise.
(mve_vcmpeqq_m_n_<supf><mode>): Likewise.
(mve_vcmpeqq_m_<supf><mode>): Likewise.
(mve_vcmpgeq_m_n_s<mode>): Likewise.
(mve_vcmpgeq_m_s<mode>): Likewise.
(mve_vcmpgtq_m_n_s<mode>): Likewise.
(mve_vcmpgtq_m_s<mode>): Likewise.
(mve_vcmphiq_m_n_u<mode>): Likewise.
(mve_vcmphiq_m_u<mode>): Likewise.
(mve_vcmpleq_m_n_s<mode>): Likewise.
(mve_vcmpleq_m_s<mode>): Likewise.
(mve_vcmpltq_m_n_s<mode>): Likewise.
(mve_vcmpltq_m_s<mode>): Likewise.
(mve_vcmpneq_m_n_<supf><mode>): Likewise.
(mve_vcmpneq_m_<supf><mode>): Likewise.
(mve_vdupq_m_n_<supf><mode>): Likewise.
(mve_vmaxaq_m_s<mode>): Likewise.
(mve_vmaxavq_p_s<mode>): Likewise.
(mve_vmaxvq_p_<supf><mode>): Likewise.
(mve_vminaq_m_s<mode>): Likewise.
(mve_vminavq_p_s<mode>): Likewise.
(mve_vminvq_p_<supf><mode>): Likewise.
(mve_vmladavq_p_<supf><mode>): Likewise.
(mve_vmladavxq_p_s<mode>): Likewise.
(mve_vmlsdavq_p_s<mode>): Likewise.
(mve_vmlsdavxq_p_s<mode>): Likewise.
(mve_vmvnq_m_<supf><mode>): Likewise.
(mve_vnegq_m_s<mode>): Likewise.
(mve_vqabsq_m_s<mode>): Likewise.
(mve_vqnegq_m_s<mode>): Likewise.
(mve_vqrshlq_m_n_<supf><mode>): Likewise.
(mve_vqshlq_m_r_<supf><mode>): Likewise.
(mve_vrev64q_m_<supf><mode>): Likewise.
(mve_vrshlq_m_n_<supf><mode>): Likewise.
(mve_vshlq_m_r_<supf><mode>): Likewise.
(mve_vabsq_m_f<mode>): Likewise.
(mve_vaddlvaq_p_<supf>v4si): Likewise.
(mve_vcmpeqq_m_n_f<mode>): Likewise.
(mve_vcmpgeq_m_f<mode>): Likewise.
(mve_vcmpgeq_m_n_f<mode>): Likewise.
(mve_vcmpgtq_m_f<mode>): Likewise.
(mve_vcmpgtq_m_n_f<mode>): Likewise.
(mve_vcmpleq_m_f<mode>): Likewise.
(mve_vcmpleq_m_n_f<mode>): Likewise.
(mve_vcmpltq_m_f<mode>): Likewise.
(mve_vcmpltq_m_n_f<mode>): Likewise.
(mve_vcmpneq_m_f<mode>): Likewise.
(mve_vcmpneq_m_n_f<mode>): Likewise.
(mve_vcvtbq_m_f16_f32v8hf): Likewise.
(mve_vcvtbq_m_f32_f16v4sf): Likewise.
(mve_vcvttq_m_f16_f32v8hf): Likewise.
(mve_vcvttq_m_f32_f16v4sf): Likewise.
(mve_vdupq_m_n_f<mode>): Likewise.
(mve_vmaxnmaq_m_f<mode>): Likewise.
(mve_vmaxnmavq_p_f<mode>): Likewise.
(mve_vmaxnmvq_p_f<mode>): Likewise.
(mve_vminnmaq_m_f<mode>): Likewise.
(mve_vminnmavq_p_f<mode>): Likewise.
(mve_vminnmvq_p_f<mode>): Likewise.
(mve_vmlaldavq_p_<supf><mode>): Likewise.
(mve_vmlaldavxq_p_s<mode>): Likewise.
(mve_vmlsldavq_p_s<mode>): Likewise.
(mve_vmlsldavxq_p_s<mode>): Likewise.
(mve_vmovlbq_m_<supf><mode>): Likewise.
(mve_vmovltq_m_<supf><mode>): Likewise.
(mve_vmovnbq_m_<supf><mode>): Likewise.
(mve_vmovntq_m_<supf><mode>): Likewise.
(mve_vmvnq_m_n_<supf><mode>): Likewise.
(mve_vnegq_m_f<mode>): Likewise.
(mve_vorrq_m_n_<supf><mode>): Likewise.
(mve_vqmovnbq_m_<supf><mode>): Likewise.
(mve_vqmovntq_m_<supf><mode>): Likewise.
(mve_vqmovunbq_m_s<mode>): Likewise.
(mve_vqmovuntq_m_s<mode>): Likewise.
(mve_vrev32q_m_fv8hf): Likewise.
(mve_vrev32q_m_<supf><mode>): Likewise.
(mve_vrev64q_m_f<mode>): Likewise.
(mve_vrmlaldavhxq_p_sv4si): Likewise.
(mve_vrmlsldavhq_p_sv4si): Likewise.
(mve_vrmlsldavhxq_p_sv4si): Likewise.
(mve_vrndaq_m_f<mode>): Likewise.
(mve_vrndmq_m_f<mode>): Likewise.
(mve_vrndnq_m_f<mode>): Likewise.
(mve_vrndpq_m_f<mode>): Likewise.
(mve_vrndxq_m_f<mode>): Likewise.
(mve_vcvtmq_m_<supf><mode>): Likewise.
(mve_vcvtpq_m_<supf><mode>): Likewise.
(mve_vcvtnq_m_<supf><mode>): Likewise.
(mve_vcvtq_m_n_from_f_<supf><mode>): Likewise.
(mve_vrev16q_m_<supf>v16qi): Likewise.
(mve_vcvtq_m_from_f_<supf><mode>): Likewise.
(mve_vrmlaldavhq_p_<supf>v4si): Likewise.
(mve_vabavq_p_<supf><mode>): Likewise.
(mve_vqshluq_m_n_s<mode>): Likewise.
(mve_vshlq_m_<supf><mode>): Likewise.
(mve_vsriq_m_n_<supf><mode>): Likewise.
(mve_vsubq_m_<supf><mode>): Likewise.
(mve_vcvtq_m_n_to_f_<supf><mode>): Likewise.
(mve_vabdq_m_<supf><mode>): Likewise.
(mve_vaddq_m_n_<supf><mode>): Likewise.
(mve_vaddq_m_<supf><mode>): Likewise.
(mve_vandq_m_<supf><mode>): Likewise.
(mve_vbicq_m_<supf><mode>): Likewise.
(mve_vbrsrq_m_n_<supf><mode>): Likewise.
(mve_vcaddq_rot270_m_<supf><mode>): Likewise.
(mve_vcaddq_rot90_m_<supf><mode>): Likewise.
(mve_veorq_m_<supf><mode>): Likewise.
(mve_vhaddq_m_n_<supf><mode>): Likewise.
(mve_vhaddq_m_<supf><mode>): Likewise.
(mve_vhsubq_m_n_<supf><mode>): Likewise.
(mve_vhsubq_m_<supf><mode>): Likewise.
(mve_vmaxq_m_<supf><mode>): Likewise.
(mve_vminq_m_<supf><mode>): Likewise.
(mve_vmladavaq_p_<supf><mode>): Likewise.
(mve_vmlaq_m_n_<supf><mode>): Likewise.
(mve_vmlasq_m_n_<supf><mode>): Likewise.
(mve_vmulhq_m_<supf><mode>): Likewise.
(mve_vmullbq_int_m_<supf><mode>): Likewise.
(mve_vmulltq_int_m_<supf><mode>): Likewise.
(mve_vmulq_m_n_<supf><mode>): Likewise.
(mve_vmulq_m_<supf><mode>): Likewise.
(mve_vornq_m_<supf><mode>): Likewise.
(mve_vorrq_m_<supf><mode>): Likewise.
(mve_vqaddq_m_n_<supf><mode>): Likewise.
(mve_vqaddq_m_<supf><mode>): Likewise.
(mve_vqdmlahq_m_n_s<mode>): Likewise.
(mve_vqdmlashq_m_n_s<mode>): Likewise.
(mve_vqrdmlahq_m_n_s<mode>): Likewise.
(mve_vqrdmlashq_m_n_s<mode>): Likewise.
(mve_vqrshlq_m_<supf><mode>): Likewise.
(mve_vqshlq_m_n_<supf><mode>): Likewise.
(mve_vqshlq_m_<supf><mode>): Likewise.
(mve_vqsubq_m_n_<supf><mode>): Likewise.
(mve_vqsubq_m_<supf><mode>): Likewise.
(mve_vrhaddq_m_<supf><mode>): Likewise.
(mve_vrmulhq_m_<supf><mode>): Likewise.
(mve_vrshlq_m_<supf><mode>): Likewise.
(mve_vrshrq_m_n_<supf><mode>): Likewise.
(mve_vshlq_m_n_<supf><mode>): Likewise.
(mve_vshrq_m_n_<supf><mode>): Likewise.
(mve_vsliq_m_n_<supf><mode>): Likewise.
(mve_vsubq_m_n_<supf><mode>): Likewise.
(mve_vhcaddq_rot270_m_s<mode>): Likewise.
(mve_vhcaddq_rot90_m_s<mode>): Likewise.
(mve_vmladavaxq_p_s<mode>): Likewise.
(mve_vmlsdavaq_p_s<mode>): Likewise.
(mve_vmlsdavaxq_p_s<mode>): Likewise.
(mve_vqdmladhq_m_s<mode>): Likewise.
(mve_vqdmladhxq_m_s<mode>): Likewise.
(mve_vqdmlsdhq_m_s<mode>): Likewise.
(mve_vqdmlsdhxq_m_s<mode>): Likewise.
(mve_vqdmulhq_m_n_s<mode>): Likewise.
(mve_vqdmulhq_m_s<mode>): Likewise.
(mve_vqrdmladhq_m_s<mode>): Likewise.
(mve_vqrdmladhxq_m_s<mode>): Likewise.
(mve_vqrdmlsdhq_m_s<mode>): Likewise.
(mve_vqrdmlsdhxq_m_s<mode>): Likewise.
(mve_vqrdmulhq_m_n_s<mode>): Likewise.
(mve_vqrdmulhq_m_s<mode>): Likewise.
(mve_vmlaldavaq_p_<supf><mode>): Likewise.
(mve_vmlaldavaxq_p_<supf><mode>): Likewise.
(mve_vqrshrnbq_m_n_<supf><mode>): Likewise.
(mve_vqrshrntq_m_n_<supf><mode>): Likewise.
(mve_vqshrnbq_m_n_<supf><mode>): Likewise.
(mve_vqshrntq_m_n_<supf><mode>): Likewise.
(mve_vrmlaldavhaq_p_sv4si): Likewise.
(mve_vrshrnbq_m_n_<supf><mode>): Likewise.
(mve_vrshrntq_m_n_<supf><mode>): Likewise.
(mve_vshllbq_m_n_<supf><mode>): Likewise.
(mve_vshlltq_m_n_<supf><mode>): Likewise.
(mve_vshrnbq_m_n_<supf><mode>): Likewise.
(mve_vshrntq_m_n_<supf><mode>): Likewise.
(mve_vmlsldavaq_p_s<mode>): Likewise.
(mve_vmlsldavaxq_p_s<mode>): Likewise.
(mve_vmullbq_poly_m_p<mode>): Likewise.
(mve_vmulltq_poly_m_p<mode>): Likewise.
(mve_vqdmullbq_m_n_s<mode>): Likewise.
(mve_vqdmullbq_m_s<mode>): Likewise.
(mve_vqdmulltq_m_n_s<mode>): Likewise.
(mve_vqdmulltq_m_s<mode>): Likewise.
(mve_vqrshrunbq_m_n_s<mode>): Likewise.
(mve_vqrshruntq_m_n_s<mode>): Likewise.
(mve_vqshrunbq_m_n_s<mode>): Likewise.
(mve_vqshruntq_m_n_s<mode>): Likewise.
(mve_vrmlaldavhaq_p_uv4si): Likewise.
(mve_vrmlaldavhaxq_p_sv4si): Likewise.
(mve_vrmlsldavhaq_p_sv4si): Likewise.
(mve_vrmlsldavhaxq_p_sv4si): Likewise.
(mve_vabdq_m_f<mode>): Likewise.
(mve_vaddq_m_f<mode>): Likewise.
(mve_vaddq_m_n_f<mode>): Likewise.
(mve_vandq_m_f<mode>): Likewise.
(mve_vbicq_m_f<mode>): Likewise.
(mve_vbrsrq_m_n_f<mode>): Likewise.
(mve_vcaddq_rot270_m_f<mode>): Likewise.
(mve_vcaddq_rot90_m_f<mode>): Likewise.
(mve_vcmlaq_m_f<mode>): Likewise.
(mve_vcmlaq_rot180_m_f<mode>): Likewise.
(mve_vcmlaq_rot270_m_f<mode>): Likewise.
(mve_vcmlaq_rot90_m_f<mode>): Likewise.
(mve_vcmulq_m_f<mode>): Likewise.
(mve_vcmulq_rot180_m_f<mode>): Likewise.
(mve_vcmulq_rot270_m_f<mode>): Likewise.
(mve_vcmulq_rot90_m_f<mode>): Likewise.
(mve_veorq_m_f<mode>): Likewise.
(mve_vfmaq_m_f<mode>): Likewise.
(mve_vfmaq_m_n_f<mode>): Likewise.
(mve_vfmasq_m_n_f<mode>): Likewise.
(mve_vfmsq_m_f<mode>): Likewise.
(mve_vmaxnmq_m_f<mode>): Likewise.
(mve_vminnmq_m_f<mode>): Likewise.
(mve_vmulq_m_f<mode>): Likewise.
(mve_vmulq_m_n_f<mode>): Likewise.
(mve_vornq_m_f<mode>): Likewise.
(mve_vorrq_m_f<mode>): Likewise.
(mve_vsubq_m_f<mode>): Likewise.
(mve_vsubq_m_n_f<mode>): Likewise.
(mve_vstrbq_scatter_offset_p_<supf><mode>_insn): Likewise.
(mve_vstrwq_scatter_base_p_<supf>v4si): Likewise.
(mve_vstrbq_p_<supf><mode>): Likewise.
(mve_vldrbq_gather_offset_z_<supf><mode>): Likewise.
(mve_vldrbq_z_<supf><mode>): Likewise.
(mve_vldrwq_gather_base_z_<supf>v4si): Likewise.
(mve_vldrhq_gather_offset_z_<supf><mode>): Likewise.
(mve_vldrhq_gather_shifted_offset_z_<supf><mode>): Likewise.
(mve_vldrhq_z_fv8hf): Likewise.
(mve_vldrhq_z_<supf><mode>): Likewise.
(mve_vldrwq_z_fv4sf): Likewise.
(mve_vldrwq_z_<supf>v4si): Likewise.
(mve_vldrdq_gather_base_z_<supf>v2di): Likewise.
(mve_vldrdq_gather_offset_z_<supf>v2di): Likewise.
(mve_vldrdq_gather_shifted_offset_z_<supf>v2di): Likewise.
(mve_vldrhq_gather_offset_z_fv8hf): Likewise.
(mve_vldrhq_gather_shifted_offset_z_fv8hf): Likewise.
(mve_vldrwq_gather_base_z_fv4sf): Likewise.
(mve_vldrwq_gather_offset_z_fv4sf): Likewise.
(mve_vldrwq_gather_offset_z_<supf>v4si): Likewise.
(mve_vldrwq_gather_shifted_offset_z_fv4sf): Likewise.
(mve_vldrwq_gather_shifted_offset_z_<supf>v4si): Likewise.
(mve_vstrhq_p_fv8hf): Likewise.
(mve_vstrhq_p_<supf><mode>): Likewise.
(mve_vstrhq_scatter_offset_p_<supf><mode>_insn): Likewise.
(mve_vstrhq_scatter_shifted_offset_p_<supf><mode>_insn): Likewise.
(mve_vstrwq_p_fv4sf): Likewise.
(mve_vstrwq_p_<supf>v4si): Likewise.
(mve_vstrdq_scatter_base_p_<supf>v2di): Likewise.
(mve_vstrdq_scatter_offset_p_<supf>v2di_insn): Likewise.
(mve_vstrdq_scatter_shifted_offset_p_<supf>v2di_insn): Likewise.
(mve_vstrhq_scatter_offset_p_fv8hf_insn): Likewise.
(mve_vstrhq_scatter_shifted_offset_p_fv8hf_insn): Likewise.
(mve_vstrwq_scatter_base_p_fv4sf): Likewise.
(mve_vstrwq_scatter_offset_p_fv4sf_insn): Likewise.
(mve_vstrwq_scatter_offset_p_<supf>v4si_insn): Likewise.
(mve_vstrwq_scatter_shifted_offset_p_fv4sf_insn): Likewise.
(mve_vstrwq_scatter_shifted_offset_p_<supf>v4si_insn): Likewise.
(mve_vidupq_m_wb_u<mode>_insn): Likewise.
(mve_vddupq_m_wb_u<mode>_insn): Likewise.
(mve_vdwdupq_m_wb_u<mode>_insn): Likewise.
(mve_viwdupq_m_wb_u<mode>_insn): Likewise.
(mve_vstrwq_scatter_base_wb_p_<supf>v4si): Likewise.
(mve_vstrwq_scatter_base_wb_p_fv4sf): Likewise.
(mve_vstrdq_scatter_base_wb_p_<supf>v2di): Likewise.
(mve_vldrwq_gather_base_wb_z_<supf>v4si_insn): Likewise.
(mve_vldrwq_gather_base_wb_z_fv4sf_insn): Likewise.
(mve_vldrdq_gather_base_wb_z_<supf>v2di_insn): Likewise.
(mve_vadciq_m_<supf>v4si): Likewise.
(mve_vadcq_m_<supf>v4si): Likewise.
(mve_vsbciq_m_<supf>v4si): Likewise.
(mve_vsbcq_m_<supf>v4si): Likewise.
(mve_vshlcq_m_<supf><mode>): Likewise.
(arm_vcx1q<a>_p_v16qi): Likewise.
(arm_vcx2q<a>_p_v16qi): Likewise.
(arm_vcx3q<a>_p_v16qi): Likewise.
gcc/testsuite/ChangeLog:
* gcc.target/arm/dlstp-compile-asm.c: New test.
#### Inline copy of patch ###
diff --git a/gcc/config/arm/arm.md b/gcc/config/arm/arm.md
index
69bf343fb0ed601014979cfc1803abe84c87f179..e1d2e62593085accfcc111cf6fa5795e4520f213
100644
--- a/gcc/config/arm/arm.md
+++ b/gcc/config/arm/arm.md
@@ -123,6 +123,8 @@
; and not all ARM insns do.
(define_attr "predicated" "yes,no" (const_string "no"))
+(define_attr "mve_unpredicated_insn" "" (const_int 0))
+
; LENGTH of an instruction (in bytes)
(define_attr "length" ""
(const_int 4))
diff --git a/gcc/config/arm/mve.md b/gcc/config/arm/mve.md
index
62186f124da183fe1b1eb57a1aea1e8fff680a22..b1c8c1c569f31a6cb1bfdc16394047f02d6cddf4
100644
--- a/gcc/config/arm/mve.md
+++ b/gcc/config/arm/mve.md
@@ -142,7 +142,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vrintzt.f%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vrndq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -818,7 +819,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vaddlvt.<supf>32 %Q0, %R0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vaddlvq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -910,7 +912,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vaddvt.<supf>%#<V_sz_elem> %0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vaddvq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2560,7 +2563,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vbict.i%#<V_sz_elem> %q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vbicq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
;; [vcmpeqq_m_f])
@@ -2575,7 +2579,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmpt.f%#<V_sz_elem> eq, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vcmpeqq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
;; [vcvtaq_m_u, vcvtaq_m_s])
@@ -2590,7 +2595,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtat.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vcvtaq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
;; [vcvtq_m_to_f_s, vcvtq_m_to_f_u])
@@ -2605,7 +2611,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtt.f%#<V_sz_elem>.<supf>%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vcvtq_to_f_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
;; [vqrshrnbq_n_u, vqrshrnbq_n_s])
@@ -2727,7 +2734,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vabst.s%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vabsq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2743,7 +2751,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vaddvat.<supf>%#<V_sz_elem> %0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vaddvaq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2759,7 +2768,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vclst.s%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vclsq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2775,7 +2785,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vclzt.i%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vclzq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2791,7 +2802,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vcmpt.u%#<V_sz_elem> cs, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vcmpcsq_n_<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2807,7 +2819,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vcmpt.u%#<V_sz_elem> cs, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vcmpcsq_<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2823,7 +2836,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vcmpt.i%#<V_sz_elem> eq, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vcmpeqq_n_<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2839,7 +2853,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vcmpt.i%#<V_sz_elem> eq, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vcmpeqq_<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2855,7 +2870,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vcmpt.s%#<V_sz_elem> ge, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vcmpgeq_n_<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2871,7 +2887,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vcmpt.s%#<V_sz_elem> ge, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vcmpgeq_<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2887,7 +2904,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vcmpt.s%#<V_sz_elem> gt, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vcmpgtq_n_<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2903,7 +2921,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vcmpt.s%#<V_sz_elem> gt, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vcmpgtq_<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2919,7 +2938,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vcmpt.u%#<V_sz_elem> hi, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vcmphiq_n_<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2935,7 +2955,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vcmpt.u%#<V_sz_elem> hi, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vcmphiq_<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2951,7 +2972,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vcmpt.s%#<V_sz_elem> le, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vcmpleq_n_<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2967,7 +2989,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vcmpt.s%#<V_sz_elem> le, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vcmpleq_<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2983,7 +3006,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vcmpt.s%#<V_sz_elem> lt, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vcmpltq_n_<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2999,7 +3023,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vcmpt.s%#<V_sz_elem> lt, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vcmpltq_<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3015,7 +3040,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vcmpt.i%#<V_sz_elem> ne, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vcmpneq_n_<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3031,7 +3057,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vcmpt.i%#<V_sz_elem> ne, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vcmpneq_<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3047,7 +3074,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vdupt.%#<V_sz_elem> %q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vdupq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3063,7 +3091,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmaxat.s%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vmaxaq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3079,7 +3108,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmaxavt.s%#<V_sz_elem> %0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vmaxavq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3095,7 +3125,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmaxvt.<supf>%#<V_sz_elem> %0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vmaxvq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3111,7 +3142,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vminat.s%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vminaq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3127,7 +3159,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vminavt.s%#<V_sz_elem> %0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vminavq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3143,7 +3176,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vminvt.<supf>%#<V_sz_elem>\t%0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vminvq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3175,7 +3209,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmladavt.<supf>%#<V_sz_elem>\t%0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vmladavq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3191,7 +3226,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmladavxt.s%#<V_sz_elem>\t%0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vmladavxq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3239,7 +3275,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmlsdavt.s%#<V_sz_elem> %0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vmlsdavq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3255,7 +3292,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmlsdavxt.s%#<V_sz_elem> %0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vmlsdavxq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3271,7 +3309,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmvnt %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vmvnq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3287,7 +3326,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vnegt.s%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vnegq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3319,7 +3359,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqabst.s%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vqabsq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3367,7 +3408,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqnegt.s%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vqnegq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3479,7 +3521,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqrshlt.<supf>%#<V_sz_elem> %q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vqrshlq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3495,7 +3538,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqshlt.<supf>%#<V_sz_elem>\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vqshlq_r_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3511,7 +3555,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vrev64t.%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vrev64q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3527,7 +3572,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vrshlt.<supf>%#<V_sz_elem>\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vrshlq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3543,7 +3589,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vshlt.<supf>%#<V_sz_elem>\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vshlq_r_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3702,7 +3749,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vabst.f%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vabsq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3718,7 +3766,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vaddlvat.<supf>32 %Q0, %R0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vaddlvaq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
;; [vcmlaq, vcmlaq_rot90, vcmlaq_rot180, vcmlaq_rot270])
@@ -3752,7 +3801,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmpt.f%#<V_sz_elem> eq, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vcmpeqq_n_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3768,7 +3818,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmpt.f%#<V_sz_elem> ge, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vcmpgeq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3784,7 +3835,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmpt.f%#<V_sz_elem> ge, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vcmpgeq_n_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3800,7 +3852,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmpt.f%#<V_sz_elem> gt, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vcmpgtq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3816,7 +3869,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmpt.f%#<V_sz_elem> gt, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vcmpgtq_n_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3832,7 +3886,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmpt.f%#<V_sz_elem> le, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vcmpleq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3848,7 +3903,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmpt.f%#<V_sz_elem> le, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vcmpleq_n_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3864,7 +3920,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmpt.f%#<V_sz_elem> lt, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vcmpltq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3880,7 +3937,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmpt.f%#<V_sz_elem> lt, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vcmpltq_n_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3896,7 +3954,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmpt.f%#<V_sz_elem> ne, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vcmpneq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3912,7 +3971,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmpt.f%#<V_sz_elem> ne, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vcmpneq_n_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3928,7 +3988,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtbt.f16.f32 %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vcvtbq_f16_f32v8hf"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3944,7 +4005,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtbt.f32.f16 %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vcvtbq_f32_f16v4sf"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3960,7 +4022,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvttt.f16.f32 %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vcvttq_f16_f32v8hf"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3976,7 +4039,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvttt.f32.f16 %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vcvttq_f32_f16v4sf"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3992,7 +4056,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vdupt.%#<V_sz_elem> %q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vdupq_n_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4071,7 +4136,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vmaxnmat.f%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vmaxnmaq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
;; [vmaxnmavq_p_f])
@@ -4086,7 +4152,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vmaxnmavt.f%#<V_sz_elem> %0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vmaxnmavq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4102,7 +4169,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vmaxnmvt.f%#<V_sz_elem> %0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vmaxnmvq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
;; [vminnmaq_m_f])
@@ -4117,7 +4185,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vminnmat.f%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vminnmaq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4133,7 +4202,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vminnmavt.f%#<V_sz_elem> %0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vminnmavq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
;; [vminnmvq_p_f])
@@ -4148,7 +4218,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vminnmvt.f%#<V_sz_elem> %0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vminnmvq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4196,7 +4267,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmlaldavt.<supf>%#<V_sz_elem> %Q0, %R0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vmlaldavq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4212,7 +4284,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmlaldavxt.s%#<V_sz_elem>\t%Q0, %R0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vmlaldavxq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
;; [vmlsldavaq_s])
@@ -4259,7 +4332,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmlsldavt.s%#<V_sz_elem> %Q0, %R0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vmlsldavq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4275,7 +4349,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmlsldavxt.s%#<V_sz_elem> %Q0, %R0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vmlsldavxq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
;; [vmovlbq_m_u, vmovlbq_m_s])
@@ -4290,7 +4365,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmovlbt.<supf>%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vmovlbq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
;; [vmovltq_m_u, vmovltq_m_s])
@@ -4305,7 +4381,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmovltt.<supf>%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vmovltq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
;; [vmovnbq_m_u, vmovnbq_m_s])
@@ -4320,7 +4397,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmovnbt.i%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vmovnbq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4336,7 +4414,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmovntt.i%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vmovntq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4352,7 +4431,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmvnt.i%#<V_sz_elem> %q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vmvnq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
;; [vnegq_m_f])
@@ -4367,7 +4447,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vnegt.f%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vnegq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4383,7 +4464,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vorrt.i%#<V_sz_elem> %q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vorrq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
;; [vpselq_f])
@@ -4414,7 +4496,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqmovnbt.<supf>%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vqmovnbq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4430,7 +4513,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqmovntt.<supf>%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vqmovntq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4446,7 +4530,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqmovunbt.s%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vqmovunbq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4462,7 +4547,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqmovuntt.s%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vqmovuntq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4574,7 +4660,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vrev32t.16 %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vrev32q_fv8hf"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4590,7 +4677,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vrev32t.%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vrev32q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4606,7 +4694,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vrev64t.%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vrev64q_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4638,7 +4727,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vrmlaldavhxt.s32 %Q0, %R0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vrmlaldavhxq_sv4si"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4670,7 +4760,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vrmlsldavht.s32 %Q0, %R0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vrmlsldavhq_sv4si"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4686,7 +4777,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vrmlsldavhxt.s32 %Q0, %R0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vrmlsldavhxq_sv4si"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4702,7 +4794,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vrintat.f%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vrndaq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4718,7 +4811,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vrintmt.f%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vrndmq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4734,7 +4828,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vrintnt.f%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vrndnq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4750,7 +4845,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vrintpt.f%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vrndpq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4766,7 +4862,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vrintxt.f%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vrndxq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4846,7 +4943,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtmt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vcvtmq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4862,7 +4960,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtpt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vcvtpq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4878,7 +4977,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtnt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vcvtnq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4895,7 +4995,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vcvtq_n_from_f_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4911,7 +5012,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vrev16t.8 %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vrev16q_<supf>v16qi"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4927,7 +5029,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vcvtq_from_f_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4943,7 +5046,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vrmlaldavht.<supf>32 %Q0, %R0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vrmlaldavhq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4976,7 +5080,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vabavt.<supf>%#<V_sz_elem>\t%0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vabavq_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -4993,7 +5098,8 @@
]
"TARGET_HAVE_MVE"
"vpst\n\tvqshlut.s%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vqshluq_n_s<mode>"))
+ (set_attr "type" "mve_move")])
;;
;; [vshlq_m_s, vshlq_m_u])
@@ -5009,7 +5115,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vshlt.<supf>%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vshlq_<supf><mode>"))
+ (set_attr "type" "mve_move")])
;;
;; [vsriq_m_n_s, vsriq_m_n_u])
@@ -5025,7 +5132,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vsrit.%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vsriq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")])
;;
;; [vsubq_m_u, vsubq_m_s])
@@ -5041,7 +5149,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vsubt.i%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vsubq_<supf><mode>"))
+ (set_attr "type" "mve_move")])
;;
;; [vcvtq_m_n_to_f_u, vcvtq_m_n_to_f_s])
@@ -5057,7 +5166,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtt.f%#<V_sz_elem>.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vcvtq_n_to_f_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
;; [vabdq_m_s, vabdq_m_u])
@@ -5073,7 +5183,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vabdt.<supf>%#<V_sz_elem> %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vabdq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5090,7 +5201,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vaddt.i%#<V_sz_elem> %q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vaddq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5107,7 +5219,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vaddt.i%#<V_sz_elem> %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vaddq<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5124,7 +5237,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vandt %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vandq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5141,7 +5255,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vbict %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vbicq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5158,7 +5273,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vbrsrt.%#<V_sz_elem> %q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vbrsrq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5175,7 +5291,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vcaddt.i%#<V_sz_elem> %q0, %q2, %q3, #270"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vcaddq_rot270<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5192,7 +5309,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vcaddt.i%#<V_sz_elem> %q0, %q2, %q3, #90"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vcaddq_rot90<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5209,7 +5327,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;veort %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_veorq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5226,7 +5345,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vhaddt.<supf>%#<V_sz_elem> %q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vhaddq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5243,7 +5363,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vhaddt.<supf>%#<V_sz_elem> %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vhaddq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5260,7 +5381,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vhsubt.<supf>%#<V_sz_elem> %q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vhsubq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5277,7 +5399,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vhsubt.<supf>%#<V_sz_elem> %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vhsubq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5294,7 +5417,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmaxt.<supf>%#<V_sz_elem> %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vmaxq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5311,7 +5435,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmint.<supf>%#<V_sz_elem> %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vminq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5328,7 +5453,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmladavat.<supf>%#<V_sz_elem> %0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vmladavaq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5345,7 +5471,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmlat.<supf>%#<V_sz_elem> %q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vmlaq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5362,7 +5489,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmlast.<supf>%#<V_sz_elem> %q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vmlasq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5379,7 +5507,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmulht.<supf>%#<V_sz_elem> %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vmulhq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5396,7 +5525,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmullbt.<supf>%#<V_sz_elem> %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vmullbq_int_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5413,7 +5543,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmulltt.<supf>%#<V_sz_elem> %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vmulltq_int_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5430,7 +5561,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmult.i%#<V_sz_elem> %q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vmulq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5447,7 +5579,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmult.i%#<V_sz_elem> %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vmulq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5464,7 +5597,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vornt %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vornq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5481,7 +5615,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vorrt %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vorrq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5498,7 +5633,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqaddt.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vqaddq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5515,7 +5651,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqaddt.<supf>%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vqaddq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5532,7 +5669,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqdmlaht.s%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vqdmlahq_n_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5549,7 +5687,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqdmlasht.s%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vqdmlashq_n_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5566,7 +5705,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqrdmlaht.s%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vqrdmlahq_n_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5583,7 +5723,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqrdmlasht.s%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vqrdmlashq_n_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5600,7 +5741,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqrshlt.<supf>%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vqrshlq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5617,7 +5759,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqshlt.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vqshlq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5634,7 +5777,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqshlt.<supf>%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vqshlq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5651,7 +5795,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqsubt.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vqsubq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5668,7 +5813,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqsubt.<supf>%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vqsubq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5685,7 +5831,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vrhaddt.<supf>%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vrhaddq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5702,7 +5849,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vrmulht.<supf>%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vrmulhq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5719,7 +5867,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vrshlt.<supf>%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vrshlq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5736,7 +5885,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vrshrt.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vrshrq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5753,7 +5903,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vshlt.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vshlq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5770,7 +5921,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vshrt.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vshrq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5787,7 +5939,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vslit.%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vsliq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5804,7 +5957,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vsubt.i%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vsubq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5821,7 +5975,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vhcaddt.s%#<V_sz_elem>\t%q0, %q2, %q3, #270"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vhcaddq_rot270_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5838,7 +5993,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vhcaddt.s%#<V_sz_elem>\t%q0, %q2, %q3, #90"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref
"CODE_FOR_mve_vhcaddq_rot90_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5855,7 +6011,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmladavaxt.s%#<V_sz_elem>\t%0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmladavaxq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5872,7 +6029,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmlsdavat.s%#<V_sz_elem>\t%0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmlsdavaq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5889,7 +6047,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmlsdavaxt.s%#<V_sz_elem>\t%0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmlsdavaxq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5906,7 +6065,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqdmladht.s%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqdmladhq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5923,7 +6083,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqdmladhxt.s%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqdmladhxq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5940,7 +6101,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqdmlsdht.s%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqdmlsdhq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5957,7 +6119,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqdmlsdhxt.s%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqdmlsdhxq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5974,7 +6137,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqdmulht.s%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqdmulhq_n_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5991,7 +6155,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqdmulht.s%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqdmulhq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6008,7 +6173,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqrdmladht.s%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqrdmladhq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6025,7 +6191,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqrdmladhxt.s%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqrdmladhxq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6042,7 +6209,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqrdmlsdht.s%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqrdmlsdhq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6059,7 +6227,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqrdmlsdhxt.s%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqrdmlsdhxq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6076,7 +6245,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqrdmulht.s%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqrdmulhq_n_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6093,7 +6263,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqrdmulht.s%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqrdmulhq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6110,7 +6281,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmlaldavat.<supf>%#<V_sz_elem> %Q0, %R0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmlaldavaq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6127,7 +6299,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmlaldavaxt.<supf>%#<V_sz_elem> %Q0, %R0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmlaldavaxq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6144,7 +6317,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqrshrnbt.<supf>%#<V_sz_elem> %q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqrshrnbq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6161,7 +6335,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqrshrntt.<supf>%#<V_sz_elem> %q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqrshrntq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6178,7 +6353,8 @@
]
"TARGET_HAVE_MVE"
"vpst\n\tvqshrnbt.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqshrnbq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6195,7 +6371,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqshrntt.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqshrntq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6212,7 +6389,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vrmlaldavhat.s32\t%Q0, %R0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vrmlaldavhaq_sv4si"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6229,7 +6407,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vrshrnbt.i%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vrshrnbq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6246,7 +6425,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vrshrntt.i%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vrshrntq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6263,7 +6443,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vshllbt.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vshllbq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6280,7 +6461,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vshlltt.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vshlltq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6297,7 +6479,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vshrnbt.i%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vshrnbq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6314,7 +6497,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vshrntt.i%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vshrntq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6331,7 +6515,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmlsldavat.s%#<V_sz_elem>\t%Q0, %R0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmlsldavaq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6348,7 +6533,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmlsldavaxt.s%#<V_sz_elem>\t%Q0, %R0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmlsldavaxq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6365,7 +6551,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmullbt.p%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmullbq_poly_p<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6382,7 +6569,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmulltt.p%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmulltq_poly_p<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6399,7 +6587,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqdmullbt.s%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqdmullbq_n_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6416,7 +6605,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqdmullbt.s%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqdmullbq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6433,7 +6623,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqdmulltt.s%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqdmulltq_n_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6450,7 +6641,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqdmulltt.s%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqdmulltq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6467,7 +6659,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqrshrunbt.s%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqrshrunbq_n_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6484,7 +6677,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqrshruntt.s%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqrshruntq_n_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6501,7 +6695,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqshrunbt.s%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqshrunbq_n_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6518,7 +6713,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqshruntt.s%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqshruntq_n_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6535,7 +6731,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vrmlaldavhat.u32\t%Q0, %R0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vrmlaldavhaq_uv4si"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6552,7 +6749,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vrmlaldavhaxt.s32\t%Q0, %R0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vrmlaldavhaxq_sv4si"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6569,7 +6767,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vrmlsldavhat.s32\t%Q0, %R0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vrmlsldavhaq_sv4si"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6586,7 +6785,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vrmlsldavhaxt.s32\t%Q0, %R0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vrmlsldavhaxq_sv4si"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
;; [vabdq_m_f])
@@ -6602,7 +6802,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vabdt.f%#<V_sz_elem> %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vabdq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6619,7 +6820,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vaddt.f%#<V_sz_elem> %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vaddq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6636,7 +6838,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vaddt.f%#<V_sz_elem> %q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vaddq_n_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6653,7 +6856,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vandt %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vandq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6670,7 +6874,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vbict %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vbicq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6687,7 +6892,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vbrsrt.%#<V_sz_elem> %q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vbrsrq_n_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6704,7 +6910,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcaddt.f%#<V_sz_elem> %q0, %q2, %q3, #270"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcaddq_rot270<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6721,7 +6928,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcaddt.f%#<V_sz_elem> %q0, %q2, %q3, #90"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcaddq_rot90<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6738,7 +6946,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmlat.f%#<V_sz_elem> %q0, %q2, %q3, #0"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmlaq<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6755,7 +6964,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmlat.f%#<V_sz_elem> %q0, %q2, %q3, #180"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmlaq_rot180<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6772,7 +6982,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmlat.f%#<V_sz_elem> %q0, %q2, %q3, #270"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmlaq_rot270<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6789,7 +7000,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmlat.f%#<V_sz_elem> %q0, %q2, %q3, #90"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmlaq_rot90<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6806,7 +7018,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmult.f%#<V_sz_elem> %q0, %q2, %q3, #0"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmulq<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6823,7 +7036,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmult.f%#<V_sz_elem> %q0, %q2, %q3, #180"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmulq_rot180<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6840,7 +7054,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmult.f%#<V_sz_elem> %q0, %q2, %q3, #270"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmulq_rot270<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6857,7 +7072,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmult.f%#<V_sz_elem> %q0, %q2, %q3, #90"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmulq_rot90<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6874,7 +7090,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;veort %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_veorq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6891,7 +7108,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vfmat.f%#<V_sz_elem> %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vfmaq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6908,7 +7126,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vfmat.f%#<V_sz_elem> %q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vfmaq_n_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6925,7 +7144,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vfmast.f%#<V_sz_elem> %q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vfmasq_n_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6942,7 +7162,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vfmst.f%#<V_sz_elem> %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vfmsq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6959,7 +7180,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vmaxnmt.f%#<V_sz_elem> %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmaxnmq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6976,7 +7198,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vminnmt.f%#<V_sz_elem> %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vminnmq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6993,7 +7216,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vmult.f%#<V_sz_elem> %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmulq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -7010,7 +7234,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vmult.f%#<V_sz_elem> %q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmulq_n_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -7027,7 +7252,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vornt %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vornq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -7044,7 +7270,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vorrt %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vorrq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -7061,7 +7288,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vsubt.f%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vsubq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -7078,7 +7306,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vsubt.f%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vsubq_n_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -7245,7 +7474,8 @@
VSTRBSOQ))]
"TARGET_HAVE_MVE"
"vpst\;vstrbt.<V_sz_elem>\t%q2, [%0, %q1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrbq_scatter_offset_<supf><mode>_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_base_p_s vstrwq_scatter_base_p_u]
@@ -7268,7 +7498,8 @@
output_asm_insn ("vpst\n\tvstrwt.u32\t%q2, [%q0, %1]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_<supf>v4si"))
+ (set_attr "length" "8")])
;;
;; [vstrbq_p_s vstrbq_p_u]
@@ -7288,7 +7519,8 @@
output_asm_insn ("vpst\;vstrbt.<V_sz_elem>\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrbq_<supf><mode>"))
+ (set_attr "length" "8")])
;;
;; [vldrbq_gather_offset_z_s vldrbq_gather_offset_z_u]
@@ -7313,7 +7545,8 @@
output_asm_insn ("vpst\n\tvldrbt.<supf><V_sz_elem>\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrbq_gather_offset_<supf><mode>"))
+ (set_attr "length" "8")])
;;
;; [vldrbq_z_s vldrbq_z_u]
@@ -7336,7 +7569,8 @@
output_asm_insn ("vpst\;vldrbt.<supf><V_sz_elem>\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrbq_<supf><mode>"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_gather_base_z_s vldrwq_gather_base_z_u]
@@ -7357,7 +7591,8 @@
output_asm_insn ("vpst\n\tvldrwt.u32\t%q0, [%q1, %2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_<supf>v4si"))
+ (set_attr "length" "8")])
;;
;; [vldrhq_f]
@@ -7424,7 +7659,8 @@
output_asm_insn ("vpst\n\tvldrht.<supf><V_sz_elem>\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_offset_<supf><mode>"))
+ (set_attr "length" "8")])
;;
;; [vldrhq_gather_shifted_offset_s vldrhq_gather_shifted_offset_u]
@@ -7472,7 +7708,8 @@
output_asm_insn ("vpst\n\tvldrht.<supf><V_sz_elem>\t%q0, [%m1, %q2, uxtw #1]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_shifted_offset_<supf><mode>"))
+ (set_attr "length" "8")])
;;
;; [vldrhq_s, vldrhq_u]
@@ -7514,7 +7751,8 @@
output_asm_insn ("vpst\;vldrht.16\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_fv8hf"))
+ (set_attr "length" "8")])
;;
;; [vldrhq_z_s vldrhq_z_u]
@@ -7537,7 +7775,8 @@
output_asm_insn ("vpst\;vldrht.<supf><V_sz_elem>\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_<supf><mode>"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_f]
@@ -7595,7 +7834,8 @@
output_asm_insn ("vpst\;vldrwt.32\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_fv4sf"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_z_s vldrwq_z_u]
@@ -7615,7 +7855,8 @@
output_asm_insn ("vpst\;vldrwt.32\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_<supf>v4si"))
+ (set_attr "length" "8")])
(define_expand "mve_vld1q_f<mode>"
[(match_operand:MVE_0 0 "s_register_operand")
@@ -7676,7 +7917,8 @@
output_asm_insn ("vpst\n\tvldrdt.u64\t%q0, [%q1, %2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_base_<supf>v2di"))
+ (set_attr "length" "8")])
;;
;; [vldrdq_gather_offset_s vldrdq_gather_offset_u]
@@ -7717,7 +7959,8 @@
output_asm_insn ("vpst\n\tvldrdt.u64\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_offset_<supf>v2di"))
+ (set_attr "length" "8")])
;;
;; [vldrdq_gather_shifted_offset_s vldrdq_gather_shifted_offset_u]
@@ -7758,7 +8001,8 @@
output_asm_insn ("vpst\n\tvldrdt.u64\t%q0, [%m1, %q2, uxtw #3]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_shifted_offset_<supf>v2di"))
+ (set_attr "length" "8")])
;;
;; [vldrhq_gather_offset_f]
@@ -7800,7 +8044,8 @@
output_asm_insn ("vpst\n\tvldrht.f16\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_offset_fv8hf"))
+ (set_attr "length" "8")])
;;
;; [vldrhq_gather_shifted_offset_f]
@@ -7842,7 +8087,8 @@
output_asm_insn ("vpst\n\tvldrht.f16\t%q0, [%m1, %q2, uxtw #1]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_shifted_offset_fv8hf"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_gather_base_f]
@@ -7883,7 +8129,8 @@
output_asm_insn ("vpst\n\tvldrwt.u32\t%q0, [%q1, %2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_fv4sf"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_gather_offset_f]
@@ -7945,7 +8192,8 @@
output_asm_insn ("vpst\n\tvldrwt.u32\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_offset_fv4sf"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_gather_offset_z_s vldrwq_gather_offset_z_u]
@@ -7967,7 +8215,8 @@
output_asm_insn ("vpst\n\tvldrwt.u32\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_offset_<supf>v4si"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_gather_shifted_offset_f]
@@ -8029,7 +8278,8 @@
output_asm_insn ("vpst\n\tvldrwt.u32\t%q0, [%m1, %q2, uxtw #2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_shifted_offset_fv4sf"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_gather_shifted_offset_z_s vldrwq_gather_shifted_offset_z_u]
@@ -8051,7 +8301,8 @@
output_asm_insn ("vpst\n\tvldrwt.u32\t%q0, [%m1, %q2, uxtw #2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_shifted_offset_<supf>v4si"))
+ (set_attr "length" "8")])
;;
;; [vstrhq_f]
@@ -8090,7 +8341,8 @@
output_asm_insn ("vpst\;vstrht.16\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_fv8hf"))
+ (set_attr "length" "8")])
;;
;; [vstrhq_p_s vstrhq_p_u]
@@ -8110,7 +8362,8 @@
output_asm_insn ("vpst\;vstrht.<V_sz_elem>\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_<supf><mode>"))
+ (set_attr "length" "8")])
;;
;; [vstrhq_scatter_offset_p_s vstrhq_scatter_offset_p_u]
@@ -8142,7 +8395,8 @@
VSTRHSOQ))]
"TARGET_HAVE_MVE"
"vpst\;vstrht.<V_sz_elem>\t%q2, [%0, %q1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_offset_<supf><mode>_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrhq_scatter_offset_s vstrhq_scatter_offset_u]
@@ -8202,7 +8456,8 @@
VSTRHSSOQ))]
"TARGET_HAVE_MVE"
"vpst\;vstrht.<V_sz_elem>\t%q2, [%0, %q1, uxtw #1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_shifted_offset_<supf><mode>_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrhq_scatter_shifted_offset_s vstrhq_scatter_shifted_offset_u]
@@ -8289,7 +8544,8 @@
output_asm_insn ("vpst\;vstrwt.32\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_fv4sf"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_p_s vstrwq_p_u]
@@ -8309,7 +8565,8 @@
output_asm_insn ("vpst\;vstrwt.32\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_<supf>v4si"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_s vstrwq_u]
@@ -8371,7 +8628,8 @@
output_asm_insn ("vpst\;\tvstrdt.u64\t%q2, [%q0, %1]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_base_<supf>v2di"))
+ (set_attr "length" "8")])
;;
;; [vstrdq_scatter_base_s vstrdq_scatter_base_u]
@@ -8424,7 +8682,8 @@
VSTRDSOQ))]
"TARGET_HAVE_MVE"
"vpst\;vstrdt.64\t%q2, [%0, %q1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_offset_<supf>v2di_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrdq_scatter_offset_s vstrdq_scatter_offset_u]
@@ -8484,7 +8743,8 @@
VSTRDSSOQ))]
"TARGET_HAVE_MVE"
"vpst\;vstrdt.64\t%q2, [%0, %q1, UXTW #3]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_shifted_offset_<supf>v2di_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrdq_scatter_shifted_offset_s vstrdq_scatter_shifted_offset_u]
@@ -8572,7 +8832,8 @@
VSTRHQSO_F))]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vstrht.16\t%q2, [%0, %q1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_offset_fv8hf_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrhq_scatter_shifted_offset_f]
@@ -8632,7 +8893,8 @@
VSTRHQSSO_F))]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vstrht.16\t%q2, [%0, %q1, uxtw #1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_shifted_offset_fv8hf_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_base_f]
@@ -8677,7 +8939,8 @@
output_asm_insn ("vpst\n\tvstrwt.u32\t%q2, [%q0, %1]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_fv4sf"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_offset_f]
@@ -8736,7 +8999,8 @@
VSTRWQSO_F))]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vstrwt.32\t%q2, [%0, %q1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_offset_fv4sf_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_offset_s vstrwq_scatter_offset_u]
@@ -8767,7 +9031,8 @@
VSTRWSOQ))]
"TARGET_HAVE_MVE"
"vpst\;vstrwt.32\t%q2, [%0, %q1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_offset_<supf>v4si_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_offset_s vstrwq_scatter_offset_u]
@@ -8855,7 +9120,8 @@
VSTRWQSSO_F))]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vstrwt.32\t%q2, [%0, %q1, uxtw #2]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_shifted_offset_fv4sf_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_shifted_offset_p_s vstrwq_scatter_shifted_offset_p_u]
@@ -8887,7 +9153,8 @@
VSTRWSSOQ))]
"TARGET_HAVE_MVE"
"vpst\;vstrwt.32\t%q2, [%0, %q1, uxtw #2]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_shifted_offset_<supf>v4si_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_shifted_offset_s vstrwq_scatter_shifted_offset_u]
@@ -9012,7 +9279,8 @@
(match_operand:SI 6 "immediate_operand" "i")))]
"TARGET_HAVE_MVE"
"vpst\;\tvidupt.u%#<V_sz_elem>\t%q0, %2, %4"
- [(set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vidupq_u<mode>_insn"))
+ (set_attr "length""8")])
;;
;; [vddupq_n_u])
@@ -9080,7 +9348,8 @@
(match_operand:SI 6 "immediate_operand" "i")))]
"TARGET_HAVE_MVE"
"vpst\;\tvddupt.u%#<V_sz_elem>\t%q0, %2, %4"
- [(set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vddupq_u<mode>_insn"))
+ (set_attr "length""8")])
;;
;; [vdwdupq_n_u])
@@ -9196,7 +9465,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;\tvdwdupt.u%#<V_sz_elem>\t%q2, %3, %R4, %5"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vdwdupq_wb_u<mode>_insn"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -9313,7 +9583,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;\tviwdupt.u%#<V_sz_elem>\t%q2, %3, %R4, %5"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_viwdupq_wb_u<mode>_insn"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -9365,7 +9636,8 @@
output_asm_insn ("vpst\;\tvstrwt.u32\t%q2, [%q0, %1]!",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_wb_<supf>v4si"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_base_wb_f]
@@ -9416,7 +9688,8 @@
output_asm_insn ("vpst\;\tvstrwt.u32\t%q2, [%q0, %1]!",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_wb_fv4sf"))
+ (set_attr "length" "8")])
;;
;; [vstrdq_scatter_base_wb_s vstrdq_scatter_base_wb_u]
@@ -9467,7 +9740,8 @@
output_asm_insn ("vpst;vstrdt.u64\t%q2, [%q0, %1]!",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_base_wb_<supf>v2di"))
+ (set_attr "length" "8")])
(define_expand "mve_vldrwq_gather_base_wb_<supf>v4si"
[(match_operand:V4SI 0 "s_register_operand")
@@ -9575,7 +9849,8 @@
output_asm_insn ("vpst\;vldrwt.u32\t%q0, [%q1, %2]!",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_wb_<supf>v4si_insn"))
+ (set_attr "length" "8")])
(define_expand "mve_vldrwq_gather_base_wb_fv4sf"
[(match_operand:V4SI 0 "s_register_operand")
@@ -9684,7 +9959,8 @@
output_asm_insn ("vpst\;vldrwt.u32\t%q0, [%q1, %2]!",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_wb_fv4sf_insn"))
+ (set_attr "length" "8")])
(define_expand "mve_vldrdq_gather_base_wb_<supf>v2di"
[(match_operand:V2DI 0 "s_register_operand")
@@ -9809,7 +10085,8 @@
output_asm_insn ("vpst\;vldrdt.u64\t%q0, [%q1, %2]!",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_base_wb_<supf>v2di_insn"))
+ (set_attr "length" "8")])
;;
;; [vadciq_m_s, vadciq_m_u])
;;
@@ -9826,7 +10103,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vadcit.i32\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vadciq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length" "8")])
;;
@@ -9862,7 +10140,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vadct.i32\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vadcq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length" "8")])
;;
@@ -9899,7 +10178,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vsbcit.i32\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vsbciq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length" "8")])
;;
@@ -9935,7 +10215,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vsbct.i32\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vsbcq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length" "8")])
;;
@@ -10352,7 +10633,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vshlct\t%q0, %1, %4"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vshlcq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length" "8")])
;; CDE instructions on MVE registers.
@@ -10435,7 +10717,8 @@
CDE_VCX))]
"TARGET_CDE && TARGET_HAVE_MVE"
"vpst\;vcx1<a>t\\tp%c1, %q0, #%c3"
- [(set_attr "type" "coproc")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx1q<a>v16qi"))
+ (set_attr "type" "coproc")
(set_attr "length" "8")]
)
@@ -10449,7 +10732,8 @@
CDE_VCX))]
"TARGET_CDE && TARGET_HAVE_MVE"
"vpst\;vcx2<a>t\\tp%c1, %q0, %q3, #%c4"
- [(set_attr "type" "coproc")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx2q<a>v16qi"))
+ (set_attr "type" "coproc")
(set_attr "length" "8")]
)
@@ -10464,7 +10748,8 @@
CDE_VCX))]
"TARGET_CDE && TARGET_HAVE_MVE"
"vpst\;vcx3<a>t\\tp%c1, %q0, %q3, %q4, #%c5"
- [(set_attr "type" "coproc")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx3q<a>v16qi"))
+ (set_attr "type" "coproc")
(set_attr "length" "8")]
)
diff --git a/gcc/testsuite/gcc.target/arm/dlstp-compile-asm.c b/gcc/testsuite/gcc.target/arm/dlstp-compile-asm.c
new file mode 100644
index 0000000000000000000000000000000000000000..ec6ee774cbda6604c4c24b57cd4d5d3bd08e07cd
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/dlstp-compile-asm.c
@@ -0,0 +1,82 @@
+/* { dg-do compile { target { arm*-*-* } } } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-skip-if "avoid conflicting multilib options" { *-*-* } { "-marm" "-mcpu=*" } } */
+/* { dg-options "-march=armv8.1-m.main+fp.dp+mve.fp -O3" } */
+
+#include <arm_mve.h>
+
+#define IMM 5
+
+#define TEST_COMPILE_IN_DLSTP_TERNARY(BITS, LANES, LDRSTRYTPE, TYPE, SIGN, NAME, PRED) \
+void test_##NAME##PRED##_##SIGN##BITS (TYPE##BITS##_t *a, TYPE##BITS##_t *b, TYPE##BITS##_t *c, int n) \
+{ \
+ while (n > 0) \
+ { \
+ mve_pred16_t p = vctp##BITS##q (n); \
+ TYPE##BITS##x##LANES##_t va = vldr##LDRSTRYTPE##q_z_##SIGN##BITS (a, p); \
+ TYPE##BITS##x##LANES##_t vb = vldr##LDRSTRYTPE##q_z_##SIGN##BITS (b, p); \
+ TYPE##BITS##x##LANES##_t vc = NAME##PRED##_##SIGN##BITS (va, vb, p); \
+ vstr##LDRSTRYTPE##q_p_##SIGN##BITS (c, vc, p); \
+ c += LANES; \
+ a += LANES; \
+ b += LANES; \
+ n -= LANES; \
+ } \
+}
+
+#define TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY(BITS, LANES, LDRSTRYTPE, NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_TERNARY (BITS, LANES, LDRSTRYTPE, int, s, NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_TERNARY (BITS, LANES, LDRSTRYTPE, uint, u, NAME, PRED)
+
+#define TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY(NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY (8, 16, b, NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY (16, 8, h, NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY (32, 4, w, NAME, PRED)
+
+
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY (vaddq, _x)
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY (vorrq, _x)
+
+
+#define TEST_COMPILE_IN_DLSTP_TERNARY_N(BITS, LANES, LDRSTRYTPE, TYPE, SIGN, NAME, PRED) \
+void test_##NAME##PRED##_n_##SIGN##BITS (TYPE##BITS##_t *a, TYPE##BITS##_t *c, int n) \
+{ \
+ while (n > 0) \
+ { \
+ mve_pred16_t p = vctp##BITS##q (n); \
+ TYPE##BITS##x##LANES##_t va = vldr##LDRSTRYTPE##q_z_##SIGN##BITS (a, p); \
+ TYPE##BITS##x##LANES##_t vc = NAME##PRED##_n_##SIGN##BITS (va, IMM, p); \
+ vstr##LDRSTRYTPE##q_p_##SIGN##BITS (c, vc, p); \
+ c += LANES; \
+ a += LANES; \
+ n -= LANES; \
+ } \
+}
+
+#define TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY_N(BITS, LANES, LDRSTRYTPE, NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_TERNARY_N (BITS, LANES, LDRSTRYTPE, int, s, NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_TERNARY_N (BITS, LANES, LDRSTRYTPE, uint, u, NAME, PRED)
+
+#define TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_N(NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY_N (8, 16, b, NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY_N (16, 8, h, NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY_N (32, 4, w, NAME, PRED)
+
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_N (vaddq, _x)
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_N (vmulq, _x)
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_N (vsubq, _x)
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_N (vhaddq, _x)
+
+
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_N (vbrsrq, _x)
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_N (vshlq, _x)
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_N (vshrq, _x)
+
+
+/* The final number of DLSTPs currently is calculated by the number of
+ `TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY.*` macros * 6. */
+/* { dg-final { scan-assembler-times {\tdlstp} 54 } } */
+/* { dg-final { scan-assembler-times {\tletp} 54 } } */
+/* { dg-final { scan-assembler-not "\tvctp\t" } } */
+/* { dg-final { scan-assembler-not "\tvpst\t" } } */
+/* { dg-final { scan-assembler-not "P0" } } */
\ No newline at end of file
[-- Attachment #2: rb16364.patch --]
[-- Type: text/x-patch, Size: 89194 bytes --]
diff --git a/gcc/config/arm/arm.md b/gcc/config/arm/arm.md
index 69bf343fb0ed601014979cfc1803abe84c87f179..e1d2e62593085accfcc111cf6fa5795e4520f213 100644
--- a/gcc/config/arm/arm.md
+++ b/gcc/config/arm/arm.md
@@ -123,6 +123,8 @@
; and not all ARM insns do.
(define_attr "predicated" "yes,no" (const_string "no"))
+(define_attr "mve_unpredicated_insn" "" (const_int 0))
+
; LENGTH of an instruction (in bytes)
(define_attr "length" ""
(const_int 4))
diff --git a/gcc/config/arm/mve.md b/gcc/config/arm/mve.md
index 62186f124da183fe1b1eb57a1aea1e8fff680a22..b1c8c1c569f31a6cb1bfdc16394047f02d6cddf4 100644
--- a/gcc/config/arm/mve.md
+++ b/gcc/config/arm/mve.md
@@ -142,7 +142,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vrintzt.f%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vrndq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -818,7 +819,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vaddlvt.<supf>32 %Q0, %R0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vaddlvq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -910,7 +912,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vaddvt.<supf>%#<V_sz_elem> %0, %q1"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vaddvq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2560,7 +2563,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vbict.i%#<V_sz_elem> %q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vbicq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
;; [vcmpeqq_m_f])
@@ -2575,7 +2579,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmpt.f%#<V_sz_elem> eq, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmpeqq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
;; [vcvtaq_m_u, vcvtaq_m_s])
@@ -2590,7 +2595,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtat.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtaq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
;; [vcvtq_m_to_f_s, vcvtq_m_to_f_u])
@@ -2605,7 +2611,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtt.f%#<V_sz_elem>.<supf>%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtq_to_f_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
;; [vqrshrnbq_n_u, vqrshrnbq_n_s])
@@ -2727,7 +2734,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vabst.s%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vabsq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2743,7 +2751,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vaddvat.<supf>%#<V_sz_elem> %0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vaddvaq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2759,7 +2768,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vclst.s%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vclsq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2775,7 +2785,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vclzt.i%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vclzq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2791,7 +2802,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vcmpt.u%#<V_sz_elem> cs, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmpcsq_n_<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2807,7 +2819,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vcmpt.u%#<V_sz_elem> cs, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmpcsq_<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2823,7 +2836,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vcmpt.i%#<V_sz_elem> eq, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmpeqq_n_<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2839,7 +2853,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vcmpt.i%#<V_sz_elem> eq, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmpeqq_<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2855,7 +2870,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vcmpt.s%#<V_sz_elem> ge, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmpgeq_n_<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2871,7 +2887,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vcmpt.s%#<V_sz_elem> ge, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmpgeq_<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2887,7 +2904,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vcmpt.s%#<V_sz_elem> gt, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmpgtq_n_<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2903,7 +2921,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vcmpt.s%#<V_sz_elem> gt, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmpgtq_<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2919,7 +2938,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vcmpt.u%#<V_sz_elem> hi, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmphiq_n_<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2935,7 +2955,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vcmpt.u%#<V_sz_elem> hi, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmphiq_<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2951,7 +2972,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vcmpt.s%#<V_sz_elem> le, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmpleq_n_<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2967,7 +2989,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vcmpt.s%#<V_sz_elem> le, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmpleq_<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2983,7 +3006,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vcmpt.s%#<V_sz_elem> lt, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmpltq_n_<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -2999,7 +3023,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vcmpt.s%#<V_sz_elem> lt, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmpltq_<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3015,7 +3040,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vcmpt.i%#<V_sz_elem> ne, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmpneq_n_<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3031,7 +3057,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vcmpt.i%#<V_sz_elem> ne, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmpneq_<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3047,7 +3074,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vdupt.%#<V_sz_elem> %q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vdupq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3063,7 +3091,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmaxat.s%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmaxaq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3079,7 +3108,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmaxavt.s%#<V_sz_elem> %0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmaxavq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3095,7 +3125,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmaxvt.<supf>%#<V_sz_elem> %0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmaxvq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3111,7 +3142,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vminat.s%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vminaq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3127,7 +3159,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vminavt.s%#<V_sz_elem> %0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vminavq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3143,7 +3176,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vminvt.<supf>%#<V_sz_elem>\t%0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vminvq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3175,7 +3209,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmladavt.<supf>%#<V_sz_elem>\t%0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmladavq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3191,7 +3226,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmladavxt.s%#<V_sz_elem>\t%0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmladavxq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3239,7 +3275,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmlsdavt.s%#<V_sz_elem> %0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmlsdavq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3255,7 +3292,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmlsdavxt.s%#<V_sz_elem> %0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmlsdavxq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3271,7 +3309,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmvnt %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmvnq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3287,7 +3326,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vnegt.s%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vnegq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3319,7 +3359,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqabst.s%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqabsq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3367,7 +3408,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqnegt.s%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqnegq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3479,7 +3521,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqrshlt.<supf>%#<V_sz_elem> %q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqrshlq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3495,7 +3538,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqshlt.<supf>%#<V_sz_elem>\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqshlq_r_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3511,7 +3555,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vrev64t.%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vrev64q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3527,7 +3572,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vrshlt.<supf>%#<V_sz_elem>\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vrshlq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3543,7 +3589,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vshlt.<supf>%#<V_sz_elem>\t%q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vshlq_r_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3702,7 +3749,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vabst.f%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vabsq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3718,7 +3766,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vaddlvat.<supf>32 %Q0, %R0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vaddlvaq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
;; [vcmlaq, vcmlaq_rot90, vcmlaq_rot180, vcmlaq_rot270])
@@ -3752,7 +3801,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmpt.f%#<V_sz_elem> eq, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmpeqq_n_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3768,7 +3818,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmpt.f%#<V_sz_elem> ge, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmpgeq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3784,7 +3835,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmpt.f%#<V_sz_elem> ge, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmpgeq_n_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3800,7 +3852,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmpt.f%#<V_sz_elem> gt, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmpgtq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3816,7 +3869,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmpt.f%#<V_sz_elem> gt, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmpgtq_n_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3832,7 +3886,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmpt.f%#<V_sz_elem> le, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmpleq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3848,7 +3903,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmpt.f%#<V_sz_elem> le, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmpleq_n_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3864,7 +3920,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmpt.f%#<V_sz_elem> lt, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmpltq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3880,7 +3937,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmpt.f%#<V_sz_elem> lt, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmpltq_n_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3896,7 +3954,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmpt.f%#<V_sz_elem> ne, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmpneq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3912,7 +3971,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmpt.f%#<V_sz_elem> ne, %q1, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmpneq_n_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3928,7 +3988,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtbt.f16.f32 %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtbq_f16_f32v8hf"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3944,7 +4005,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtbt.f32.f16 %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtbq_f32_f16v4sf"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3960,7 +4022,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvttt.f16.f32 %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvttq_f16_f32v8hf"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3976,7 +4039,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvttt.f32.f16 %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvttq_f32_f16v4sf"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -3992,7 +4056,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vdupt.%#<V_sz_elem> %q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vdupq_n_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4071,7 +4136,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vmaxnmat.f%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmaxnmaq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
;; [vmaxnmavq_p_f])
@@ -4086,7 +4152,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vmaxnmavt.f%#<V_sz_elem> %0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmaxnmavq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4102,7 +4169,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vmaxnmvt.f%#<V_sz_elem> %0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmaxnmvq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
;; [vminnmaq_m_f])
@@ -4117,7 +4185,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vminnmat.f%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vminnmaq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4133,7 +4202,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vminnmavt.f%#<V_sz_elem> %0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vminnmavq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
;; [vminnmvq_p_f])
@@ -4148,7 +4218,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vminnmvt.f%#<V_sz_elem> %0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vminnmvq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4196,7 +4267,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmlaldavt.<supf>%#<V_sz_elem> %Q0, %R0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmlaldavq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4212,7 +4284,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmlaldavxt.s%#<V_sz_elem>\t%Q0, %R0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmlaldavxq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
;; [vmlsldavaq_s])
@@ -4259,7 +4332,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmlsldavt.s%#<V_sz_elem> %Q0, %R0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmlsldavq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4275,7 +4349,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmlsldavxt.s%#<V_sz_elem> %Q0, %R0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmlsldavxq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
;; [vmovlbq_m_u, vmovlbq_m_s])
@@ -4290,7 +4365,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmovlbt.<supf>%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmovlbq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
;; [vmovltq_m_u, vmovltq_m_s])
@@ -4305,7 +4381,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmovltt.<supf>%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmovltq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
;; [vmovnbq_m_u, vmovnbq_m_s])
@@ -4320,7 +4397,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmovnbt.i%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmovnbq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4336,7 +4414,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmovntt.i%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmovntq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4352,7 +4431,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmvnt.i%#<V_sz_elem> %q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmvnq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
;; [vnegq_m_f])
@@ -4367,7 +4447,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vnegt.f%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vnegq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4383,7 +4464,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vorrt.i%#<V_sz_elem> %q0, %2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vorrq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
;; [vpselq_f])
@@ -4414,7 +4496,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqmovnbt.<supf>%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqmovnbq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4430,7 +4513,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqmovntt.<supf>%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqmovntq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4446,7 +4530,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqmovunbt.s%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqmovunbq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4462,7 +4547,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqmovuntt.s%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqmovuntq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4574,7 +4660,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vrev32t.16 %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vrev32q_fv8hf"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4590,7 +4677,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vrev32t.%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vrev32q_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4606,7 +4694,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vrev64t.%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vrev64q_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4638,7 +4727,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vrmlaldavhxt.s32 %Q0, %R0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vrmlaldavhxq_sv4si"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4670,7 +4760,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vrmlsldavht.s32 %Q0, %R0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vrmlsldavhq_sv4si"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4686,7 +4777,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vrmlsldavhxt.s32 %Q0, %R0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vrmlsldavhxq_sv4si"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4702,7 +4794,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vrintat.f%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vrndaq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4718,7 +4811,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vrintmt.f%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vrndmq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4734,7 +4828,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vrintnt.f%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vrndnq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4750,7 +4845,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vrintpt.f%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vrndpq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4766,7 +4862,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vrintxt.f%#<V_sz_elem> %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vrndxq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4846,7 +4943,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtmt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtmq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4862,7 +4960,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtpt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtpq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4878,7 +4977,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtnt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtnq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4895,7 +4995,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtq_n_from_f_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4911,7 +5012,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vrev16t.8 %q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vrev16q_<supf>v16qi"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4927,7 +5029,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtt.<supf>%#<V_sz_elem>.f%#<V_sz_elem>\t%q0, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtq_from_f_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4943,7 +5046,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vrmlaldavht.<supf>32 %Q0, %R0, %q1, %q2"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vrmlaldavhq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -4976,7 +5080,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vabavt.<supf>%#<V_sz_elem>\t%0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vabavq_<supf><mode>"))
+ (set_attr "type" "mve_move")
])
;;
@@ -4993,7 +5098,8 @@
]
"TARGET_HAVE_MVE"
"vpst\n\tvqshlut.s%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqshluq_n_s<mode>"))
+ (set_attr "type" "mve_move")])
;;
;; [vshlq_m_s, vshlq_m_u])
@@ -5009,7 +5115,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vshlt.<supf>%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vshlq_<supf><mode>"))
+ (set_attr "type" "mve_move")])
;;
;; [vsriq_m_n_s, vsriq_m_n_u])
@@ -5025,7 +5132,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vsrit.%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vsriq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")])
;;
;; [vsubq_m_u, vsubq_m_s])
@@ -5041,7 +5149,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vsubt.i%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vsubq_<supf><mode>"))
+ (set_attr "type" "mve_move")])
;;
;; [vcvtq_m_n_to_f_u, vcvtq_m_n_to_f_s])
@@ -5057,7 +5166,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcvtt.f%#<V_sz_elem>.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcvtq_n_to_f_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
;; [vabdq_m_s, vabdq_m_u])
@@ -5073,7 +5183,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vabdt.<supf>%#<V_sz_elem> %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vabdq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5090,7 +5201,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vaddt.i%#<V_sz_elem> %q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vaddq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5107,7 +5219,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vaddt.i%#<V_sz_elem> %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vaddq<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5124,7 +5237,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vandt %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vandq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5141,7 +5255,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vbict %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vbicq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5158,7 +5273,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vbrsrt.%#<V_sz_elem> %q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vbrsrq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5175,7 +5291,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vcaddt.i%#<V_sz_elem> %q0, %q2, %q3, #270"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcaddq_rot270<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5192,7 +5309,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vcaddt.i%#<V_sz_elem> %q0, %q2, %q3, #90"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcaddq_rot90<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5209,7 +5327,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;veort %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_veorq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5226,7 +5345,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vhaddt.<supf>%#<V_sz_elem> %q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vhaddq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5243,7 +5363,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vhaddt.<supf>%#<V_sz_elem> %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vhaddq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5260,7 +5381,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vhsubt.<supf>%#<V_sz_elem> %q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vhsubq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5277,7 +5399,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vhsubt.<supf>%#<V_sz_elem> %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vhsubq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5294,7 +5417,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmaxt.<supf>%#<V_sz_elem> %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmaxq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5311,7 +5435,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmint.<supf>%#<V_sz_elem> %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vminq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5328,7 +5453,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmladavat.<supf>%#<V_sz_elem> %0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmladavaq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5345,7 +5471,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmlat.<supf>%#<V_sz_elem> %q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmlaq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5362,7 +5489,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmlast.<supf>%#<V_sz_elem> %q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmlasq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5379,7 +5507,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmulht.<supf>%#<V_sz_elem> %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmulhq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5396,7 +5525,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmullbt.<supf>%#<V_sz_elem> %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmullbq_int_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5413,7 +5543,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmulltt.<supf>%#<V_sz_elem> %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmulltq_int_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5430,7 +5561,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmult.i%#<V_sz_elem> %q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmulq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5447,7 +5579,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmult.i%#<V_sz_elem> %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmulq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5464,7 +5597,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vornt %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vornq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5481,7 +5615,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vorrt %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vorrq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5498,7 +5633,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqaddt.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqaddq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5515,7 +5651,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqaddt.<supf>%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqaddq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5532,7 +5669,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqdmlaht.s%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqdmlahq_n_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5549,7 +5687,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqdmlasht.s%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqdmlashq_n_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5566,7 +5705,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqrdmlaht.s%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqrdmlahq_n_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5583,7 +5723,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqrdmlasht.s%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqrdmlashq_n_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5600,7 +5741,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqrshlt.<supf>%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqrshlq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5617,7 +5759,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqshlt.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqshlq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5634,7 +5777,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqshlt.<supf>%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqshlq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5651,7 +5795,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqsubt.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqsubq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5668,7 +5813,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqsubt.<supf>%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqsubq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5685,7 +5831,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vrhaddt.<supf>%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vrhaddq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5702,7 +5849,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vrmulht.<supf>%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vrmulhq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5719,7 +5867,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vrshlt.<supf>%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vrshlq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5736,7 +5885,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vrshrt.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vrshrq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5753,7 +5903,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vshlt.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vshlq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5770,7 +5921,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vshrt.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vshrq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5787,7 +5939,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vslit.%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vsliq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5804,7 +5957,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vsubt.i%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vsubq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5821,7 +5975,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vhcaddt.s%#<V_sz_elem>\t%q0, %q2, %q3, #270"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vhcaddq_rot270_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5838,7 +5993,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vhcaddt.s%#<V_sz_elem>\t%q0, %q2, %q3, #90"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vhcaddq_rot90_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5855,7 +6011,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmladavaxt.s%#<V_sz_elem>\t%0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmladavaxq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5872,7 +6029,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmlsdavat.s%#<V_sz_elem>\t%0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmlsdavaq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5889,7 +6047,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmlsdavaxt.s%#<V_sz_elem>\t%0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmlsdavaxq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5906,7 +6065,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqdmladht.s%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqdmladhq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5923,7 +6083,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqdmladhxt.s%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqdmladhxq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5940,7 +6101,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqdmlsdht.s%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqdmlsdhq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5957,7 +6119,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqdmlsdhxt.s%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqdmlsdhxq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5974,7 +6137,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqdmulht.s%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqdmulhq_n_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -5991,7 +6155,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqdmulht.s%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqdmulhq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6008,7 +6173,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqrdmladht.s%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqrdmladhq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6025,7 +6191,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqrdmladhxt.s%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqrdmladhxq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6042,7 +6209,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqrdmlsdht.s%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqrdmlsdhq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6059,7 +6227,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqrdmlsdhxt.s%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqrdmlsdhxq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6076,7 +6245,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqrdmulht.s%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqrdmulhq_n_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6093,7 +6263,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqrdmulht.s%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqrdmulhq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6110,7 +6281,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmlaldavat.<supf>%#<V_sz_elem> %Q0, %R0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmlaldavaq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6127,7 +6299,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmlaldavaxt.<supf>%#<V_sz_elem> %Q0, %R0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmlaldavaxq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6144,7 +6317,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqrshrnbt.<supf>%#<V_sz_elem> %q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqrshrnbq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6161,7 +6335,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqrshrntt.<supf>%#<V_sz_elem> %q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqrshrntq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6178,7 +6353,8 @@
]
"TARGET_HAVE_MVE"
"vpst\n\tvqshrnbt.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqshrnbq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6195,7 +6371,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqshrntt.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqshrntq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6212,7 +6389,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vrmlaldavhat.s32\t%Q0, %R0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vrmlaldavhaq_sv4si"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6229,7 +6407,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vrshrnbt.i%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vrshrnbq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6246,7 +6425,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vrshrntt.i%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vrshrntq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6263,7 +6443,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vshllbt.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vshllbq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6280,7 +6461,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vshlltt.<supf>%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vshlltq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6297,7 +6479,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vshrnbt.i%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vshrnbq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6314,7 +6497,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vshrntt.i%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vshrntq_n_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6331,7 +6515,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmlsldavat.s%#<V_sz_elem>\t%Q0, %R0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmlsldavaq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6348,7 +6533,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmlsldavaxt.s%#<V_sz_elem>\t%Q0, %R0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmlsldavaxq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6365,7 +6551,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmullbt.p%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmullbq_poly_p<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6382,7 +6569,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vmulltt.p%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmulltq_poly_p<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6399,7 +6587,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqdmullbt.s%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqdmullbq_n_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6416,7 +6605,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqdmullbt.s%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqdmullbq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6433,7 +6623,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqdmulltt.s%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqdmulltq_n_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6450,7 +6641,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqdmulltt.s%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqdmulltq_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6467,7 +6659,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqrshrunbt.s%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqrshrunbq_n_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6484,7 +6677,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqrshruntt.s%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqrshruntq_n_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6501,7 +6695,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqshrunbt.s%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqshrunbq_n_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6518,7 +6713,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vqshruntt.s%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vqshruntq_n_s<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6535,7 +6731,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vrmlaldavhat.u32\t%Q0, %R0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vrmlaldavhaq_uv4si"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6552,7 +6749,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vrmlaldavhaxt.s32\t%Q0, %R0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vrmlaldavhaxq_sv4si"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6569,7 +6767,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vrmlsldavhat.s32\t%Q0, %R0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vrmlsldavhaq_sv4si"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6586,7 +6785,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vrmlsldavhaxt.s32\t%Q0, %R0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vrmlsldavhaxq_sv4si"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
;; [vabdq_m_f])
@@ -6602,7 +6802,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vabdt.f%#<V_sz_elem> %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vabdq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6619,7 +6820,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vaddt.f%#<V_sz_elem> %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vaddq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6636,7 +6838,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vaddt.f%#<V_sz_elem> %q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vaddq_n_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6653,7 +6856,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vandt %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vandq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6670,7 +6874,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vbict %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vbicq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6687,7 +6892,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vbrsrt.%#<V_sz_elem> %q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vbrsrq_n_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6704,7 +6910,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcaddt.f%#<V_sz_elem> %q0, %q2, %q3, #270"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcaddq_rot270<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6721,7 +6928,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcaddt.f%#<V_sz_elem> %q0, %q2, %q3, #90"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcaddq_rot90<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6738,7 +6946,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmlat.f%#<V_sz_elem> %q0, %q2, %q3, #0"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmlaq<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6755,7 +6964,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmlat.f%#<V_sz_elem> %q0, %q2, %q3, #180"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmlaq_rot180<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6772,7 +6982,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmlat.f%#<V_sz_elem> %q0, %q2, %q3, #270"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmlaq_rot270<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6789,7 +7000,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmlat.f%#<V_sz_elem> %q0, %q2, %q3, #90"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmlaq_rot90<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6806,7 +7018,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmult.f%#<V_sz_elem> %q0, %q2, %q3, #0"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmulq<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6823,7 +7036,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmult.f%#<V_sz_elem> %q0, %q2, %q3, #180"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmulq_rot180<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6840,7 +7054,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmult.f%#<V_sz_elem> %q0, %q2, %q3, #270"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmulq_rot270<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6857,7 +7072,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vcmult.f%#<V_sz_elem> %q0, %q2, %q3, #90"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vcmulq_rot90<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6874,7 +7090,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;veort %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_veorq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6891,7 +7108,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vfmat.f%#<V_sz_elem> %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vfmaq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6908,7 +7126,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vfmat.f%#<V_sz_elem> %q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vfmaq_n_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6925,7 +7144,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vfmast.f%#<V_sz_elem> %q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vfmasq_n_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6942,7 +7162,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vfmst.f%#<V_sz_elem> %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vfmsq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6959,7 +7180,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vmaxnmt.f%#<V_sz_elem> %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmaxnmq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6976,7 +7198,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vminnmt.f%#<V_sz_elem> %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vminnmq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -6993,7 +7216,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vmult.f%#<V_sz_elem> %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmulq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -7010,7 +7234,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vmult.f%#<V_sz_elem> %q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vmulq_n_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -7027,7 +7252,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vornt %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vornq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -7044,7 +7270,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vorrt %q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vorrq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -7061,7 +7288,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vsubt.f%#<V_sz_elem>\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vsubq_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -7078,7 +7306,8 @@
]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vsubt.f%#<V_sz_elem>\t%q0, %q2, %3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vsubq_n_f<mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -7245,7 +7474,8 @@
VSTRBSOQ))]
"TARGET_HAVE_MVE"
"vpst\;vstrbt.<V_sz_elem>\t%q2, [%0, %q1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrbq_scatter_offset_<supf><mode>_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_base_p_s vstrwq_scatter_base_p_u]
@@ -7268,7 +7498,8 @@
output_asm_insn ("vpst\n\tvstrwt.u32\t%q2, [%q0, %1]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_<supf>v4si"))
+ (set_attr "length" "8")])
;;
;; [vstrbq_p_s vstrbq_p_u]
@@ -7288,7 +7519,8 @@
output_asm_insn ("vpst\;vstrbt.<V_sz_elem>\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrbq_<supf><mode>"))
+ (set_attr "length" "8")])
;;
;; [vldrbq_gather_offset_z_s vldrbq_gather_offset_z_u]
@@ -7313,7 +7545,8 @@
output_asm_insn ("vpst\n\tvldrbt.<supf><V_sz_elem>\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrbq_gather_offset_<supf><mode>"))
+ (set_attr "length" "8")])
;;
;; [vldrbq_z_s vldrbq_z_u]
@@ -7336,7 +7569,8 @@
output_asm_insn ("vpst\;vldrbt.<supf><V_sz_elem>\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrbq_<supf><mode>"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_gather_base_z_s vldrwq_gather_base_z_u]
@@ -7357,7 +7591,8 @@
output_asm_insn ("vpst\n\tvldrwt.u32\t%q0, [%q1, %2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_<supf>v4si"))
+ (set_attr "length" "8")])
;;
;; [vldrhq_f]
@@ -7424,7 +7659,8 @@
output_asm_insn ("vpst\n\tvldrht.<supf><V_sz_elem>\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_offset_<supf><mode>"))
+ (set_attr "length" "8")])
;;
;; [vldrhq_gather_shifted_offset_s vldrhq_gather_shifted_offset_u]
@@ -7472,7 +7708,8 @@
output_asm_insn ("vpst\n\tvldrht.<supf><V_sz_elem>\t%q0, [%m1, %q2, uxtw #1]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_shifted_offset_<supf><mode>"))
+ (set_attr "length" "8")])
;;
;; [vldrhq_s, vldrhq_u]
@@ -7514,7 +7751,8 @@
output_asm_insn ("vpst\;vldrht.16\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_fv8hf"))
+ (set_attr "length" "8")])
;;
;; [vldrhq_z_s vldrhq_z_u]
@@ -7537,7 +7775,8 @@
output_asm_insn ("vpst\;vldrht.<supf><V_sz_elem>\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_<supf><mode>"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_f]
@@ -7595,7 +7834,8 @@
output_asm_insn ("vpst\;vldrwt.32\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_fv4sf"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_z_s vldrwq_z_u]
@@ -7615,7 +7855,8 @@
output_asm_insn ("vpst\;vldrwt.32\t%q0, %E1",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_<supf>v4si"))
+ (set_attr "length" "8")])
(define_expand "mve_vld1q_f<mode>"
[(match_operand:MVE_0 0 "s_register_operand")
@@ -7676,7 +7917,8 @@
output_asm_insn ("vpst\n\tvldrdt.u64\t%q0, [%q1, %2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_base_<supf>v2di"))
+ (set_attr "length" "8")])
;;
;; [vldrdq_gather_offset_s vldrdq_gather_offset_u]
@@ -7717,7 +7959,8 @@
output_asm_insn ("vpst\n\tvldrdt.u64\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_offset_<supf>v2di"))
+ (set_attr "length" "8")])
;;
;; [vldrdq_gather_shifted_offset_s vldrdq_gather_shifted_offset_u]
@@ -7758,7 +8001,8 @@
output_asm_insn ("vpst\n\tvldrdt.u64\t%q0, [%m1, %q2, uxtw #3]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_shifted_offset_<supf>v2di"))
+ (set_attr "length" "8")])
;;
;; [vldrhq_gather_offset_f]
@@ -7800,7 +8044,8 @@
output_asm_insn ("vpst\n\tvldrht.f16\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_offset_fv8hf"))
+ (set_attr "length" "8")])
;;
;; [vldrhq_gather_shifted_offset_f]
@@ -7842,7 +8087,8 @@
output_asm_insn ("vpst\n\tvldrht.f16\t%q0, [%m1, %q2, uxtw #1]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrhq_gather_shifted_offset_fv8hf"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_gather_base_f]
@@ -7883,7 +8129,8 @@
output_asm_insn ("vpst\n\tvldrwt.u32\t%q0, [%q1, %2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_fv4sf"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_gather_offset_f]
@@ -7945,7 +8192,8 @@
output_asm_insn ("vpst\n\tvldrwt.u32\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_offset_fv4sf"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_gather_offset_z_s vldrwq_gather_offset_z_u]
@@ -7967,7 +8215,8 @@
output_asm_insn ("vpst\n\tvldrwt.u32\t%q0, [%m1, %q2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_offset_<supf>v4si"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_gather_shifted_offset_f]
@@ -8029,7 +8278,8 @@
output_asm_insn ("vpst\n\tvldrwt.u32\t%q0, [%m1, %q2, uxtw #2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_shifted_offset_fv4sf"))
+ (set_attr "length" "8")])
;;
;; [vldrwq_gather_shifted_offset_z_s vldrwq_gather_shifted_offset_z_u]
@@ -8051,7 +8301,8 @@
output_asm_insn ("vpst\n\tvldrwt.u32\t%q0, [%m1, %q2, uxtw #2]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_shifted_offset_<supf>v4si"))
+ (set_attr "length" "8")])
;;
;; [vstrhq_f]
@@ -8090,7 +8341,8 @@
output_asm_insn ("vpst\;vstrht.16\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_fv8hf"))
+ (set_attr "length" "8")])
;;
;; [vstrhq_p_s vstrhq_p_u]
@@ -8110,7 +8362,8 @@
output_asm_insn ("vpst\;vstrht.<V_sz_elem>\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_<supf><mode>"))
+ (set_attr "length" "8")])
;;
;; [vstrhq_scatter_offset_p_s vstrhq_scatter_offset_p_u]
@@ -8142,7 +8395,8 @@
VSTRHSOQ))]
"TARGET_HAVE_MVE"
"vpst\;vstrht.<V_sz_elem>\t%q2, [%0, %q1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_offset_<supf><mode>_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrhq_scatter_offset_s vstrhq_scatter_offset_u]
@@ -8202,7 +8456,8 @@
VSTRHSSOQ))]
"TARGET_HAVE_MVE"
"vpst\;vstrht.<V_sz_elem>\t%q2, [%0, %q1, uxtw #1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_shifted_offset_<supf><mode>_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrhq_scatter_shifted_offset_s vstrhq_scatter_shifted_offset_u]
@@ -8289,7 +8544,8 @@
output_asm_insn ("vpst\;vstrwt.32\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_fv4sf"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_p_s vstrwq_p_u]
@@ -8309,7 +8565,8 @@
output_asm_insn ("vpst\;vstrwt.32\t%q1, %E0",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_<supf>v4si"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_s vstrwq_u]
@@ -8371,7 +8628,8 @@
output_asm_insn ("vpst\;\tvstrdt.u64\t%q2, [%q0, %1]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_base_<supf>v2di"))
+ (set_attr "length" "8")])
;;
;; [vstrdq_scatter_base_s vstrdq_scatter_base_u]
@@ -8424,7 +8682,8 @@
VSTRDSOQ))]
"TARGET_HAVE_MVE"
"vpst\;vstrdt.64\t%q2, [%0, %q1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_offset_<supf>v2di_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrdq_scatter_offset_s vstrdq_scatter_offset_u]
@@ -8484,7 +8743,8 @@
VSTRDSSOQ))]
"TARGET_HAVE_MVE"
"vpst\;vstrdt.64\t%q2, [%0, %q1, UXTW #3]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_shifted_offset_<supf>v2di_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrdq_scatter_shifted_offset_s vstrdq_scatter_shifted_offset_u]
@@ -8572,7 +8832,8 @@
VSTRHQSO_F))]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vstrht.16\t%q2, [%0, %q1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_offset_fv8hf_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrhq_scatter_shifted_offset_f]
@@ -8632,7 +8893,8 @@
VSTRHQSSO_F))]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vstrht.16\t%q2, [%0, %q1, uxtw #1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrhq_scatter_shifted_offset_fv8hf_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_base_f]
@@ -8677,7 +8939,8 @@
output_asm_insn ("vpst\n\tvstrwt.u32\t%q2, [%q0, %1]",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_fv4sf"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_offset_f]
@@ -8736,7 +8999,8 @@
VSTRWQSO_F))]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vstrwt.32\t%q2, [%0, %q1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_offset_fv4sf_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_offset_s vstrwq_scatter_offset_u]
@@ -8767,7 +9031,8 @@
VSTRWSOQ))]
"TARGET_HAVE_MVE"
"vpst\;vstrwt.32\t%q2, [%0, %q1]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_offset_<supf>v4si_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_offset_s vstrwq_scatter_offset_u]
@@ -8855,7 +9120,8 @@
VSTRWQSSO_F))]
"TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
"vpst\;vstrwt.32\t%q2, [%0, %q1, uxtw #2]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_shifted_offset_fv4sf_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_shifted_offset_p_s vstrwq_scatter_shifted_offset_p_u]
@@ -8887,7 +9153,8 @@
VSTRWSSOQ))]
"TARGET_HAVE_MVE"
"vpst\;vstrwt.32\t%q2, [%0, %q1, uxtw #2]"
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_shifted_offset_<supf>v4si_insn"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_shifted_offset_s vstrwq_scatter_shifted_offset_u]
@@ -9012,7 +9279,8 @@
(match_operand:SI 6 "immediate_operand" "i")))]
"TARGET_HAVE_MVE"
"vpst\;\tvidupt.u%#<V_sz_elem>\t%q0, %2, %4"
- [(set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vidupq_u<mode>_insn"))
+ (set_attr "length""8")])
;;
;; [vddupq_n_u])
@@ -9080,7 +9348,8 @@
(match_operand:SI 6 "immediate_operand" "i")))]
"TARGET_HAVE_MVE"
"vpst\;\tvddupt.u%#<V_sz_elem>\t%q0, %2, %4"
- [(set_attr "length""8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vddupq_u<mode>_insn"))
+ (set_attr "length""8")])
;;
;; [vdwdupq_n_u])
@@ -9196,7 +9465,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;\tvdwdupt.u%#<V_sz_elem>\t%q2, %3, %R4, %5"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vdwdupq_wb_u<mode>_insn"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -9313,7 +9583,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;\tviwdupt.u%#<V_sz_elem>\t%q2, %3, %R4, %5"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_viwdupq_wb_u<mode>_insn"))
+ (set_attr "type" "mve_move")
(set_attr "length""8")])
;;
@@ -9365,7 +9636,8 @@
output_asm_insn ("vpst\;\tvstrwt.u32\t%q2, [%q0, %1]!",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_wb_<supf>v4si"))
+ (set_attr "length" "8")])
;;
;; [vstrwq_scatter_base_wb_f]
@@ -9416,7 +9688,8 @@
output_asm_insn ("vpst\;\tvstrwt.u32\t%q2, [%q0, %1]!",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrwq_scatter_base_wb_fv4sf"))
+ (set_attr "length" "8")])
;;
;; [vstrdq_scatter_base_wb_s vstrdq_scatter_base_wb_u]
@@ -9467,7 +9740,8 @@
output_asm_insn ("vpst;vstrdt.u64\t%q2, [%q0, %1]!",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vstrdq_scatter_base_wb_<supf>v2di"))
+ (set_attr "length" "8")])
(define_expand "mve_vldrwq_gather_base_wb_<supf>v4si"
[(match_operand:V4SI 0 "s_register_operand")
@@ -9575,7 +9849,8 @@
output_asm_insn ("vpst\;vldrwt.u32\t%q0, [%q1, %2]!",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_wb_<supf>v4si_insn"))
+ (set_attr "length" "8")])
(define_expand "mve_vldrwq_gather_base_wb_fv4sf"
[(match_operand:V4SI 0 "s_register_operand")
@@ -9684,7 +9959,8 @@
output_asm_insn ("vpst\;vldrwt.u32\t%q0, [%q1, %2]!",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrwq_gather_base_wb_fv4sf_insn"))
+ (set_attr "length" "8")])
(define_expand "mve_vldrdq_gather_base_wb_<supf>v2di"
[(match_operand:V2DI 0 "s_register_operand")
@@ -9809,7 +10085,8 @@
output_asm_insn ("vpst\;vldrdt.u64\t%q0, [%q1, %2]!",ops);
return "";
}
- [(set_attr "length" "8")])
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vldrdq_gather_base_wb_<supf>v2di_insn"))
+ (set_attr "length" "8")])
;;
;; [vadciq_m_s, vadciq_m_u])
;;
@@ -9826,7 +10103,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vadcit.i32\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vadciq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length" "8")])
;;
@@ -9862,7 +10140,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vadct.i32\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vadcq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length" "8")])
;;
@@ -9899,7 +10178,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vsbcit.i32\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vsbciq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length" "8")])
;;
@@ -9935,7 +10215,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vsbct.i32\t%q0, %q2, %q3"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vsbcq_<supf>v4si"))
+ (set_attr "type" "mve_move")
(set_attr "length" "8")])
;;
@@ -10352,7 +10633,8 @@
]
"TARGET_HAVE_MVE"
"vpst\;vshlct\t%q0, %1, %4"
- [(set_attr "type" "mve_move")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_mve_vshlcq_<supf><mode>"))
+ (set_attr "type" "mve_move")
(set_attr "length" "8")])
;; CDE instructions on MVE registers.
@@ -10435,7 +10717,8 @@
CDE_VCX))]
"TARGET_CDE && TARGET_HAVE_MVE"
"vpst\;vcx1<a>t\\tp%c1, %q0, #%c3"
- [(set_attr "type" "coproc")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx1q<a>v16qi"))
+ (set_attr "type" "coproc")
(set_attr "length" "8")]
)
@@ -10449,7 +10732,8 @@
CDE_VCX))]
"TARGET_CDE && TARGET_HAVE_MVE"
"vpst\;vcx2<a>t\\tp%c1, %q0, %q3, #%c4"
- [(set_attr "type" "coproc")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx2q<a>v16qi"))
+ (set_attr "type" "coproc")
(set_attr "length" "8")]
)
@@ -10464,7 +10748,8 @@
CDE_VCX))]
"TARGET_CDE && TARGET_HAVE_MVE"
"vpst\;vcx3<a>t\\tp%c1, %q0, %q3, %q4, #%c5"
- [(set_attr "type" "coproc")
+ [(set (attr "mve_unpredicated_insn") (symbol_ref "CODE_FOR_arm_vcx3q<a>v16qi"))
+ (set_attr "type" "coproc")
(set_attr "length" "8")]
)
diff --git a/gcc/testsuite/gcc.target/arm/dlstp-compile-asm.c b/gcc/testsuite/gcc.target/arm/dlstp-compile-asm.c
new file mode 100644
index 0000000000000000000000000000000000000000..ec6ee774cbda6604c4c24b57cd4d5d3bd08e07cd
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/dlstp-compile-asm.c
@@ -0,0 +1,82 @@
+/* { dg-do compile { target { arm*-*-* } } } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-skip-if "avoid conflicting multilib options" { *-*-* } { "-marm" "-mcpu=*" } } */
+/* { dg-options "-march=armv8.1-m.main+fp.dp+mve.fp -O3" } */
+
+#include <arm_mve.h>
+
+#define IMM 5
+
+#define TEST_COMPILE_IN_DLSTP_TERNARY(BITS, LANES, LDRSTRYTPE, TYPE, SIGN, NAME, PRED) \
+void test_##NAME##PRED##_##SIGN##BITS (TYPE##BITS##_t *a, TYPE##BITS##_t *b, TYPE##BITS##_t *c, int n) \
+{ \
+ while (n > 0) \
+ { \
+ mve_pred16_t p = vctp##BITS##q (n); \
+ TYPE##BITS##x##LANES##_t va = vldr##LDRSTRYTPE##q_z_##SIGN##BITS (a, p); \
+ TYPE##BITS##x##LANES##_t vb = vldr##LDRSTRYTPE##q_z_##SIGN##BITS (b, p); \
+ TYPE##BITS##x##LANES##_t vc = NAME##PRED##_##SIGN##BITS (va, vb, p); \
+ vstr##LDRSTRYTPE##q_p_##SIGN##BITS (c, vc, p); \
+ c += LANES; \
+ a += LANES; \
+ b += LANES; \
+ n -= LANES; \
+ } \
+}
+
+#define TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY(BITS, LANES, LDRSTRYTPE, NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_TERNARY (BITS, LANES, LDRSTRYTPE, int, s, NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_TERNARY (BITS, LANES, LDRSTRYTPE, uint, u, NAME, PRED)
+
+#define TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY(NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY (8, 16, b, NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY (16, 8, h, NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY (32, 4, w, NAME, PRED)
+
+
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY (vaddq, _x)
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY (vorrq, _x)
+
+
+#define TEST_COMPILE_IN_DLSTP_TERNARY_N(BITS, LANES, LDRSTRYTPE, TYPE, SIGN, NAME, PRED) \
+void test_##NAME##PRED##_n_##SIGN##BITS (TYPE##BITS##_t *a, TYPE##BITS##_t *c, int n) \
+{ \
+ while (n > 0) \
+ { \
+ mve_pred16_t p = vctp##BITS##q (n); \
+ TYPE##BITS##x##LANES##_t va = vldr##LDRSTRYTPE##q_z_##SIGN##BITS (a, p); \
+ TYPE##BITS##x##LANES##_t vc = NAME##PRED##_n_##SIGN##BITS (va, IMM, p); \
+ vstr##LDRSTRYTPE##q_p_##SIGN##BITS (c, vc, p); \
+ c += LANES; \
+ a += LANES; \
+ n -= LANES; \
+ } \
+}
+
+#define TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY_N(BITS, LANES, LDRSTRYTPE, NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_TERNARY_N (BITS, LANES, LDRSTRYTPE, int, s, NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_TERNARY_N (BITS, LANES, LDRSTRYTPE, uint, u, NAME, PRED)
+
+#define TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_N(NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY_N (8, 16, b, NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY_N (16, 8, h, NAME, PRED) \
+TEST_COMPILE_IN_DLSTP_SIGNED_UNSIGNED_TERNARY_N (32, 4, w, NAME, PRED)
+
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_N (vaddq, _x)
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_N (vmulq, _x)
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_N (vsubq, _x)
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_N (vhaddq, _x)
+
+
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_N (vbrsrq, _x)
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_N (vshlq, _x)
+TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY_N (vshrq, _x)
+
+
+/* The final number of DLSTPs currently is calculated by the number of
+ `TEST_COMPILE_IN_DLSTP_INTBITS_SIGNED_UNSIGNED_TERNARY.*` macros * 6. */
+/* { dg-final { scan-assembler-times {\tdlstp} 54 } } */
+/* { dg-final { scan-assembler-times {\tletp} 54 } } */
+/* { dg-final { scan-assembler-not "\tvctp\t" } } */
+/* { dg-final { scan-assembler-not "\tvpst\t" } } */
+/* { dg-final { scan-assembler-not "P0" } } */
\ No newline at end of file
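
For reference when reading the test above, here is a hand-expansion of one instance of the `TEST_COMPILE_IN_DLSTP_TERNARY` macro (BITS=32, LANES=4, `vaddq`, `_x` predication). This is only an illustrative expansion of the macro in the patch, not additional test code; per the `dg-final` directives, each such loop is expected to compile to a `dlstp.32`/`letp` low-overhead loop with the `vctp`/`vpst` predication folded away:

```c
#include <arm_mve.h>

/* Expansion of TEST_COMPILE_IN_DLSTP_TERNARY (32, 4, w, int, s, vaddq, _x).
   The vctp32q predicate masks off the tail lanes on the final iteration;
   with the new pass this becomes a tail-predicated dlstp/letp loop.  */
void test_vaddq_x_s32 (int32_t *a, int32_t *b, int32_t *c, int n)
{
  while (n > 0)
    {
      mve_pred16_t p = vctp32q (n);          /* predicate for remaining lanes */
      int32x4_t va = vldrwq_z_s32 (a, p);    /* zeroing predicated loads */
      int32x4_t vb = vldrwq_z_s32 (b, p);
      int32x4_t vc = vaddq_x_s32 (va, vb, p);
      vstrwq_p_s32 (c, vc, p);               /* predicated store */
      c += 4;
      a += 4;
      b += 4;
      n -= 4;
    }
}
```

Building this with the test's options (`-march=armv8.1-m.main+fp.dp+mve.fp -O3`) and inspecting the assembly shows whether the transformation fired.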
Thread overview: 10+ messages
2023-12-18 11:53 [PATCH 0/2] arm: Add support for MVE Tail-Predicated Low Overhead Loops Andre Vieira
2023-12-18 11:53 ` [PATCH 1/2] arm: Add define_attr to to create a mapping between MVE predicated and unpredicated insns Andre Vieira
2023-12-20 16:54 ` Andre Vieira (lists)
2023-12-18 11:53 ` [PATCH 2/2] arm: Add support for MVE Tail-Predicated Low Overhead Loops Andre Vieira
2023-12-20 16:54 ` Andre Vieira (lists)
-- strict thread matches above, loose matches on Subject: below --
2023-11-06 11:20 [PATCH 1/2] arm: Add define_attr to to create a mapping between MVE predicated and unpredicated insns Stamatis Markianos-Wright
2023-12-12 10:33 ` Richard Earnshaw
2023-08-17 10:30 Stamatis Markianos-Wright
2023-06-15 11:47 Stamatis Markianos-Wright
2022-11-11 17:39 Stam Markianos-Wright