From: Hongyu Wang <hongyu.wang@intel.com>
To: gcc-patches@gcc.gnu.org
Cc: ubizjak@gmail.com, vmakarov@redhat.com, jakub@redhat.com,
Kong Lingling <lingling.kong@intel.com>,
Hongtao Liu <hongtao.liu@intel.com>
Subject: [PATCH 13/13] [APX EGPR] Handle vex insns that only support GPR16 (5/5)
Date: Fri, 22 Sep 2023 18:56:31 +0800
Message-ID: <20230922105631.2298849-14-hongyu.wang@intel.com>
In-Reply-To: <20230922105631.2298849-1-hongyu.wang@intel.com>
From: Kong Lingling <lingling.kong@intel.com>
These vex insns may have legacy counterparts that could support EGPR,
but they have no evex counterparts. Split out the vex part from the
patterns and mark it as not supporting EGPR by adjusting constraints
and the gpr32 attr; a small illustrative sketch follows the insn list
below.
insn list:
1. vmovmskpd/vmovmskps
2. vpmovmskb
3. vrsqrtss/vrsqrtps
4. vrcpss/vrcpps
5. vhaddpd/vhaddps, vhsubpd/vhsubps
6. vldmxcsr/vstmxcsr
7. vaddsubpd/vaddsubps
8. vlddqu
9. vtestps/vtestpd
10. vmaskmovps/vmaskmovpd, vpmaskmovd/vpmaskmovq
11. vperm2f128/vperm2i128
12. vinserti128/vinsertf128
13. vbroadcasti128/vbroadcastf128
14. vcmppd/vcmpps, vcmpss/vcmpsd
15. vgatherdps/vgatherqps, vgatherdpd/vgatherqpd
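As a rough illustration of the approach, here is a minimal, hypothetical
pattern (the insn name and unspec are made up and not part of this patch):
the memory alternative is split into noavx/avx ones, and the avx
alternative uses the new "jm"/"ja" constraints with gpr32 set to 0, so
r16-r31 never appear in its memory operand while the legacy alternative
is unchanged.

;; Illustrative sketch only; insn name and UNSPEC are hypothetical.
(define_insn "example_vexonly_op"
  [(set (match_operand:V4SF 0 "register_operand" "=x,x")
        (unspec:V4SF
          [(match_operand:V4SF 1 "vector_operand" "xBm,xja")]
          UNSPEC_EXAMPLE))]
  "TARGET_SSE"
  "@
   exampleps\t{%1, %0|%0, %1}
   vexampleps\t{%1, %0|%0, %1}"
  [(set_attr "isa" "noavx,avx")
   (set_attr "gpr32" "1,0")
   (set_attr "prefix" "orig,vex")
   (set_attr "mode" "V4SF")])

The gather patterns apply the same idea to addresses: their
vsib_address_operand constraint changes from "Tv" to the new "jb"
constraint, again with gpr32 set to 0.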
gcc/ChangeLog:
* config/i386/constraints.md (jb): New constraint for VSIB address
that does not allow gpr32.
* config/i386/i386.md (setcc_<mode>_sse): Replace "m" with "jm" for the
avx alternative and set its gpr32 attr to 0.
(movmsk_df): Split the avx/noavx alternatives and replace "r" with "jr"
for the avx alternative.
(*rcpsf2_sse): Split the memory alternative into noavx/avx ones, use the
"ja" constraint for the avx alternative and set its gpr32 attr to 0.
(*rsqrtsf2_sse): Likewise.
* config/i386/mmx.md (mmx_pmovmskb): Split alternative 1 into noavx/avx
alternatives and assign the "r"/"jr" constraints to the destination.
* config/i386/sse.md (<sse>_movmsk<ssemodesuffix><avxsizesuffix>):
Split the avx/noavx alternatives and replace "r" with "jr" for the avx
alternative.
(*<sse>_movmsk<ssemodesuffix><avxsizesuffix>_<u>ext): Likewise.
(*<sse>_movmsk<ssemodesuffix><avxsizesuffix>_lt): Likewise.
(*<sse>_movmsk<ssemodesuffix><avxsizesuffix>_<u>ext_lt): Likewise.
(*<sse>_movmsk<ssemodesuffix><avxsizesuffix>_shift): Likewise.
(*<sse>_movmsk<ssemodesuffix><avxsizesuffix>_<u>ext_shift): Likewise.
(<sse2_avx2>_pmovmskb): Likewise.
(*<sse2_avx2>_pmovmskb_zext): Likewise.
(*sse2_pmovmskb_ext): Likewise.
(*<sse2_avx2>_pmovmskb_lt): Likewise.
(*<sse2_avx2>_pmovmskb_zext_lt): Likewise.
(*sse2_pmovmskb_ext_lt): Likewise.
(<sse>_rcp<mode>2): Split the avx/noavx alternatives and replace
"m"/"Bm" with "jm"/"ja" for the avx alternative, set its gpr32 attr to 0.
(sse_vmrcpv4sf2): Likewise.
(*sse_vmrcpv4sf2): Likewise.
(rsqrt<mode>2): Likewise.
(sse_vmrsqrtv4sf2): Likewise.
(*sse_vmrsqrtv4sf2): Likewise.
(avx_h<insn>v4df3): Likewise.
(sse3_hsubv2df3): Likewise.
(avx_h<insn>v8sf3): Likewise.
(sse3_h<insn>v4sf3): Likewise.
(<sse3>_lddqu<avxsizesuffix>): Likewise.
(*sse2_gt<mode>3): Likewise.
(sse_ldmxcsr): Likewise.
(sse_stmxcsr): Likewise.
(avx_vtest<ssemodesuffix><avxsizesuffix>): Replace "m" with "jm" for
the avx alternative and set its gpr32 attr to 0.
(avx2_permv2ti): Likewise.
(*avx_vperm2f128<mode>_full): Likewise.
(*avx_vperm2f128<mode>_nozero): Likewise.
(vec_set_lo_v32qi): Likewise.
(<avx_avx2>_maskload<ssemodesuffix><avxsizesuffix>): Likewise.
(<avx_avx2>_maskstore<ssemodesuffix><avxsizesuffix>): Likewise.
(avx_cmp<mode>3): Likewise.
(avx_vmcmp<mode>3): Likewise.
(*<sse>_maskcmp<mode>3_comm): Likewise.
(*avx2_gathersi<VEC_GATHER_MODE:mode>): Replace "Tv" with "jb" and set
its gpr32 attr to 0.
(*avx2_gathersi<VEC_GATHER_MODE:mode>_2): Likewise.
(*avx2_gatherdi<VEC_GATHER_MODE:mode>): Likewise.
(*avx2_gatherdi<VEC_GATHER_MODE:mode>_2): Likewise.
(*avx2_gatherdi<VI4F_256:mode>_3): Likewise.
(*avx2_gatherdi<VI4F_256:mode>_4): Likewise.
(avx_vbroadcastf128_<mode>): Restrict the non-EGPR alternative to
noavx512vl, set its constraint to "jm" and set its gpr32 attr to 0.
(vec_set_lo_<mode><mask_name>): Likewise.
(vec_set_lo_<mode><mask_name>): Likewise for SF/SI modes.
(vec_set_hi_<mode><mask_name>): Likewise.
(vec_set_hi_<mode><mask_name>): Likewise for SF/SI modes.
(vec_set_hi_<mode>): Likewise.
(vec_set_lo_<mode>): Likewise.
(vec_set_hi_v32qi): Likewise.
Co-authored-by: Hongyu Wang <hongyu.wang@intel.com>
Co-authored-by: Hongtao Liu <hongtao.liu@intel.com>
---
gcc/config/i386/constraints.md | 6 +
gcc/config/i386/i386.md | 47 +++--
gcc/config/i386/mmx.md | 11 +-
gcc/config/i386/sse.md | 320 +++++++++++++++++++++------------
4 files changed, 242 insertions(+), 142 deletions(-)
diff --git a/gcc/config/i386/constraints.md b/gcc/config/i386/constraints.md
index 36c268d7f9b..dc91bd94b27 100644
--- a/gcc/config/i386/constraints.md
+++ b/gcc/config/i386/constraints.md
@@ -428,3 +428,9 @@ (define_special_memory_constraint "ja"
(and (match_operand 0 "vector_memory_operand")
(not (and (match_test "TARGET_APX_EGPR")
(match_test "x86_extended_rex2reg_mentioned_p (op)")))))
+
+(define_address_constraint "jb"
+ "VSIB address operand without EGPR"
+ (and (match_operand 0 "vsib_address_operand")
+ (not (and (match_test "TARGET_APX_EGPR")
+ (match_test "x86_extended_rex2reg_mentioned_p (op)")))))
diff --git a/gcc/config/i386/i386.md b/gcc/config/i386/i386.md
index c09ee3989cb..a0ba1752a54 100644
--- a/gcc/config/i386/i386.md
+++ b/gcc/config/i386/i386.md
@@ -554,7 +554,8 @@ (define_attr "isa" "base,x64,nox64,x64_sse2,x64_sse4,x64_sse4_noavx,
avx,noavx,avx2,noavx2,bmi,bmi2,fma4,fma,avx512f,noavx512f,
avx512bw,noavx512bw,avx512dq,noavx512dq,fma_or_avx512vl,
avx512vl,noavx512vl,avxvnni,avx512vnnivl,avx512fp16,avxifma,
- avx512ifmavl,avxneconvert,avx512bf16vl,vpclmulqdqvl"
+ avx512ifmavl,avxneconvert,avx512bf16vl,vpclmulqdqvl,
+ avx_noavx512f,avx_noavx512vl"
(const_string "base"))
;; The (bounding maximum) length of an instruction immediate.
@@ -908,6 +909,8 @@ (define_attr "enabled" ""
(eq_attr "isa" "sse4_noavx")
(symbol_ref "TARGET_SSE4_1 && !TARGET_AVX")
(eq_attr "isa" "avx") (symbol_ref "TARGET_AVX")
+ (eq_attr "isa" "avx_noavx512f")
+ (symbol_ref "TARGET_AVX && !TARGET_AVX512F")
(eq_attr "isa" "noavx") (symbol_ref "!TARGET_AVX")
(eq_attr "isa" "avx2") (symbol_ref "TARGET_AVX2")
(eq_attr "isa" "noavx2") (symbol_ref "!TARGET_AVX2")
@@ -16661,12 +16664,13 @@ (define_insn "setcc_<mode>_sse"
[(set (match_operand:MODEF 0 "register_operand" "=x,x")
(match_operator:MODEF 3 "sse_comparison_operator"
[(match_operand:MODEF 1 "register_operand" "0,x")
- (match_operand:MODEF 2 "nonimmediate_operand" "xm,xm")]))]
+ (match_operand:MODEF 2 "nonimmediate_operand" "xm,xjm")]))]
"SSE_FLOAT_MODE_P (<MODE>mode)"
"@
cmp%D3<ssemodesuffix>\t{%2, %0|%0, %2}
vcmp%D3<ssemodesuffix>\t{%2, %1, %0|%0, %1, %2}"
[(set_attr "isa" "noavx,avx")
+ (set_attr "gpr32" "1,0")
(set_attr "type" "ssecmp")
(set_attr "length_immediate" "1")
(set_attr "prefix" "orig,vex")
@@ -20122,24 +20126,27 @@ (define_insn "*<insn>hf"
(set_attr "mode" "HF")])
(define_insn "*rcpsf2_sse"
- [(set (match_operand:SF 0 "register_operand" "=x,x,x")
- (unspec:SF [(match_operand:SF 1 "nonimmediate_operand" "0,x,m")]
+ [(set (match_operand:SF 0 "register_operand" "=x,x,x,x")
+ (unspec:SF [(match_operand:SF 1 "nonimmediate_operand" "0,x,m,ja")]
UNSPEC_RCP))]
"TARGET_SSE && TARGET_SSE_MATH"
"@
%vrcpss\t{%d1, %0|%0, %d1}
%vrcpss\t{%d1, %0|%0, %d1}
- %vrcpss\t{%1, %d0|%d0, %1}"
- [(set_attr "type" "sse")
+ rcpss\t{%1, %d0|%d0, %1}
+ vrcpss\t{%1, %d0|%d0, %1}"
+ [(set_attr "isa" "*,*,noavx,avx")
+ (set_attr "gpr32" "1,1,1,0")
+ (set_attr "type" "sse")
(set_attr "atom_sse_attr" "rcp")
(set_attr "btver2_sse_attr" "rcp")
(set_attr "prefix" "maybe_vex")
(set_attr "mode" "SF")
- (set_attr "avx_partial_xmm_update" "false,false,true")
+ (set_attr "avx_partial_xmm_update" "false,false,true,true")
(set (attr "preferred_for_speed")
(cond [(match_test "TARGET_AVX")
(symbol_ref "true")
- (eq_attr "alternative" "1,2")
+ (eq_attr "alternative" "1,2,3")
(symbol_ref "!TARGET_SSE_PARTIAL_REG_DEPENDENCY")
]
(symbol_ref "true")))])
@@ -20382,24 +20389,27 @@ (define_insn "sqrtxf2"
(set_attr "bdver1_decode" "direct")])
(define_insn "*rsqrtsf2_sse"
- [(set (match_operand:SF 0 "register_operand" "=x,x,x")
- (unspec:SF [(match_operand:SF 1 "nonimmediate_operand" "0,x,m")]
+ [(set (match_operand:SF 0 "register_operand" "=x,x,x,x")
+ (unspec:SF [(match_operand:SF 1 "nonimmediate_operand" "0,x,m,ja")]
UNSPEC_RSQRT))]
"TARGET_SSE && TARGET_SSE_MATH"
"@
%vrsqrtss\t{%d1, %0|%0, %d1}
%vrsqrtss\t{%d1, %0|%0, %d1}
- %vrsqrtss\t{%1, %d0|%d0, %1}"
- [(set_attr "type" "sse")
+ rsqrtss\t{%1, %d0|%d0, %1}
+ vrsqrtss\t{%1, %d0|%d0, %1}"
+ [(set_attr "isa" "*,*,noavx,avx")
+ (set_attr "gpr32" "1,1,1,0")
+ (set_attr "type" "sse")
(set_attr "atom_sse_attr" "rcp")
(set_attr "btver2_sse_attr" "rcp")
(set_attr "prefix" "maybe_vex")
(set_attr "mode" "SF")
- (set_attr "avx_partial_xmm_update" "false,false,true")
+ (set_attr "avx_partial_xmm_update" "false,false,true,true")
(set (attr "preferred_for_speed")
(cond [(match_test "TARGET_AVX")
(symbol_ref "true")
- (eq_attr "alternative" "1,2")
+ (eq_attr "alternative" "1,2,3")
(symbol_ref "!TARGET_SSE_PARTIAL_REG_DEPENDENCY")
]
(symbol_ref "true")))])
@@ -22103,14 +22113,15 @@ (define_expand "signbitxf2"
})
(define_insn "movmsk_df"
- [(set (match_operand:SI 0 "register_operand" "=r")
+ [(set (match_operand:SI 0 "register_operand" "=r,jr")
(unspec:SI
- [(match_operand:DF 1 "register_operand" "x")]
+ [(match_operand:DF 1 "register_operand" "x,x")]
UNSPEC_MOVMSK))]
"SSE_FLOAT_MODE_P (DFmode) && TARGET_SSE_MATH"
"%vmovmskpd\t{%1, %0|%0, %1}"
- [(set_attr "type" "ssemov")
- (set_attr "prefix" "maybe_vex")
+ [(set_attr "isa" "noavx,avx")
+ (set_attr "type" "ssemov")
+ (set_attr "prefix" "maybe_evex")
(set_attr "mode" "DF")])
;; Use movmskpd in SSE mode to avoid store forwarding stall
diff --git a/gcc/config/i386/mmx.md b/gcc/config/i386/mmx.md
index 73809585a5d..3530615c706 100644
--- a/gcc/config/i386/mmx.md
+++ b/gcc/config/i386/mmx.md
@@ -5174,13 +5174,14 @@ (define_expand "usadv8qi"
})
(define_insn_and_split "mmx_pmovmskb"
- [(set (match_operand:SI 0 "register_operand" "=r,r")
- (unspec:SI [(match_operand:V8QI 1 "register_operand" "y,x")]
+ [(set (match_operand:SI 0 "register_operand" "=r,r,jr")
+ (unspec:SI [(match_operand:V8QI 1 "register_operand" "y,x,x")]
UNSPEC_MOVMSK))]
"(TARGET_MMX || TARGET_MMX_WITH_SSE)
&& (TARGET_SSE || TARGET_3DNOW_A)"
"@
pmovmskb\t{%1, %0|%0, %1}
+ #
#"
"TARGET_SSE2 && reload_completed
&& SSE_REGNO_P (REGNO (operands[1]))"
@@ -5195,9 +5196,9 @@ (define_insn_and_split "mmx_pmovmskb"
operands[2] = lowpart_subreg (QImode, operands[0],
GET_MODE (operands[0]));
}
- [(set_attr "mmx_isa" "native,sse")
- (set_attr "type" "mmxcvt,ssemov")
- (set_attr "mode" "DI,TI")])
+ [(set_attr "mmx_isa" "native,sse_noavx,avx")
+ (set_attr "type" "mmxcvt,ssemov,ssemov")
+ (set_attr "mode" "DI,TI,TI")])
(define_expand "mmx_maskmovq"
[(set (match_operand:V8QI 0 "memory_operand")
diff --git a/gcc/config/i386/sse.md b/gcc/config/i386/sse.md
index d3b59c4866b..6bffd749c6d 100644
--- a/gcc/config/i386/sse.md
+++ b/gcc/config/i386/sse.md
@@ -1833,12 +1833,14 @@ (define_peephole2
"operands[4] = adjust_address (operands[0], V2DFmode, 0);")
(define_insn "<sse3>_lddqu<avxsizesuffix>"
- [(set (match_operand:VI1 0 "register_operand" "=x")
- (unspec:VI1 [(match_operand:VI1 1 "memory_operand" "m")]
+ [(set (match_operand:VI1 0 "register_operand" "=x,x")
+ (unspec:VI1 [(match_operand:VI1 1 "memory_operand" "m,jm")]
UNSPEC_LDDQU))]
"TARGET_SSE3"
"%vlddqu\t{%1, %0|%0, %1}"
- [(set_attr "type" "ssemov")
+ [(set_attr "isa" "noavx,avx")
+ (set_attr "type" "ssemov")
+ (set_attr "gpr32" "1,0")
(set_attr "movu" "1")
(set (attr "prefix_data16")
(if_then_else
@@ -2507,12 +2509,14 @@ (define_insn "<sse>_div<mode>3<mask_name><round_name>"
(set_attr "mode" "<MODE>")])
(define_insn "<sse>_rcp<mode>2"
- [(set (match_operand:VF1_128_256 0 "register_operand" "=x")
+ [(set (match_operand:VF1_128_256 0 "register_operand" "=x,x")
(unspec:VF1_128_256
- [(match_operand:VF1_128_256 1 "vector_operand" "xBm")] UNSPEC_RCP))]
+ [(match_operand:VF1_128_256 1 "vector_operand" "xBm,xja")] UNSPEC_RCP))]
"TARGET_SSE"
"%vrcpps\t{%1, %0|%0, %1}"
- [(set_attr "type" "sse")
+ [(set_attr "isa" "noavx,avx")
+ (set_attr "type" "sse")
+ (set_attr "gpr32" "1,0")
(set_attr "atom_sse_attr" "rcp")
(set_attr "btver2_sse_attr" "rcp")
(set_attr "prefix" "maybe_vex")
@@ -2521,7 +2525,7 @@ (define_insn "<sse>_rcp<mode>2"
(define_insn "sse_vmrcpv4sf2"
[(set (match_operand:V4SF 0 "register_operand" "=x,x")
(vec_merge:V4SF
- (unspec:V4SF [(match_operand:V4SF 1 "nonimmediate_operand" "xm,xm")]
+ (unspec:V4SF [(match_operand:V4SF 1 "nonimmediate_operand" "xm,xjm")]
UNSPEC_RCP)
(match_operand:V4SF 2 "register_operand" "0,x")
(const_int 1)))]
@@ -2531,6 +2535,7 @@ (define_insn "sse_vmrcpv4sf2"
vrcpss\t{%1, %2, %0|%0, %2, %k1}"
[(set_attr "isa" "noavx,avx")
(set_attr "type" "sse")
+ (set_attr "gpr32" "1,0")
(set_attr "atom_sse_attr" "rcp")
(set_attr "btver2_sse_attr" "rcp")
(set_attr "prefix" "orig,vex")
@@ -2540,7 +2545,7 @@ (define_insn "*sse_vmrcpv4sf2"
[(set (match_operand:V4SF 0 "register_operand" "=x,x")
(vec_merge:V4SF
(vec_duplicate:V4SF
- (unspec:SF [(match_operand:SF 1 "nonimmediate_operand" "xm,xm")]
+ (unspec:SF [(match_operand:SF 1 "nonimmediate_operand" "xm,xjm")]
UNSPEC_RCP))
(match_operand:V4SF 2 "register_operand" "0,x")
(const_int 1)))]
@@ -2550,6 +2555,7 @@ (define_insn "*sse_vmrcpv4sf2"
vrcpss\t{%1, %2, %0|%0, %2, %1}"
[(set_attr "isa" "noavx,avx")
(set_attr "type" "sse")
+ (set_attr "gpr32" "1,0")
(set_attr "atom_sse_attr" "rcp")
(set_attr "btver2_sse_attr" "rcp")
(set_attr "prefix" "orig,vex")
@@ -2726,12 +2732,14 @@ (define_expand "rsqrt<mode>2"
"TARGET_AVX512FP16")
(define_insn "<sse>_rsqrt<mode>2"
- [(set (match_operand:VF1_128_256 0 "register_operand" "=x")
+ [(set (match_operand:VF1_128_256 0 "register_operand" "=x,x")
(unspec:VF1_128_256
- [(match_operand:VF1_128_256 1 "vector_operand" "xBm")] UNSPEC_RSQRT))]
+ [(match_operand:VF1_128_256 1 "vector_operand" "xBm,xja")] UNSPEC_RSQRT))]
"TARGET_SSE"
"%vrsqrtps\t{%1, %0|%0, %1}"
- [(set_attr "type" "sse")
+ [(set_attr "isa" "noavx,avx")
+ (set_attr "type" "sse")
+ (set_attr "gpr32" "1,0")
(set_attr "prefix" "maybe_vex")
(set_attr "mode" "<MODE>")])
@@ -2790,7 +2798,7 @@ (define_insn "rsqrt14_<mode>_mask"
(define_insn "sse_vmrsqrtv4sf2"
[(set (match_operand:V4SF 0 "register_operand" "=x,x")
(vec_merge:V4SF
- (unspec:V4SF [(match_operand:V4SF 1 "nonimmediate_operand" "xm,xm")]
+ (unspec:V4SF [(match_operand:V4SF 1 "nonimmediate_operand" "xm,xjm")]
UNSPEC_RSQRT)
(match_operand:V4SF 2 "register_operand" "0,x")
(const_int 1)))]
@@ -2800,6 +2808,7 @@ (define_insn "sse_vmrsqrtv4sf2"
vrsqrtss\t{%1, %2, %0|%0, %2, %k1}"
[(set_attr "isa" "noavx,avx")
(set_attr "type" "sse")
+ (set_attr "gpr32" "1,0")
(set_attr "prefix" "orig,vex")
(set_attr "mode" "SF")])
@@ -2807,7 +2816,7 @@ (define_insn "*sse_vmrsqrtv4sf2"
[(set (match_operand:V4SF 0 "register_operand" "=x,x")
(vec_merge:V4SF
(vec_duplicate:V4SF
- (unspec:SF [(match_operand:SF 1 "nonimmediate_operand" "xm,xm")]
+ (unspec:SF [(match_operand:SF 1 "nonimmediate_operand" "xm,xjm")]
UNSPEC_RSQRT))
(match_operand:V4SF 2 "register_operand" "0,x")
(const_int 1)))]
@@ -2817,6 +2826,7 @@ (define_insn "*sse_vmrsqrtv4sf2"
vrsqrtss\t{%1, %2, %0|%0, %2, %1}"
[(set_attr "isa" "noavx,avx")
(set_attr "type" "sse")
+ (set_attr "gpr32" "1,0")
(set_attr "prefix" "orig,vex")
(set_attr "mode" "SF")])
@@ -2992,7 +3002,7 @@ (define_insn "vec_addsub<mode>3"
(vec_merge:VF_128_256
(minus:VF_128_256
(match_operand:VF_128_256 1 "register_operand" "0,x")
- (match_operand:VF_128_256 2 "vector_operand" "xBm, xm"))
+ (match_operand:VF_128_256 2 "vector_operand" "xBm, xjm"))
(plus:VF_128_256 (match_dup 1) (match_dup 2))
(const_int <addsub_cst>)))]
"TARGET_SSE3"
@@ -3001,6 +3011,7 @@ (define_insn "vec_addsub<mode>3"
vaddsub<ssemodesuffix>\t{%2, %1, %0|%0, %1, %2}"
[(set_attr "isa" "noavx,avx")
(set_attr "type" "sseadd")
+ (set_attr "gpr32" "1,0")
(set (attr "atom_unit")
(if_then_else
(match_test "<MODE>mode == V2DFmode")
@@ -3144,7 +3155,7 @@ (define_insn "avx_h<insn>v4df3"
(vec_select:DF (match_dup 1) (parallel [(const_int 1)])))
(plusminus:DF
(vec_select:DF
- (match_operand:V4DF 2 "nonimmediate_operand" "xm")
+ (match_operand:V4DF 2 "nonimmediate_operand" "xjm")
(parallel [(const_int 0)]))
(vec_select:DF (match_dup 2) (parallel [(const_int 1)]))))
(vec_concat:V2DF
@@ -3157,6 +3168,7 @@ (define_insn "avx_h<insn>v4df3"
"TARGET_AVX"
"vh<plusminus_mnemonic>pd\t{%2, %1, %0|%0, %1, %2}"
[(set_attr "type" "sseadd")
+ (set_attr "gpr32" "0")
(set_attr "prefix" "vex")
(set_attr "mode" "V4DF")])
@@ -3187,7 +3199,7 @@ (define_insn "*sse3_haddv2df3"
(parallel [(match_operand:SI 4 "const_0_to_1_operand")])))
(plus:DF
(vec_select:DF
- (match_operand:V2DF 2 "vector_operand" "xBm,xm")
+ (match_operand:V2DF 2 "vector_operand" "xBm,xjm")
(parallel [(match_operand:SI 5 "const_0_to_1_operand")]))
(vec_select:DF
(match_dup 2)
@@ -3199,6 +3211,7 @@ (define_insn "*sse3_haddv2df3"
haddpd\t{%2, %0|%0, %2}
vhaddpd\t{%2, %1, %0|%0, %1, %2}"
[(set_attr "isa" "noavx,avx")
+ (set_attr "gpr32" "1,0")
(set_attr "type" "sseadd")
(set_attr "prefix" "orig,vex")
(set_attr "mode" "V2DF")])
@@ -3213,7 +3226,7 @@ (define_insn "sse3_hsubv2df3"
(vec_select:DF (match_dup 1) (parallel [(const_int 1)])))
(minus:DF
(vec_select:DF
- (match_operand:V2DF 2 "vector_operand" "xBm,xm")
+ (match_operand:V2DF 2 "vector_operand" "xBm,xjm")
(parallel [(const_int 0)]))
(vec_select:DF (match_dup 2) (parallel [(const_int 1)])))))]
"TARGET_SSE3"
@@ -3222,6 +3235,7 @@ (define_insn "sse3_hsubv2df3"
vhsubpd\t{%2, %1, %0|%0, %1, %2}"
[(set_attr "isa" "noavx,avx")
(set_attr "type" "sseadd")
+ (set_attr "gpr32" "1,0")
(set_attr "prefix" "orig,vex")
(set_attr "mode" "V2DF")])
@@ -3278,7 +3292,7 @@ (define_insn "avx_h<insn>v8sf3"
(vec_concat:V2SF
(plusminus:SF
(vec_select:SF
- (match_operand:V8SF 2 "nonimmediate_operand" "xm")
+ (match_operand:V8SF 2 "nonimmediate_operand" "xjm")
(parallel [(const_int 0)]))
(vec_select:SF (match_dup 2) (parallel [(const_int 1)])))
(plusminus:SF
@@ -3302,6 +3316,7 @@ (define_insn "avx_h<insn>v8sf3"
"TARGET_AVX"
"vh<plusminus_mnemonic>ps\t{%2, %1, %0|%0, %1, %2}"
[(set_attr "type" "sseadd")
+ (set_attr "gpr32" "0")
(set_attr "prefix" "vex")
(set_attr "mode" "V8SF")])
@@ -3320,7 +3335,7 @@ (define_insn "sse3_h<insn>v4sf3"
(vec_concat:V2SF
(plusminus:SF
(vec_select:SF
- (match_operand:V4SF 2 "vector_operand" "xBm,xm")
+ (match_operand:V4SF 2 "vector_operand" "xBm,xjm")
(parallel [(const_int 0)]))
(vec_select:SF (match_dup 2) (parallel [(const_int 1)])))
(plusminus:SF
@@ -3332,6 +3347,7 @@ (define_insn "sse3_h<insn>v4sf3"
vh<plusminus_mnemonic>ps\t{%2, %1, %0|%0, %1, %2}"
[(set_attr "isa" "noavx,avx")
(set_attr "type" "sseadd")
+ (set_attr "gpr32" "1,0")
(set_attr "atom_unit" "complex")
(set_attr "prefix" "orig,vex")
(set_attr "prefix_rep" "1,*")
@@ -3525,12 +3541,13 @@ (define_insn "avx_cmp<mode>3"
[(set (match_operand:VF_128_256 0 "register_operand" "=x")
(unspec:VF_128_256
[(match_operand:VF_128_256 1 "register_operand" "x")
- (match_operand:VF_128_256 2 "nonimmediate_operand" "xm")
+ (match_operand:VF_128_256 2 "nonimmediate_operand" "xjm")
(match_operand:SI 3 "const_0_to_31_operand")]
UNSPEC_PCMP))]
"TARGET_AVX"
"vcmp<ssemodesuffix>\t{%3, %2, %1, %0|%0, %1, %2, %3}"
[(set_attr "type" "ssecmp")
+ (set_attr "gpr32" "0")
(set_attr "length_immediate" "1")
(set_attr "prefix" "vex")
(set_attr "mode" "<MODE>")])
@@ -3736,7 +3753,7 @@ (define_insn "avx_vmcmp<mode>3"
(vec_merge:VF_128
(unspec:VF_128
[(match_operand:VF_128 1 "register_operand" "x")
- (match_operand:VF_128 2 "nonimmediate_operand" "xm")
+ (match_operand:VF_128 2 "nonimmediate_operand" "xjm")
(match_operand:SI 3 "const_0_to_31_operand")]
UNSPEC_PCMP)
(match_dup 1)
@@ -3744,6 +3761,7 @@ (define_insn "avx_vmcmp<mode>3"
"TARGET_AVX"
"vcmp<ssescalarmodesuffix>\t{%3, %2, %1, %0|%0, %1, %<iptr>2, %3}"
[(set_attr "type" "ssecmp")
+ (set_attr "gpr32" "0")
(set_attr "length_immediate" "1")
(set_attr "prefix" "vex")
(set_attr "mode" "<ssescalarmode>")])
@@ -3752,13 +3770,14 @@ (define_insn "*<sse>_maskcmp<mode>3_comm"
[(set (match_operand:VF_128_256 0 "register_operand" "=x,x")
(match_operator:VF_128_256 3 "sse_comparison_operator"
[(match_operand:VF_128_256 1 "register_operand" "%0,x")
- (match_operand:VF_128_256 2 "vector_operand" "xBm,xm")]))]
+ (match_operand:VF_128_256 2 "vector_operand" "xBm,xjm")]))]
"TARGET_SSE
&& GET_RTX_CLASS (GET_CODE (operands[3])) == RTX_COMM_COMPARE"
"@
cmp%D3<ssemodesuffix>\t{%2, %0|%0, %2}
vcmp%D3<ssemodesuffix>\t{%2, %1, %0|%0, %1, %2}"
[(set_attr "isa" "noavx,avx")
+ (set_attr "gpr32" "1,0")
(set_attr "type" "ssecmp")
(set_attr "length_immediate" "1")
(set_attr "prefix" "orig,vex")
@@ -3768,12 +3787,13 @@ (define_insn "<sse>_maskcmp<mode>3"
[(set (match_operand:VF_128_256 0 "register_operand" "=x,x")
(match_operator:VF_128_256 3 "sse_comparison_operator"
[(match_operand:VF_128_256 1 "register_operand" "0,x")
- (match_operand:VF_128_256 2 "vector_operand" "xBm,xm")]))]
+ (match_operand:VF_128_256 2 "vector_operand" "xBm,xjm")]))]
"TARGET_SSE"
"@
cmp%D3<ssemodesuffix>\t{%2, %0|%0, %2}
vcmp%D3<ssemodesuffix>\t{%2, %1, %0|%0, %1, %2}"
[(set_attr "isa" "noavx,avx")
+ (set_attr "gpr32" "1,0")
(set_attr "type" "ssecmp")
(set_attr "length_immediate" "1")
(set_attr "prefix" "orig,vex")
@@ -3784,7 +3804,7 @@ (define_insn "<sse>_vmmaskcmp<mode>3"
(vec_merge:VF_128
(match_operator:VF_128 3 "sse_comparison_operator"
[(match_operand:VF_128 1 "register_operand" "0,x")
- (match_operand:VF_128 2 "nonimmediate_operand" "xm,xm")])
+ (match_operand:VF_128 2 "nonimmediate_operand" "xm,xjm")])
(match_dup 1)
(const_int 1)))]
"TARGET_SSE"
@@ -3792,6 +3812,7 @@ (define_insn "<sse>_vmmaskcmp<mode>3"
cmp%D3<ssescalarmodesuffix>\t{%2, %0|%0, %<iptr>2}
vcmp%D3<ssescalarmodesuffix>\t{%2, %1, %0|%0, %1, %<iptr>2}"
[(set_attr "isa" "noavx,avx")
+ (set_attr "gpr32" "1,0")
(set_attr "type" "ssecmp")
(set_attr "length_immediate" "1,*")
(set_attr "prefix" "orig,vex")
@@ -4709,7 +4730,7 @@ (define_insn "<sse>_andnot<mode>3<mask_name>"
(and:VFB_128_256
(not:VFB_128_256
(match_operand:VFB_128_256 1 "register_operand" "0,x,v,v"))
- (match_operand:VFB_128_256 2 "vector_operand" "xBm,xm,vm,vm")))]
+ (match_operand:VFB_128_256 2 "vector_operand" "xBm,xjm,vm,vm")))]
"TARGET_SSE && <mask_avx512vl_condition>
&& (!<mask_applied> || <ssescalarmode>mode != HFmode)"
{
@@ -4753,7 +4774,8 @@ (define_insn "<sse>_andnot<mode>3<mask_name>"
output_asm_insn (buf, operands);
return "";
}
- [(set_attr "isa" "noavx,avx,avx512dq,avx512f")
+ [(set_attr "isa" "noavx,avx_noavx512f,avx512dq,avx512f")
+ (set_attr "gpr32" "1,0,1,1")
(set_attr "type" "sselog")
(set_attr "prefix" "orig,maybe_vex,evex,evex")
(set (attr "mode")
@@ -4761,6 +4783,10 @@ (define_insn "<sse>_andnot<mode>3<mask_name>"
(and (eq_attr "alternative" "1")
(match_test "!TARGET_AVX512DQ")))
(const_string "<sseintvecmode2>")
+ (and (not (match_test "<mask_applied>"))
+ (eq_attr "alternative" "3")
+ (match_test "!x86_evex_reg_mentioned_p (operands, 3)"))
+ (const_string "<MODE>")
(eq_attr "alternative" "3")
(const_string "<sseintvecmode2>")
(match_test "TARGET_AVX")
@@ -5063,7 +5089,7 @@ (define_insn "*andnot<mode>3"
[(set (match_operand:ANDNOT_MODE 0 "register_operand" "=x,x,v,v")
(and:ANDNOT_MODE
(not:ANDNOT_MODE (match_operand:ANDNOT_MODE 1 "register_operand" "0,x,v,v"))
- (match_operand:ANDNOT_MODE 2 "vector_operand" "xBm,xm,vm,v")))]
+ (match_operand:ANDNOT_MODE 2 "vector_operand" "xBm,xjm,vm,v")))]
"TARGET_SSE"
{
char buf[128];
@@ -5092,7 +5118,8 @@ (define_insn "*andnot<mode>3"
output_asm_insn (buf, operands);
return "";
}
- [(set_attr "isa" "noavx,avx,avx512vl,avx512f")
+ [(set_attr "isa" "noavx,avx_noavx512f,avx512vl,avx512f")
+ (set_attr "gpr32" "1,0,1,1")
(set_attr "type" "sselog")
(set (attr "prefix_data16")
(if_then_else
@@ -5102,7 +5129,10 @@ (define_insn "*andnot<mode>3"
(const_string "*")))
(set_attr "prefix" "orig,vex,evex,evex")
(set (attr "mode")
- (cond [(eq_attr "alternative" "2")
+ (cond [(and (eq_attr "alternative" "3")
+ (match_test "!x86_evex_reg_mentioned_p (operands, 3)"))
+ (const_string "TI")
+ (eq_attr "alternative" "2")
(const_string "TI")
(eq_attr "alternative" "3")
(const_string "XI")
@@ -12240,7 +12270,7 @@ (define_insn_and_split "vec_extract_lo_v32qi"
"operands[1] = gen_lowpart (V16QImode, operands[1]);")
(define_insn "vec_extract_hi_v32qi"
- [(set (match_operand:V16QI 0 "nonimmediate_operand" "=xm,vm")
+ [(set (match_operand:V16QI 0 "nonimmediate_operand" "=xjm,vm")
(vec_select:V16QI
(match_operand:V32QI 1 "register_operand" "x,v")
(parallel [(const_int 16) (const_int 17)
@@ -12258,7 +12288,8 @@ (define_insn "vec_extract_hi_v32qi"
[(set_attr "type" "sselog1")
(set_attr "prefix_extra" "1")
(set_attr "length_immediate" "1")
- (set_attr "isa" "*,avx512vl")
+ (set_attr "isa" "noavx512vl,avx512vl")
+ (set_attr "gpr32" "0,1")
(set_attr "prefix" "vex,evex")
(set_attr "mode" "OI")])
@@ -17130,6 +17161,7 @@ (define_insn "*sse2_gt<mode>3"
pcmpgt<ssemodesuffix>\t{%2, %0|%0, %2}
vpcmpgt<ssemodesuffix>\t{%2, %1, %0|%0, %1, %2}"
[(set_attr "isa" "noavx,avx")
+ (set_attr "gpr32" "1,0")
(set_attr "type" "ssecmp")
(set_attr "prefix" "orig,vex")
(set_attr "mode" "TI")])
@@ -17446,7 +17478,7 @@ (define_insn "*andnot<mode>3"
[(set (match_operand:VI 0 "register_operand" "=x,x,v,v,v")
(and:VI
(not:VI (match_operand:VI 1 "bcst_vector_operand" "0,x,v,m,Br"))
- (match_operand:VI 2 "bcst_vector_operand" "xBm,xm,vmBr,0,0")))]
+ (match_operand:VI 2 "bcst_vector_operand" "xBm,xjm,vmBr,0,0")))]
"TARGET_SSE
&& (register_operand (operands[1], <MODE>mode)
|| register_operand (operands[2], <MODE>mode))"
@@ -17533,7 +17565,8 @@ (define_insn "*andnot<mode>3"
output_asm_insn (buf, operands);
return "";
}
- [(set_attr "isa" "noavx,avx,avx,*,*")
+ [(set_attr "isa" "noavx,avx_noavx512f,avx512f,*,*")
+ (set_attr "gpr32" "1,0,1,1,1")
(set_attr "type" "sselog")
(set (attr "prefix_data16")
(if_then_else
@@ -17688,7 +17721,7 @@ (define_insn "*<code><mode>3<mask_name>"
[(set (match_operand:VI48_AVX_AVX512F 0 "register_operand" "=x,x,v")
(any_logic:VI48_AVX_AVX512F
(match_operand:VI48_AVX_AVX512F 1 "bcst_vector_operand" "%0,x,v")
- (match_operand:VI48_AVX_AVX512F 2 "bcst_vector_operand" "xBm,xm,vmBr")))]
+ (match_operand:VI48_AVX_AVX512F 2 "bcst_vector_operand" "xBm,xjm,vmBr")))]
"TARGET_SSE && <mask_mode512bit_condition>
&& ix86_binary_operator_ok (<CODE>, <MODE>mode, operands)"
{
@@ -17718,9 +17751,11 @@ (define_insn "*<code><mode>3<mask_name>"
case E_V4DImode:
case E_V4SImode:
case E_V2DImode:
- ssesuffix = (TARGET_AVX512VL
- && (<mask_applied> || which_alternative == 2)
- ? "<ssemodesuffix>" : "");
+ ssesuffix = ((TARGET_AVX512VL
+ && (<mask_applied> || which_alternative == 2))
+ || (MEM_P (operands[2]) && which_alternative == 2
+ && x86_extended_rex2reg_mentioned_p (operands[2])))
+ ? "<ssemodesuffix>" : "";
break;
default:
gcc_unreachable ();
@@ -17760,7 +17795,8 @@ (define_insn "*<code><mode>3<mask_name>"
output_asm_insn (buf, operands);
return "";
}
- [(set_attr "isa" "noavx,avx,avx")
+ [(set_attr "isa" "noavx,avx_noavx512f,avx512f")
+ (set_attr "gpr32" "1,0,1")
(set_attr "type" "sselog")
(set (attr "prefix_data16")
(if_then_else
@@ -17787,7 +17823,7 @@ (define_insn "*<code><mode>3"
[(set (match_operand:VI12_AVX_AVX512F 0 "register_operand" "=x,x,v")
(any_logic:VI12_AVX_AVX512F
(match_operand:VI12_AVX_AVX512F 1 "vector_operand" "%0,x,v")
- (match_operand:VI12_AVX_AVX512F 2 "vector_operand" "xBm,xm,vm")))]
+ (match_operand:VI12_AVX_AVX512F 2 "vector_operand" "xBm,xjm,vm")))]
"TARGET_SSE && !(MEM_P (operands[1]) && MEM_P (operands[2]))"
{
char buf[64];
@@ -17816,7 +17852,10 @@ (define_insn "*<code><mode>3"
case E_V16HImode:
case E_V16QImode:
case E_V8HImode:
- ssesuffix = TARGET_AVX512VL && which_alternative == 2 ? "q" : "";
+ ssesuffix = (((TARGET_AVX512VL && which_alternative == 2)
+ || (MEM_P (operands[2]) && which_alternative == 2
+ && x86_extended_rex2reg_mentioned_p (operands[2]))))
+ ? "q" : "";
break;
default:
gcc_unreachable ();
@@ -17853,7 +17892,8 @@ (define_insn "*<code><mode>3"
output_asm_insn (buf, operands);
return "";
}
- [(set_attr "isa" "noavx,avx,avx")
+ [(set_attr "isa" "noavx,avx_noavx512f,avx512f")
+ (set_attr "gpr32" "1,0,1")
(set_attr "type" "sselog")
(set (attr "prefix_data16")
(if_then_else
@@ -17880,13 +17920,14 @@ (define_insn "<code>v1ti3"
[(set (match_operand:V1TI 0 "register_operand" "=x,x,v")
(any_logic:V1TI
(match_operand:V1TI 1 "register_operand" "%0,x,v")
- (match_operand:V1TI 2 "vector_operand" "xBm,xm,vm")))]
+ (match_operand:V1TI 2 "vector_operand" "xBm,xjm,vm")))]
"TARGET_SSE2"
"@
p<logic>\t{%2, %0|%0, %2}
vp<logic>\t{%2, %1, %0|%0, %1, %2}
vp<logic>d\t{%2, %1, %0|%0, %1, %2}"
- [(set_attr "isa" "noavx,avx,avx512vl")
+ [(set_attr "isa" "noavx,avx_noavx512vl,avx512vl")
+ (set_attr "gpr32" "1,0,1")
(set_attr "prefix" "orig,vex,evex")
(set_attr "prefix_data16" "1,*,*")
(set_attr "type" "sselog")
@@ -20866,33 +20907,35 @@ (define_insn "*<sse2_avx2>_psadbw"
(set_attr "mode" "<sseinsnmode>")])
(define_insn "<sse>_movmsk<ssemodesuffix><avxsizesuffix>"
- [(set (match_operand:SI 0 "register_operand" "=r")
+ [(set (match_operand:SI 0 "register_operand" "=r,jr")
(unspec:SI
- [(match_operand:VF_128_256 1 "register_operand" "x")]
+ [(match_operand:VF_128_256 1 "register_operand" "x,x")]
UNSPEC_MOVMSK))]
"TARGET_SSE"
"%vmovmsk<ssemodesuffix>\t{%1, %0|%0, %1}"
- [(set_attr "type" "ssemov")
- (set_attr "prefix" "maybe_vex")
+ [(set_attr "isa" "noavx,avx")
+ (set_attr "type" "ssemov")
+ (set_attr "prefix" "maybe_evex")
(set_attr "mode" "<MODE>")])
(define_insn "*<sse>_movmsk<ssemodesuffix><avxsizesuffix>_<u>ext"
- [(set (match_operand:DI 0 "register_operand" "=r")
+ [(set (match_operand:DI 0 "register_operand" "=r,jr")
(any_extend:DI
(unspec:SI
- [(match_operand:VF_128_256 1 "register_operand" "x")]
+ [(match_operand:VF_128_256 1 "register_operand" "x,x")]
UNSPEC_MOVMSK)))]
"TARGET_64BIT && TARGET_SSE"
- "%vmovmsk<ssemodesuffix>\t{%1, %k0|%k0, %1}"
- [(set_attr "type" "ssemov")
- (set_attr "prefix" "maybe_vex")
+ "%vmovmsk<ssemodesuffix>\t{%1, %0|%0, %1}"
+ [(set_attr "isa" "noavx,avx")
+ (set_attr "type" "ssemov")
+ (set_attr "prefix" "maybe_evex")
(set_attr "mode" "<MODE>")])
(define_insn_and_split "*<sse>_movmsk<ssemodesuffix><avxsizesuffix>_lt"
- [(set (match_operand:SI 0 "register_operand" "=r")
+ [(set (match_operand:SI 0 "register_operand" "=r,jr")
(unspec:SI
[(lt:VF_128_256
- (match_operand:<sseintvecmode> 1 "register_operand" "x")
+ (match_operand:<sseintvecmode> 1 "register_operand" "x,x")
(match_operand:<sseintvecmode> 2 "const0_operand"))]
UNSPEC_MOVMSK))]
"TARGET_SSE"
@@ -20901,16 +20944,17 @@ (define_insn_and_split "*<sse>_movmsk<ssemodesuffix><avxsizesuffix>_lt"
[(set (match_dup 0)
(unspec:SI [(match_dup 1)] UNSPEC_MOVMSK))]
"operands[1] = gen_lowpart (<MODE>mode, operands[1]);"
- [(set_attr "type" "ssemov")
+ [(set_attr "isa" "noavx,avx")
+ (set_attr "type" "ssemov")
(set_attr "prefix" "maybe_vex")
(set_attr "mode" "<MODE>")])
(define_insn_and_split "*<sse>_movmsk<ssemodesuffix><avxsizesuffix>_<u>ext_lt"
- [(set (match_operand:DI 0 "register_operand" "=r")
+ [(set (match_operand:DI 0 "register_operand" "=r,jr")
(any_extend:DI
(unspec:SI
[(lt:VF_128_256
- (match_operand:<sseintvecmode> 1 "register_operand" "x")
+ (match_operand:<sseintvecmode> 1 "register_operand" "x,x")
(match_operand:<sseintvecmode> 2 "const0_operand"))]
UNSPEC_MOVMSK)))]
"TARGET_64BIT && TARGET_SSE"
@@ -20919,16 +20963,17 @@ (define_insn_and_split "*<sse>_movmsk<ssemodesuffix><avxsizesuffix>_<u>ext_lt"
[(set (match_dup 0)
(any_extend:DI (unspec:SI [(match_dup 1)] UNSPEC_MOVMSK)))]
"operands[1] = gen_lowpart (<MODE>mode, operands[1]);"
- [(set_attr "type" "ssemov")
+ [(set_attr "isa" "noavx,avx")
+ (set_attr "type" "ssemov")
(set_attr "prefix" "maybe_vex")
(set_attr "mode" "<MODE>")])
(define_insn_and_split "*<sse>_movmsk<ssemodesuffix><avxsizesuffix>_shift"
- [(set (match_operand:SI 0 "register_operand" "=r")
+ [(set (match_operand:SI 0 "register_operand" "=r,jr")
(unspec:SI
[(subreg:VF_128_256
(ashiftrt:<sseintvecmode>
- (match_operand:<sseintvecmode> 1 "register_operand" "x")
+ (match_operand:<sseintvecmode> 1 "register_operand" "x,x")
(match_operand:QI 2 "const_int_operand")) 0)]
UNSPEC_MOVMSK))]
"TARGET_SSE"
@@ -20937,17 +20982,18 @@ (define_insn_and_split "*<sse>_movmsk<ssemodesuffix><avxsizesuffix>_shift"
[(set (match_dup 0)
(unspec:SI [(match_dup 1)] UNSPEC_MOVMSK))]
"operands[1] = gen_lowpart (<MODE>mode, operands[1]);"
- [(set_attr "type" "ssemov")
+ [(set_attr "isa" "noavx,avx")
+ (set_attr "type" "ssemov")
(set_attr "prefix" "maybe_vex")
(set_attr "mode" "<MODE>")])
(define_insn_and_split "*<sse>_movmsk<ssemodesuffix><avxsizesuffix>_<u>ext_shift"
- [(set (match_operand:DI 0 "register_operand" "=r")
+ [(set (match_operand:DI 0 "register_operand" "=r,jr")
(any_extend:DI
(unspec:SI
[(subreg:VF_128_256
(ashiftrt:<sseintvecmode>
- (match_operand:<sseintvecmode> 1 "register_operand" "x")
+ (match_operand:<sseintvecmode> 1 "register_operand" "x,x")
(match_operand:QI 2 "const_int_operand")) 0)]
UNSPEC_MOVMSK)))]
"TARGET_64BIT && TARGET_SSE"
@@ -20956,18 +21002,20 @@ (define_insn_and_split "*<sse>_movmsk<ssemodesuffix><avxsizesuffix>_<u>ext_shift
[(set (match_dup 0)
(any_extend:DI (unspec:SI [(match_dup 1)] UNSPEC_MOVMSK)))]
"operands[1] = gen_lowpart (<MODE>mode, operands[1]);"
- [(set_attr "type" "ssemov")
+ [(set_attr "isa" "noavx,avx")
+ (set_attr "type" "ssemov")
(set_attr "prefix" "maybe_vex")
(set_attr "mode" "<MODE>")])
(define_insn "<sse2_avx2>_pmovmskb"
- [(set (match_operand:SI 0 "register_operand" "=r")
+ [(set (match_operand:SI 0 "register_operand" "=r,jr")
(unspec:SI
- [(match_operand:VI1_AVX2 1 "register_operand" "x")]
+ [(match_operand:VI1_AVX2 1 "register_operand" "x,x")]
UNSPEC_MOVMSK))]
"TARGET_SSE2"
"%vpmovmskb\t{%1, %0|%0, %1}"
- [(set_attr "type" "ssemov")
+ [(set_attr "isa" "noavx,avx")
+ (set_attr "type" "ssemov")
(set (attr "prefix_data16")
(if_then_else
(match_test "TARGET_AVX")
@@ -20977,14 +21025,15 @@ (define_insn "<sse2_avx2>_pmovmskb"
(set_attr "mode" "SI")])
(define_insn "*<sse2_avx2>_pmovmskb_zext"
- [(set (match_operand:DI 0 "register_operand" "=r")
+ [(set (match_operand:DI 0 "register_operand" "=r,jr")
(zero_extend:DI
(unspec:SI
- [(match_operand:VI1_AVX2 1 "register_operand" "x")]
+ [(match_operand:VI1_AVX2 1 "register_operand" "x,x")]
UNSPEC_MOVMSK)))]
"TARGET_64BIT && TARGET_SSE2"
"%vpmovmskb\t{%1, %k0|%k0, %1}"
- [(set_attr "type" "ssemov")
+ [(set_attr "isa" "noavx,avx")
+ (set_attr "type" "ssemov")
(set (attr "prefix_data16")
(if_then_else
(match_test "TARGET_AVX")
@@ -20994,14 +21043,15 @@ (define_insn "*<sse2_avx2>_pmovmskb_zext"
(set_attr "mode" "SI")])
(define_insn "*sse2_pmovmskb_ext"
- [(set (match_operand:DI 0 "register_operand" "=r")
+ [(set (match_operand:DI 0 "register_operand" "=r,jr")
(sign_extend:DI
(unspec:SI
- [(match_operand:V16QI 1 "register_operand" "x")]
+ [(match_operand:V16QI 1 "register_operand" "x,x")]
UNSPEC_MOVMSK)))]
"TARGET_64BIT && TARGET_SSE2"
"%vpmovmskb\t{%1, %k0|%k0, %1}"
- [(set_attr "type" "ssemov")
+ [(set_attr "isa" "noavx,avx")
+ (set_attr "type" "ssemov")
(set (attr "prefix_data16")
(if_then_else
(match_test "TARGET_AVX")
@@ -21086,9 +21136,9 @@ (define_split
})
(define_insn_and_split "*<sse2_avx2>_pmovmskb_lt"
- [(set (match_operand:SI 0 "register_operand" "=r")
+ [(set (match_operand:SI 0 "register_operand" "=r,jr")
(unspec:SI
- [(lt:VI1_AVX2 (match_operand:VI1_AVX2 1 "register_operand" "x")
+ [(lt:VI1_AVX2 (match_operand:VI1_AVX2 1 "register_operand" "x,x")
(match_operand:VI1_AVX2 2 "const0_operand"))]
UNSPEC_MOVMSK))]
"TARGET_SSE2"
@@ -21097,7 +21147,8 @@ (define_insn_and_split "*<sse2_avx2>_pmovmskb_lt"
[(set (match_dup 0)
(unspec:SI [(match_dup 1)] UNSPEC_MOVMSK))]
""
- [(set_attr "type" "ssemov")
+ [(set_attr "isa" "noavx,avx")
+ (set_attr "type" "ssemov")
(set (attr "prefix_data16")
(if_then_else
(match_test "TARGET_AVX")
@@ -21107,10 +21158,10 @@ (define_insn_and_split "*<sse2_avx2>_pmovmskb_lt"
(set_attr "mode" "SI")])
(define_insn_and_split "*<sse2_avx2>_pmovmskb_zext_lt"
- [(set (match_operand:DI 0 "register_operand" "=r")
+ [(set (match_operand:DI 0 "register_operand" "=r,jr")
(zero_extend:DI
(unspec:SI
- [(lt:VI1_AVX2 (match_operand:VI1_AVX2 1 "register_operand" "x")
+ [(lt:VI1_AVX2 (match_operand:VI1_AVX2 1 "register_operand" "x,x")
(match_operand:VI1_AVX2 2 "const0_operand"))]
UNSPEC_MOVMSK)))]
"TARGET_64BIT && TARGET_SSE2"
@@ -21119,7 +21170,8 @@ (define_insn_and_split "*<sse2_avx2>_pmovmskb_zext_lt"
[(set (match_dup 0)
(zero_extend:DI (unspec:SI [(match_dup 1)] UNSPEC_MOVMSK)))]
""
- [(set_attr "type" "ssemov")
+ [(set_attr "isa" "noavx,avx")
+ (set_attr "type" "ssemov")
(set (attr "prefix_data16")
(if_then_else
(match_test "TARGET_AVX")
@@ -21129,10 +21181,10 @@ (define_insn_and_split "*<sse2_avx2>_pmovmskb_zext_lt"
(set_attr "mode" "SI")])
(define_insn_and_split "*sse2_pmovmskb_ext_lt"
- [(set (match_operand:DI 0 "register_operand" "=r")
+ [(set (match_operand:DI 0 "register_operand" "=r,jr")
(sign_extend:DI
(unspec:SI
- [(lt:V16QI (match_operand:V16QI 1 "register_operand" "x")
+ [(lt:V16QI (match_operand:V16QI 1 "register_operand" "x,x")
(match_operand:V16QI 2 "const0_operand"))]
UNSPEC_MOVMSK)))]
"TARGET_64BIT && TARGET_SSE2"
@@ -21141,7 +21193,8 @@ (define_insn_and_split "*sse2_pmovmskb_ext_lt"
[(set (match_dup 0)
(sign_extend:DI (unspec:SI [(match_dup 1)] UNSPEC_MOVMSK)))]
""
- [(set_attr "type" "ssemov")
+ [(set_attr "isa" "noavx,avx")
+ (set_attr "type" "ssemov")
(set (attr "prefix_data16")
(if_then_else
(match_test "TARGET_AVX")
@@ -21202,21 +21255,25 @@ (define_insn "*sse2_maskmovdqu"
(set_attr "mode" "TI")])
(define_insn "sse_ldmxcsr"
- [(unspec_volatile [(match_operand:SI 0 "memory_operand" "m")]
+ [(unspec_volatile [(match_operand:SI 0 "memory_operand" "m,jm")]
UNSPECV_LDMXCSR)]
"TARGET_SSE"
"%vldmxcsr\t%0"
- [(set_attr "type" "sse")
+ [(set_attr "isa" "noavx,avx")
+ (set_attr "type" "sse")
+ (set_attr "gpr32" "1,0")
(set_attr "atom_sse_attr" "mxcsr")
(set_attr "prefix" "maybe_vex")
(set_attr "memory" "load")])
(define_insn "sse_stmxcsr"
- [(set (match_operand:SI 0 "memory_operand" "=m")
+ [(set (match_operand:SI 0 "memory_operand" "=m,jm")
(unspec_volatile:SI [(const_int 0)] UNSPECV_STMXCSR))]
"TARGET_SSE"
"%vstmxcsr\t%0"
- [(set_attr "type" "sse")
+ [(set_attr "isa" "noavx,avx")
+ (set_attr "type" "sse")
+ (set_attr "gpr32" "0")
(set_attr "atom_sse_attr" "mxcsr")
(set_attr "prefix" "maybe_vex")
(set_attr "memory" "store")])
@@ -23860,11 +23917,12 @@ (define_expand "<insn>v2siv2di2"
(define_insn "avx_vtest<ssemodesuffix><avxsizesuffix>"
[(set (reg:CC FLAGS_REG)
(unspec:CC [(match_operand:VF_128_256 0 "register_operand" "x")
- (match_operand:VF_128_256 1 "nonimmediate_operand" "xm")]
+ (match_operand:VF_128_256 1 "nonimmediate_operand" "xjm")]
UNSPEC_VTESTP))]
"TARGET_AVX"
"vtest<ssemodesuffix>\t{%1, %0|%0, %1}"
[(set_attr "type" "ssecomi")
+ (set_attr "gpr32" "0")
(set_attr "prefix_extra" "1")
(set_attr "prefix" "vex")
(set_attr "mode" "<MODE>")])
@@ -26925,7 +26983,7 @@ (define_split
(define_insn "avx_vbroadcastf128_<mode>"
[(set (match_operand:V_256 0 "register_operand" "=x,x,x,v,v,v,v")
(vec_concat:V_256
- (match_operand:<ssehalfvecmode> 1 "nonimmediate_operand" "m,0,?x,m,0,m,0")
+ (match_operand:<ssehalfvecmode> 1 "nonimmediate_operand" "jm,0,?x,m,0,m,0")
(match_dup 1)))]
"TARGET_AVX"
"@
@@ -26936,8 +26994,9 @@ (define_insn "avx_vbroadcastf128_<mode>"
vinsert<i128vldq>\t{$1, %1, %0, %0|%0, %0, %1, 1}
vbroadcast<shuffletype>32x4\t{%1, %0|%0, %1}
vinsert<shuffletype>32x4\t{$1, %1, %0, %0|%0, %0, %1, 1}"
- [(set_attr "isa" "*,*,*,avx512dq,avx512dq,avx512vl,avx512vl")
+ [(set_attr "isa" "noavx512vl,*,*,avx512dq,avx512dq,avx512vl,avx512vl")
(set_attr "type" "ssemov,sselog1,sselog1,ssemov,sselog1,ssemov,sselog1")
+ (set_attr "gpr32" "0,1,1,1,1,1,1")
(set_attr "prefix_extra" "1")
(set_attr "length_immediate" "0,1,1,0,1,0,1")
(set_attr "prefix" "vex,vex,vex,evex,evex,evex,evex")
@@ -27220,12 +27279,13 @@ (define_insn "*avx_vperm2f128<mode>_full"
[(set (match_operand:AVX256MODE2P 0 "register_operand" "=x")
(unspec:AVX256MODE2P
[(match_operand:AVX256MODE2P 1 "register_operand" "x")
- (match_operand:AVX256MODE2P 2 "nonimmediate_operand" "xm")
+ (match_operand:AVX256MODE2P 2 "nonimmediate_operand" "xjm")
(match_operand:SI 3 "const_0_to_255_operand")]
UNSPEC_VPERMIL2F128))]
"TARGET_AVX"
"vperm2<i128>\t{%3, %2, %1, %0|%0, %1, %2, %3}"
[(set_attr "type" "sselog")
+ (set_attr "gpr32" "0")
(set_attr "prefix_extra" "1")
(set_attr "length_immediate" "1")
(set_attr "prefix" "vex")
@@ -27342,11 +27402,11 @@ (define_expand "avx_vinsertf128<mode>"
})
(define_insn "vec_set_lo_<mode><mask_name>"
- [(set (match_operand:VI8F_256 0 "register_operand" "=v")
+ [(set (match_operand:VI8F_256 0 "register_operand" "=x,v")
(vec_concat:VI8F_256
- (match_operand:<ssehalfvecmode> 2 "nonimmediate_operand" "vm")
+ (match_operand:<ssehalfvecmode> 2 "nonimmediate_operand" "xjm,vm")
(vec_select:<ssehalfvecmode>
- (match_operand:VI8F_256 1 "register_operand" "v")
+ (match_operand:VI8F_256 1 "register_operand" "x,v")
(parallel [(const_int 2) (const_int 3)]))))]
"TARGET_AVX && <mask_avx512dq_condition>"
{
@@ -27357,7 +27417,9 @@ (define_insn "vec_set_lo_<mode><mask_name>"
else
return "vinsert<i128>\t{$0x0, %2, %1, %0|%0, %1, %2, 0x0}";
}
- [(set_attr "type" "sselog")
+ [(set_attr "isa" "noavx512vl,avx512vl")
+ (set_attr "gpr32" "0,1")
+ (set_attr "type" "sselog")
(set_attr "prefix_extra" "1")
(set_attr "length_immediate" "1")
(set_attr "prefix" "vex")
@@ -27386,11 +27448,11 @@ (define_insn "vec_set_hi_<mode><mask_name>"
(set_attr "mode" "<sseinsnmode>")])
(define_insn "vec_set_lo_<mode><mask_name>"
- [(set (match_operand:VI4F_256 0 "register_operand" "=v")
+ [(set (match_operand:VI4F_256 0 "register_operand" "=x,v")
(vec_concat:VI4F_256
- (match_operand:<ssehalfvecmode> 2 "nonimmediate_operand" "vm")
+ (match_operand:<ssehalfvecmode> 2 "nonimmediate_operand" "xjm,vm")
(vec_select:<ssehalfvecmode>
- (match_operand:VI4F_256 1 "register_operand" "v")
+ (match_operand:VI4F_256 1 "register_operand" "x,v")
(parallel [(const_int 4) (const_int 5)
(const_int 6) (const_int 7)]))))]
"TARGET_AVX"
@@ -27400,20 +27462,22 @@ (define_insn "vec_set_lo_<mode><mask_name>"
else
return "vinsert<i128>\t{$0x0, %2, %1, %0|%0, %1, %2, 0x0}";
}
- [(set_attr "type" "sselog")
+ [(set_attr "isa" "noavx512vl,avx512vl")
+ (set_attr "gpr32" "0,1")
+ (set_attr "type" "sselog")
(set_attr "prefix_extra" "1")
(set_attr "length_immediate" "1")
(set_attr "prefix" "vex")
(set_attr "mode" "<sseinsnmode>")])
(define_insn "vec_set_hi_<mode><mask_name>"
- [(set (match_operand:VI4F_256 0 "register_operand" "=v")
+ [(set (match_operand:VI4F_256 0 "register_operand" "=x,v")
(vec_concat:VI4F_256
(vec_select:<ssehalfvecmode>
- (match_operand:VI4F_256 1 "register_operand" "v")
+ (match_operand:VI4F_256 1 "register_operand" "x,v")
(parallel [(const_int 0) (const_int 1)
(const_int 2) (const_int 3)]))
- (match_operand:<ssehalfvecmode> 2 "nonimmediate_operand" "vm")))]
+ (match_operand:<ssehalfvecmode> 2 "nonimmediate_operand" "xjm,vm")))]
"TARGET_AVX"
{
if (TARGET_AVX512VL)
@@ -27421,7 +27485,9 @@ (define_insn "vec_set_hi_<mode><mask_name>"
else
return "vinsert<i128>\t{$0x1, %2, %1, %0|%0, %1, %2, 0x1}";
}
- [(set_attr "type" "sselog")
+ [(set_attr "isa" "noavx512vl,avx512vl")
+ (set_attr "gpr32" "0,1")
+ (set_attr "type" "sselog")
(set_attr "prefix_extra" "1")
(set_attr "length_immediate" "1")
(set_attr "prefix" "vex")
@@ -27430,7 +27496,7 @@ (define_insn "vec_set_hi_<mode><mask_name>"
(define_insn "vec_set_lo_<mode>"
[(set (match_operand:V16_256 0 "register_operand" "=x,v")
(vec_concat:V16_256
- (match_operand:<ssehalfvecmode> 2 "nonimmediate_operand" "xm,vm")
+ (match_operand:<ssehalfvecmode> 2 "nonimmediate_operand" "xjm,vm")
(vec_select:<ssehalfvecmode>
(match_operand:V16_256 1 "register_operand" "x,v")
(parallel [(const_int 8) (const_int 9)
@@ -27441,7 +27507,9 @@ (define_insn "vec_set_lo_<mode>"
"@
vinsert%~128\t{$0x0, %2, %1, %0|%0, %1, %2, 0x0}
vinserti32x4\t{$0x0, %2, %1, %0|%0, %1, %2, 0x0}"
- [(set_attr "type" "sselog")
+ [(set_attr "isa" "noavx512vl,avx512vl")
+ (set_attr "gpr32" "0,1")
+ (set_attr "type" "sselog")
(set_attr "prefix_extra" "1")
(set_attr "length_immediate" "1")
(set_attr "prefix" "vex,evex")
@@ -27456,12 +27524,14 @@ (define_insn "vec_set_hi_<mode>"
(const_int 2) (const_int 3)
(const_int 4) (const_int 5)
(const_int 6) (const_int 7)]))
- (match_operand:<ssehalfvecmode> 2 "nonimmediate_operand" "xm,vm")))]
+ (match_operand:<ssehalfvecmode> 2 "nonimmediate_operand" "xjm,vm")))]
"TARGET_AVX"
"@
vinsert%~128\t{$0x1, %2, %1, %0|%0, %1, %2, 0x1}
vinserti32x4\t{$0x1, %2, %1, %0|%0, %1, %2, 0x1}"
- [(set_attr "type" "sselog")
+ [(set_attr "isa" "noavx512vl,avx512vl")
+ (set_attr "gpr32" "0,1")
+ (set_attr "type" "sselog")
(set_attr "prefix_extra" "1")
(set_attr "length_immediate" "1")
(set_attr "prefix" "vex,evex")
@@ -27470,7 +27540,7 @@ (define_insn "vec_set_hi_<mode>"
(define_insn "vec_set_lo_v32qi"
[(set (match_operand:V32QI 0 "register_operand" "=x,v")
(vec_concat:V32QI
- (match_operand:V16QI 2 "nonimmediate_operand" "xm,v")
+ (match_operand:V16QI 2 "nonimmediate_operand" "xjm,v")
(vec_select:V16QI
(match_operand:V32QI 1 "register_operand" "x,v")
(parallel [(const_int 16) (const_int 17)
@@ -27485,7 +27555,9 @@ (define_insn "vec_set_lo_v32qi"
"@
vinsert%~128\t{$0x0, %2, %1, %0|%0, %1, %2, 0x0}
vinserti32x4\t{$0x0, %2, %1, %0|%0, %1, %2, 0x0}"
- [(set_attr "type" "sselog")
+ [(set_attr "isa" "noavx512vl,avx512vl")
+ (set_attr "type" "sselog")
+ (set_attr "gpr32" "0,1")
(set_attr "prefix_extra" "1")
(set_attr "length_immediate" "1")
(set_attr "prefix" "vex,evex")
@@ -27504,12 +27576,14 @@ (define_insn "vec_set_hi_v32qi"
(const_int 10) (const_int 11)
(const_int 12) (const_int 13)
(const_int 14) (const_int 15)]))
- (match_operand:V16QI 2 "nonimmediate_operand" "xm,vm")))]
+ (match_operand:V16QI 2 "nonimmediate_operand" "xjm,vm")))]
"TARGET_AVX"
"@
vinsert%~128\t{$0x1, %2, %1, %0|%0, %1, %2, 0x1}
vinserti32x4\t{$0x1, %2, %1, %0|%0, %1, %2, 0x1}"
- [(set_attr "type" "sselog")
+ [(set_attr "isa" "noavx512vl,avx512vl")
+ (set_attr "gpr32" "0")
+ (set_attr "type" "sselog")
(set_attr "prefix_extra" "1")
(set_attr "length_immediate" "1")
(set_attr "prefix" "vex,evex")
@@ -27519,7 +27593,7 @@ (define_insn "<avx_avx2>_maskload<ssemodesuffix><avxsizesuffix>"
[(set (match_operand:V48_128_256 0 "register_operand" "=x")
(unspec:V48_128_256
[(match_operand:<sseintvecmode> 2 "register_operand" "x")
- (match_operand:V48_128_256 1 "memory_operand" "m")]
+ (match_operand:V48_128_256 1 "memory_operand" "jm")]
UNSPEC_MASKMOV))]
"TARGET_AVX"
{
@@ -27529,13 +27603,14 @@ (define_insn "<avx_avx2>_maskload<ssemodesuffix><avxsizesuffix>"
return "vmaskmov<ssefltmodesuffix>\t{%1, %2, %0|%0, %2, %1}";
}
[(set_attr "type" "sselog1")
+ (set_attr "gpr32" "0")
(set_attr "prefix_extra" "1")
(set_attr "prefix" "vex")
(set_attr "btver2_decode" "vector")
(set_attr "mode" "<sseinsnmode>")])
(define_insn "<avx_avx2>_maskstore<ssemodesuffix><avxsizesuffix>"
- [(set (match_operand:V48_128_256 0 "memory_operand" "+m")
+ [(set (match_operand:V48_128_256 0 "memory_operand" "+jm")
(unspec:V48_128_256
[(match_operand:<sseintvecmode> 1 "register_operand" "x")
(match_operand:V48_128_256 2 "register_operand" "x")
@@ -27549,6 +27624,7 @@ (define_insn "<avx_avx2>_maskstore<ssemodesuffix><avxsizesuffix>"
return "vmaskmov<ssefltmodesuffix>\t{%2, %1, %0|%0, %1, %2}";
}
[(set_attr "type" "sselog1")
+ (set_attr "gpr32" "0")
(set_attr "prefix_extra" "1")
(set_attr "prefix" "vex")
(set_attr "btver2_decode" "vector")
@@ -27806,7 +27882,7 @@ (define_insn "avx_vec_concat<mode>"
[(set (match_operand:V_256_512 0 "register_operand" "=x,v,x,Yv")
(vec_concat:V_256_512
(match_operand:<ssehalfvecmode> 1 "nonimmediate_operand" "x,v,xm,vm")
- (match_operand:<ssehalfvecmode> 2 "nonimm_or_0_operand" "xBt,vm,C,C")))]
+ (match_operand:<ssehalfvecmode> 2 "nonimm_or_0_operand" "xjm,vm,C,C")))]
"TARGET_AVX
&& (operands[2] == CONST0_RTX (<ssehalfvecmode>mode)
|| !MEM_P (operands[1]))"
@@ -28145,7 +28221,7 @@ (define_insn "*avx2_gathersi<VEC_GATHER_MODE:mode>"
[(match_operand:VEC_GATHER_MODE 2 "register_operand" "0")
(match_operator:<ssescalarmode> 7 "vsib_mem_operator"
[(unspec:P
- [(match_operand:P 3 "vsib_address_operand" "Tv")
+ [(match_operand:P 3 "vsib_address_operand" "jb")
(match_operand:<VEC_GATHER_IDXSI> 4 "register_operand" "x")
(match_operand:SI 6 "const1248_operand")]
UNSPEC_VSIBADDR)])
@@ -28156,6 +28232,7 @@ (define_insn "*avx2_gathersi<VEC_GATHER_MODE:mode>"
"TARGET_AVX2"
"%M3v<sseintprefix>gatherd<ssemodesuffix>\t{%1, %7, %0|%0, %7, %1}"
[(set_attr "type" "ssemov")
+ (set_attr "gpr32" "0")
(set_attr "prefix" "vex")
(set_attr "mode" "<sseinsnmode>")])
@@ -28165,7 +28242,7 @@ (define_insn "*avx2_gathersi<VEC_GATHER_MODE:mode>_2"
[(pc)
(match_operator:<ssescalarmode> 6 "vsib_mem_operator"
[(unspec:P
- [(match_operand:P 2 "vsib_address_operand" "Tv")
+ [(match_operand:P 2 "vsib_address_operand" "jb")
(match_operand:<VEC_GATHER_IDXSI> 3 "register_operand" "x")
(match_operand:SI 5 "const1248_operand")]
UNSPEC_VSIBADDR)])
@@ -28176,6 +28253,7 @@ (define_insn "*avx2_gathersi<VEC_GATHER_MODE:mode>_2"
"TARGET_AVX2"
"%M2v<sseintprefix>gatherd<ssemodesuffix>\t{%1, %6, %0|%0, %6, %1}"
[(set_attr "type" "ssemov")
+ (set_attr "gpr32" "0")
(set_attr "prefix" "vex")
(set_attr "mode" "<sseinsnmode>")])
@@ -28206,7 +28284,7 @@ (define_insn "*avx2_gatherdi<VEC_GATHER_MODE:mode>"
[(match_operand:<VEC_GATHER_SRCDI> 2 "register_operand" "0")
(match_operator:<ssescalarmode> 7 "vsib_mem_operator"
[(unspec:P
- [(match_operand:P 3 "vsib_address_operand" "Tv")
+ [(match_operand:P 3 "vsib_address_operand" "jb")
(match_operand:<VEC_GATHER_IDXDI> 4 "register_operand" "x")
(match_operand:SI 6 "const1248_operand")]
UNSPEC_VSIBADDR)])
@@ -28217,6 +28295,7 @@ (define_insn "*avx2_gatherdi<VEC_GATHER_MODE:mode>"
"TARGET_AVX2"
"%M3v<sseintprefix>gatherq<ssemodesuffix>\t{%5, %7, %2|%2, %7, %5}"
[(set_attr "type" "ssemov")
+ (set_attr "gpr32" "0")
(set_attr "prefix" "vex")
(set_attr "mode" "<sseinsnmode>")])
@@ -28226,7 +28305,7 @@ (define_insn "*avx2_gatherdi<VEC_GATHER_MODE:mode>_2"
[(pc)
(match_operator:<ssescalarmode> 6 "vsib_mem_operator"
[(unspec:P
- [(match_operand:P 2 "vsib_address_operand" "Tv")
+ [(match_operand:P 2 "vsib_address_operand" "jb")
(match_operand:<VEC_GATHER_IDXDI> 3 "register_operand" "x")
(match_operand:SI 5 "const1248_operand")]
UNSPEC_VSIBADDR)])
@@ -28241,6 +28320,7 @@ (define_insn "*avx2_gatherdi<VEC_GATHER_MODE:mode>_2"
return "%M2v<sseintprefix>gatherq<ssemodesuffix>\t{%4, %6, %0|%0, %6, %4}";
}
[(set_attr "type" "ssemov")
+ (set_attr "gpr32" "0")
(set_attr "prefix" "vex")
(set_attr "mode" "<sseinsnmode>")])
@@ -28251,7 +28331,7 @@ (define_insn "*avx2_gatherdi<VI4F_256:mode>_3"
[(match_operand:<VEC_GATHER_SRCDI> 2 "register_operand" "0")
(match_operator:<ssescalarmode> 7 "vsib_mem_operator"
[(unspec:P
- [(match_operand:P 3 "vsib_address_operand" "Tv")
+ [(match_operand:P 3 "vsib_address_operand" "jb")
(match_operand:<VEC_GATHER_IDXDI> 4 "register_operand" "x")
(match_operand:SI 6 "const1248_operand")]
UNSPEC_VSIBADDR)])
@@ -28264,6 +28344,7 @@ (define_insn "*avx2_gatherdi<VI4F_256:mode>_3"
"TARGET_AVX2"
"%M3v<sseintprefix>gatherq<ssemodesuffix>\t{%5, %7, %0|%0, %7, %5}"
[(set_attr "type" "ssemov")
+ (set_attr "gpr32" "0")
(set_attr "prefix" "vex")
(set_attr "mode" "<sseinsnmode>")])
@@ -28274,7 +28355,7 @@ (define_insn "*avx2_gatherdi<VI4F_256:mode>_4"
[(pc)
(match_operator:<ssescalarmode> 6 "vsib_mem_operator"
[(unspec:P
- [(match_operand:P 2 "vsib_address_operand" "Tv")
+ [(match_operand:P 2 "vsib_address_operand" "jb")
(match_operand:<VEC_GATHER_IDXDI> 3 "register_operand" "x")
(match_operand:SI 5 "const1248_operand")]
UNSPEC_VSIBADDR)])
@@ -28287,6 +28368,7 @@ (define_insn "*avx2_gatherdi<VI4F_256:mode>_4"
"TARGET_AVX2"
"%M2v<sseintprefix>gatherq<ssemodesuffix>\t{%4, %6, %0|%0, %6, %4}"
[(set_attr "type" "ssemov")
+ (set_attr "gpr32" "0")
(set_attr "prefix" "vex")
(set_attr "mode" "<sseinsnmode>")])
--
2.31.1