* [PATCH] RISC-V: Fix incorrect combine of extended scalar pattern
@ 2023-12-01 12:31 Juzhe-Zhong
From: Juzhe-Zhong @ 2023-12-01 12:31 UTC (permalink / raw)
To: gcc-patches; +Cc: kito.cheng, kito.cheng, jeffreyalaw, rdapp.gcc, Juzhe-Zhong
Background:
For RVV vx instructions such as vadd.vx, when EEW = 64 on RV32 we can't
use vadd.vx directly, since the 64-bit scalar doesn't fit in a 32-bit GPR.
Instead, we need to use:
sw
sw
vlse
vadd.vv
However, there is a special situation where we can still use vadd.vx
directly for EEW = 64 && RV32: when the scalar is a known CONST_INT
value that fits in 32 bits. We have dedicated patterns for that situation:
...
(sign_extend:<VEL> (match_operand:<VSUBEL> 3 "register_operand" " r, r, r, r")).
...
We first force_reg such a CONST_INT (within the 32-bit range) into an
SImode reg, then use these special patterns.
Patterns with this operand match should only be valid on !TARGET_64BIT.
In PR112801, combine matched such patterns on RV64 incorrectly (those
patterns should only be valid on RV32).
This is the bug:
andi a2,a2,2
vsetivli zero,2,e64,m1,ta,ma
sext.w a3,a4
vmv.v.x v1,a2
vslide1down.vx v1,v1,a4 -> it should be a3 instead of a4.
Such incorrect codegen is caused by
...
(sign_extend:DI (subreg:SI (reg:DI 135 [ f.0_3 ]) 0))
] UNSPEC_VSLIDE1DOWN)) 16935 {*pred_slide1downv2di_extended}
...
That is, combine incorrectly matched patterns that should not be valid
on an RV64 system. So add !TARGET_64BIT to all patterns of this kind,
which fixes the issue and also robustifies vector.md.
PR target/112801
gcc/ChangeLog:
* config/riscv/vector.md: Add !TARGET_64BIT.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/autovec/pr112801.c: New test.
---
gcc/config/riscv/vector.md | 52 +++++++++----------
.../gcc.target/riscv/rvv/autovec/pr112801.c | 36 +++++++++++++
2 files changed, 62 insertions(+), 26 deletions(-)
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/autovec/pr112801.c
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index 09e8a63af07..acb812593a0 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -1913,7 +1913,7 @@
(match_operand:V_VLSI_D 2 "register_operand" " vr,vr")
(match_operand:<VM> 4 "register_operand" " vm,vm"))
(match_operand:V_VLSI_D 1 "vector_merge_operand" " vu, 0")))]
- "TARGET_VECTOR"
+ "TARGET_VECTOR && !TARGET_64BIT"
"vmerge.vxm\t%0,%2,%3,%4"
[(set_attr "type" "vimerge")
(set_attr "mode" "<MODE>")])
@@ -2091,7 +2091,7 @@
(sign_extend:<VEL>
(match_operand:<VSUBEL> 3 "register_operand" " r, r, r, r")))
(match_operand:V_VLSI_D 2 "vector_merge_operand" "vu, 0, vu, 0")))]
- "TARGET_VECTOR"
+ "TARGET_VECTOR && !TARGET_64BIT"
"@
vmv.v.x\t%0,%3
vmv.v.x\t%0,%3
@@ -2677,7 +2677,7 @@
(match_operand:<VSUBEL> 4 "reg_or_0_operand" "rJ,rJ, rJ, rJ")))
(match_operand:V_VLSI_D 3 "register_operand" "vr,vr, vr, vr"))
(match_operand:V_VLSI_D 2 "vector_merge_operand" "vu, 0, vu, 0")))]
- "TARGET_VECTOR"
+ "TARGET_VECTOR && !TARGET_64BIT"
"v<insn>.vx\t%0,%3,%z4%p1"
[(set_attr "type" "<int_binop_insn_type>")
(set_attr "mode" "<MODE>")])
@@ -2753,7 +2753,7 @@
(sign_extend:<VEL>
(match_operand:<VSUBEL> 4 "reg_or_0_operand" "rJ,rJ, rJ, rJ"))))
(match_operand:V_VLSI_D 2 "vector_merge_operand" "vu, 0, vu, 0")))]
- "TARGET_VECTOR"
+ "TARGET_VECTOR && !TARGET_64BIT"
"v<insn>.vx\t%0,%3,%z4%p1"
[(set_attr "type" "<int_binop_insn_type>")
(set_attr "mode" "<MODE>")])
@@ -2829,7 +2829,7 @@
(match_operand:<VSUBEL> 4 "reg_or_0_operand" "rJ,rJ, rJ, rJ")))
(match_operand:V_VLSI_D 3 "register_operand" "vr,vr, vr, vr"))
(match_operand:V_VLSI_D 2 "vector_merge_operand" "vu, 0, vu, 0")))]
- "TARGET_VECTOR"
+ "TARGET_VECTOR && !TARGET_64BIT"
"vrsub.vx\t%0,%3,%z4%p1"
[(set_attr "type" "vialu")
(set_attr "mode" "<MODE>")])
@@ -2947,7 +2947,7 @@
(match_operand:<VSUBEL> 4 "reg_or_0_operand" "rJ,rJ, rJ, rJ")))
(match_operand:VFULLI_D 3 "register_operand" "vr,vr, vr, vr")] VMULH)
(match_operand:VFULLI_D 2 "vector_merge_operand" "vu, 0, vu, 0")))]
- "TARGET_VECTOR"
+ "TARGET_VECTOR && !TARGET_64BIT"
"vmulh<v_su>.vx\t%0,%3,%z4%p1"
[(set_attr "type" "vimul")
(set_attr "mode" "<MODE>")])
@@ -3126,7 +3126,7 @@
(match_operand:VI_D 2 "register_operand" "vr,vr"))
(match_operand:<VM> 4 "register_operand" "vm,vm")] UNSPEC_VADC)
(match_operand:VI_D 1 "vector_merge_operand" "vu, 0")))]
- "TARGET_VECTOR"
+ "TARGET_VECTOR && !TARGET_64BIT"
"vadc.vxm\t%0,%2,%z3,%4"
[(set_attr "type" "vicalu")
(set_attr "mode" "<MODE>")
@@ -3210,7 +3210,7 @@
(match_operand:<VSUBEL> 3 "reg_or_0_operand" "rJ,rJ"))))
(match_operand:<VM> 4 "register_operand" "vm,vm")] UNSPEC_VSBC)
(match_operand:VI_D 1 "vector_merge_operand" "vu, 0")))]
- "TARGET_VECTOR"
+ "TARGET_VECTOR && !TARGET_64BIT"
"vsbc.vxm\t%0,%2,%z3,%4"
[(set_attr "type" "vicalu")
(set_attr "mode" "<MODE>")
@@ -3360,7 +3360,7 @@
(match_operand 5 "const_int_operand" " i, i")
(reg:SI VL_REGNUM)
(reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]
- "TARGET_VECTOR"
+ "TARGET_VECTOR && !TARGET_64BIT"
"vmadc.vxm\t%0,%1,%z2,%3"
[(set_attr "type" "vicalu")
(set_attr "mode" "<MODE>")
@@ -3430,7 +3430,7 @@
(match_operand 5 "const_int_operand" " i, i")
(reg:SI VL_REGNUM)
(reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]
- "TARGET_VECTOR"
+ "TARGET_VECTOR && !TARGET_64BIT"
"vmsbc.vxm\t%0,%1,%z2,%3"
[(set_attr "type" "vicalu")
(set_attr "mode" "<MODE>")
@@ -3571,7 +3571,7 @@
(match_operand 4 "const_int_operand" " i, i")
(reg:SI VL_REGNUM)
(reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
- "TARGET_VECTOR"
+ "TARGET_VECTOR && !TARGET_64BIT"
"vmadc.vx\t%0,%1,%z2"
[(set_attr "type" "vicalu")
(set_attr "mode" "<MODE>")
@@ -3638,7 +3638,7 @@
(match_operand 4 "const_int_operand" " i, i")
(reg:SI VL_REGNUM)
(reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
- "TARGET_VECTOR"
+ "TARGET_VECTOR && !TARGET_64BIT"
"vmsbc.vx\t%0,%1,%z2"
[(set_attr "type" "vicalu")
(set_attr "mode" "<MODE>")
@@ -4162,7 +4162,7 @@
(match_operand:<VSUBEL> 4 "register_operand" " r, r, r, r")))
(match_operand:VI_D 3 "register_operand" " vr, vr, vr, vr"))
(match_operand:VI_D 2 "vector_merge_operand" " vu, 0, vu, 0")))]
- "TARGET_VECTOR"
+ "TARGET_VECTOR && !TARGET_64BIT"
"v<insn>.vx\t%0,%3,%4%p1"
[(set_attr "type" "<int_binop_insn_type>")
(set_attr "mode" "<MODE>")])
@@ -4238,7 +4238,7 @@
(sign_extend:<VEL>
(match_operand:<VSUBEL> 4 "register_operand" " r, r, r, r"))))
(match_operand:VI_D 2 "vector_merge_operand" " vu, 0, vu, 0")))]
- "TARGET_VECTOR"
+ "TARGET_VECTOR && !TARGET_64BIT"
"v<insn>.vx\t%0,%3,%4%p1"
[(set_attr "type" "<int_binop_insn_type>")
(set_attr "mode" "<MODE>")])
@@ -4386,7 +4386,7 @@
(sign_extend:<VEL>
(match_operand:<VSUBEL> 4 "reg_or_0_operand" " rJ, rJ, rJ, rJ"))] VSAT_ARITH_OP)
(match_operand:VI_D 2 "vector_merge_operand" " vu, 0, vu, 0")))]
- "TARGET_VECTOR"
+ "TARGET_VECTOR && !TARGET_64BIT"
"v<sat_op>.vx\t%0,%3,%z4%p1"
[(set_attr "type" "<sat_insn_type>")
(set_attr "mode" "<MODE>")])
@@ -4994,7 +4994,7 @@
(sign_extend:<VEL>
(match_operand:<VSUBEL> 4 "register_operand" " r")))])
(match_dup 1)))]
- "TARGET_VECTOR"
+ "TARGET_VECTOR && !TARGET_64BIT"
"vms%B2.vx\t%0,%3,%4,v0.t"
[(set_attr "type" "vicmp")
(set_attr "mode" "<MODE>")
@@ -5020,7 +5020,7 @@
(sign_extend:<VEL>
(match_operand:<VSUBEL> 5 "register_operand" " r, r")))])
(match_operand:<VM> 2 "vector_merge_operand" " vu, 0")))]
- "TARGET_VECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+ "TARGET_VECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode) && !TARGET_64BIT"
"vms%B3.vx\t%0,%4,%5%p1"
[(set_attr "type" "vicmp")
(set_attr "mode" "<MODE>")])
@@ -5041,7 +5041,7 @@
(sign_extend:<VEL>
(match_operand:<VSUBEL> 5 "register_operand" " r, r, r, r, r")))])
(match_operand:<VM> 2 "vector_merge_operand" " vu, vu, 0, vu, 0")))]
- "TARGET_VECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+ "TARGET_VECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode) && !TARGET_64BIT"
"vms%B3.vx\t%0,%4,%5%p1"
[(set_attr "type" "vicmp")
(set_attr "mode" "<MODE>")])
@@ -5062,7 +5062,7 @@
(match_operand:<VSUBEL> 4 "register_operand" " r")))
(match_operand:V_VLSI_D 3 "register_operand" " vr")])
(match_dup 1)))]
- "TARGET_VECTOR"
+ "TARGET_VECTOR && !TARGET_64BIT"
"vms%B2.vx\t%0,%3,%4,v0.t"
[(set_attr "type" "vicmp")
(set_attr "mode" "<MODE>")
@@ -5088,7 +5088,7 @@
(match_operand:<VSUBEL> 5 "register_operand" " r, r")))
(match_operand:V_VLSI_D 4 "register_operand" " vr, vr")])
(match_operand:<VM> 2 "vector_merge_operand" " vu, 0")))]
- "TARGET_VECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+ "TARGET_VECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode) && !TARGET_64BIT"
"vms%B3.vx\t%0,%4,%5%p1"
[(set_attr "type" "vicmp")
(set_attr "mode" "<MODE>")])
@@ -5109,7 +5109,7 @@
(match_operand:<VSUBEL> 5 "register_operand" " r, r, r, r, r")))
(match_operand:V_VLSI_D 4 "register_operand" " vr, 0, 0, vr, vr")])
(match_operand:<VM> 2 "vector_merge_operand" " vu, vu, 0, vu, 0")))]
- "TARGET_VECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+ "TARGET_VECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode) && !TARGET_64BIT"
"vms%B3.vx\t%0,%4,%5%p1"
[(set_attr "type" "vicmp")
(set_attr "mode" "<MODE>")])
@@ -5489,7 +5489,7 @@
(match_operand:V_VLSI_D 3 "register_operand" " 0, vr, 0, vr"))
(match_operand:V_VLSI_D 4 "register_operand" " vr, vr, vr, vr"))
(match_dup 3)))]
- "TARGET_VECTOR"
+ "TARGET_VECTOR && !TARGET_64BIT"
"@
vmadd.vx\t%0,%2,%4%p1
vmv.v.v\t%0,%2\;vmadd.vx\t%0,%2,%4%p1
@@ -5522,7 +5522,7 @@
(match_operand:V_VLSI_D 3 "register_operand" " vr, vr, vr, vr"))
(match_operand:V_VLSI_D 4 "register_operand" " 0, vr, 0, vr"))
(match_dup 4)))]
- "TARGET_VECTOR"
+ "TARGET_VECTOR && !TARGET_64BIT"
"@
vmacc.vx\t%0,%2,%3%p1
vmv.v.v\t%0,%4\;vmacc.vx\t%0,%2,%3%p1
@@ -5787,7 +5787,7 @@
(match_operand:<VSUBEL> 2 "register_operand" " r, r, r, r")))
(match_operand:V_VLSI_D 3 "register_operand" " 0, vr, 0, vr")))
(match_dup 3)))]
- "TARGET_VECTOR"
+ "TARGET_VECTOR && !TARGET_64BIT"
"@
vnmsub.vx\t%0,%2,%4%p1
vmv.v.v\t%0,%3\;vnmsub.vx\t%0,%2,%4%p1
@@ -5820,7 +5820,7 @@
(match_operand:<VSUBEL> 2 "register_operand" " r, r, r, r")))
(match_operand:V_VLSI_D 3 "register_operand" " vr, vr, vr, vr")))
(match_dup 4)))]
- "TARGET_VECTOR"
+ "TARGET_VECTOR && !TARGET_64BIT"
"@
vnmsac.vx\t%0,%2,%3%p1
vmv.v.v\t%0,%4\;vnmsac.vx\t%0,%2,%3%p1
@@ -8153,7 +8153,7 @@
(match_operand:V_VLSI_D 3 "register_operand" " vr, vr, vr, vr")
(sign_extend:<VEL>
(match_operand:<VSUBEL> 4 "reg_or_0_operand" " rJ, rJ, rJ, rJ"))] VSLIDES1))]
- "TARGET_VECTOR"
+ "TARGET_VECTOR && !TARGET_64BIT"
"vslide<ud>.vx\t%0,%3,%z4%p1"
[(set_attr "type" "vislide<ud>")
(set_attr "mode" "<MODE>")])
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/autovec/pr112801.c b/gcc/testsuite/gcc.target/riscv/rvv/autovec/pr112801.c
new file mode 100644
index 00000000000..eaf5c1c39d9
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/autovec/pr112801.c
@@ -0,0 +1,36 @@
+/* { dg-do run } */
+/* { dg-options "-O3" } */
+/* { dg-require-effective-target rv64 } */
+/* { dg-require-effective-target riscv_v } */
+
+#include <assert.h>
+int a;
+void c(int b) { a = b; }
+char d;
+char *const e = &d;
+long f = 66483309998;
+unsigned long g[2];
+short h;
+int k;
+void __attribute__ ((noinline)) l() {
+ int i = 0;
+ for (; i < 2; i++) {
+ {
+ unsigned long *m = &g[0];
+ *m &= 2;
+ if (f && *e)
+ for (;;)
+ ;
+ }
+ k = f;
+ g[1] = k;
+ for (; h;)
+ ;
+ }
+}
+int main() {
+ l();
+ assert (g[1] == 2058800558);
+ c(g[1] >> 32);
+ assert (a == 0);
+}
--
2.36.3