public inbox for gcc-patches@gcc.gnu.org
* [PATCH] RISC-V: Support interleave vector with different step sequence for VLA SLP
@ 2023-12-07 10:24 Juzhe-Zhong
  2023-12-07 15:15 ` Robin Dapp
  0 siblings, 1 reply; 4+ messages in thread
From: Juzhe-Zhong @ 2023-12-07 10:24 UTC (permalink / raw)
  To: gcc-patches; +Cc: kito.cheng, kito.cheng, jeffreyalaw, rdapp.gcc, Juzhe-Zhong

This patch fixes 64 ICEs found in full coverage testing; they all happen for the same reason.

Before this patch:

internal compiler error: in expand_const_vector, at config/riscv/riscv-v.cc:1270

appears 400 times in the full coverage testing report.

The root cause is that we didn't support interleaved vectors with different steps.

Here is the story:

We already support interleaving two sequences that share a single step, e.g.:
v = { 0, 100, 2, 102, 4, 104, ... }
This vector can be interpreted as the interleaving of 2 separate sequences:
sequence1 = { 0, 2, 4, ... } and sequence2 = { 100, 102, 104, ... }.
Both steps are 2.
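
For illustration, a loop of the following shape (the same shape as the new
tests added by this patch; the values here are only chosen to match the
example above) is vectorized into a store of such an interleaved constant
vector:

  struct S { int a, b; } s[8];

  void
  foo ()
  {
    int i;
    for (i = 0; i < 8; i++)
      {
        s[i].a = i * 2;       /* sequence1 = { 0, 2, 4, ... }        */
        s[i].b = i * 2 + 100; /* sequence2 = { 100, 102, 104, ... }  */
      }
  }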

However, we didn't support interleaved vectors whose two sequences have
different steps, which caused an ICE in such situations.

This patch supports interleaved vectors with different steps in the following 2 situations:

1. When the vector mode can be extended to a larger EEW:

Case 1: { 0, 0, 1, 0, 2, 0, ... }
It interleaves sequence1 = { 0, 1, 2, ... } and sequence2 = { 0, 0, 0, ... }.
Suppose the original vector mode can be extended to a larger EEW, e.g. mode = RVVM1SImode.
Then such an interleaved vector can be generated as { 0, 1, 2, ... } in RVVM1DImode.
So, for this situation the codegen is pretty efficient and clean:

.MASK_LEN_STORE (&s, 32B, { -1, ... }, 16, 0, { 0, 0, 1, 0, 2, 0, ... });

->
   vsetvli	a5,zero,e64,m8,ta,ma
   vid.v	v8
   vsetivli	zero,16,e32,m8,ta,ma
   vse32.v	v8,0(a4)
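
A scalar model of this trick (illustrative only, not the code the patch
emits; it assumes a little-endian host, mirroring the in-register element
order RVV uses for such reinterpretation):

  #include <assert.h>
  #include <stdint.h>
  #include <string.h>

  int
  main (void)
  {
    uint32_t v32[16];
    uint64_t v64[8];
    int i;
    for (i = 0; i < 8; i++)
      {
        v32[2 * i] = i;     /* sequence1 = { 0, 1, 2, ... }  */
        v32[2 * i + 1] = 0; /* sequence2 = { 0, 0, 0, ... }  */
      }
    /* Reinterpret each pair of 32-bit elements as one 64-bit element.  */
    memcpy (v64, v32, sizeof (v32));
    for (i = 0; i < 8; i++)
      assert (v64[i] == (uint64_t) i); /* Exactly vid.v at EEW = 64.  */
    return 0;
  }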

Case 2: { 0, 100, 1, 100, 2, 100, ... }

.MASK_LEN_STORE (&s, 32B, { -1, ... }, 16, 0, { 0, 100, 1, 100, 2, 100, ... });

->

   vsetvli	a1,zero,e64,m8,ta,ma
   vid.v	v8
   li	a7,100
   vand.vx	v8,v8,a7
   vsetivli	zero,16,e32,m8,ta,ma
   vse32.v	v8,0(a5)
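
In the same scalar model (again illustrative, little-endian assumed, and
the helper name below is ours), each 64-bit element holds the index in its
low half and the constant 100 in its high half, which is why a vid.v plus a
single vector-scalar bitwise operation materializes the whole vector:

  #include <stdint.h>

  /* The pair (i, 100) viewed as one 64-bit element: the low 32 bits
     hold the index i, the high 32 bits hold the constant 100.  */
  static uint64_t
  case2_element (uint32_t i)
  {
    return (uint64_t) i | ((uint64_t) 100 << 32);
  }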

2. When the vector mode can't be extended to a larger EEW:

Since we can't use EEW = 64 (for example, for RVVM1SImode with -march=rv32gc_zve32f),
we use vmerge to combine the sequences.

.MASK_LEN_STORE (&s, 32B, { -1, ... }, 16, 0, { 200, 100, 201, 103, 202, 106, ... });

1. Generate sequence1 = { 200, 200, 201, 201, 202, 202, ... } and sequence2 = { 100, 100, 103, 103, 106, 106, ... }
2. Merge sequence1 and sequence2 with mask { 0, 1, 0, 1, ... }
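
In scalar terms (an illustrative sketch with a hypothetical helper name,
not the RTL the patch emits), both sequences are stepped by the half-index
i / 2 and the parity of i selects between them:

  /* Scalar model of the vmerge fallback for
     { 200, 100, 201, 103, 202, 106, ... }.  */
  void
  build_vector (int v[16])
  {
    int i;
    for (i = 0; i < 16; i++)
      {
        int seq1 = 200 + (i / 2) * 1; /* { 200, 200, 201, 201, ... }  */
        int seq2 = 100 + (i / 2) * 3; /* { 100, 100, 103, 103, ... }  */
        v[i] = (i & 1) ? seq2 : seq1; /* mask = { 0, 1, 0, 1, ... }   */
      }
  }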

gcc/ChangeLog:

	* config/riscv/riscv-protos.h (expand_vec_series): Adapt function.
	* config/riscv/riscv-v.cc (rvv_builder::double_steps_npatterns_p): New function.
	(expand_vec_series): Adapt function.
	(expand_const_vector): Support new interleave vector with different step.

gcc/testsuite/ChangeLog:

	* gcc.target/riscv/rvv/autovec/slp-interleave-1.c: New test.
	* gcc.target/riscv/rvv/autovec/slp-interleave-2.c: New test.
	* gcc.target/riscv/rvv/autovec/slp-interleave-3.c: New test.
	* gcc.target/riscv/rvv/autovec/slp-interleave-4.c: New test.

---
 gcc/config/riscv/riscv-protos.h               |   2 +-
 gcc/config/riscv/riscv-v.cc                   | 147 ++++++++++++++++--
 .../riscv/rvv/autovec/slp-interleave-1.c      |  17 ++
 .../riscv/rvv/autovec/slp-interleave-2.c      |  18 +++
 .../riscv/rvv/autovec/slp-interleave-3.c      |  19 +++
 .../riscv/rvv/autovec/slp-interleave-4.c      |  19 +++
 6 files changed, 210 insertions(+), 12 deletions(-)
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/autovec/slp-interleave-1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/autovec/slp-interleave-2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/autovec/slp-interleave-3.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/autovec/slp-interleave-4.c

diff --git a/gcc/config/riscv/riscv-protos.h b/gcc/config/riscv/riscv-protos.h
index bfbd2bf0d18..a6f204f3066 100644
--- a/gcc/config/riscv/riscv-protos.h
+++ b/gcc/config/riscv/riscv-protos.h
@@ -543,7 +543,7 @@ void expand_tuple_move (rtx *);
 bool expand_block_move (rtx, rtx, rtx);
 machine_mode preferred_simd_mode (scalar_mode);
 machine_mode get_mask_mode (machine_mode);
-void expand_vec_series (rtx, rtx, rtx);
+void expand_vec_series (rtx, rtx, rtx, rtx = 0);
 void expand_vec_init (rtx, rtx);
 void expand_vec_perm (rtx, rtx, rtx, rtx);
 void expand_select_vl (rtx *);
diff --git a/gcc/config/riscv/riscv-v.cc b/gcc/config/riscv/riscv-v.cc
index 71cb7567f1a..8533bbcc1c3 100644
--- a/gcc/config/riscv/riscv-v.cc
+++ b/gcc/config/riscv/riscv-v.cc
@@ -432,6 +432,7 @@ public:
 
   bool single_step_npatterns_p () const;
   bool npatterns_all_equal_p () const;
+  bool double_steps_npatterns_p () const;
 
   machine_mode new_mode () const { return m_new_mode; }
   scalar_mode inner_mode () const { return m_inner_mode; }
@@ -668,6 +669,26 @@ rvv_builder::single_step_npatterns_p () const
   return true;
 }
 
+/* Return true each pattern has different 2 steps.
+   TODO: We currently only support NPATTERNS = 2.  */
+bool
+rvv_builder::double_steps_npatterns_p () const
+{
+  if (npatterns () != 2 || nelts_per_pattern () != 3)
+    return false;
+  for (unsigned int i = 0; i < npatterns (); i++)
+    {
+      poly_int64 ele0 = rtx_to_poly_int64 (elt (i));
+      poly_int64 ele1 = rtx_to_poly_int64 (elt (npatterns () + i));
+      poly_int64 ele2 = rtx_to_poly_int64 (elt (npatterns () * 2 + i));
+      poly_int64 diff1 = ele1 - ele0;
+      poly_int64 diff2 = ele2 - ele1;
+      if (maybe_ne (diff1, diff2))
+	return false;
+    }
+  return true;
+}
+
 /* Return true if all elements of NPATTERNS are equal.
 
    E.g. NPATTERNS = 4:
@@ -955,10 +976,15 @@ get_repeating_sequence_dup_machine_mode (const rvv_builder &builder,
   return get_vector_mode (inner_mode, dup_nunit).require ();
 }
 
-/* Expand series const vector.  */
+/* Expand series const vector.  If VID is NULL_RTX, we use vid.v
+   instructions to generate sequence for VID:
+
+     VID = { 0, 1, 2, 3, ... }
+
+   Otherwise, we use the VID argument directly.  */
 
 void
-expand_vec_series (rtx dest, rtx base, rtx step)
+expand_vec_series (rtx dest, rtx base, rtx step, rtx vid)
 {
   machine_mode mode = GET_MODE (dest);
   poly_int64 nunits_m1 = GET_MODE_NUNITS (mode) - 1;
@@ -968,14 +994,18 @@ expand_vec_series (rtx dest, rtx base, rtx step)
   /* VECT_IV = BASE + I * STEP.  */
 
   /* Step 1: Generate I = { 0, 1, 2, ... } by vid.v.  */
-  rtx vid = gen_reg_rtx (mode);
-  rtx op[] = {vid};
-  emit_vlmax_insn (code_for_pred_series (mode), NULLARY_OP, op);
+  bool reverse_p = !vid && rtx_equal_p (step, constm1_rtx)
+		   && poly_int_rtx_p (base, &value)
+		   && known_eq (nunits_m1, value);
+  if (!vid)
+    {
+      vid = gen_reg_rtx (mode);
+      rtx op[] = {vid};
+      emit_vlmax_insn (code_for_pred_series (mode), NULLARY_OP, op);
+    }
 
   rtx step_adj;
-  if (rtx_equal_p (step, constm1_rtx)
-      && poly_int_rtx_p (base, &value)
-      && known_eq (nunits_m1, value))
+  if (reverse_p)
     {
       /* Special case:
 	   {nunits - 1, nunits - 2, ... , 0}.
@@ -1246,13 +1276,108 @@ expand_const_vector (rtx target, rtx src)
 				BINARY_OP, add_ops);
 	    }
 	}
+      else if (builder.double_steps_npatterns_p ())
+	{
+	  rtx base1 = builder.elt (0);
+	  rtx base2 = builder.elt (1);
+	  poly_int64 step1
+	    = rtx_to_poly_int64 (builder.elt (builder.npatterns ()))
+	      - rtx_to_poly_int64 (base1);
+	  poly_int64 step2
+	    = rtx_to_poly_int64 (builder.elt (builder.npatterns () + 1))
+	      - rtx_to_poly_int64 (base2);
+
+	  /* For { 1, 0, 2, 0, ... , n - 1, 0 }, we can use larger EEW
+	     integer vector mode to generate such vector efficiently.
+
+	     E.g. EEW = 16, { 2, 0, 4, 0, ... }
+
+	     can be interpreted into:
+
+		  EEW = 32, { 2, 4, ... }  */
+	  unsigned int new_smode_bitsize = builder.inner_bits_size () * 2;
+	  scalar_int_mode new_smode;
+	  machine_mode new_mode;
+	  poly_uint64 new_nunits
+	    = exact_div (GET_MODE_NUNITS (builder.mode ()), 2);
+	  if (int_mode_for_size (new_smode_bitsize, 0).exists (&new_smode)
+	      && get_vector_mode (new_smode, new_nunits).exists (&new_mode))
+	    {
+	      rtx tmp = gen_reg_rtx (new_mode);
+	      base1 = gen_int_mode (rtx_to_poly_int64 (base1), new_smode);
+	      expand_vec_series (tmp, base1, gen_int_mode (step1, new_smode));
+
+	      if (rtx_equal_p (base2, const0_rtx) && known_eq (step2, 0))
+		/* { 1, 0, 2, 0, ... }.  */
+		emit_move_insn (target, gen_lowpart (mode, tmp));
+	      else if (known_eq (step2, 0))
+		{
+		  /* { 1, 1, 2, 1, ... }.  */
+		  rtx scalar = expand_simple_binop (
+		    new_smode, ASHIFT,
+		    gen_int_mode (rtx_to_poly_int64 (base2), new_smode),
+		    gen_int_mode (builder.inner_bits_size (), new_smode),
+		    NULL_RTX, false, OPTAB_DIRECT);
+		  rtx tmp2 = gen_reg_rtx (new_mode);
+		  rtx and_ops[] = {tmp2, tmp, scalar};
+		  emit_vlmax_insn (code_for_pred_scalar (AND, new_mode),
+				   BINARY_OP, and_ops);
+		  emit_move_insn (target, gen_lowpart (mode, tmp2));
+		}
+	      else
+		{
+		  /* { 1, 3, 2, 6, ... }.  */
+		  rtx tmp2 = gen_reg_rtx (new_mode);
+		  base2 = gen_int_mode (rtx_to_poly_int64 (base2), new_smode);
+		  expand_vec_series (tmp2, base2,
+				     gen_int_mode (step1, new_smode));
+		  rtx shifted_tmp2 = expand_simple_binop (
+		    new_mode, ASHIFT, tmp2,
+		    gen_int_mode (builder.inner_bits_size (), Pmode), NULL_RTX,
+		    false, OPTAB_DIRECT);
+		  rtx tmp3 = gen_reg_rtx (new_mode);
+		  rtx ior_ops[] = {tmp3, tmp, shifted_tmp2};
+		  emit_vlmax_insn (code_for_pred (IOR, new_mode), BINARY_OP,
+				   ior_ops);
+		  emit_move_insn (target, gen_lowpart (mode, tmp3));
+		}
+	    }
+	  else
+	    {
+	      rtx vid = gen_reg_rtx (mode);
+	      expand_vec_series (vid, const0_rtx, const1_rtx);
+	      /* Transform into { 0, 0, 1, 1, 2, 2, ... }.  */
+	      rtx shifted_vid
+		= expand_simple_binop (mode, LSHIFTRT, vid, const1_rtx,
+				       NULL_RTX, false, OPTAB_DIRECT);
+	      rtx tmp1 = gen_reg_rtx (mode);
+	      rtx tmp2 = gen_reg_rtx (mode);
+	      expand_vec_series (tmp1, base1,
+				 gen_int_mode (step1, builder.inner_mode ()),
+				 shifted_vid);
+	      expand_vec_series (tmp2, base2,
+				 gen_int_mode (step2, builder.inner_mode ()),
+				 shifted_vid);
+
+	      /* Transform into { 0, 1, 0, 1, 0, 1, ... }.  */
+	      rtx and_vid = gen_reg_rtx (mode);
+	      rtx and_ops[] = {and_vid, vid, const1_rtx};
+	      emit_vlmax_insn (code_for_pred_scalar (AND, mode), BINARY_OP,
+			       and_ops);
+	      rtx mask = gen_reg_rtx (builder.mask_mode ());
+	      expand_vec_cmp (mask, EQ, and_vid, CONST1_RTX (mode));
+
+	      rtx ops[] = {target, tmp1, tmp2, mask};
+	      emit_vlmax_insn (code_for_pred_merge (mode), MERGE_OP, ops);
+	    }
+	}
       else if (npatterns == 1 && nelts_per_pattern == 3)
 	{
 	  /* Generate the following CONST_VECTOR:
 	     { base0, base1, base1 + step, base1 + step * 2, ... }  */
-	  rtx base0 = CONST_VECTOR_ELT (src, 0);
-	  rtx base1 = CONST_VECTOR_ELT (src, 1);
-	  rtx step = CONST_VECTOR_ELT (src, 2);
+	  rtx base0 = builder.elt (0);
+	  rtx base1 = builder.elt (1);
+	  rtx step = builder.elt (2);
 	  /* Step 1 - { base1, base1 + step, base1 + step * 2, ... }  */
 	  rtx tmp = gen_reg_rtx (mode);
 	  expand_vec_series (tmp, base1, step);
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/autovec/slp-interleave-1.c b/gcc/testsuite/gcc.target/riscv/rvv/autovec/slp-interleave-1.c
new file mode 100644
index 00000000000..9f371436fe1
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/autovec/slp-interleave-1.c
@@ -0,0 +1,17 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gcv_zvl1024b -mabi=lp64d -fno-vect-cost-model --param=riscv-autovec-lmul=m8 -O3 -fdump-tree-optimized-details" } */
+
+struct S { int a, b; } s[8];
+
+void
+foo ()
+{
+  int i;
+  for (i = 0; i < 8; i++)
+    {
+      s[i].b = 0;
+      s[i].a = i;
+    }
+}
+
+/* { dg-final { scan-tree-dump-times "\{ 0, 0, 1, 0, 2, 0, ... \}" 1 "optimized" } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/autovec/slp-interleave-2.c b/gcc/testsuite/gcc.target/riscv/rvv/autovec/slp-interleave-2.c
new file mode 100644
index 00000000000..6cc390c0b34
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/autovec/slp-interleave-2.c
@@ -0,0 +1,18 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gcv_zvl1024b -mabi=lp64d -fno-vect-cost-model --param=riscv-autovec-lmul=m8 -O3 -fdump-tree-optimized-details" } */
+
+struct S { int a, b; } s[8];
+
+void
+foo ()
+{
+  int i;
+  for (i = 0; i < 8; i++)
+    {
+      s[i].b = 1;
+      s[i].a = i;
+    }
+}
+
+/* { dg-final { scan-tree-dump-times "\{ 0, 1, 1, 1, 2, 1, ... \}" 1 "optimized" } } */
+/* { dg-final { scan-assembler-times {slli\t[a-x0-9]+,\s*[a-x0-9]+,\s*32} 1 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/autovec/slp-interleave-3.c b/gcc/testsuite/gcc.target/riscv/rvv/autovec/slp-interleave-3.c
new file mode 100644
index 00000000000..326d66e2559
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/autovec/slp-interleave-3.c
@@ -0,0 +1,19 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gcv_zvl1024b -mabi=lp64d -fno-vect-cost-model --param=riscv-autovec-lmul=m8 -O3 -fdump-tree-optimized-details" } */
+
+struct S { int a, b; } s[8];
+
+void
+foo ()
+{
+  int i;
+  for (i = 0; i < 8; i++)
+    {
+      s[i].b = i*3 + 100;
+      s[i].a = i + 200;
+    }
+}
+
+/* { dg-final { scan-tree-dump-times "\{ 200, 100, 201, 103, 202, 106, ... \}" 1 "optimized" } } */
+/* { dg-final { scan-assembler-times {vsll\.vx} 1 } } */
+/* { dg-final { scan-assembler-times {vor\.vv} 1 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/autovec/slp-interleave-4.c b/gcc/testsuite/gcc.target/riscv/rvv/autovec/slp-interleave-4.c
new file mode 100644
index 00000000000..2bb73ebcfd1
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/autovec/slp-interleave-4.c
@@ -0,0 +1,19 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zve32f_zvl1024b -mabi=lp64d -fno-vect-cost-model --param=riscv-autovec-lmul=m8 -O3 -fdump-tree-optimized-details" } */
+
+struct S { int a, b; } s[8];
+
+void
+foo ()
+{
+  int i;
+  for (i = 0; i < 8; i++)
+    {
+      s[i].b = i*3 + 100;
+      s[i].a = i + 200;
+    }
+}
+
+/* { dg-final { scan-tree-dump-times "\{ 200, 100, 201, 103, 202, 106, ... \}" 1 "optimized" } } */
+/* { dg-final { scan-assembler-times {vand\.vi} 1 } } */
+/* { dg-final { scan-assembler-times {vmseq\.vi} 1 } } */
-- 
2.36.3



* Re: [PATCH] RISC-V: Support interleave vector with different step sequence for VLA SLP
  2023-12-07 10:24 [PATCH] RISC-V: Support interleave vector with different step sequence for VLA SLP Juzhe-Zhong
@ 2023-12-07 15:15 ` Robin Dapp
  0 siblings, 0 replies; 4+ messages in thread
From: Robin Dapp @ 2023-12-07 15:15 UTC (permalink / raw)
  To: Juzhe-Zhong, gcc-patches; +Cc: rdapp.gcc, kito.cheng, kito.cheng, jeffreyalaw

Sorry for the delay, just a tiny naming/comment nit.
Rest LGTM, no need for a v2.
 
> +/* Return true each pattern has different 2 steps.
> +   TODO: We currently only support NPATTERNS = 2.  */

Return true if the permutation consists of two interleaved
patterns with a constant step each.

> +bool
> +rvv_builder::double_steps_npatterns_p () const
> +{

Maybe also rename this to interleaved_stepped_npatterns_p?

Thanks.

Regards
 Robin



* Re: [PATCH] RISC-V: Support interleave vector with different step sequence for VLA SLP
  2023-12-07 10:15 Juzhe-Zhong
@ 2023-12-07 10:26 ` juzhe.zhong
  0 siblings, 0 replies; 4+ messages in thread
From: juzhe.zhong @ 2023-12-07 10:26 UTC (permalink / raw)
  To: 钟居哲, gcc-patches
  Cc: kito.cheng, Kito.cheng, jeffreyalaw, Robin Dapp


Resending the patch
https://gcc.gnu.org/pipermail/gcc-patches/2023-December/639728.html
with changelog changes.

No code changes.



juzhe.zhong@rivai.ai
 

* [PATCH] RISC-V: Support interleave vector with different step sequence for VLA SLP
@ 2023-12-07 10:15 Juzhe-Zhong
  2023-12-07 10:26 ` juzhe.zhong
  0 siblings, 1 reply; 4+ messages in thread
From: Juzhe-Zhong @ 2023-12-07 10:15 UTC (permalink / raw)
  To: gcc-patches; +Cc: kito.cheng, kito.cheng, jeffreyalaw, rdapp.gcc, Juzhe-Zhong

This patch fixes 400 ICEs found in full coverage testing; they all happen for the same reason.

Before this patch:

internal compiler error: in validate_change_or_fail, at config/riscv/riscv-v.cc:4597

appears 400 times in the full coverage testing report.

The root cause is that we didn't support interleaved vectors with different steps.
