public inbox for gcc-patches@gcc.gnu.org
 help / color / mirror / Atom feed
* [PATCH] Avoid compile time hog on vect_peel_nonlinear_iv_init for nonlinear induction vec_step_op_mul when iteration count is too big.
@ 2023-10-18  8:32 liuhongt
  2023-10-18  8:43 ` [PATCH] Avoid compile time hog on vect_peel_nonlinear_iv_init for nonlinear induction vec_step_op_mul when iteration count is too big. " Hongtao Liu
  2023-10-18 10:50 ` [PATCH] Avoid compile time hog on vect_peel_nonlinear_iv_init for nonlinear induction vec_step_op_mul when iteration count is too big. " Richard Biener
  0 siblings, 2 replies; 9+ messages in thread
From: liuhongt @ 2023-10-18  8:32 UTC (permalink / raw)
  To: gcc-patches; +Cc: rguenther

There's a loop in vect_peel_nonlinear_iv_init to get init_expr *
pow (step_expr, skip_niters). When skip_niters is too big, compile time
explodes. To avoid that, optimize init_expr * pow (step_expr, skip_niters)
to init_expr << (exact_log2 (step_expr) * skip_niters) when step_expr is
a power of 2; otherwise give up vectorization when skip_niters >=
TYPE_PRECISION (TREE_TYPE (init_expr)).

Also give up vectorization when niters_skip is negative, which will be
used for fully masked loops.
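The rewrite the patch performs can be sketched outside the compiler; this is a minimal model in plain C, with made-up helper names and uint32_t standing in for the vectorizer's unsigned utype (so overflow wraps the same way):

```c
#include <assert.h>
#include <stdint.h>

/* Reference semantics: init * step^skipn by repeated (wrapping)
   multiplication, as the old loop in vect_peel_nonlinear_iv_init
   did -- O(skipn) multiplies.  */
static uint32_t pow_mul (uint32_t init, uint32_t step, unsigned skipn)
{
  uint32_t r = init;
  for (unsigned i = 0; i < skipn; i++)
    r *= step;
  return r;
}

/* Shift form used when step is a power of two:
   init << (log2 (step) * skipn).  Once the shift amount reaches the
   type precision the result is known to be zero, so neither a loop
   nor a shift is needed at all.  */
static uint32_t pow_shift (uint32_t init, unsigned log2_step, unsigned skipn)
{
  unsigned amount = log2_step * skipn;  /* caller guards against overflow */
  return amount >= 32 ? 0 : init << amount;
}
```

For step = 4 (log2 = 2) and skipn = 5 both forms give init * 1024; once the shift amount reaches the precision, the shift form short-circuits to zero while the multiply loop would still iterate skipn times.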

Bootstrapped and regtested on x86_64-pc-linux-gnu{-m32,}.
Ok for trunk?

gcc/ChangeLog:

	PR tree-optimization/111820
	PR tree-optimization/111833
	* tree-vect-loop-manip.cc (vect_can_peel_nonlinear_iv_p): Give
	up vectorization for nonlinear iv vect_step_op_mul when
	step_expr is not exact_log2 and niters is greater than
	TYPE_PRECISION (TREE_TYPE (step_expr)). Also don't vectorize
	for negative niters_skip which will be used by fully masked
	loop.
	(vect_can_advance_ivs_p): Pass whole phi_info to
	vect_can_peel_nonlinear_iv_p.
	* tree-vect-loop.cc (vect_peel_nonlinear_iv_init): Optimize
	init_expr * pow (step_expr, skipn) to init_expr
	<< (log2 (step_expr) * skipn) when step_expr is exact_log2.

gcc/testsuite/ChangeLog:

	* gcc.target/i386/pr111820-1.c: New test.
	* gcc.target/i386/pr111820-2.c: New test.
	* gcc.target/i386/pr103144-mul-1.c: Adjust testcase.
---
 .../gcc.target/i386/pr103144-mul-1.c          |  6 ++--
 gcc/testsuite/gcc.target/i386/pr111820-1.c    | 16 ++++++++++
 gcc/testsuite/gcc.target/i386/pr111820-2.c    | 17 ++++++++++
 gcc/tree-vect-loop-manip.cc                   | 28 ++++++++++++++--
 gcc/tree-vect-loop.cc                         | 32 ++++++++++++++++---
 5 files changed, 88 insertions(+), 11 deletions(-)
 create mode 100644 gcc/testsuite/gcc.target/i386/pr111820-1.c
 create mode 100644 gcc/testsuite/gcc.target/i386/pr111820-2.c

diff --git a/gcc/testsuite/gcc.target/i386/pr103144-mul-1.c b/gcc/testsuite/gcc.target/i386/pr103144-mul-1.c
index 640c34fd959..f80d1094097 100644
--- a/gcc/testsuite/gcc.target/i386/pr103144-mul-1.c
+++ b/gcc/testsuite/gcc.target/i386/pr103144-mul-1.c
@@ -23,7 +23,7 @@ foo_mul_const (int* a)
   for (int i = 0; i != N; i++)
     {
       a[i] = b;
-      b *= 3;
+      b *= 4;
     }
 }
 
@@ -34,7 +34,7 @@ foo_mul_peel (int* a, int b)
   for (int i = 0; i != 39; i++)
     {
       a[i] = b;
-      b *= 3;
+      b *= 4;
     }
 }
 
@@ -46,6 +46,6 @@ foo_mul_peel_const (int* a)
   for (int i = 0; i != 39; i++)
     {
       a[i] = b;
-      b *= 3;
+      b *= 4;
     }
 }
diff --git a/gcc/testsuite/gcc.target/i386/pr111820-1.c b/gcc/testsuite/gcc.target/i386/pr111820-1.c
new file mode 100644
index 00000000000..50e960c39d4
--- /dev/null
+++ b/gcc/testsuite/gcc.target/i386/pr111820-1.c
@@ -0,0 +1,16 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -mavx2 -fno-tree-vrp -Wno-aggressive-loop-optimizations -fdump-tree-vect-details" } */
+/* { dg-final { scan-tree-dump "Avoid compile time hog on vect_peel_nonlinear_iv_init for nonlinear induction vec_step_op_mul when iteration count is too big" "vect" } } */
+
+int r;
+int r_0;
+
+void f1 (void)
+{
+  int n = 0;
+  while (-- n)
+    {
+      r_0 += r;
+      r  *= 3;
+    }
+}
diff --git a/gcc/testsuite/gcc.target/i386/pr111820-2.c b/gcc/testsuite/gcc.target/i386/pr111820-2.c
new file mode 100644
index 00000000000..bbdb40798c6
--- /dev/null
+++ b/gcc/testsuite/gcc.target/i386/pr111820-2.c
@@ -0,0 +1,17 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -mavx2 -fno-tree-vrp -fdump-tree-vect-details -Wno-aggressive-loop-optimizations" } */
+/* { dg-final { scan-tree-dump "LOOP VECTORIZED" "vect" } } */
+
+int r;
+int r_0;
+
+void f (void)
+{
+  int n = 0;
+  while (-- n)
+    {
+      r_0 += r ;
+      r  *= 2;
+    }
+}
+
diff --git a/gcc/tree-vect-loop-manip.cc b/gcc/tree-vect-loop-manip.cc
index 2608c286e5d..a530088b61d 100644
--- a/gcc/tree-vect-loop-manip.cc
+++ b/gcc/tree-vect-loop-manip.cc
@@ -1783,8 +1783,10 @@ iv_phi_p (stmt_vec_info stmt_info)
 /* Return true if vectorizer can peel for nonlinear iv.  */
 static bool
 vect_can_peel_nonlinear_iv_p (loop_vec_info loop_vinfo,
-			      enum vect_induction_op_type induction_type)
+			      stmt_vec_info stmt_info)
 {
+  enum vect_induction_op_type induction_type
+    = STMT_VINFO_LOOP_PHI_EVOLUTION_TYPE (stmt_info);
   tree niters_skip;
   /* Init_expr will be update by vect_update_ivs_after_vectorizer,
      if niters or vf is unkown:
@@ -1805,11 +1807,31 @@ vect_can_peel_nonlinear_iv_p (loop_vec_info loop_vinfo,
       return false;
     }
 
+  /* Avoid compile time hog on vect_peel_nonlinear_iv_init.  */
+  if (induction_type == vect_step_op_mul)
+    {
+      tree step_expr = STMT_VINFO_LOOP_PHI_EVOLUTION_PART (stmt_info);
+      tree type = TREE_TYPE (step_expr);
+
+      if (wi::exact_log2 (wi::to_wide (step_expr)) == -1
+	  && LOOP_VINFO_INT_NITERS(loop_vinfo) >= TYPE_PRECISION (type))
+	{
+	  if (dump_enabled_p ())
+	    dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
+			     "Avoid compile time hog on"
+			     " vect_peel_nonlinear_iv_init"
+			     " for nonlinear induction vec_step_op_mul"
+			     " when iteration count is too big.\n");
+	  return false;
+	}
+    }
+
   /* Also doens't support peel for neg when niter is variable.
      ??? generate something like niter_expr & 1 ? init_expr : -init_expr?  */
   niters_skip = LOOP_VINFO_MASK_SKIP_NITERS (loop_vinfo);
   if ((niters_skip != NULL_TREE
-       && TREE_CODE (niters_skip) != INTEGER_CST)
+       && (TREE_CODE (niters_skip) != INTEGER_CST
+	   || (HOST_WIDE_INT) TREE_INT_CST_LOW (niters_skip) < 0))
       || (!vect_use_loop_mask_for_alignment_p (loop_vinfo)
 	  && LOOP_VINFO_PEELING_FOR_ALIGNMENT (loop_vinfo) < 0))
     {
@@ -1870,7 +1892,7 @@ vect_can_advance_ivs_p (loop_vec_info loop_vinfo)
       induction_type = STMT_VINFO_LOOP_PHI_EVOLUTION_TYPE (phi_info);
       if (induction_type != vect_step_op_add)
 	{
-	  if (!vect_can_peel_nonlinear_iv_p (loop_vinfo, induction_type))
+	  if (!vect_can_peel_nonlinear_iv_p (loop_vinfo, phi_info))
 	    return false;
 
 	  continue;
diff --git a/gcc/tree-vect-loop.cc b/gcc/tree-vect-loop.cc
index 89bdcaa0910..6bb1f3dc462 100644
--- a/gcc/tree-vect-loop.cc
+++ b/gcc/tree-vect-loop.cc
@@ -9134,11 +9134,33 @@ vect_peel_nonlinear_iv_init (gimple_seq* stmts, tree init_expr,
 	init_expr = gimple_convert (stmts, utype, init_expr);
 	unsigned skipn = TREE_INT_CST_LOW (skip_niters);
 	wide_int begin = wi::to_wide (step_expr);
-	for (unsigned i = 0; i != skipn - 1; i++)
-	  begin = wi::mul (begin, wi::to_wide (step_expr));
-	tree mult_expr = wide_int_to_tree (utype, begin);
-	init_expr = gimple_build (stmts, MULT_EXPR, utype, init_expr, mult_expr);
-	init_expr = gimple_convert (stmts, type, init_expr);
+	int pow2_step = wi::exact_log2 (begin);
+	/* Optimize init_expr * pow (step_expr, skipn) to
+	   init_expr << (log2 (step_expr) * skipn).  */
+	if (pow2_step != -1)
+	  {
+	    if (skipn >= TYPE_PRECISION (type)
+		|| skipn > (UINT_MAX / (unsigned) pow2_step)
+		|| skipn * (unsigned) pow2_step >= TYPE_PRECISION (type))
+		init_expr = build_zero_cst (type);
+	    else
+	      {
+		tree lshc = build_int_cst (utype, skipn * (unsigned) pow2_step);
+		init_expr = gimple_build (stmts, LSHIFT_EXPR, utype,
+					  init_expr, lshc);
+	      }
+	  }
+	/* Any better way for init_expr * pow (step_expr, skipn)???.  */
+	else
+	  {
+	    gcc_assert (skipn < TYPE_PRECISION (type));
+	    for (unsigned i = 0; i != skipn - 1; i++)
+	      begin = wi::mul (begin, wi::to_wide (step_expr));
+	    tree mult_expr = wide_int_to_tree (utype, begin);
+	    init_expr = gimple_build (stmts, MULT_EXPR, utype,
+				      init_expr, mult_expr);
+	  }
+	  init_expr = gimple_convert (stmts, type, init_expr);
       }
       break;
 
-- 
2.31.1


^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH] Avoid compile time hog on vect_peel_nonlinear_iv_init for nonlinear induction vec_step_op_mul when iteration count is too big.
  2023-10-18  8:32 [PATCH] Avoid compile time hog on vect_peel_nonlinear_iv_init for nonlinear induction vec_step_op_mul when iteration count is too big. liuhongt
@ 2023-10-18  8:43 ` Hongtao Liu
  2023-10-18 10:50 ` [PATCH] Avoid compile time hog on vect_peel_nonlinear_iv_init for nonlinear induction vec_step_op_mul when iteration count is too big. " Richard Biener
  1 sibling, 0 replies; 9+ messages in thread
From: Hongtao Liu @ 2023-10-18  8:43 UTC (permalink / raw)
  To: liuhongt; +Cc: gcc-patches, rguenther

On Wed, Oct 18, 2023 at 4:33 PM liuhongt <hongtao.liu@intel.com> wrote:
>
Cut from subject...
There's a loop in vect_peel_nonlinear_iv_init to get init_expr * pow
(step_expr, skip_niters). When skip_niters is too big, compile time
explodes. To avoid that, optimize init_expr * pow (step_expr, skip_niters)
to init_expr << (exact_log2 (step_expr) * skip_niters) when step_expr
is a power of 2, otherwise give up vectorization when skip_niters >=
TYPE_PRECISION (TREE_TYPE (init_expr)).

> Also give up vectorization when niters_skip is negative which will be
> used for fully masked loop.
>
> Bootstrapped and regtested on x86_64-pc-linux-gnu{-m32,}.
> Ok for trunk?
>
> gcc/ChangeLog:
>
>         PR tree-optimization/111820
>         PR tree-optimization/111833
>         * tree-vect-loop-manip.cc (vect_can_peel_nonlinear_iv_p): Give
>         up vectorization for nonlinear iv vect_step_op_mul when
>         step_expr is not exact_log2 and niters is greater than
>         TYPE_PRECISION (TREE_TYPE (step_expr)). Also don't vectorize
>         for negative niters_skip which will be used by fully masked
>         loop.
>         (vect_can_advance_ivs_p): Pass whole phi_info to
>         vect_can_peel_nonlinear_iv_p.
>         * tree-vect-loop.cc (vect_peel_nonlinear_iv_init): Optimize
>         init_expr * pow (step_expr, skipn) to init_expr
>         << (log2 (step_expr) * skipn) when step_expr is exact_log2.
>
> gcc/testsuite/ChangeLog:
>
>         * gcc.target/i386/pr111820-1.c: New test.
>         * gcc.target/i386/pr111820-2.c: New test.
>         * gcc.target/i386/pr103144-mul-1.c: Adjust testcase.
> ---
>  .../gcc.target/i386/pr103144-mul-1.c          |  6 ++--
>  gcc/testsuite/gcc.target/i386/pr111820-1.c    | 16 ++++++++++
>  gcc/testsuite/gcc.target/i386/pr111820-2.c    | 17 ++++++++++
>  gcc/tree-vect-loop-manip.cc                   | 28 ++++++++++++++--
>  gcc/tree-vect-loop.cc                         | 32 ++++++++++++++++---
>  5 files changed, 88 insertions(+), 11 deletions(-)
>  create mode 100644 gcc/testsuite/gcc.target/i386/pr111820-1.c
>  create mode 100644 gcc/testsuite/gcc.target/i386/pr111820-2.c
>
> diff --git a/gcc/testsuite/gcc.target/i386/pr103144-mul-1.c b/gcc/testsuite/gcc.target/i386/pr103144-mul-1.c
> index 640c34fd959..f80d1094097 100644
> --- a/gcc/testsuite/gcc.target/i386/pr103144-mul-1.c
> +++ b/gcc/testsuite/gcc.target/i386/pr103144-mul-1.c
> @@ -23,7 +23,7 @@ foo_mul_const (int* a)
>    for (int i = 0; i != N; i++)
>      {
>        a[i] = b;
> -      b *= 3;
> +      b *= 4;
>      }
>  }
>
> @@ -34,7 +34,7 @@ foo_mul_peel (int* a, int b)
>    for (int i = 0; i != 39; i++)
>      {
>        a[i] = b;
> -      b *= 3;
> +      b *= 4;
>      }
>  }
>
> @@ -46,6 +46,6 @@ foo_mul_peel_const (int* a)
>    for (int i = 0; i != 39; i++)
>      {
>        a[i] = b;
> -      b *= 3;
> +      b *= 4;
>      }
>  }
> diff --git a/gcc/testsuite/gcc.target/i386/pr111820-1.c b/gcc/testsuite/gcc.target/i386/pr111820-1.c
> new file mode 100644
> index 00000000000..50e960c39d4
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/i386/pr111820-1.c
> @@ -0,0 +1,16 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O3 -mavx2 -fno-tree-vrp -Wno-aggressive-loop-optimizations -fdump-tree-vect-details" } */
> +/* { dg-final { scan-tree-dump "Avoid compile time hog on vect_peel_nonlinear_iv_init for nonlinear induction vec_step_op_mul when iteration count is too big" "vect" } } */
> +
> +int r;
> +int r_0;
> +
> +void f1 (void)
> +{
> +  int n = 0;
> +  while (-- n)
> +    {
> +      r_0 += r;
> +      r  *= 3;
> +    }
> +}
> diff --git a/gcc/testsuite/gcc.target/i386/pr111820-2.c b/gcc/testsuite/gcc.target/i386/pr111820-2.c
> new file mode 100644
> index 00000000000..bbdb40798c6
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/i386/pr111820-2.c
> @@ -0,0 +1,17 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O3 -mavx2 -fno-tree-vrp -fdump-tree-vect-details -Wno-aggressive-loop-optimizations" } */
> +/* { dg-final { scan-tree-dump "LOOP VECTORIZED" "vect" } } */
> +
> +int r;
> +int r_0;
> +
> +void f (void)
> +{
> +  int n = 0;
> +  while (-- n)
> +    {
> +      r_0 += r ;
> +      r  *= 2;
> +    }
> +}
> +
> diff --git a/gcc/tree-vect-loop-manip.cc b/gcc/tree-vect-loop-manip.cc
> index 2608c286e5d..a530088b61d 100644
> --- a/gcc/tree-vect-loop-manip.cc
> +++ b/gcc/tree-vect-loop-manip.cc
> @@ -1783,8 +1783,10 @@ iv_phi_p (stmt_vec_info stmt_info)
>  /* Return true if vectorizer can peel for nonlinear iv.  */
>  static bool
>  vect_can_peel_nonlinear_iv_p (loop_vec_info loop_vinfo,
> -                             enum vect_induction_op_type induction_type)
> +                             stmt_vec_info stmt_info)
>  {
> +  enum vect_induction_op_type induction_type
> +    = STMT_VINFO_LOOP_PHI_EVOLUTION_TYPE (stmt_info);
>    tree niters_skip;
>    /* Init_expr will be update by vect_update_ivs_after_vectorizer,
>       if niters or vf is unkown:
> @@ -1805,11 +1807,31 @@ vect_can_peel_nonlinear_iv_p (loop_vec_info loop_vinfo,
>        return false;
>      }
>
> +  /* Avoid compile time hog on vect_peel_nonlinear_iv_init.  */
> +  if (induction_type == vect_step_op_mul)
> +    {
> +      tree step_expr = STMT_VINFO_LOOP_PHI_EVOLUTION_PART (stmt_info);
> +      tree type = TREE_TYPE (step_expr);
> +
> +      if (wi::exact_log2 (wi::to_wide (step_expr)) == -1
> +         && LOOP_VINFO_INT_NITERS(loop_vinfo) >= TYPE_PRECISION (type))
> +       {
> +         if (dump_enabled_p ())
> +           dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
> +                            "Avoid compile time hog on"
> +                            " vect_peel_nonlinear_iv_init"
> +                            " for nonlinear induction vec_step_op_mul"
> +                            " when iteration count is too big.\n");
> +         return false;
> +       }
> +    }
> +
>    /* Also doens't support peel for neg when niter is variable.
>       ??? generate something like niter_expr & 1 ? init_expr : -init_expr?  */
>    niters_skip = LOOP_VINFO_MASK_SKIP_NITERS (loop_vinfo);
>    if ((niters_skip != NULL_TREE
> -       && TREE_CODE (niters_skip) != INTEGER_CST)
> +       && (TREE_CODE (niters_skip) != INTEGER_CST
> +          || (HOST_WIDE_INT) TREE_INT_CST_LOW (niters_skip) < 0))
>        || (!vect_use_loop_mask_for_alignment_p (loop_vinfo)
>           && LOOP_VINFO_PEELING_FOR_ALIGNMENT (loop_vinfo) < 0))
>      {
> @@ -1870,7 +1892,7 @@ vect_can_advance_ivs_p (loop_vec_info loop_vinfo)
>        induction_type = STMT_VINFO_LOOP_PHI_EVOLUTION_TYPE (phi_info);
>        if (induction_type != vect_step_op_add)
>         {
> -         if (!vect_can_peel_nonlinear_iv_p (loop_vinfo, induction_type))
> +         if (!vect_can_peel_nonlinear_iv_p (loop_vinfo, phi_info))
>             return false;
>
>           continue;
> diff --git a/gcc/tree-vect-loop.cc b/gcc/tree-vect-loop.cc
> index 89bdcaa0910..6bb1f3dc462 100644
> --- a/gcc/tree-vect-loop.cc
> +++ b/gcc/tree-vect-loop.cc
> @@ -9134,11 +9134,33 @@ vect_peel_nonlinear_iv_init (gimple_seq* stmts, tree init_expr,
>         init_expr = gimple_convert (stmts, utype, init_expr);
>         unsigned skipn = TREE_INT_CST_LOW (skip_niters);
>         wide_int begin = wi::to_wide (step_expr);
> -       for (unsigned i = 0; i != skipn - 1; i++)
> -         begin = wi::mul (begin, wi::to_wide (step_expr));
> -       tree mult_expr = wide_int_to_tree (utype, begin);
> -       init_expr = gimple_build (stmts, MULT_EXPR, utype, init_expr, mult_expr);
> -       init_expr = gimple_convert (stmts, type, init_expr);
> +       int pow2_step = wi::exact_log2 (begin);
> +       /* Optimize init_expr * pow (step_expr, skipn) to
> +          init_expr << (log2 (step_expr) * skipn).  */
> +       if (pow2_step != -1)
> +         {
> +           if (skipn >= TYPE_PRECISION (type)
> +               || skipn > (UINT_MAX / (unsigned) pow2_step)
> +               || skipn * (unsigned) pow2_step >= TYPE_PRECISION (type))
> +               init_expr = build_zero_cst (type);
> +           else
> +             {
> +               tree lshc = build_int_cst (utype, skipn * (unsigned) pow2_step);
> +               init_expr = gimple_build (stmts, LSHIFT_EXPR, utype,
> +                                         init_expr, lshc);
> +             }
> +         }
> +       /* Any better way for init_expr * pow (step_expr, skipn)???.  */
> +       else
> +         {
> +           gcc_assert (skipn < TYPE_PRECISION (type));
> +           for (unsigned i = 0; i != skipn - 1; i++)
> +             begin = wi::mul (begin, wi::to_wide (step_expr));
> +           tree mult_expr = wide_int_to_tree (utype, begin);
> +           init_expr = gimple_build (stmts, MULT_EXPR, utype,
> +                                     init_expr, mult_expr);
> +         }
> +         init_expr = gimple_convert (stmts, type, init_expr);
>        }
>        break;
>
> --
> 2.31.1
>


-- 
BR,
Hongtao

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH] Avoid compile time hog on vect_peel_nonlinear_iv_init for nonlinear induction vec_step_op_mul when iteration count is too big.
  2023-10-18  8:32 [PATCH] Avoid compile time hog on vect_peel_nonlinear_iv_init for nonlinear induction vec_step_op_mul when iteration count is too big. liuhongt
  2023-10-18  8:43 ` [PATCH] Avoid compile time hog on vect_peel_nonlinear_iv_init for nonlinear induction vec_step_op_mul when iteration count is too big. " Hongtao Liu
@ 2023-10-18 10:50 ` Richard Biener
  2023-10-19  6:14   ` [PATCH] Avoid compile time hog on vect_peel_nonlinear_iv_init for nonlinear induction vec_step_op_mul when iteration count is too big liuhongt
  1 sibling, 1 reply; 9+ messages in thread
From: Richard Biener @ 2023-10-18 10:50 UTC (permalink / raw)
  To: liuhongt; +Cc: gcc-patches

On Wed, 18 Oct 2023, liuhongt wrote:

> Also give up vectorization when niters_skip is negative which will be
> used for fully masked loop.
> 
> Bootstrapped and regtested on x86_64-pc-linux-gnu{-m32,}.
> Ok for trunk?
> 
> gcc/ChangeLog:
> 
> 	PR tree-optimization/111820
> 	PR tree-optimization/111833
> 	* tree-vect-loop-manip.cc (vect_can_peel_nonlinear_iv_p): Give
> 	up vectorization for nonlinear iv vect_step_op_mul when
> 	step_expr is not exact_log2 and niters is greater than
> 	TYPE_PRECISION (TREE_TYPE (step_expr)). Also don't vectorize
> 	for negative niters_skip which will be used by fully masked
> 	loop.
> 	(vect_can_advance_ivs_p): Pass whole phi_info to
> 	vect_can_peel_nonlinear_iv_p.
> 	* tree-vect-loop.cc (vect_peel_nonlinear_iv_init): Optimize
> 	init_expr * pow (step_expr, skipn) to init_expr
> 	<< (log2 (step_expr) * skipn) when step_expr is exact_log2.
> 
> gcc/testsuite/ChangeLog:
> 
> 	* gcc.target/i386/pr111820-1.c: New test.
> 	* gcc.target/i386/pr111820-2.c: New test.
> 	* gcc.target/i386/pr103144-mul-1.c: Adjust testcase.
> ---
>  .../gcc.target/i386/pr103144-mul-1.c          |  6 ++--
>  gcc/testsuite/gcc.target/i386/pr111820-1.c    | 16 ++++++++++
>  gcc/testsuite/gcc.target/i386/pr111820-2.c    | 17 ++++++++++
>  gcc/tree-vect-loop-manip.cc                   | 28 ++++++++++++++--
>  gcc/tree-vect-loop.cc                         | 32 ++++++++++++++++---
>  5 files changed, 88 insertions(+), 11 deletions(-)
>  create mode 100644 gcc/testsuite/gcc.target/i386/pr111820-1.c
>  create mode 100644 gcc/testsuite/gcc.target/i386/pr111820-2.c
> 
> diff --git a/gcc/testsuite/gcc.target/i386/pr103144-mul-1.c b/gcc/testsuite/gcc.target/i386/pr103144-mul-1.c
> index 640c34fd959..f80d1094097 100644
> --- a/gcc/testsuite/gcc.target/i386/pr103144-mul-1.c
> +++ b/gcc/testsuite/gcc.target/i386/pr103144-mul-1.c
> @@ -23,7 +23,7 @@ foo_mul_const (int* a)
>    for (int i = 0; i != N; i++)
>      {
>        a[i] = b;
> -      b *= 3;
> +      b *= 4;
>      }
>  }
>  
> @@ -34,7 +34,7 @@ foo_mul_peel (int* a, int b)
>    for (int i = 0; i != 39; i++)
>      {
>        a[i] = b;
> -      b *= 3;
> +      b *= 4;
>      }
>  }
>  
> @@ -46,6 +46,6 @@ foo_mul_peel_const (int* a)
>    for (int i = 0; i != 39; i++)
>      {
>        a[i] = b;
> -      b *= 3;
> +      b *= 4;
>      }
>  }
> diff --git a/gcc/testsuite/gcc.target/i386/pr111820-1.c b/gcc/testsuite/gcc.target/i386/pr111820-1.c
> new file mode 100644
> index 00000000000..50e960c39d4
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/i386/pr111820-1.c
> @@ -0,0 +1,16 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O3 -mavx2 -fno-tree-vrp -Wno-aggressive-loop-optimizations -fdump-tree-vect-details" } */
> +/* { dg-final { scan-tree-dump "Avoid compile time hog on vect_peel_nonlinear_iv_init for nonlinear induction vec_step_op_mul when iteration count is too big" "vect" } } */
> +
> +int r;
> +int r_0;
> +
> +void f1 (void)
> +{
> +  int n = 0;
> +  while (-- n)
> +    {
> +      r_0 += r;
> +      r  *= 3;
> +    }
> +}
> diff --git a/gcc/testsuite/gcc.target/i386/pr111820-2.c b/gcc/testsuite/gcc.target/i386/pr111820-2.c
> new file mode 100644
> index 00000000000..bbdb40798c6
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/i386/pr111820-2.c
> @@ -0,0 +1,17 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O3 -mavx2 -fno-tree-vrp -fdump-tree-vect-details -Wno-aggressive-loop-optimizations" } */
> +/* { dg-final { scan-tree-dump "LOOP VECTORIZED" "vect" } } */
> +
> +int r;
> +int r_0;
> +
> +void f (void)
> +{
> +  int n = 0;
> +  while (-- n)
> +    {
> +      r_0 += r ;
> +      r  *= 2;
> +    }
> +}
> +
> diff --git a/gcc/tree-vect-loop-manip.cc b/gcc/tree-vect-loop-manip.cc
> index 2608c286e5d..a530088b61d 100644
> --- a/gcc/tree-vect-loop-manip.cc
> +++ b/gcc/tree-vect-loop-manip.cc
> @@ -1783,8 +1783,10 @@ iv_phi_p (stmt_vec_info stmt_info)
>  /* Return true if vectorizer can peel for nonlinear iv.  */
>  static bool
>  vect_can_peel_nonlinear_iv_p (loop_vec_info loop_vinfo,
> -			      enum vect_induction_op_type induction_type)
> +			      stmt_vec_info stmt_info)
>  {
> +  enum vect_induction_op_type induction_type
> +    = STMT_VINFO_LOOP_PHI_EVOLUTION_TYPE (stmt_info);
>    tree niters_skip;
>    /* Init_expr will be update by vect_update_ivs_after_vectorizer,
>       if niters or vf is unkown:
> @@ -1805,11 +1807,31 @@ vect_can_peel_nonlinear_iv_p (loop_vec_info loop_vinfo,
>        return false;
>      }
>  
> +  /* Avoid compile time hog on vect_peel_nonlinear_iv_init.  */
> +  if (induction_type == vect_step_op_mul)
> +    {
> +      tree step_expr = STMT_VINFO_LOOP_PHI_EVOLUTION_PART (stmt_info);
> +      tree type = TREE_TYPE (step_expr);
> +
> +      if (wi::exact_log2 (wi::to_wide (step_expr)) == -1
> +	  && LOOP_VINFO_INT_NITERS(loop_vinfo) >= TYPE_PRECISION (type))
> +	{
> +	  if (dump_enabled_p ())
> +	    dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
> +			     "Avoid compile time hog on"
> +			     " vect_peel_nonlinear_iv_init"
> +			     " for nonlinear induction vec_step_op_mul"
> +			     " when iteration count is too big.\n");
> +	  return false;
> +	}
> +    }
> +
>    /* Also doens't support peel for neg when niter is variable.
>       ??? generate something like niter_expr & 1 ? init_expr : -init_expr?  */
>    niters_skip = LOOP_VINFO_MASK_SKIP_NITERS (loop_vinfo);
>    if ((niters_skip != NULL_TREE
> -       && TREE_CODE (niters_skip) != INTEGER_CST)
> +       && (TREE_CODE (niters_skip) != INTEGER_CST
> +	   || (HOST_WIDE_INT) TREE_INT_CST_LOW (niters_skip) < 0))

So the bugs were not fixed without this hunk?  IIRC in the audit
trail we concluded the value is always positive ... (but of course
a large unsigned value can appear negative if you test it this way?)
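The cast Richard is questioning can be modeled in isolation; a small sketch with a made-up helper name, using int64_t as a stand-in for GCC's signed HOST_WIDE_INT:

```c
#include <assert.h>
#include <stdint.h>

typedef int64_t hwi;  /* stand-in for HOST_WIDE_INT */

/* The patch's test: reinterpret the low word of the INTEGER_CST as
   signed and check the sign bit.  A genuinely negative skip count and
   a huge unsigned one are indistinguishable this way, which is why
   the check is defensive rather than a fix for an observed wrong
   value.  */
static int niters_skip_negative_p (uint64_t low)
{
  return (hwi) low < 0;
}
```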

>        || (!vect_use_loop_mask_for_alignment_p (loop_vinfo)
>  	  && LOOP_VINFO_PEELING_FOR_ALIGNMENT (loop_vinfo) < 0))
>      {
> @@ -1870,7 +1892,7 @@ vect_can_advance_ivs_p (loop_vec_info loop_vinfo)
>        induction_type = STMT_VINFO_LOOP_PHI_EVOLUTION_TYPE (phi_info);
>        if (induction_type != vect_step_op_add)
>  	{
> -	  if (!vect_can_peel_nonlinear_iv_p (loop_vinfo, induction_type))
> +	  if (!vect_can_peel_nonlinear_iv_p (loop_vinfo, phi_info))
>  	    return false;
>  
>  	  continue;
> diff --git a/gcc/tree-vect-loop.cc b/gcc/tree-vect-loop.cc
> index 89bdcaa0910..6bb1f3dc462 100644
> --- a/gcc/tree-vect-loop.cc
> +++ b/gcc/tree-vect-loop.cc
> @@ -9134,11 +9134,33 @@ vect_peel_nonlinear_iv_init (gimple_seq* stmts, tree init_expr,
>  	init_expr = gimple_convert (stmts, utype, init_expr);
>  	unsigned skipn = TREE_INT_CST_LOW (skip_niters);
>  	wide_int begin = wi::to_wide (step_expr);
> -	for (unsigned i = 0; i != skipn - 1; i++)
> -	  begin = wi::mul (begin, wi::to_wide (step_expr));
> -	tree mult_expr = wide_int_to_tree (utype, begin);
> -	init_expr = gimple_build (stmts, MULT_EXPR, utype, init_expr, mult_expr);
> -	init_expr = gimple_convert (stmts, type, init_expr);
> +	int pow2_step = wi::exact_log2 (begin);
> +	/* Optimize init_expr * pow (step_expr, skipn) to
> +	   init_expr << (log2 (step_expr) * skipn).  */
> +	if (pow2_step != -1)
> +	  {
> +	    if (skipn >= TYPE_PRECISION (type)
> +		|| skipn > (UINT_MAX / (unsigned) pow2_step)
> +		|| skipn * (unsigned) pow2_step >= TYPE_PRECISION (type))
> +		init_expr = build_zero_cst (type);
> +	    else
> +	      {
> +		tree lshc = build_int_cst (utype, skipn * (unsigned) pow2_step);
> +		init_expr = gimple_build (stmts, LSHIFT_EXPR, utype,
> +					  init_expr, lshc);
> +	      }
> +	  }
> +	/* Any better way for init_expr * pow (step_expr, skipn)???.  */

I think you can use one of the mpz_pow* functions and
wi::to_mpz/from_mpz for this.  See tree-ssa-loop-niter.cc for the
most heavy user of mpz (but not pow I think).
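What the mpz route buys over the existing loop is O(log skipn) multiplies instead of O(skipn). The same shape can be sketched without GMP as square-and-multiply with wrapping 32-bit arithmetic (helper name is made up; GMP's mpz_powm is the library analogue Richard points at):

```c
#include <assert.h>
#include <stdint.h>

/* step^skipn mod 2^32 by square-and-multiply: O(log skipn) wrapping
   multiplies, versus skipn iterations in the original loop.  */
static uint32_t pow_u32 (uint32_t step, uint64_t skipn)
{
  uint32_t r = 1, b = step;
  while (skipn)
    {
      if (skipn & 1)
	r *= b;   /* wraps mod 2^32, matching utype arithmetic */
      b *= b;
      skipn >>= 1;
    }
  return r;
}
```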

Richard.

> +	else
> +	  {
> +	    gcc_assert (skipn < TYPE_PRECISION (type));
> +	    for (unsigned i = 0; i != skipn - 1; i++)
> +	      begin = wi::mul (begin, wi::to_wide (step_expr));
> +	    tree mult_expr = wide_int_to_tree (utype, begin);
> +	    init_expr = gimple_build (stmts, MULT_EXPR, utype,
> +				      init_expr, mult_expr);
> +	  }
> +	  init_expr = gimple_convert (stmts, type, init_expr);
>        }
>        break;
>  
> 

^ permalink raw reply	[flat|nested] 9+ messages in thread

* [PATCH] Avoid compile time hog on vect_peel_nonlinear_iv_init for nonlinear induction vec_step_op_mul when iteration count is too big.
  2023-10-18 10:50 ` [PATCH] Avoid compile time hog on vect_peel_nonlinear_iv_init for nonlinear induction vec_step_op_mul when iteration count is too big. " Richard Biener
@ 2023-10-19  6:14   ` liuhongt
  2023-10-19  7:14     ` Richard Biener
  0 siblings, 1 reply; 9+ messages in thread
From: liuhongt @ 2023-10-19  6:14 UTC (permalink / raw)
  To: gcc-patches; +Cc: crazylht, hjl.tools

>So the bugs were not fixed without this hunk?  IIRC in the audit
>trail we concluded the value is always positive ... (but of course
>a large unsigned value can appear negative if you test it this way?)
No, I added this in case there's a negative skip_niters in the future,
as you mentioned in the PR; it's just defensive programming.

>I think you can use one of the mpz_pow* functions and
>wi::to_mpz/from_mpz for this.  See tree-ssa-loop-niter.cc for the
>most heavy user of mpz (but not pow I think).
Changed.

Bootstrapped and regtested on x86_64-pc-linux-gnu{-m32,}.
Ok for trunk?

There's a loop in vect_peel_nonlinear_iv_init to get init_expr *
pow (step_expr, skip_niters). When skip_niters is too big, compile time
explodes. To avoid that, optimize init_expr * pow (step_expr, skip_niters) to
init_expr << (exact_log2 (step_expr) * skip_niters) when step_expr is
a power of 2, otherwise give up vectorization when skip_niters >=
TYPE_PRECISION (TREE_TYPE (init_expr)).

Also give up vectorization when niters_skip is negative, which will be
used for fully masked loops.

gcc/ChangeLog:

	PR tree-optimization/111820
	PR tree-optimization/111833
	* tree-vect-loop-manip.cc (vect_can_peel_nonlinear_iv_p): Give
	up vectorization for nonlinear iv vect_step_op_mul when
	step_expr is not exact_log2 and niters is greater than
	TYPE_PRECISION (TREE_TYPE (step_expr)). Also don't vectorize
	for negative niters_skip which will be used by fully masked
	loop.
	(vect_can_advance_ivs_p): Pass whole phi_info to
	vect_can_peel_nonlinear_iv_p.
	* tree-vect-loop.cc (vect_peel_nonlinear_iv_init): Optimize
	init_expr * pow (step_expr, skipn) to init_expr
	<< (log2 (step_expr) * skipn) when step_expr is exact_log2.

gcc/testsuite/ChangeLog:

	* gcc.target/i386/pr111820-1.c: New test.
	* gcc.target/i386/pr111820-2.c: New test.
	* gcc.target/i386/pr111820-3.c: New test.
	* gcc.target/i386/pr103144-mul-1.c: Adjust testcase.
	* gcc.target/i386/pr103144-mul-2.c: Adjust testcase.
---
 .../gcc.target/i386/pr103144-mul-1.c          |  8 ++---
 .../gcc.target/i386/pr103144-mul-2.c          |  8 ++---
 gcc/testsuite/gcc.target/i386/pr111820-1.c    | 16 +++++++++
 gcc/testsuite/gcc.target/i386/pr111820-2.c    | 16 +++++++++
 gcc/testsuite/gcc.target/i386/pr111820-3.c    | 16 +++++++++
 gcc/tree-vect-loop-manip.cc                   | 28 +++++++++++++--
 gcc/tree-vect-loop.cc                         | 34 ++++++++++++++++---
 7 files changed, 110 insertions(+), 16 deletions(-)
 create mode 100644 gcc/testsuite/gcc.target/i386/pr111820-1.c
 create mode 100644 gcc/testsuite/gcc.target/i386/pr111820-2.c
 create mode 100644 gcc/testsuite/gcc.target/i386/pr111820-3.c

diff --git a/gcc/testsuite/gcc.target/i386/pr103144-mul-1.c b/gcc/testsuite/gcc.target/i386/pr103144-mul-1.c
index 640c34fd959..913d7737dcd 100644
--- a/gcc/testsuite/gcc.target/i386/pr103144-mul-1.c
+++ b/gcc/testsuite/gcc.target/i386/pr103144-mul-1.c
@@ -11,7 +11,7 @@ foo_mul (int* a, int b)
   for (int i = 0; i != N; i++)
     {
       a[i] = b;
-      b *= 3;
+      b *= 4;
     }
 }
 
@@ -23,7 +23,7 @@ foo_mul_const (int* a)
   for (int i = 0; i != N; i++)
     {
       a[i] = b;
-      b *= 3;
+      b *= 4;
     }
 }
 
@@ -34,7 +34,7 @@ foo_mul_peel (int* a, int b)
   for (int i = 0; i != 39; i++)
     {
       a[i] = b;
-      b *= 3;
+      b *= 4;
     }
 }
 
@@ -46,6 +46,6 @@ foo_mul_peel_const (int* a)
   for (int i = 0; i != 39; i++)
     {
       a[i] = b;
-      b *= 3;
+      b *= 4;
     }
 }
diff --git a/gcc/testsuite/gcc.target/i386/pr103144-mul-2.c b/gcc/testsuite/gcc.target/i386/pr103144-mul-2.c
index 39fdea3a69d..b2ff186e335 100644
--- a/gcc/testsuite/gcc.target/i386/pr103144-mul-2.c
+++ b/gcc/testsuite/gcc.target/i386/pr103144-mul-2.c
@@ -16,12 +16,12 @@ avx2_test (void)
 
   __builtin_memset (epi32_exp, 0, N * sizeof (int));
   int b = 8;
-  v8si init = __extension__(v8si) { b, b * 3, b * 9, b * 27, b * 81, b * 243, b * 729, b * 2187 };
+  v8si init = __extension__(v8si) { b, b * 4, b * 16, b * 64, b * 256, b * 1024, b * 4096, b * 16384 };
 
   for (int i = 0; i != N / 8; i++)
     {
       memcpy (epi32_exp + i * 8, &init, 32);
-      init *= 6561;
+      init *= 65536;
     }
 
   foo_mul (epi32_dst, b);
@@ -32,11 +32,11 @@ avx2_test (void)
   if (__builtin_memcmp (epi32_dst, epi32_exp, 39 * 4) != 0)
     __builtin_abort ();
 
-  init = __extension__(v8si) { 1, 3, 9, 27, 81, 243, 729, 2187 };
+  init = __extension__(v8si) { 1, 4, 16, 64, 256, 1024, 4096, 16384 };
   for (int i = 0; i != N / 8; i++)
     {
       memcpy (epi32_exp + i * 8, &init, 32);
-      init *= 6561;
+      init *= 65536;
     }
 
   foo_mul_const (epi32_dst);
diff --git a/gcc/testsuite/gcc.target/i386/pr111820-1.c b/gcc/testsuite/gcc.target/i386/pr111820-1.c
new file mode 100644
index 00000000000..50e960c39d4
--- /dev/null
+++ b/gcc/testsuite/gcc.target/i386/pr111820-1.c
@@ -0,0 +1,16 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -mavx2 -fno-tree-vrp -Wno-aggressive-loop-optimizations -fdump-tree-vect-details" } */
+/* { dg-final { scan-tree-dump "Avoid compile time hog on vect_peel_nonlinear_iv_init for nonlinear induction vec_step_op_mul when iteration count is too big" "vect" } } */
+
+int r;
+int r_0;
+
+void f1 (void)
+{
+  int n = 0;
+  while (-- n)
+    {
+      r_0 += r;
+      r  *= 3;
+    }
+}
diff --git a/gcc/testsuite/gcc.target/i386/pr111820-2.c b/gcc/testsuite/gcc.target/i386/pr111820-2.c
new file mode 100644
index 00000000000..dbeceb228c3
--- /dev/null
+++ b/gcc/testsuite/gcc.target/i386/pr111820-2.c
@@ -0,0 +1,16 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -mavx2 -fno-tree-vrp -fdump-tree-vect-details -Wno-aggressive-loop-optimizations" } */
+/* { dg-final { scan-tree-dump "LOOP VECTORIZED" "vect" } } */
+
+int r;
+int r_0;
+
+void f (void)
+{
+  int n = 0;
+  while (-- n)
+    {
+      r_0 += r ;
+      r  *= 2;
+    }
+}
diff --git a/gcc/testsuite/gcc.target/i386/pr111820-3.c b/gcc/testsuite/gcc.target/i386/pr111820-3.c
new file mode 100644
index 00000000000..b778f517663
--- /dev/null
+++ b/gcc/testsuite/gcc.target/i386/pr111820-3.c
@@ -0,0 +1,16 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -mavx2 -fno-tree-vrp -fdump-tree-vect-details -Wno-aggressive-loop-optimizations" } */
+/* { dg-final { scan-tree-dump "LOOP VECTORIZED" "vect" } } */
+
+int r;
+int r_0;
+
+void f (void)
+{
+  int n = 14;
+  while (-- n)
+    {
+      r_0 += r ;
+      r  *= 3;
+    }
+}
diff --git a/gcc/tree-vect-loop-manip.cc b/gcc/tree-vect-loop-manip.cc
index 2608c286e5d..a530088b61d 100644
--- a/gcc/tree-vect-loop-manip.cc
+++ b/gcc/tree-vect-loop-manip.cc
@@ -1783,8 +1783,10 @@ iv_phi_p (stmt_vec_info stmt_info)
 /* Return true if vectorizer can peel for nonlinear iv.  */
 static bool
 vect_can_peel_nonlinear_iv_p (loop_vec_info loop_vinfo,
-			      enum vect_induction_op_type induction_type)
+			      stmt_vec_info stmt_info)
 {
+  enum vect_induction_op_type induction_type
+    = STMT_VINFO_LOOP_PHI_EVOLUTION_TYPE (stmt_info);
   tree niters_skip;
   /* Init_expr will be update by vect_update_ivs_after_vectorizer,
      if niters or vf is unkown:
@@ -1805,11 +1807,31 @@ vect_can_peel_nonlinear_iv_p (loop_vec_info loop_vinfo,
       return false;
     }
 
+  /* Avoid compile time hog on vect_peel_nonlinear_iv_init.  */
+  if (induction_type == vect_step_op_mul)
+    {
+      tree step_expr = STMT_VINFO_LOOP_PHI_EVOLUTION_PART (stmt_info);
+      tree type = TREE_TYPE (step_expr);
+
+      if (wi::exact_log2 (wi::to_wide (step_expr)) == -1
+	  && LOOP_VINFO_INT_NITERS(loop_vinfo) >= TYPE_PRECISION (type))
+	{
+	  if (dump_enabled_p ())
+	    dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
+			     "Avoid compile time hog on"
+			     " vect_peel_nonlinear_iv_init"
+			     " for nonlinear induction vec_step_op_mul"
+			     " when iteration count is too big.\n");
+	  return false;
+	}
+    }
+
   /* Also doens't support peel for neg when niter is variable.
      ??? generate something like niter_expr & 1 ? init_expr : -init_expr?  */
   niters_skip = LOOP_VINFO_MASK_SKIP_NITERS (loop_vinfo);
   if ((niters_skip != NULL_TREE
-       && TREE_CODE (niters_skip) != INTEGER_CST)
+       && (TREE_CODE (niters_skip) != INTEGER_CST
+	   || (HOST_WIDE_INT) TREE_INT_CST_LOW (niters_skip) < 0))
       || (!vect_use_loop_mask_for_alignment_p (loop_vinfo)
 	  && LOOP_VINFO_PEELING_FOR_ALIGNMENT (loop_vinfo) < 0))
     {
@@ -1870,7 +1892,7 @@ vect_can_advance_ivs_p (loop_vec_info loop_vinfo)
       induction_type = STMT_VINFO_LOOP_PHI_EVOLUTION_TYPE (phi_info);
       if (induction_type != vect_step_op_add)
 	{
-	  if (!vect_can_peel_nonlinear_iv_p (loop_vinfo, induction_type))
+	  if (!vect_can_peel_nonlinear_iv_p (loop_vinfo, phi_info))
 	    return false;
 
 	  continue;
diff --git a/gcc/tree-vect-loop.cc b/gcc/tree-vect-loop.cc
index 89bdcaa0910..f0dbba50786 100644
--- a/gcc/tree-vect-loop.cc
+++ b/gcc/tree-vect-loop.cc
@@ -9134,11 +9134,35 @@ vect_peel_nonlinear_iv_init (gimple_seq* stmts, tree init_expr,
 	init_expr = gimple_convert (stmts, utype, init_expr);
 	unsigned skipn = TREE_INT_CST_LOW (skip_niters);
 	wide_int begin = wi::to_wide (step_expr);
-	for (unsigned i = 0; i != skipn - 1; i++)
-	  begin = wi::mul (begin, wi::to_wide (step_expr));
-	tree mult_expr = wide_int_to_tree (utype, begin);
-	init_expr = gimple_build (stmts, MULT_EXPR, utype, init_expr, mult_expr);
-	init_expr = gimple_convert (stmts, type, init_expr);
+	int pow2_step = wi::exact_log2 (begin);
+	/* Optimize init_expr * pow (step_expr, skipn) to
+	   init_expr << (log2 (step_expr) * skipn).  */
+	if (pow2_step != -1)
+	  {
+	    if (skipn >= TYPE_PRECISION (type)
+		|| skipn > (UINT_MAX / (unsigned) pow2_step)
+		|| skipn * (unsigned) pow2_step >= TYPE_PRECISION (type))
+		init_expr = build_zero_cst (type);
+	    else
+	      {
+		tree lshc = build_int_cst (utype, skipn * (unsigned) pow2_step);
+		init_expr = gimple_build (stmts, LSHIFT_EXPR, utype,
+					  init_expr, lshc);
+	      }
+	  }
+	/* Any better way for init_expr * pow (step_expr, skipn)???.  */
+	else
+	  {
+	    gcc_assert (skipn < TYPE_PRECISION (type));
+	    auto_mpz base, exp;
+	    wi::to_mpz (begin, base, TYPE_SIGN (type));
+	    mpz_pow_ui (exp, base, skipn);
+	    begin = wi::from_mpz (type, exp, TYPE_SIGN (type));
+	    tree mult_expr = wide_int_to_tree (utype, begin);
+	    init_expr = gimple_build (stmts, MULT_EXPR, utype,
+				      init_expr, mult_expr);
+	  }
+	  init_expr = gimple_convert (stmts, type, init_expr);
       }
       break;
 
-- 
2.31.1


^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH] Avoid compile time hog on vect_peel_nonlinear_iv_init for nonlinear induction vec_step_op_mul when iteration count is too big.
  2023-10-19  6:14   ` [PATCH] Avoid compile time hog on vect_peel_nonlinear_iv_init for nonlinear induction vec_step_op_mul when iteration count is too big liuhongt
@ 2023-10-19  7:14     ` Richard Biener
  2023-10-20  2:18       ` liuhongt
  0 siblings, 1 reply; 9+ messages in thread
From: Richard Biener @ 2023-10-19  7:14 UTC (permalink / raw)
  To: liuhongt; +Cc: gcc-patches, crazylht, hjl.tools

On Thu, Oct 19, 2023 at 8:16 AM liuhongt <hongtao.liu@intel.com> wrote:
>
> >So the bugs were not fixed without this hunk?  IIRC in the audit
> >trail we concluded the value is always positive ... (but of course
> >a large unsigned value can appear negative if you test it this way?)
> No, I added this in case there's a negative skip_niters in the future,
> as you mentioned in the PR; it's just defensive programming.
>
> >I think you can use one of the mpz_pow* functions and
> >wi::to_mpz/from_mpz for this.  See tree-ssa-loop-niter.cc for the
> >most heavy user of mpz (but not pow I think).
> Changed.
>
> Bootstrapped and regtested on x86_64-pc-linux-gnu{-m32,}.
> Ok for trunk?
>
> There's a loop in vect_peel_nonlinear_iv_init computing init_expr *
> pow (step_expr, skip_niters). When skip_niters is too big, compile
> time blows up. To avoid that, optimize init_expr * pow (step_expr,
> skip_niters) to init_expr << (exact_log2 (step_expr) * skip_niters)
> when step_expr is a power of 2; otherwise give up on vectorization
> when skip_niters >= TYPE_PRECISION (TREE_TYPE (init_expr)).
>
> Also give up on vectorization when niters_skip is negative; it will
> be used for fully masked loops.
>
> gcc/ChangeLog:
>
>         PR tree-optimization/111820
>         PR tree-optimization/111833
>         * tree-vect-loop-manip.cc (vect_can_peel_nonlinear_iv_p): Give
>         up vectorization for nonlinear iv vect_step_op_mul when
>         step_expr is not exact_log2 and niters is greater than
>         TYPE_PRECISION (TREE_TYPE (step_expr)). Also don't vectorize
>         for negative niters_skip which will be used by fully masked
>         loop.
>         (vect_can_advance_ivs_p): Pass whole phi_info to
>         vect_can_peel_nonlinear_iv_p.
>         * tree-vect-loop.cc (vect_peel_nonlinear_iv_init): Optimize
>         init_expr * pow (step_expr, skipn) to init_expr
>         << (log2 (step_expr) * skipn) when step_expr is exact_log2.
>
> gcc/testsuite/ChangeLog:
>
>         * gcc.target/i386/pr111820-1.c: New test.
>         * gcc.target/i386/pr111820-2.c: New test.
>         * gcc.target/i386/pr111820-3.c: New test.
>         * gcc.target/i386/pr103144-mul-1.c: Adjust testcase.
>         * gcc.target/i386/pr103144-mul-2.c: Adjust testcase.
> ---
>  .../gcc.target/i386/pr103144-mul-1.c          |  8 ++---
>  .../gcc.target/i386/pr103144-mul-2.c          |  8 ++---
>  gcc/testsuite/gcc.target/i386/pr111820-1.c    | 16 +++++++++
>  gcc/testsuite/gcc.target/i386/pr111820-2.c    | 16 +++++++++
>  gcc/testsuite/gcc.target/i386/pr111820-3.c    | 16 +++++++++
>  gcc/tree-vect-loop-manip.cc                   | 28 +++++++++++++--
>  gcc/tree-vect-loop.cc                         | 34 ++++++++++++++++---
>  7 files changed, 110 insertions(+), 16 deletions(-)
>  create mode 100644 gcc/testsuite/gcc.target/i386/pr111820-1.c
>  create mode 100644 gcc/testsuite/gcc.target/i386/pr111820-2.c
>  create mode 100644 gcc/testsuite/gcc.target/i386/pr111820-3.c
>
> diff --git a/gcc/testsuite/gcc.target/i386/pr103144-mul-1.c b/gcc/testsuite/gcc.target/i386/pr103144-mul-1.c
> index 640c34fd959..913d7737dcd 100644
> --- a/gcc/testsuite/gcc.target/i386/pr103144-mul-1.c
> +++ b/gcc/testsuite/gcc.target/i386/pr103144-mul-1.c
> @@ -11,7 +11,7 @@ foo_mul (int* a, int b)
>    for (int i = 0; i != N; i++)
>      {
>        a[i] = b;
> -      b *= 3;
> +      b *= 4;
>      }
>  }
>
> @@ -23,7 +23,7 @@ foo_mul_const (int* a)
>    for (int i = 0; i != N; i++)
>      {
>        a[i] = b;
> -      b *= 3;
> +      b *= 4;
>      }
>  }
>
> @@ -34,7 +34,7 @@ foo_mul_peel (int* a, int b)
>    for (int i = 0; i != 39; i++)
>      {
>        a[i] = b;
> -      b *= 3;
> +      b *= 4;
>      }
>  }
>
> @@ -46,6 +46,6 @@ foo_mul_peel_const (int* a)
>    for (int i = 0; i != 39; i++)
>      {
>        a[i] = b;
> -      b *= 3;
> +      b *= 4;
>      }
>  }
> diff --git a/gcc/testsuite/gcc.target/i386/pr103144-mul-2.c b/gcc/testsuite/gcc.target/i386/pr103144-mul-2.c
> index 39fdea3a69d..b2ff186e335 100644
> --- a/gcc/testsuite/gcc.target/i386/pr103144-mul-2.c
> +++ b/gcc/testsuite/gcc.target/i386/pr103144-mul-2.c
> @@ -16,12 +16,12 @@ avx2_test (void)
>
>    __builtin_memset (epi32_exp, 0, N * sizeof (int));
>    int b = 8;
> -  v8si init = __extension__(v8si) { b, b * 3, b * 9, b * 27, b * 81, b * 243, b * 729, b * 2187 };
> +  v8si init = __extension__(v8si) { b, b * 4, b * 16, b * 64, b * 256, b * 1024, b * 4096, b * 16384 };
>
>    for (int i = 0; i != N / 8; i++)
>      {
>        memcpy (epi32_exp + i * 8, &init, 32);
> -      init *= 6561;
> +      init *= 65536;
>      }
>
>    foo_mul (epi32_dst, b);
> @@ -32,11 +32,11 @@ avx2_test (void)
>    if (__builtin_memcmp (epi32_dst, epi32_exp, 39 * 4) != 0)
>      __builtin_abort ();
>
> -  init = __extension__(v8si) { 1, 3, 9, 27, 81, 243, 729, 2187 };
> +  init = __extension__(v8si) { 1, 4, 16, 64, 256, 1024, 4096, 16384 };
>    for (int i = 0; i != N / 8; i++)
>      {
>        memcpy (epi32_exp + i * 8, &init, 32);
> -      init *= 6561;
> +      init *= 65536;
>      }
>
>    foo_mul_const (epi32_dst);
> diff --git a/gcc/testsuite/gcc.target/i386/pr111820-1.c b/gcc/testsuite/gcc.target/i386/pr111820-1.c
> new file mode 100644
> index 00000000000..50e960c39d4
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/i386/pr111820-1.c
> @@ -0,0 +1,16 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O3 -mavx2 -fno-tree-vrp -Wno-aggressive-loop-optimizations -fdump-tree-vect-details" } */
> +/* { dg-final { scan-tree-dump "Avoid compile time hog on vect_peel_nonlinear_iv_init for nonlinear induction vec_step_op_mul when iteration count is too big" "vect" } } */
> +
> +int r;
> +int r_0;
> +
> +void f1 (void)
> +{
> +  int n = 0;
> +  while (-- n)
> +    {
> +      r_0 += r;
> +      r  *= 3;
> +    }
> +}
> diff --git a/gcc/testsuite/gcc.target/i386/pr111820-2.c b/gcc/testsuite/gcc.target/i386/pr111820-2.c
> new file mode 100644
> index 00000000000..dbeceb228c3
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/i386/pr111820-2.c
> @@ -0,0 +1,16 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O3 -mavx2 -fno-tree-vrp -fdump-tree-vect-details -Wno-aggressive-loop-optimizations" } */
> +/* { dg-final { scan-tree-dump "LOOP VECTORIZED" "vect" } } */
> +
> +int r;
> +int r_0;
> +
> +void f (void)
> +{
> +  int n = 0;
> +  while (-- n)
> +    {
> +      r_0 += r ;
> +      r  *= 2;
> +    }
> +}
> diff --git a/gcc/testsuite/gcc.target/i386/pr111820-3.c b/gcc/testsuite/gcc.target/i386/pr111820-3.c
> new file mode 100644
> index 00000000000..b778f517663
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/i386/pr111820-3.c
> @@ -0,0 +1,16 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O3 -mavx2 -fno-tree-vrp -fdump-tree-vect-details -Wno-aggressive-loop-optimizations" } */
> +/* { dg-final { scan-tree-dump "LOOP VECTORIZED" "vect" } } */
> +
> +int r;
> +int r_0;
> +
> +void f (void)
> +{
> +  int n = 14;
> +  while (-- n)
> +    {
> +      r_0 += r ;
> +      r  *= 3;
> +    }
> +}
> diff --git a/gcc/tree-vect-loop-manip.cc b/gcc/tree-vect-loop-manip.cc
> index 2608c286e5d..a530088b61d 100644
> --- a/gcc/tree-vect-loop-manip.cc
> +++ b/gcc/tree-vect-loop-manip.cc
> @@ -1783,8 +1783,10 @@ iv_phi_p (stmt_vec_info stmt_info)
>  /* Return true if vectorizer can peel for nonlinear iv.  */
>  static bool
>  vect_can_peel_nonlinear_iv_p (loop_vec_info loop_vinfo,
> -                             enum vect_induction_op_type induction_type)
> +                             stmt_vec_info stmt_info)
>  {
> +  enum vect_induction_op_type induction_type
> +    = STMT_VINFO_LOOP_PHI_EVOLUTION_TYPE (stmt_info);
>    tree niters_skip;
>    /* Init_expr will be update by vect_update_ivs_after_vectorizer,
>       if niters or vf is unkown:
> @@ -1805,11 +1807,31 @@ vect_can_peel_nonlinear_iv_p (loop_vec_info loop_vinfo,
>        return false;
>      }
>
> +  /* Avoid compile time hog on vect_peel_nonlinear_iv_init.  */
> +  if (induction_type == vect_step_op_mul)
> +    {
> +      tree step_expr = STMT_VINFO_LOOP_PHI_EVOLUTION_PART (stmt_info);
> +      tree type = TREE_TYPE (step_expr);
> +
> +      if (wi::exact_log2 (wi::to_wide (step_expr)) == -1
> +         && LOOP_VINFO_INT_NITERS(loop_vinfo) >= TYPE_PRECISION (type))

So with pow being available this limit shouldn't be necessary any more and
the testcase adjustment can be avoided?

> +       {
> +         if (dump_enabled_p ())
> +           dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
> +                            "Avoid compile time hog on"
> +                            " vect_peel_nonlinear_iv_init"
> +                            " for nonlinear induction vec_step_op_mul"
> +                            " when iteration count is too big.\n");
> +         return false;
> +       }
> +    }
> +
>    /* Also doens't support peel for neg when niter is variable.
>       ??? generate something like niter_expr & 1 ? init_expr : -init_expr?  */
>    niters_skip = LOOP_VINFO_MASK_SKIP_NITERS (loop_vinfo);
>    if ((niters_skip != NULL_TREE
> -       && TREE_CODE (niters_skip) != INTEGER_CST)
> +       && (TREE_CODE (niters_skip) != INTEGER_CST
> +          || (HOST_WIDE_INT) TREE_INT_CST_LOW (niters_skip) < 0))
>        || (!vect_use_loop_mask_for_alignment_p (loop_vinfo)
>           && LOOP_VINFO_PEELING_FOR_ALIGNMENT (loop_vinfo) < 0))
>      {
> @@ -1870,7 +1892,7 @@ vect_can_advance_ivs_p (loop_vec_info loop_vinfo)
>        induction_type = STMT_VINFO_LOOP_PHI_EVOLUTION_TYPE (phi_info);
>        if (induction_type != vect_step_op_add)
>         {
> -         if (!vect_can_peel_nonlinear_iv_p (loop_vinfo, induction_type))
> +         if (!vect_can_peel_nonlinear_iv_p (loop_vinfo, phi_info))
>             return false;
>
>           continue;
> diff --git a/gcc/tree-vect-loop.cc b/gcc/tree-vect-loop.cc
> index 89bdcaa0910..f0dbba50786 100644
> --- a/gcc/tree-vect-loop.cc
> +++ b/gcc/tree-vect-loop.cc
> @@ -9134,11 +9134,35 @@ vect_peel_nonlinear_iv_init (gimple_seq* stmts, tree init_expr,
>         init_expr = gimple_convert (stmts, utype, init_expr);
>         unsigned skipn = TREE_INT_CST_LOW (skip_niters);
>         wide_int begin = wi::to_wide (step_expr);
> -       for (unsigned i = 0; i != skipn - 1; i++)
> -         begin = wi::mul (begin, wi::to_wide (step_expr));
> -       tree mult_expr = wide_int_to_tree (utype, begin);
> -       init_expr = gimple_build (stmts, MULT_EXPR, utype, init_expr, mult_expr);
> -       init_expr = gimple_convert (stmts, type, init_expr);
> +       int pow2_step = wi::exact_log2 (begin);
> +       /* Optimize init_expr * pow (step_expr, skipn) to
> +          init_expr << (log2 (step_expr) * skipn).  */
> +       if (pow2_step != -1)

and to avoid undefined behavior with too large shift just go the gmp
way unconditionally.

> +         {
> +           if (skipn >= TYPE_PRECISION (type)
> +               || skipn > (UINT_MAX / (unsigned) pow2_step)
> +               || skipn * (unsigned) pow2_step >= TYPE_PRECISION (type))
> +               init_expr = build_zero_cst (type);
> +           else
> +             {
> +               tree lshc = build_int_cst (utype, skipn * (unsigned) pow2_step);
> +               init_expr = gimple_build (stmts, LSHIFT_EXPR, utype,
> +                                         init_expr, lshc);
> +             }
> +         }
> +       /* Any better way for init_expr * pow (step_expr, skipn)???.  */

this comment is now resolved I think.

> +       else
> +         {
> +           gcc_assert (skipn < TYPE_PRECISION (type));
> +           auto_mpz base, exp;
> +           wi::to_mpz (begin, base, TYPE_SIGN (type));
> +           mpz_pow_ui (exp, base, skipn);

mpz_pow_ui uses unsigned long while I think we constrain known niters
to uint64 - so I suggest to use mpz_powm instead (limiting to a possibly
host specific limit - unsigned long - is unfortunately a no-go).

Otherwise looks good to me.

Thanks,
Richard.

> +           begin = wi::from_mpz (type, exp, TYPE_SIGN (type));
> +           tree mult_expr = wide_int_to_tree (utype, begin);
> +           init_expr = gimple_build (stmts, MULT_EXPR, utype,
> +                                     init_expr, mult_expr);
> +         }
> +         init_expr = gimple_convert (stmts, type, init_expr);
>        }
>        break;
>
> --
> 2.31.1
>


* [PATCH] Avoid compile time hog on vect_peel_nonlinear_iv_init for nonlinear induction vec_step_op_mul when iteration count is too big.
  2023-10-19  7:14     ` Richard Biener
@ 2023-10-20  2:18       ` liuhongt
  2023-10-20  6:30         ` Richard Biener
  0 siblings, 1 reply; 9+ messages in thread
From: liuhongt @ 2023-10-20  2:18 UTC (permalink / raw)
  To: gcc-patches; +Cc: crazylht, hjl.tools

>So with pow being available this limit shouldn't be necessary any more and
>the testcase adjustment can be avoided?
I tried; compile time still blows up in mpz_powm (3, INT_MAX), so I'll
just keep this.

>and to avoid undefined behavior with too large shift just go the gmp
>way unconditionally.
Changed.

>this comment is now resolved I think.
Removed.

>mpz_pow_ui uses unsigned long while i think we constrain known niters
>to uint64 - so I suggest to use mpz_powm instead (limiting to a possibly
>host specific limit - unsigned long - is unfortunately a no-go).
Changed.

There's a loop in vect_peel_nonlinear_iv_init computing init_expr *
pow (step_expr, skip_niters). When skip_niters is too big, compile
time blows up. To avoid that, optimize init_expr * pow (step_expr,
skip_niters) to init_expr << (exact_log2 (step_expr) * skip_niters)
when step_expr is a power of 2; otherwise give up on vectorization
when skip_niters >= TYPE_PRECISION (TREE_TYPE (init_expr)).

Also give up on vectorization when niters_skip is negative; it will
be used for fully masked loops.

Bootstrapped and regtested on x86_64-linux-gnu{-m32,}.
Ok for trunk?

gcc/ChangeLog:

	PR tree-optimization/111820
	PR tree-optimization/111833
	* tree-vect-loop-manip.cc (vect_can_peel_nonlinear_iv_p): Give
	up vectorization for nonlinear iv vect_step_op_mul when
	step_expr is not exact_log2 and niters is greater than
	TYPE_PRECISION (TREE_TYPE (step_expr)). Also don't vectorize
	for negative niters_skip which will be used by fully masked
	loop.
	(vect_can_advance_ivs_p): Pass whole phi_info to
	vect_can_peel_nonlinear_iv_p.
	* tree-vect-loop.cc (vect_peel_nonlinear_iv_init): Optimize
	init_expr * pow (step_expr, skipn) to init_expr
	<< (log2 (step_expr) * skipn) when step_expr is exact_log2.

gcc/testsuite/ChangeLog:

	* gcc.target/i386/pr111820-1.c: New test.
	* gcc.target/i386/pr111820-2.c: New test.
	* gcc.target/i386/pr111820-3.c: New test.
	* gcc.target/i386/pr103144-mul-1.c: Adjust testcase.
	* gcc.target/i386/pr103144-mul-2.c: Adjust testcase.
---
 .../gcc.target/i386/pr103144-mul-1.c          |  8 +++---
 .../gcc.target/i386/pr103144-mul-2.c          |  8 +++---
 gcc/testsuite/gcc.target/i386/pr111820-1.c    | 16 +++++++++++
 gcc/testsuite/gcc.target/i386/pr111820-2.c    | 16 +++++++++++
 gcc/testsuite/gcc.target/i386/pr111820-3.c    | 16 +++++++++++
 gcc/tree-vect-loop-manip.cc                   | 28 +++++++++++++++++--
 gcc/tree-vect-loop.cc                         | 13 ++++++---
 7 files changed, 90 insertions(+), 15 deletions(-)
 create mode 100644 gcc/testsuite/gcc.target/i386/pr111820-1.c
 create mode 100644 gcc/testsuite/gcc.target/i386/pr111820-2.c
 create mode 100644 gcc/testsuite/gcc.target/i386/pr111820-3.c

diff --git a/gcc/testsuite/gcc.target/i386/pr103144-mul-1.c b/gcc/testsuite/gcc.target/i386/pr103144-mul-1.c
index 640c34fd959..913d7737dcd 100644
--- a/gcc/testsuite/gcc.target/i386/pr103144-mul-1.c
+++ b/gcc/testsuite/gcc.target/i386/pr103144-mul-1.c
@@ -11,7 +11,7 @@ foo_mul (int* a, int b)
   for (int i = 0; i != N; i++)
     {
       a[i] = b;
-      b *= 3;
+      b *= 4;
     }
 }
 
@@ -23,7 +23,7 @@ foo_mul_const (int* a)
   for (int i = 0; i != N; i++)
     {
       a[i] = b;
-      b *= 3;
+      b *= 4;
     }
 }
 
@@ -34,7 +34,7 @@ foo_mul_peel (int* a, int b)
   for (int i = 0; i != 39; i++)
     {
       a[i] = b;
-      b *= 3;
+      b *= 4;
     }
 }
 
@@ -46,6 +46,6 @@ foo_mul_peel_const (int* a)
   for (int i = 0; i != 39; i++)
     {
       a[i] = b;
-      b *= 3;
+      b *= 4;
     }
 }
diff --git a/gcc/testsuite/gcc.target/i386/pr103144-mul-2.c b/gcc/testsuite/gcc.target/i386/pr103144-mul-2.c
index 39fdea3a69d..b2ff186e335 100644
--- a/gcc/testsuite/gcc.target/i386/pr103144-mul-2.c
+++ b/gcc/testsuite/gcc.target/i386/pr103144-mul-2.c
@@ -16,12 +16,12 @@ avx2_test (void)
 
   __builtin_memset (epi32_exp, 0, N * sizeof (int));
   int b = 8;
-  v8si init = __extension__(v8si) { b, b * 3, b * 9, b * 27, b * 81, b * 243, b * 729, b * 2187 };
+  v8si init = __extension__(v8si) { b, b * 4, b * 16, b * 64, b * 256, b * 1024, b * 4096, b * 16384 };
 
   for (int i = 0; i != N / 8; i++)
     {
       memcpy (epi32_exp + i * 8, &init, 32);
-      init *= 6561;
+      init *= 65536;
     }
 
   foo_mul (epi32_dst, b);
@@ -32,11 +32,11 @@ avx2_test (void)
   if (__builtin_memcmp (epi32_dst, epi32_exp, 39 * 4) != 0)
     __builtin_abort ();
 
-  init = __extension__(v8si) { 1, 3, 9, 27, 81, 243, 729, 2187 };
+  init = __extension__(v8si) { 1, 4, 16, 64, 256, 1024, 4096, 16384 };
   for (int i = 0; i != N / 8; i++)
     {
       memcpy (epi32_exp + i * 8, &init, 32);
-      init *= 6561;
+      init *= 65536;
     }
 
   foo_mul_const (epi32_dst);
diff --git a/gcc/testsuite/gcc.target/i386/pr111820-1.c b/gcc/testsuite/gcc.target/i386/pr111820-1.c
new file mode 100644
index 00000000000..50e960c39d4
--- /dev/null
+++ b/gcc/testsuite/gcc.target/i386/pr111820-1.c
@@ -0,0 +1,16 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -mavx2 -fno-tree-vrp -Wno-aggressive-loop-optimizations -fdump-tree-vect-details" } */
+/* { dg-final { scan-tree-dump "Avoid compile time hog on vect_peel_nonlinear_iv_init for nonlinear induction vec_step_op_mul when iteration count is too big" "vect" } } */
+
+int r;
+int r_0;
+
+void f1 (void)
+{
+  int n = 0;
+  while (-- n)
+    {
+      r_0 += r;
+      r  *= 3;
+    }
+}
diff --git a/gcc/testsuite/gcc.target/i386/pr111820-2.c b/gcc/testsuite/gcc.target/i386/pr111820-2.c
new file mode 100644
index 00000000000..dbeceb228c3
--- /dev/null
+++ b/gcc/testsuite/gcc.target/i386/pr111820-2.c
@@ -0,0 +1,16 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -mavx2 -fno-tree-vrp -fdump-tree-vect-details -Wno-aggressive-loop-optimizations" } */
+/* { dg-final { scan-tree-dump "LOOP VECTORIZED" "vect" } } */
+
+int r;
+int r_0;
+
+void f (void)
+{
+  int n = 0;
+  while (-- n)
+    {
+      r_0 += r ;
+      r  *= 2;
+    }
+}
diff --git a/gcc/testsuite/gcc.target/i386/pr111820-3.c b/gcc/testsuite/gcc.target/i386/pr111820-3.c
new file mode 100644
index 00000000000..b778f517663
--- /dev/null
+++ b/gcc/testsuite/gcc.target/i386/pr111820-3.c
@@ -0,0 +1,16 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -mavx2 -fno-tree-vrp -fdump-tree-vect-details -Wno-aggressive-loop-optimizations" } */
+/* { dg-final { scan-tree-dump "LOOP VECTORIZED" "vect" } } */
+
+int r;
+int r_0;
+
+void f (void)
+{
+  int n = 14;
+  while (-- n)
+    {
+      r_0 += r ;
+      r  *= 3;
+    }
+}
diff --git a/gcc/tree-vect-loop-manip.cc b/gcc/tree-vect-loop-manip.cc
index 2608c286e5d..a530088b61d 100644
--- a/gcc/tree-vect-loop-manip.cc
+++ b/gcc/tree-vect-loop-manip.cc
@@ -1783,8 +1783,10 @@ iv_phi_p (stmt_vec_info stmt_info)
 /* Return true if vectorizer can peel for nonlinear iv.  */
 static bool
 vect_can_peel_nonlinear_iv_p (loop_vec_info loop_vinfo,
-			      enum vect_induction_op_type induction_type)
+			      stmt_vec_info stmt_info)
 {
+  enum vect_induction_op_type induction_type
+    = STMT_VINFO_LOOP_PHI_EVOLUTION_TYPE (stmt_info);
   tree niters_skip;
   /* Init_expr will be update by vect_update_ivs_after_vectorizer,
      if niters or vf is unkown:
@@ -1805,11 +1807,31 @@ vect_can_peel_nonlinear_iv_p (loop_vec_info loop_vinfo,
       return false;
     }
 
+  /* Avoid compile time hog on vect_peel_nonlinear_iv_init.  */
+  if (induction_type == vect_step_op_mul)
+    {
+      tree step_expr = STMT_VINFO_LOOP_PHI_EVOLUTION_PART (stmt_info);
+      tree type = TREE_TYPE (step_expr);
+
+      if (wi::exact_log2 (wi::to_wide (step_expr)) == -1
+	  && LOOP_VINFO_INT_NITERS(loop_vinfo) >= TYPE_PRECISION (type))
+	{
+	  if (dump_enabled_p ())
+	    dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
+			     "Avoid compile time hog on"
+			     " vect_peel_nonlinear_iv_init"
+			     " for nonlinear induction vec_step_op_mul"
+			     " when iteration count is too big.\n");
+	  return false;
+	}
+    }
+
   /* Also doens't support peel for neg when niter is variable.
      ??? generate something like niter_expr & 1 ? init_expr : -init_expr?  */
   niters_skip = LOOP_VINFO_MASK_SKIP_NITERS (loop_vinfo);
   if ((niters_skip != NULL_TREE
-       && TREE_CODE (niters_skip) != INTEGER_CST)
+       && (TREE_CODE (niters_skip) != INTEGER_CST
+	   || (HOST_WIDE_INT) TREE_INT_CST_LOW (niters_skip) < 0))
       || (!vect_use_loop_mask_for_alignment_p (loop_vinfo)
 	  && LOOP_VINFO_PEELING_FOR_ALIGNMENT (loop_vinfo) < 0))
     {
@@ -1870,7 +1892,7 @@ vect_can_advance_ivs_p (loop_vec_info loop_vinfo)
       induction_type = STMT_VINFO_LOOP_PHI_EVOLUTION_TYPE (phi_info);
       if (induction_type != vect_step_op_add)
 	{
-	  if (!vect_can_peel_nonlinear_iv_p (loop_vinfo, induction_type))
+	  if (!vect_can_peel_nonlinear_iv_p (loop_vinfo, phi_info))
 	    return false;
 
 	  continue;
diff --git a/gcc/tree-vect-loop.cc b/gcc/tree-vect-loop.cc
index 89bdcaa0910..7a6bbff0c8f 100644
--- a/gcc/tree-vect-loop.cc
+++ b/gcc/tree-vect-loop.cc
@@ -9132,12 +9132,17 @@ vect_peel_nonlinear_iv_init (gimple_seq* stmts, tree init_expr,
       {
 	tree utype = unsigned_type_for (type);
 	init_expr = gimple_convert (stmts, utype, init_expr);
-	unsigned skipn = TREE_INT_CST_LOW (skip_niters);
+	wide_int skipn = wi::to_wide (skip_niters);
 	wide_int begin = wi::to_wide (step_expr);
-	for (unsigned i = 0; i != skipn - 1; i++)
-	  begin = wi::mul (begin, wi::to_wide (step_expr));
+	auto_mpz base, exp, mod, res;
+	wi::to_mpz (begin, base, TYPE_SIGN (type));
+	wi::to_mpz (skipn, exp, UNSIGNED);
+	mpz_ui_pow_ui (mod, 2, TYPE_PRECISION (type));
+	mpz_powm (res, base, exp, mod);
+	begin = wi::from_mpz (type, res, TYPE_SIGN (type));
 	tree mult_expr = wide_int_to_tree (utype, begin);
-	init_expr = gimple_build (stmts, MULT_EXPR, utype, init_expr, mult_expr);
+	init_expr = gimple_build (stmts, MULT_EXPR, utype,
+				  init_expr, mult_expr);
 	init_expr = gimple_convert (stmts, type, init_expr);
       }
       break;
-- 
2.31.1



* Re: [PATCH] Avoid compile time hog on vect_peel_nonlinear_iv_init for nonlinear induction vec_step_op_mul when iteration count is too big.
  2023-10-20  2:18       ` liuhongt
@ 2023-10-20  6:30         ` Richard Biener
  2023-10-24 11:19           ` [PATCH GCC13 backport] " liuhongt
  0 siblings, 1 reply; 9+ messages in thread
From: Richard Biener @ 2023-10-20  6:30 UTC (permalink / raw)
  To: liuhongt; +Cc: gcc-patches, crazylht, hjl.tools

On Fri, Oct 20, 2023 at 4:21 AM liuhongt <hongtao.liu@intel.com> wrote:
>
> >So with pow being available this limit shouldn't be necessary any more and
> >the testcase adjustment can be avoided?
> I tried; compile time still blows up in mpz_powm (3, INT_MAX), so I'll
> just keep this.

Doh :/  Maybe it's possible to limit the output precision for mpz computations
somehow?  I suspect we're really computing the very large resulting number
here ... unfortunately I don't know GMP well enough to tell.

> >and to avoid undefined behavior with too large shift just go the gmp
> >way unconditionally.
> Changed.
>
> >this comment is now resolved I think.
> Removed.
>
> >mpz_pow_ui uses unsigned long while I think we constrain known niters
> >to uint64 - so I suggest using mpz_powm instead (limiting to a possibly
> >host specific limit - unsigned long - is unfortunately a no-go).
> Changed.
>
> There's a loop in vect_peel_nonlinear_iv_init that computes init_expr *
> pow (step_expr, skip_niters). When skip_niters is too big, compile time
> explodes. To avoid that, optimize init_expr * pow (step_expr, skip_niters) to
> init_expr << (exact_log2 (step_expr) * skip_niters) when step_expr is a
> power of 2; otherwise give up on vectorization when skip_niters >=
> TYPE_PRECISION (TREE_TYPE (init_expr)).
>
> Also give up on vectorization when niters_skip is negative, as will be
> the case for fully masked loops.
>
> Bootstrapped and regtested on x86_64-linux-gnu{-m32,}.
> Ok for trunk?

OK.

Thanks,
Richard.

> gcc/ChangeLog:
>
>         PR tree-optimization/111820
>         PR tree-optimization/111833
>         * tree-vect-loop-manip.cc (vect_can_peel_nonlinear_iv_p): Give
>         up vectorization for nonlinear iv vect_step_op_mul when
>         step_expr is not exact_log2 and niters is greater than
>         TYPE_PRECISION (TREE_TYPE (step_expr)). Also don't vectorize
>         for negative niters_skip which will be used by fully masked
>         loop.
>         (vect_can_advance_ivs_p): Pass whole phi_info to
>         vect_can_peel_nonlinear_iv_p.
>         * tree-vect-loop.cc (vect_peel_nonlinear_iv_init): Optimize
>         init_expr * pow (step_expr, skipn) to init_expr
>         << (log2 (step_expr) * skipn) when step_expr is exact_log2.
>
> gcc/testsuite/ChangeLog:
>
>         * gcc.target/i386/pr111820-1.c: New test.
>         * gcc.target/i386/pr111820-2.c: New test.
>         * gcc.target/i386/pr111820-3.c: New test.
>         * gcc.target/i386/pr103144-mul-1.c: Adjust testcase.
>         * gcc.target/i386/pr103144-mul-2.c: Adjust testcase.
> ---
>  .../gcc.target/i386/pr103144-mul-1.c          |  8 +++---
>  .../gcc.target/i386/pr103144-mul-2.c          |  8 +++---
>  gcc/testsuite/gcc.target/i386/pr111820-1.c    | 16 +++++++++++
>  gcc/testsuite/gcc.target/i386/pr111820-2.c    | 16 +++++++++++
>  gcc/testsuite/gcc.target/i386/pr111820-3.c    | 16 +++++++++++
>  gcc/tree-vect-loop-manip.cc                   | 28 +++++++++++++++++--
>  gcc/tree-vect-loop.cc                         | 13 ++++++---
>  7 files changed, 90 insertions(+), 15 deletions(-)
>  create mode 100644 gcc/testsuite/gcc.target/i386/pr111820-1.c
>  create mode 100644 gcc/testsuite/gcc.target/i386/pr111820-2.c
>  create mode 100644 gcc/testsuite/gcc.target/i386/pr111820-3.c
>
> diff --git a/gcc/testsuite/gcc.target/i386/pr103144-mul-1.c b/gcc/testsuite/gcc.target/i386/pr103144-mul-1.c
> index 640c34fd959..913d7737dcd 100644
> --- a/gcc/testsuite/gcc.target/i386/pr103144-mul-1.c
> +++ b/gcc/testsuite/gcc.target/i386/pr103144-mul-1.c
> @@ -11,7 +11,7 @@ foo_mul (int* a, int b)
>    for (int i = 0; i != N; i++)
>      {
>        a[i] = b;
> -      b *= 3;
> +      b *= 4;
>      }
>  }
>
> @@ -23,7 +23,7 @@ foo_mul_const (int* a)
>    for (int i = 0; i != N; i++)
>      {
>        a[i] = b;
> -      b *= 3;
> +      b *= 4;
>      }
>  }
>
> @@ -34,7 +34,7 @@ foo_mul_peel (int* a, int b)
>    for (int i = 0; i != 39; i++)
>      {
>        a[i] = b;
> -      b *= 3;
> +      b *= 4;
>      }
>  }
>
> @@ -46,6 +46,6 @@ foo_mul_peel_const (int* a)
>    for (int i = 0; i != 39; i++)
>      {
>        a[i] = b;
> -      b *= 3;
> +      b *= 4;
>      }
>  }
> diff --git a/gcc/testsuite/gcc.target/i386/pr103144-mul-2.c b/gcc/testsuite/gcc.target/i386/pr103144-mul-2.c
> index 39fdea3a69d..b2ff186e335 100644
> --- a/gcc/testsuite/gcc.target/i386/pr103144-mul-2.c
> +++ b/gcc/testsuite/gcc.target/i386/pr103144-mul-2.c
> @@ -16,12 +16,12 @@ avx2_test (void)
>
>    __builtin_memset (epi32_exp, 0, N * sizeof (int));
>    int b = 8;
> -  v8si init = __extension__(v8si) { b, b * 3, b * 9, b * 27, b * 81, b * 243, b * 729, b * 2187 };
> +  v8si init = __extension__(v8si) { b, b * 4, b * 16, b * 64, b * 256, b * 1024, b * 4096, b * 16384 };
>
>    for (int i = 0; i != N / 8; i++)
>      {
>        memcpy (epi32_exp + i * 8, &init, 32);
> -      init *= 6561;
> +      init *= 65536;
>      }
>
>    foo_mul (epi32_dst, b);
> @@ -32,11 +32,11 @@ avx2_test (void)
>    if (__builtin_memcmp (epi32_dst, epi32_exp, 39 * 4) != 0)
>      __builtin_abort ();
>
> -  init = __extension__(v8si) { 1, 3, 9, 27, 81, 243, 729, 2187 };
> +  init = __extension__(v8si) { 1, 4, 16, 64, 256, 1024, 4096, 16384 };
>    for (int i = 0; i != N / 8; i++)
>      {
>        memcpy (epi32_exp + i * 8, &init, 32);
> -      init *= 6561;
> +      init *= 65536;
>      }
>
>    foo_mul_const (epi32_dst);
> diff --git a/gcc/testsuite/gcc.target/i386/pr111820-1.c b/gcc/testsuite/gcc.target/i386/pr111820-1.c
> new file mode 100644
> index 00000000000..50e960c39d4
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/i386/pr111820-1.c
> @@ -0,0 +1,16 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O3 -mavx2 -fno-tree-vrp -Wno-aggressive-loop-optimizations -fdump-tree-vect-details" } */
> +/* { dg-final { scan-tree-dump "Avoid compile time hog on vect_peel_nonlinear_iv_init for nonlinear induction vec_step_op_mul when iteration count is too big" "vect" } } */
> +
> +int r;
> +int r_0;
> +
> +void f1 (void)
> +{
> +  int n = 0;
> +  while (-- n)
> +    {
> +      r_0 += r;
> +      r  *= 3;
> +    }
> +}
> diff --git a/gcc/testsuite/gcc.target/i386/pr111820-2.c b/gcc/testsuite/gcc.target/i386/pr111820-2.c
> new file mode 100644
> index 00000000000..dbeceb228c3
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/i386/pr111820-2.c
> @@ -0,0 +1,16 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O3 -mavx2 -fno-tree-vrp -fdump-tree-vect-details -Wno-aggressive-loop-optimizations" } */
> +/* { dg-final { scan-tree-dump "LOOP VECTORIZED" "vect" } } */
> +
> +int r;
> +int r_0;
> +
> +void f (void)
> +{
> +  int n = 0;
> +  while (-- n)
> +    {
> +      r_0 += r ;
> +      r  *= 2;
> +    }
> +}
> diff --git a/gcc/testsuite/gcc.target/i386/pr111820-3.c b/gcc/testsuite/gcc.target/i386/pr111820-3.c
> new file mode 100644
> index 00000000000..b778f517663
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/i386/pr111820-3.c
> @@ -0,0 +1,16 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O3 -mavx2 -fno-tree-vrp -fdump-tree-vect-details -Wno-aggressive-loop-optimizations" } */
> +/* { dg-final { scan-tree-dump "LOOP VECTORIZED" "vect" } } */
> +
> +int r;
> +int r_0;
> +
> +void f (void)
> +{
> +  int n = 14;
> +  while (-- n)
> +    {
> +      r_0 += r ;
> +      r  *= 3;
> +    }
> +}
> diff --git a/gcc/tree-vect-loop-manip.cc b/gcc/tree-vect-loop-manip.cc
> index 2608c286e5d..a530088b61d 100644
> --- a/gcc/tree-vect-loop-manip.cc
> +++ b/gcc/tree-vect-loop-manip.cc
> @@ -1783,8 +1783,10 @@ iv_phi_p (stmt_vec_info stmt_info)
>  /* Return true if vectorizer can peel for nonlinear iv.  */
>  static bool
>  vect_can_peel_nonlinear_iv_p (loop_vec_info loop_vinfo,
> -                             enum vect_induction_op_type induction_type)
> +                             stmt_vec_info stmt_info)
>  {
> +  enum vect_induction_op_type induction_type
> +    = STMT_VINFO_LOOP_PHI_EVOLUTION_TYPE (stmt_info);
>    tree niters_skip;
>    /* Init_expr will be update by vect_update_ivs_after_vectorizer,
>       if niters or vf is unkown:
> @@ -1805,11 +1807,31 @@ vect_can_peel_nonlinear_iv_p (loop_vec_info loop_vinfo,
>        return false;
>      }
>
> +  /* Avoid compile time hog on vect_peel_nonlinear_iv_init.  */
> +  if (induction_type == vect_step_op_mul)
> +    {
> +      tree step_expr = STMT_VINFO_LOOP_PHI_EVOLUTION_PART (stmt_info);
> +      tree type = TREE_TYPE (step_expr);
> +
> +      if (wi::exact_log2 (wi::to_wide (step_expr)) == -1
> +         && LOOP_VINFO_INT_NITERS(loop_vinfo) >= TYPE_PRECISION (type))
> +       {
> +         if (dump_enabled_p ())
> +           dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
> +                            "Avoid compile time hog on"
> +                            " vect_peel_nonlinear_iv_init"
> +                            " for nonlinear induction vec_step_op_mul"
> +                            " when iteration count is too big.\n");
> +         return false;
> +       }
> +    }
> +
>    /* Also doens't support peel for neg when niter is variable.
>       ??? generate something like niter_expr & 1 ? init_expr : -init_expr?  */
>    niters_skip = LOOP_VINFO_MASK_SKIP_NITERS (loop_vinfo);
>    if ((niters_skip != NULL_TREE
> -       && TREE_CODE (niters_skip) != INTEGER_CST)
> +       && (TREE_CODE (niters_skip) != INTEGER_CST
> +          || (HOST_WIDE_INT) TREE_INT_CST_LOW (niters_skip) < 0))
>        || (!vect_use_loop_mask_for_alignment_p (loop_vinfo)
>           && LOOP_VINFO_PEELING_FOR_ALIGNMENT (loop_vinfo) < 0))
>      {
> @@ -1870,7 +1892,7 @@ vect_can_advance_ivs_p (loop_vec_info loop_vinfo)
>        induction_type = STMT_VINFO_LOOP_PHI_EVOLUTION_TYPE (phi_info);
>        if (induction_type != vect_step_op_add)
>         {
> -         if (!vect_can_peel_nonlinear_iv_p (loop_vinfo, induction_type))
> +         if (!vect_can_peel_nonlinear_iv_p (loop_vinfo, phi_info))
>             return false;
>
>           continue;
> diff --git a/gcc/tree-vect-loop.cc b/gcc/tree-vect-loop.cc
> index 89bdcaa0910..7a6bbff0c8f 100644
> --- a/gcc/tree-vect-loop.cc
> +++ b/gcc/tree-vect-loop.cc
> @@ -9132,12 +9132,17 @@ vect_peel_nonlinear_iv_init (gimple_seq* stmts, tree init_expr,
>        {
>         tree utype = unsigned_type_for (type);
>         init_expr = gimple_convert (stmts, utype, init_expr);
> -       unsigned skipn = TREE_INT_CST_LOW (skip_niters);
> +       wide_int skipn = wi::to_wide (skip_niters);
>         wide_int begin = wi::to_wide (step_expr);
> -       for (unsigned i = 0; i != skipn - 1; i++)
> -         begin = wi::mul (begin, wi::to_wide (step_expr));
> +       auto_mpz base, exp, mod, res;
> +       wi::to_mpz (begin, base, TYPE_SIGN (type));
> +       wi::to_mpz (skipn, exp, UNSIGNED);
> +       mpz_ui_pow_ui (mod, 2, TYPE_PRECISION (type));
> +       mpz_powm (res, base, exp, mod);
> +       begin = wi::from_mpz (type, res, TYPE_SIGN (type));
>         tree mult_expr = wide_int_to_tree (utype, begin);
> -       init_expr = gimple_build (stmts, MULT_EXPR, utype, init_expr, mult_expr);
> +       init_expr = gimple_build (stmts, MULT_EXPR, utype,
> +                                 init_expr, mult_expr);
>         init_expr = gimple_convert (stmts, type, init_expr);
>        }
>        break;
> --
> 2.31.1
>


* [PATCH GCC13 backport] Avoid compile time hog on vect_peel_nonlinear_iv_init for nonlinear induction vec_step_op_mul when iteration count is too big.
  2023-10-20  6:30         ` Richard Biener
@ 2023-10-24 11:19           ` liuhongt
  2023-10-26 14:07             ` Richard Biener
  0 siblings, 1 reply; 9+ messages in thread
From: liuhongt @ 2023-10-24 11:19 UTC (permalink / raw)
  To: gcc-patches; +Cc: crazylht, hjl.tools

This is the backport patch for the releases/gcc-13 branch; the original patch for trunk
is at [1].
The only difference between this backport and [1] is that GCC 13 doesn't support auto_mpz,
so this patch uses mpz_init/mpz_clear manually.

[1] https://gcc.gnu.org/pipermail/gcc-patches/2023-October/633661.html

Bootstrapped and regtested on x86_64-pc-linux-gnu{-m32,}.
Ok for backport to releases/gcc-13?

There's a loop in vect_peel_nonlinear_iv_init that computes init_expr *
pow (step_expr, skip_niters). When skip_niters is too big, compile time
explodes. To avoid that, optimize init_expr * pow (step_expr, skip_niters) to
init_expr << (exact_log2 (step_expr) * skip_niters) when step_expr is a
power of 2; otherwise give up on vectorization when skip_niters >=
TYPE_PRECISION (TREE_TYPE (init_expr)).

Also give up on vectorization when niters_skip is negative, as will be
the case for fully masked loops.

gcc/ChangeLog:

	PR tree-optimization/111820
	PR tree-optimization/111833
	* tree-vect-loop-manip.cc (vect_can_peel_nonlinear_iv_p): Give
	up vectorization for nonlinear iv vect_step_op_mul when
	step_expr is not exact_log2 and niters is greater than
	TYPE_PRECISION (TREE_TYPE (step_expr)). Also don't vectorize
	for negative niters_skip which will be used by fully masked
	loop.
	(vect_can_advance_ivs_p): Pass whole phi_info to
	vect_can_peel_nonlinear_iv_p.
	* tree-vect-loop.cc (vect_peel_nonlinear_iv_init): Optimize
	init_expr * pow (step_expr, skipn) to init_expr
	<< (log2 (step_expr) * skipn) when step_expr is exact_log2.

gcc/testsuite/ChangeLog:

	* gcc.target/i386/pr111820-1.c: New test.
	* gcc.target/i386/pr111820-2.c: New test.
	* gcc.target/i386/pr111820-3.c: New test.
	* gcc.target/i386/pr103144-mul-1.c: Adjust testcase.
	* gcc.target/i386/pr103144-mul-2.c: Adjust testcase.
---
 .../gcc.target/i386/pr103144-mul-1.c          |  8 +++---
 .../gcc.target/i386/pr103144-mul-2.c          |  8 +++---
 gcc/testsuite/gcc.target/i386/pr111820-1.c    | 16 +++++++++++
 gcc/testsuite/gcc.target/i386/pr111820-2.c    | 16 +++++++++++
 gcc/testsuite/gcc.target/i386/pr111820-3.c    | 16 +++++++++++
 gcc/tree-vect-loop-manip.cc                   | 28 +++++++++++++++++--
 gcc/tree-vect-loop.cc                         | 21 +++++++++++---
 7 files changed, 98 insertions(+), 15 deletions(-)
 create mode 100644 gcc/testsuite/gcc.target/i386/pr111820-1.c
 create mode 100644 gcc/testsuite/gcc.target/i386/pr111820-2.c
 create mode 100644 gcc/testsuite/gcc.target/i386/pr111820-3.c

diff --git a/gcc/testsuite/gcc.target/i386/pr103144-mul-1.c b/gcc/testsuite/gcc.target/i386/pr103144-mul-1.c
index 640c34fd959..913d7737dcd 100644
--- a/gcc/testsuite/gcc.target/i386/pr103144-mul-1.c
+++ b/gcc/testsuite/gcc.target/i386/pr103144-mul-1.c
@@ -11,7 +11,7 @@ foo_mul (int* a, int b)
   for (int i = 0; i != N; i++)
     {
       a[i] = b;
-      b *= 3;
+      b *= 4;
     }
 }
 
@@ -23,7 +23,7 @@ foo_mul_const (int* a)
   for (int i = 0; i != N; i++)
     {
       a[i] = b;
-      b *= 3;
+      b *= 4;
     }
 }
 
@@ -34,7 +34,7 @@ foo_mul_peel (int* a, int b)
   for (int i = 0; i != 39; i++)
     {
       a[i] = b;
-      b *= 3;
+      b *= 4;
     }
 }
 
@@ -46,6 +46,6 @@ foo_mul_peel_const (int* a)
   for (int i = 0; i != 39; i++)
     {
       a[i] = b;
-      b *= 3;
+      b *= 4;
     }
 }
diff --git a/gcc/testsuite/gcc.target/i386/pr103144-mul-2.c b/gcc/testsuite/gcc.target/i386/pr103144-mul-2.c
index 39fdea3a69d..b2ff186e335 100644
--- a/gcc/testsuite/gcc.target/i386/pr103144-mul-2.c
+++ b/gcc/testsuite/gcc.target/i386/pr103144-mul-2.c
@@ -16,12 +16,12 @@ avx2_test (void)
 
   __builtin_memset (epi32_exp, 0, N * sizeof (int));
   int b = 8;
-  v8si init = __extension__(v8si) { b, b * 3, b * 9, b * 27, b * 81, b * 243, b * 729, b * 2187 };
+  v8si init = __extension__(v8si) { b, b * 4, b * 16, b * 64, b * 256, b * 1024, b * 4096, b * 16384 };
 
   for (int i = 0; i != N / 8; i++)
     {
       memcpy (epi32_exp + i * 8, &init, 32);
-      init *= 6561;
+      init *= 65536;
     }
 
   foo_mul (epi32_dst, b);
@@ -32,11 +32,11 @@ avx2_test (void)
   if (__builtin_memcmp (epi32_dst, epi32_exp, 39 * 4) != 0)
     __builtin_abort ();
 
-  init = __extension__(v8si) { 1, 3, 9, 27, 81, 243, 729, 2187 };
+  init = __extension__(v8si) { 1, 4, 16, 64, 256, 1024, 4096, 16384 };
   for (int i = 0; i != N / 8; i++)
     {
       memcpy (epi32_exp + i * 8, &init, 32);
-      init *= 6561;
+      init *= 65536;
     }
 
   foo_mul_const (epi32_dst);
diff --git a/gcc/testsuite/gcc.target/i386/pr111820-1.c b/gcc/testsuite/gcc.target/i386/pr111820-1.c
new file mode 100644
index 00000000000..50e960c39d4
--- /dev/null
+++ b/gcc/testsuite/gcc.target/i386/pr111820-1.c
@@ -0,0 +1,16 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -mavx2 -fno-tree-vrp -Wno-aggressive-loop-optimizations -fdump-tree-vect-details" } */
+/* { dg-final { scan-tree-dump "Avoid compile time hog on vect_peel_nonlinear_iv_init for nonlinear induction vec_step_op_mul when iteration count is too big" "vect" } } */
+
+int r;
+int r_0;
+
+void f1 (void)
+{
+  int n = 0;
+  while (-- n)
+    {
+      r_0 += r;
+      r  *= 3;
+    }
+}
diff --git a/gcc/testsuite/gcc.target/i386/pr111820-2.c b/gcc/testsuite/gcc.target/i386/pr111820-2.c
new file mode 100644
index 00000000000..dbeceb228c3
--- /dev/null
+++ b/gcc/testsuite/gcc.target/i386/pr111820-2.c
@@ -0,0 +1,16 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -mavx2 -fno-tree-vrp -fdump-tree-vect-details -Wno-aggressive-loop-optimizations" } */
+/* { dg-final { scan-tree-dump "LOOP VECTORIZED" "vect" } } */
+
+int r;
+int r_0;
+
+void f (void)
+{
+  int n = 0;
+  while (-- n)
+    {
+      r_0 += r ;
+      r  *= 2;
+    }
+}
diff --git a/gcc/testsuite/gcc.target/i386/pr111820-3.c b/gcc/testsuite/gcc.target/i386/pr111820-3.c
new file mode 100644
index 00000000000..b778f517663
--- /dev/null
+++ b/gcc/testsuite/gcc.target/i386/pr111820-3.c
@@ -0,0 +1,16 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -mavx2 -fno-tree-vrp -fdump-tree-vect-details -Wno-aggressive-loop-optimizations" } */
+/* { dg-final { scan-tree-dump "LOOP VECTORIZED" "vect" } } */
+
+int r;
+int r_0;
+
+void f (void)
+{
+  int n = 14;
+  while (-- n)
+    {
+      r_0 += r ;
+      r  *= 3;
+    }
+}
diff --git a/gcc/tree-vect-loop-manip.cc b/gcc/tree-vect-loop-manip.cc
index f60fa50e8f4..767b19b15a1 100644
--- a/gcc/tree-vect-loop-manip.cc
+++ b/gcc/tree-vect-loop-manip.cc
@@ -1391,8 +1391,10 @@ iv_phi_p (stmt_vec_info stmt_info)
 /* Return true if vectorizer can peel for nonlinear iv.  */
 static bool
 vect_can_peel_nonlinear_iv_p (loop_vec_info loop_vinfo,
-			      enum vect_induction_op_type induction_type)
+			      stmt_vec_info stmt_info)
 {
+  enum vect_induction_op_type induction_type
+    = STMT_VINFO_LOOP_PHI_EVOLUTION_TYPE (stmt_info);
   tree niters_skip;
   /* Init_expr will be update by vect_update_ivs_after_vectorizer,
      if niters or vf is unkown:
@@ -1413,11 +1415,31 @@ vect_can_peel_nonlinear_iv_p (loop_vec_info loop_vinfo,
       return false;
     }
 
+  /* Avoid compile time hog on vect_peel_nonlinear_iv_init.  */
+  if (induction_type == vect_step_op_mul)
+    {
+      tree step_expr = STMT_VINFO_LOOP_PHI_EVOLUTION_PART (stmt_info);
+      tree type = TREE_TYPE (step_expr);
+
+      if (wi::exact_log2 (wi::to_wide (step_expr)) == -1
+	  && LOOP_VINFO_INT_NITERS(loop_vinfo) >= TYPE_PRECISION (type))
+	{
+	  if (dump_enabled_p ())
+	    dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
+			     "Avoid compile time hog on"
+			     " vect_peel_nonlinear_iv_init"
+			     " for nonlinear induction vec_step_op_mul"
+			     " when iteration count is too big.\n");
+	  return false;
+	}
+    }
+
   /* Also doens't support peel for neg when niter is variable.
      ??? generate something like niter_expr & 1 ? init_expr : -init_expr?  */
   niters_skip = LOOP_VINFO_MASK_SKIP_NITERS (loop_vinfo);
   if ((niters_skip != NULL_TREE
-       && TREE_CODE (niters_skip) != INTEGER_CST)
+       && (TREE_CODE (niters_skip) != INTEGER_CST
+	   || (HOST_WIDE_INT) TREE_INT_CST_LOW (niters_skip) < 0))
       || (!vect_use_loop_mask_for_alignment_p (loop_vinfo)
 	  && LOOP_VINFO_PEELING_FOR_ALIGNMENT (loop_vinfo) < 0))
     {
@@ -1478,7 +1500,7 @@ vect_can_advance_ivs_p (loop_vec_info loop_vinfo)
       induction_type = STMT_VINFO_LOOP_PHI_EVOLUTION_TYPE (phi_info);
       if (induction_type != vect_step_op_add)
 	{
-	  if (!vect_can_peel_nonlinear_iv_p (loop_vinfo, induction_type))
+	  if (!vect_can_peel_nonlinear_iv_p (loop_vinfo, phi_info))
 	    return false;
 
 	  continue;
diff --git a/gcc/tree-vect-loop.cc b/gcc/tree-vect-loop.cc
index 92d2e0ef9be..2f098b96b1d 100644
--- a/gcc/tree-vect-loop.cc
+++ b/gcc/tree-vect-loop.cc
@@ -8728,13 +8728,26 @@ vect_peel_nonlinear_iv_init (gimple_seq* stmts, tree init_expr,
       {
 	tree utype = unsigned_type_for (type);
 	init_expr = gimple_convert (stmts, utype, init_expr);
-	unsigned skipn = TREE_INT_CST_LOW (skip_niters);
+	wide_int skipn = wi::to_wide (skip_niters);
 	wide_int begin = wi::to_wide (step_expr);
-	for (unsigned i = 0; i != skipn - 1; i++)
-	  begin = wi::mul (begin, wi::to_wide (step_expr));
+	mpz_t base, exp, mod, res;
+	mpz_init (base);
+	mpz_init (mod);
+	mpz_init (exp);
+	mpz_init (res);
+	wi::to_mpz (begin, base, TYPE_SIGN (type));
+	wi::to_mpz (skipn, exp, UNSIGNED);
+	mpz_ui_pow_ui (mod, 2, TYPE_PRECISION (type));
+	mpz_powm (res, base, exp, mod);
+	begin = wi::from_mpz (type, res, TYPE_SIGN (type));
 	tree mult_expr = wide_int_to_tree (utype, begin);
-	init_expr = gimple_build (stmts, MULT_EXPR, utype, init_expr, mult_expr);
+	init_expr = gimple_build (stmts, MULT_EXPR, utype,
+				  init_expr, mult_expr);
 	init_expr = gimple_convert (stmts, type, init_expr);
+	mpz_clear (base);
+	mpz_clear (mod);
+	mpz_clear (exp);
+	mpz_clear (res);
       }
       break;
 
-- 
2.31.1



* Re: [PATCH GCC13 backport] Avoid compile time hog on vect_peel_nonlinear_iv_init for nonlinear induction vec_step_op_mul when iteration count is too big.
  2023-10-24 11:19           ` [PATCH GCC13 backport] " liuhongt
@ 2023-10-26 14:07             ` Richard Biener
  0 siblings, 0 replies; 9+ messages in thread
From: Richard Biener @ 2023-10-26 14:07 UTC (permalink / raw)
  To: liuhongt; +Cc: gcc-patches, crazylht, hjl.tools



> On 24.10.2023 at 13:22, liuhongt <hongtao.liu@intel.com> wrote:
> 
> This is the backport patch for the releases/gcc-13 branch; the original patch for trunk
> is at [1].
> The only difference between this backport and [1] is that GCC 13 doesn't support auto_mpz,
> so this patch uses mpz_init/mpz_clear manually.
> 
> [1] https://gcc.gnu.org/pipermail/gcc-patches/2023-October/633661.html
> 
> Bootstrapped and regtested on x86_64-pc-linux-gnu{-m32,}
> Ok for backport to releases/gcc-13?

Ok.

Richard 

> There's a loop in vect_peel_nonlinear_iv_init that computes init_expr *
> pow (step_expr, skip_niters). When skip_niters is too big, compile time
> explodes. To avoid that, optimize init_expr * pow (step_expr, skip_niters) to
> init_expr << (exact_log2 (step_expr) * skip_niters) when step_expr is a
> power of 2; otherwise give up on vectorization when skip_niters >=
> TYPE_PRECISION (TREE_TYPE (init_expr)).
> 
> Also give up on vectorization when niters_skip is negative, as will be
> the case for fully masked loops.
> 
> gcc/ChangeLog:
> 
>    PR tree-optimization/111820
>    PR tree-optimization/111833
>    * tree-vect-loop-manip.cc (vect_can_peel_nonlinear_iv_p): Give
>    up vectorization for nonlinear iv vect_step_op_mul when
>    step_expr is not exact_log2 and niters is greater than
>    TYPE_PRECISION (TREE_TYPE (step_expr)). Also don't vectorize
>    for negative niters_skip which will be used by fully masked
>    loop.
>    (vect_can_advance_ivs_p): Pass whole phi_info to
>    vect_can_peel_nonlinear_iv_p.
>    * tree-vect-loop.cc (vect_peel_nonlinear_iv_init): Optimize
>    init_expr * pow (step_expr, skipn) to init_expr
>    << (log2 (step_expr) * skipn) when step_expr is exact_log2.
> 
> gcc/testsuite/ChangeLog:
> 
>    * gcc.target/i386/pr111820-1.c: New test.
>    * gcc.target/i386/pr111820-2.c: New test.
>    * gcc.target/i386/pr111820-3.c: New test.
>    * gcc.target/i386/pr103144-mul-1.c: Adjust testcase.
>    * gcc.target/i386/pr103144-mul-2.c: Adjust testcase.
> ---
> .../gcc.target/i386/pr103144-mul-1.c          |  8 +++---
> .../gcc.target/i386/pr103144-mul-2.c          |  8 +++---
> gcc/testsuite/gcc.target/i386/pr111820-1.c    | 16 +++++++++++
> gcc/testsuite/gcc.target/i386/pr111820-2.c    | 16 +++++++++++
> gcc/testsuite/gcc.target/i386/pr111820-3.c    | 16 +++++++++++
> gcc/tree-vect-loop-manip.cc                   | 28 +++++++++++++++++--
> gcc/tree-vect-loop.cc                         | 21 +++++++++++---
> 7 files changed, 98 insertions(+), 15 deletions(-)
> create mode 100644 gcc/testsuite/gcc.target/i386/pr111820-1.c
> create mode 100644 gcc/testsuite/gcc.target/i386/pr111820-2.c
> create mode 100644 gcc/testsuite/gcc.target/i386/pr111820-3.c
> 
> diff --git a/gcc/testsuite/gcc.target/i386/pr103144-mul-1.c b/gcc/testsuite/gcc.target/i386/pr103144-mul-1.c
> index 640c34fd959..913d7737dcd 100644
> --- a/gcc/testsuite/gcc.target/i386/pr103144-mul-1.c
> +++ b/gcc/testsuite/gcc.target/i386/pr103144-mul-1.c
> @@ -11,7 +11,7 @@ foo_mul (int* a, int b)
>   for (int i = 0; i != N; i++)
>     {
>       a[i] = b;
> -      b *= 3;
> +      b *= 4;
>     }
> }
> 
> @@ -23,7 +23,7 @@ foo_mul_const (int* a)
>   for (int i = 0; i != N; i++)
>     {
>       a[i] = b;
> -      b *= 3;
> +      b *= 4;
>     }
> }
> 
> @@ -34,7 +34,7 @@ foo_mul_peel (int* a, int b)
>   for (int i = 0; i != 39; i++)
>     {
>       a[i] = b;
> -      b *= 3;
> +      b *= 4;
>     }
> }
> 
> @@ -46,6 +46,6 @@ foo_mul_peel_const (int* a)
>   for (int i = 0; i != 39; i++)
>     {
>       a[i] = b;
> -      b *= 3;
> +      b *= 4;
>     }
> }
> diff --git a/gcc/testsuite/gcc.target/i386/pr103144-mul-2.c b/gcc/testsuite/gcc.target/i386/pr103144-mul-2.c
> index 39fdea3a69d..b2ff186e335 100644
> --- a/gcc/testsuite/gcc.target/i386/pr103144-mul-2.c
> +++ b/gcc/testsuite/gcc.target/i386/pr103144-mul-2.c
> @@ -16,12 +16,12 @@ avx2_test (void)
> 
>   __builtin_memset (epi32_exp, 0, N * sizeof (int));
>   int b = 8;
> -  v8si init = __extension__(v8si) { b, b * 3, b * 9, b * 27, b * 81, b * 243, b * 729, b * 2187 };
> +  v8si init = __extension__(v8si) { b, b * 4, b * 16, b * 64, b * 256, b * 1024, b * 4096, b * 16384 };
> 
>   for (int i = 0; i != N / 8; i++)
>     {
>       memcpy (epi32_exp + i * 8, &init, 32);
> -      init *= 6561;
> +      init *= 65536;
>     }
> 
>   foo_mul (epi32_dst, b);
> @@ -32,11 +32,11 @@ avx2_test (void)
>   if (__builtin_memcmp (epi32_dst, epi32_exp, 39 * 4) != 0)
>     __builtin_abort ();
> 
> -  init = __extension__(v8si) { 1, 3, 9, 27, 81, 243, 729, 2187 };
> +  init = __extension__(v8si) { 1, 4, 16, 64, 256, 1024, 4096, 16384 };
>   for (int i = 0; i != N / 8; i++)
>     {
>       memcpy (epi32_exp + i * 8, &init, 32);
> -      init *= 6561;
> +      init *= 65536;
>     }
> 
>   foo_mul_const (epi32_dst);
> diff --git a/gcc/testsuite/gcc.target/i386/pr111820-1.c b/gcc/testsuite/gcc.target/i386/pr111820-1.c
> new file mode 100644
> index 00000000000..50e960c39d4
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/i386/pr111820-1.c
> @@ -0,0 +1,16 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O3 -mavx2 -fno-tree-vrp -Wno-aggressive-loop-optimizations -fdump-tree-vect-details" } */
> +/* { dg-final { scan-tree-dump "Avoid compile time hog on vect_peel_nonlinear_iv_init for nonlinear induction vec_step_op_mul when iteration count is too big" "vect" } } */
> +
> +int r;
> +int r_0;
> +
> +void f1 (void)
> +{
> +  int n = 0;
> +  while (-- n)
> +    {
> +      r_0 += r;
> +      r  *= 3;
> +    }
> +}
> diff --git a/gcc/testsuite/gcc.target/i386/pr111820-2.c b/gcc/testsuite/gcc.target/i386/pr111820-2.c
> new file mode 100644
> index 00000000000..dbeceb228c3
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/i386/pr111820-2.c
> @@ -0,0 +1,16 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O3 -mavx2 -fno-tree-vrp -fdump-tree-vect-details -Wno-aggressive-loop-optimizations" } */
> +/* { dg-final { scan-tree-dump "LOOP VECTORIZED" "vect" } } */
> +
> +int r;
> +int r_0;
> +
> +void f (void)
> +{
> +  int n = 0;
> +  while (-- n)
> +    {
> +      r_0 += r ;
> +      r  *= 2;
> +    }
> +}
> diff --git a/gcc/testsuite/gcc.target/i386/pr111820-3.c b/gcc/testsuite/gcc.target/i386/pr111820-3.c
> new file mode 100644
> index 00000000000..b778f517663
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/i386/pr111820-3.c
> @@ -0,0 +1,16 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O3 -mavx2 -fno-tree-vrp -fdump-tree-vect-details -Wno-aggressive-loop-optimizations" } */
> +/* { dg-final { scan-tree-dump "LOOP VECTORIZED" "vect" } } */
> +
> +int r;
> +int r_0;
> +
> +void f (void)
> +{
> +  int n = 14;
> +  while (-- n)
> +    {
> +      r_0 += r ;
> +      r  *= 3;
> +    }
> +}
> diff --git a/gcc/tree-vect-loop-manip.cc b/gcc/tree-vect-loop-manip.cc
> index f60fa50e8f4..767b19b15a1 100644
> --- a/gcc/tree-vect-loop-manip.cc
> +++ b/gcc/tree-vect-loop-manip.cc
> @@ -1391,8 +1391,10 @@ iv_phi_p (stmt_vec_info stmt_info)
> /* Return true if vectorizer can peel for nonlinear iv.  */
> static bool
> vect_can_peel_nonlinear_iv_p (loop_vec_info loop_vinfo,
> -                  enum vect_induction_op_type induction_type)
> +                  stmt_vec_info stmt_info)
> {
> +  enum vect_induction_op_type induction_type
> +    = STMT_VINFO_LOOP_PHI_EVOLUTION_TYPE (stmt_info);
>   tree niters_skip;
>   /* Init_expr will be update by vect_update_ivs_after_vectorizer,
>      if niters or vf is unkown:
> @@ -1413,11 +1415,31 @@ vect_can_peel_nonlinear_iv_p (loop_vec_info loop_vinfo,
>       return false;
>     }
> 
> +  /* Avoid compile time hog on vect_peel_nonlinear_iv_init.  */
> +  if (induction_type == vect_step_op_mul)
> +    {
> +      tree step_expr = STMT_VINFO_LOOP_PHI_EVOLUTION_PART (stmt_info);
> +      tree type = TREE_TYPE (step_expr);
> +
> +      if (wi::exact_log2 (wi::to_wide (step_expr)) == -1
> +      && LOOP_VINFO_INT_NITERS(loop_vinfo) >= TYPE_PRECISION (type))
> +    {
> +      if (dump_enabled_p ())
> +        dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
> +                 "Avoid compile time hog on"
> +                 " vect_peel_nonlinear_iv_init"
> +                 " for nonlinear induction vec_step_op_mul"
> +                 " when iteration count is too big.\n");
> +      return false;
> +    }
> +    }
> +
>   /* Also doens't support peel for neg when niter is variable.
>      ??? generate something like niter_expr & 1 ? init_expr : -init_expr?  */
>   niters_skip = LOOP_VINFO_MASK_SKIP_NITERS (loop_vinfo);
>   if ((niters_skip != NULL_TREE
> -       && TREE_CODE (niters_skip) != INTEGER_CST)
> +       && (TREE_CODE (niters_skip) != INTEGER_CST
> +       || (HOST_WIDE_INT) TREE_INT_CST_LOW (niters_skip) < 0))
>       || (!vect_use_loop_mask_for_alignment_p (loop_vinfo)
>      && LOOP_VINFO_PEELING_FOR_ALIGNMENT (loop_vinfo) < 0))
>     {
> @@ -1478,7 +1500,7 @@ vect_can_advance_ivs_p (loop_vec_info loop_vinfo)
>       induction_type = STMT_VINFO_LOOP_PHI_EVOLUTION_TYPE (phi_info);
>       if (induction_type != vect_step_op_add)
>    {
> -      if (!vect_can_peel_nonlinear_iv_p (loop_vinfo, induction_type))
> +      if (!vect_can_peel_nonlinear_iv_p (loop_vinfo, phi_info))
>        return false;
> 
>      continue;
> diff --git a/gcc/tree-vect-loop.cc b/gcc/tree-vect-loop.cc
> index 92d2e0ef9be..2f098b96b1d 100644
> --- a/gcc/tree-vect-loop.cc
> +++ b/gcc/tree-vect-loop.cc
> @@ -8728,13 +8728,26 @@ vect_peel_nonlinear_iv_init (gimple_seq* stmts, tree init_expr,
>       {
>    tree utype = unsigned_type_for (type);
>    init_expr = gimple_convert (stmts, utype, init_expr);
> -    unsigned skipn = TREE_INT_CST_LOW (skip_niters);
> +    wide_int skipn = wi::to_wide (skip_niters);
>    wide_int begin = wi::to_wide (step_expr);
> -    for (unsigned i = 0; i != skipn - 1; i++)
> -      begin = wi::mul (begin, wi::to_wide (step_expr));
> +    mpz_t base, exp, mod, res;
> +    mpz_init (base);
> +    mpz_init (mod);
> +    mpz_init (exp);
> +    mpz_init (res);
> +    wi::to_mpz (begin, base, TYPE_SIGN (type));
> +    wi::to_mpz (skipn, exp, UNSIGNED);
> +    mpz_ui_pow_ui (mod, 2, TYPE_PRECISION (type));
> +    mpz_powm (res, base, exp, mod);
> +    begin = wi::from_mpz (type, res, TYPE_SIGN (type));
>    tree mult_expr = wide_int_to_tree (utype, begin);
> -    init_expr = gimple_build (stmts, MULT_EXPR, utype, init_expr, mult_expr);
> +    init_expr = gimple_build (stmts, MULT_EXPR, utype,
> +                  init_expr, mult_expr);
>    init_expr = gimple_convert (stmts, type, init_expr);
> +    mpz_clear (base);
> +    mpz_clear (mod);
> +    mpz_clear (exp);
> +    mpz_clear (res);
>       }
>       break;
> 
> -- 
> 2.31.1
> 


Thread overview: 9+ messages
-- links below jump to the message on this page --
2023-10-18  8:32 [PATCH] Avoid compile time hog on vect_peel_nonlinear_iv_init for nonlinear induction vec_step_op_mul when iteration count is too big liuhongt
2023-10-18  8:43 ` Hongtao Liu
2023-10-18 10:50 ` Richard Biener
2023-10-19  6:14   ` [PATCH] Avoid compile time hog on vect_peel_nonlinear_iv_init for nonlinear induction vec_step_op_mul when iteration count is too big liuhongt
2023-10-19  7:14     ` Richard Biener
2023-10-20  2:18       ` liuhongt
2023-10-20  6:30         ` Richard Biener
2023-10-24 11:19           ` [PATCH GCC13 backport] " liuhongt
2023-10-26 14:07             ` Richard Biener
