From: liuhongt <hongtao.liu@intel.com>
To: gcc-patches@gcc.gnu.org
Cc: crazylht@gmail.com, hjl.tools@gmail.com
Subject: [PATCH] Avoid compile time hog on vect_peel_nonlinear_iv_init for nonlinear induction vec_step_op_mul when iteration count is too big.
Date: Fri, 20 Oct 2023 10:18:55 +0800 [thread overview]
Message-ID: <20231020021855.482999-1-hongtao.liu@intel.com> (raw)
In-Reply-To: <CAFiYyc0z8txR35ktHixGLhHeT3kE=CAHJF+HvriijEpg-KLaDg@mail.gmail.com>
>So with pow being available this limit shouldn't be necessary any more and
>the testcase adjustment can be avoided?
I tried; compile time still hogs in mpz_powm (3, INT_MAX), so I'll just
keep this.
>and to avoid undefined behavior with too large shift just go the gmp
>way unconditionally.
Changed.
>this comment is now resolved I think.
Removed.
>mpz_pow_ui uses unsigned long while i think we constrain known niters
>to uint64 - so I suggest to use mpz_powm instead (limiting to a possibly
>host specific limit - unsigned long - is unfortunately a no-go).
Changed.
There's a loop in vect_peel_nonlinear_iv_init computing init_expr *
pow (step_expr, skip_niters).  When skip_niters is too big, compile time
is spent spinning in that loop.  To avoid that, optimize init_expr *
pow (step_expr, skip_niters) to init_expr << (exact_log2 (step_expr)
* skip_niters) when step_expr is a power of 2; otherwise give up on
vectorization when skip_niters >= TYPE_PRECISION (TREE_TYPE (init_expr)).
Also give up on vectorization when niters_skip is negative, which will
be used for a fully masked loop.
Bootstrapped and regtested on x86_64-linux-gnu{-m32,}.
Ok for trunk?
gcc/ChangeLog:
PR tree-optimization/111820
PR tree-optimization/111833
* tree-vect-loop-manip.cc (vect_can_peel_nonlinear_iv_p): Give
up vectorization for nonlinear iv vect_step_op_mul when
step_expr is not exact_log2 and niters is greater than
TYPE_PRECISION (TREE_TYPE (step_expr)). Also don't vectorize
for negative niters_skip which will be used by a fully masked
loop.
(vect_can_advance_ivs_p): Pass whole phi_info to
vect_can_peel_nonlinear_iv_p.
* tree-vect-loop.cc (vect_peel_nonlinear_iv_init): Optimize
init_expr * pow (step_expr, skipn) to init_expr
<< (log2 (step_expr) * skipn) when step_expr is exact_log2.
gcc/testsuite/ChangeLog:
* gcc.target/i386/pr111820-1.c: New test.
* gcc.target/i386/pr111820-2.c: New test.
* gcc.target/i386/pr111820-3.c: New test.
* gcc.target/i386/pr103144-mul-1.c: Adjust testcase.
* gcc.target/i386/pr103144-mul-2.c: Adjust testcase.
---
.../gcc.target/i386/pr103144-mul-1.c | 8 +++---
.../gcc.target/i386/pr103144-mul-2.c | 8 +++---
gcc/testsuite/gcc.target/i386/pr111820-1.c | 16 +++++++++++
gcc/testsuite/gcc.target/i386/pr111820-2.c | 16 +++++++++++
gcc/testsuite/gcc.target/i386/pr111820-3.c | 16 +++++++++++
gcc/tree-vect-loop-manip.cc | 28 +++++++++++++++++--
gcc/tree-vect-loop.cc | 13 ++++++---
7 files changed, 90 insertions(+), 15 deletions(-)
create mode 100644 gcc/testsuite/gcc.target/i386/pr111820-1.c
create mode 100644 gcc/testsuite/gcc.target/i386/pr111820-2.c
create mode 100644 gcc/testsuite/gcc.target/i386/pr111820-3.c
diff --git a/gcc/testsuite/gcc.target/i386/pr103144-mul-1.c b/gcc/testsuite/gcc.target/i386/pr103144-mul-1.c
index 640c34fd959..913d7737dcd 100644
--- a/gcc/testsuite/gcc.target/i386/pr103144-mul-1.c
+++ b/gcc/testsuite/gcc.target/i386/pr103144-mul-1.c
@@ -11,7 +11,7 @@ foo_mul (int* a, int b)
for (int i = 0; i != N; i++)
{
a[i] = b;
- b *= 3;
+ b *= 4;
}
}
@@ -23,7 +23,7 @@ foo_mul_const (int* a)
for (int i = 0; i != N; i++)
{
a[i] = b;
- b *= 3;
+ b *= 4;
}
}
@@ -34,7 +34,7 @@ foo_mul_peel (int* a, int b)
for (int i = 0; i != 39; i++)
{
a[i] = b;
- b *= 3;
+ b *= 4;
}
}
@@ -46,6 +46,6 @@ foo_mul_peel_const (int* a)
for (int i = 0; i != 39; i++)
{
a[i] = b;
- b *= 3;
+ b *= 4;
}
}
diff --git a/gcc/testsuite/gcc.target/i386/pr103144-mul-2.c b/gcc/testsuite/gcc.target/i386/pr103144-mul-2.c
index 39fdea3a69d..b2ff186e335 100644
--- a/gcc/testsuite/gcc.target/i386/pr103144-mul-2.c
+++ b/gcc/testsuite/gcc.target/i386/pr103144-mul-2.c
@@ -16,12 +16,12 @@ avx2_test (void)
__builtin_memset (epi32_exp, 0, N * sizeof (int));
int b = 8;
- v8si init = __extension__(v8si) { b, b * 3, b * 9, b * 27, b * 81, b * 243, b * 729, b * 2187 };
+ v8si init = __extension__(v8si) { b, b * 4, b * 16, b * 64, b * 256, b * 1024, b * 4096, b * 16384 };
for (int i = 0; i != N / 8; i++)
{
memcpy (epi32_exp + i * 8, &init, 32);
- init *= 6561;
+ init *= 65536;
}
foo_mul (epi32_dst, b);
@@ -32,11 +32,11 @@ avx2_test (void)
if (__builtin_memcmp (epi32_dst, epi32_exp, 39 * 4) != 0)
__builtin_abort ();
- init = __extension__(v8si) { 1, 3, 9, 27, 81, 243, 729, 2187 };
+ init = __extension__(v8si) { 1, 4, 16, 64, 256, 1024, 4096, 16384 };
for (int i = 0; i != N / 8; i++)
{
memcpy (epi32_exp + i * 8, &init, 32);
- init *= 6561;
+ init *= 65536;
}
foo_mul_const (epi32_dst);
diff --git a/gcc/testsuite/gcc.target/i386/pr111820-1.c b/gcc/testsuite/gcc.target/i386/pr111820-1.c
new file mode 100644
index 00000000000..50e960c39d4
--- /dev/null
+++ b/gcc/testsuite/gcc.target/i386/pr111820-1.c
@@ -0,0 +1,16 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -mavx2 -fno-tree-vrp -Wno-aggressive-loop-optimizations -fdump-tree-vect-details" } */
+/* { dg-final { scan-tree-dump "Avoid compile time hog on vect_peel_nonlinear_iv_init for nonlinear induction vec_step_op_mul when iteration count is too big" "vect" } } */
+
+int r;
+int r_0;
+
+void f1 (void)
+{
+ int n = 0;
+ while (-- n)
+ {
+ r_0 += r;
+ r *= 3;
+ }
+}
diff --git a/gcc/testsuite/gcc.target/i386/pr111820-2.c b/gcc/testsuite/gcc.target/i386/pr111820-2.c
new file mode 100644
index 00000000000..dbeceb228c3
--- /dev/null
+++ b/gcc/testsuite/gcc.target/i386/pr111820-2.c
@@ -0,0 +1,16 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -mavx2 -fno-tree-vrp -fdump-tree-vect-details -Wno-aggressive-loop-optimizations" } */
+/* { dg-final { scan-tree-dump "LOOP VECTORIZED" "vect" } } */
+
+int r;
+int r_0;
+
+void f (void)
+{
+ int n = 0;
+ while (-- n)
+ {
+ r_0 += r ;
+ r *= 2;
+ }
+}
diff --git a/gcc/testsuite/gcc.target/i386/pr111820-3.c b/gcc/testsuite/gcc.target/i386/pr111820-3.c
new file mode 100644
index 00000000000..b778f517663
--- /dev/null
+++ b/gcc/testsuite/gcc.target/i386/pr111820-3.c
@@ -0,0 +1,16 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -mavx2 -fno-tree-vrp -fdump-tree-vect-details -Wno-aggressive-loop-optimizations" } */
+/* { dg-final { scan-tree-dump "LOOP VECTORIZED" "vect" } } */
+
+int r;
+int r_0;
+
+void f (void)
+{
+ int n = 14;
+ while (-- n)
+ {
+ r_0 += r ;
+ r *= 3;
+ }
+}
diff --git a/gcc/tree-vect-loop-manip.cc b/gcc/tree-vect-loop-manip.cc
index 2608c286e5d..a530088b61d 100644
--- a/gcc/tree-vect-loop-manip.cc
+++ b/gcc/tree-vect-loop-manip.cc
@@ -1783,8 +1783,10 @@ iv_phi_p (stmt_vec_info stmt_info)
/* Return true if vectorizer can peel for nonlinear iv. */
static bool
vect_can_peel_nonlinear_iv_p (loop_vec_info loop_vinfo,
- enum vect_induction_op_type induction_type)
+ stmt_vec_info stmt_info)
{
+ enum vect_induction_op_type induction_type
+ = STMT_VINFO_LOOP_PHI_EVOLUTION_TYPE (stmt_info);
tree niters_skip;
/* Init_expr will be update by vect_update_ivs_after_vectorizer,
if niters or vf is unkown:
@@ -1805,11 +1807,31 @@ vect_can_peel_nonlinear_iv_p (loop_vec_info loop_vinfo,
return false;
}
+ /* Avoid compile time hog on vect_peel_nonlinear_iv_init. */
+ if (induction_type == vect_step_op_mul)
+ {
+ tree step_expr = STMT_VINFO_LOOP_PHI_EVOLUTION_PART (stmt_info);
+ tree type = TREE_TYPE (step_expr);
+
+ if (wi::exact_log2 (wi::to_wide (step_expr)) == -1
+ && LOOP_VINFO_INT_NITERS(loop_vinfo) >= TYPE_PRECISION (type))
+ {
+ if (dump_enabled_p ())
+ dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
+ "Avoid compile time hog on"
+ " vect_peel_nonlinear_iv_init"
+ " for nonlinear induction vec_step_op_mul"
+ " when iteration count is too big.\n");
+ return false;
+ }
+ }
+
/* Also doens't support peel for neg when niter is variable.
??? generate something like niter_expr & 1 ? init_expr : -init_expr? */
niters_skip = LOOP_VINFO_MASK_SKIP_NITERS (loop_vinfo);
if ((niters_skip != NULL_TREE
- && TREE_CODE (niters_skip) != INTEGER_CST)
+ && (TREE_CODE (niters_skip) != INTEGER_CST
+ || (HOST_WIDE_INT) TREE_INT_CST_LOW (niters_skip) < 0))
|| (!vect_use_loop_mask_for_alignment_p (loop_vinfo)
&& LOOP_VINFO_PEELING_FOR_ALIGNMENT (loop_vinfo) < 0))
{
@@ -1870,7 +1892,7 @@ vect_can_advance_ivs_p (loop_vec_info loop_vinfo)
induction_type = STMT_VINFO_LOOP_PHI_EVOLUTION_TYPE (phi_info);
if (induction_type != vect_step_op_add)
{
- if (!vect_can_peel_nonlinear_iv_p (loop_vinfo, induction_type))
+ if (!vect_can_peel_nonlinear_iv_p (loop_vinfo, phi_info))
return false;
continue;
diff --git a/gcc/tree-vect-loop.cc b/gcc/tree-vect-loop.cc
index 89bdcaa0910..7a6bbff0c8f 100644
--- a/gcc/tree-vect-loop.cc
+++ b/gcc/tree-vect-loop.cc
@@ -9132,12 +9132,17 @@ vect_peel_nonlinear_iv_init (gimple_seq* stmts, tree init_expr,
{
tree utype = unsigned_type_for (type);
init_expr = gimple_convert (stmts, utype, init_expr);
- unsigned skipn = TREE_INT_CST_LOW (skip_niters);
+ wide_int skipn = wi::to_wide (skip_niters);
wide_int begin = wi::to_wide (step_expr);
- for (unsigned i = 0; i != skipn - 1; i++)
- begin = wi::mul (begin, wi::to_wide (step_expr));
+ auto_mpz base, exp, mod, res;
+ wi::to_mpz (begin, base, TYPE_SIGN (type));
+ wi::to_mpz (skipn, exp, UNSIGNED);
+ mpz_ui_pow_ui (mod, 2, TYPE_PRECISION (type));
+ mpz_powm (res, base, exp, mod);
+ begin = wi::from_mpz (type, res, TYPE_SIGN (type));
tree mult_expr = wide_int_to_tree (utype, begin);
- init_expr = gimple_build (stmts, MULT_EXPR, utype, init_expr, mult_expr);
+ init_expr = gimple_build (stmts, MULT_EXPR, utype,
+ init_expr, mult_expr);
init_expr = gimple_convert (stmts, type, init_expr);
}
break;
--
2.31.1
Thread overview: 9+ messages
2023-10-18  8:32 [PATCH] Avoid compile time hog on vect_peel_nonlinear_iv_init for nonlinear induction vec_step_op_mul when iteration count is too big liuhongt
2023-10-18  8:43 ` Hongtao Liu
2023-10-18 10:50 ` Richard Biener
2023-10-19 6:14 ` [PATCH] Avoid compile time hog on vect_peel_nonlinear_iv_init for nonlinear induction vec_step_op_mul when iteration count is too big liuhongt
2023-10-19 7:14 ` Richard Biener
2023-10-20 2:18 ` liuhongt [this message]
2023-10-20 6:30 ` Richard Biener
2023-10-24 11:19 ` [PATCH GCC13 backport] " liuhongt
2023-10-26 14:07 ` Richard Biener