* [PATCH] tree-optimization/112636 - estimate niters before header copying
@ 2024-01-11 13:44 Richard Biener
From: Richard Biener @ 2024-01-11 13:44 UTC (permalink / raw)
To: gcc-patches
The following avoids a mismatch between an early query for the maximum
number of iterations of a loop and a later one, where ranger would have
produced an iteration estimate in between. Instead, make sure we compute
niters before querying the iteration bound.
Bootstrapped and tested on x86_64-unknown-linux-gnu, pushed.
PR tree-optimization/112636
* tree-ssa-loop-ch.cc (ch_base::copy_headers): Call
estimate_numbers_of_iterations before querying
get_max_loop_iterations_int.
(pass_ch::execute): Initialize SCEV and loops appropriately.
* gcc.dg/pr112636.c: New testcase.
---
gcc/testsuite/gcc.dg/pr112636.c | 13 +++++++++++++
gcc/tree-ssa-loop-ch.cc | 25 ++++++++++++++-----------
2 files changed, 27 insertions(+), 11 deletions(-)
create mode 100644 gcc/testsuite/gcc.dg/pr112636.c
diff --git a/gcc/testsuite/gcc.dg/pr112636.c b/gcc/testsuite/gcc.dg/pr112636.c
new file mode 100644
index 00000000000..284ae8f5e57
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/pr112636.c
@@ -0,0 +1,13 @@
+/* { dg-do compile } */
+/* { dg-options "-O -ftree-vectorize" } */
+
+int a[1], b;
+unsigned c;
+int main() {
+ while (b) {
+ if (a[c])
+ break;
+ c--;
+ }
+ return 0;
+}
diff --git a/gcc/tree-ssa-loop-ch.cc b/gcc/tree-ssa-loop-ch.cc
index 3ce5cc21df2..6c6e562d5a0 100644
--- a/gcc/tree-ssa-loop-ch.cc
+++ b/gcc/tree-ssa-loop-ch.cc
@@ -41,6 +41,8 @@ along with GCC; see the file COPYING3. If not see
#include "gimple-pretty-print.h"
#include "cfganal.h"
#include "tree-ssa-loop-manip.h"
+#include "tree-ssa-loop-niter.h"
+#include "tree-scalar-evolution.h"
/* Return path query insteance for testing ranges of statements
in headers of LOOP contained in basic block BB.
@@ -797,7 +799,16 @@ ch_base::copy_headers (function *fun)
fprintf (dump_file,
"Analyzing loop %i\n", loop->num);
+ /* If the loop is already a do-while style one (either because it was
+ written as such, or because jump threading transformed it into one),
+ we might be in fact peeling the first iteration of the loop. This
+ in general is not a good idea. Also avoid touching infinite loops. */
+ if (!loop_has_exit_edges (loop)
+ || !process_loop_p (loop))
+ continue;
+
basic_block header = loop->header;
+ estimate_numbers_of_iterations (loop);
if (!get_max_loop_iterations_int (loop))
{
if (dump_file && (dump_flags & TDF_DETAILS))
@@ -808,14 +819,6 @@ ch_base::copy_headers (function *fun)
continue;
}
- /* If the loop is already a do-while style one (either because it was
- written as such, or because jump threading transformed it into one),
- we might be in fact peeling the first iteration of the loop. This
- in general is not a good idea. Also avoid touching infinite loops. */
- if (!loop_has_exit_edges (loop)
- || !process_loop_p (loop))
- continue;
-
/* Iterate the header copying up to limit; this takes care of the cases
like while (a && b) {...}, where we want to have both of the conditions
copied. TODO -- handle while (a || b) - like cases, by not requiring
@@ -1170,12 +1173,12 @@ ch_base::copy_headers (function *fun)
unsigned int
pass_ch::execute (function *fun)
{
- loop_optimizer_init (LOOPS_HAVE_PREHEADERS
- | LOOPS_HAVE_SIMPLE_LATCHES
- | LOOPS_HAVE_RECORDED_EXITS);
+ loop_optimizer_init (LOOPS_NORMAL | LOOPS_HAVE_RECORDED_EXITS);
+ scev_initialize ();
unsigned int res = copy_headers (fun);
+ scev_finalize ();
loop_optimizer_finalize ();
return res;
}
--
2.35.3