public inbox for gcc-patches@gcc.gnu.org
* [C++ PATCH] Avoid exponential compile time in cp_fold_function (PR c++/70847, PR c++/71330, PR c++/71393)
@ 2016-06-03 17:32 Jakub Jelinek
  2016-06-06 19:43 ` Jason Merrill
  0 siblings, 1 reply; 2+ messages in thread
From: Jakub Jelinek @ 2016-06-03 17:32 UTC (permalink / raw)
  To: Jason Merrill; +Cc: gcc-patches

Hi!

As mentioned in these PRs, when there are many nested SAVE_EXPRs or
TARGET_EXPRs and many of them appear two or more times in the tree,
cp_fold_function has huge compile time requirements.  Even though cp_fold
uses a cache and, once something has been folded, will return the same
tree again and again, merely walking the same huge trees over and over
and doing the cache lookups each time takes lots of time.

The following patch fixes this by making sure that, during a single
cp_fold_function walk, we recurse into the subtrees of each tree returned
by cp_fold only once, not more.

Bootstrapped/regtested on x86_64-linux and i686-linux, ok for trunk?

2016-06-03  Jakub Jelinek  <jakub@redhat.com>
	    Patrick Palka  <ppalka@gcc.gnu.org>

	PR c++/70847
	PR c++/71330
	PR c++/71393
	* cp-gimplify.c (cp_fold_r): Set *walk_subtrees = 0 and return NULL
	right after cp_fold call if cp_fold has returned the same stmt
	already in some earlier cp_fold_r call.
	(cp_fold_function): Add pset automatic variable, pass its address
	to cp_walk_tree.

	* g++.dg/opt/pr70847.C: New test.
	* g++.dg/ubsan/pr70847.C: New test.
	* g++.dg/ubsan/pr71393.C: New test.

--- gcc/cp/cp-gimplify.c.jj	2016-06-02 18:35:18.000000000 +0200
+++ gcc/cp/cp-gimplify.c	2016-06-03 15:23:47.098894612 +0200
@@ -940,6 +940,17 @@ cp_fold_r (tree *stmt_p, int *walk_subtr
 
   *stmt_p = stmt = cp_fold (*stmt_p);
 
+  if (((hash_set<tree> *) data)->add (stmt))
+    {
+      /* Don't walk subtrees of stmts we've already walked once, otherwise
+	 we can have exponential complexity with e.g. lots of nested
+	 SAVE_EXPRs or TARGET_EXPRs.  cp_fold uses a cache and will always
+	 return the same tree, whose subtrees were already walked the
+	 first time cp_fold_r was called on it.  */
+      *walk_subtrees = 0;
+      return NULL;
+    }
+
   code = TREE_CODE (stmt);
   if (code == OMP_FOR || code == OMP_SIMD || code == OMP_DISTRIBUTE
       || code == OMP_TASKLOOP || code == CILK_FOR || code == CILK_SIMD
@@ -997,7 +1008,8 @@ cp_fold_r (tree *stmt_p, int *walk_subtr
 void
 cp_fold_function (tree fndecl)
 {
-  cp_walk_tree (&DECL_SAVED_TREE (fndecl), cp_fold_r, NULL, NULL);
+  hash_set<tree> pset;
+  cp_walk_tree (&DECL_SAVED_TREE (fndecl), cp_fold_r, &pset, NULL);
 }
 
 /* Perform any pre-gimplification lowering of C++ front end trees to
--- gcc/testsuite/g++.dg/opt/pr70847.C.jj	2016-06-03 11:44:13.507026612 +0200
+++ gcc/testsuite/g++.dg/opt/pr70847.C	2016-06-03 11:43:22.000000000 +0200
@@ -0,0 +1,11 @@
+// PR c++/70847
+// { dg-do compile }
+
+struct D { virtual D& f(); };
+
+void
+g()
+{
+  D d;
+  d.f().f().f().f().f().f().f().f().f().f().f().f().f().f().f().f().f().f().f().f().f().f().f().f().f().f();
+}
--- gcc/testsuite/g++.dg/ubsan/pr70847.C.jj	2016-06-03 11:47:13.862624882 +0200
+++ gcc/testsuite/g++.dg/ubsan/pr70847.C	2016-06-03 11:43:22.000000000 +0200
@@ -0,0 +1,11 @@
+// PR c++/70847
+// { dg-do compile }
+
+struct D { virtual D& f(); };
+
+void
+g()
+{
+  D d;
+  d.f().f().f().f().f().f().f().f().f().f().f().f().f().f().f().f().f().f().f().f().f().f().f().f().f().f();
+}
--- gcc/testsuite/g++.dg/ubsan/pr71393.C.jj	2016-06-03 11:35:55.675656051 +0200
+++ gcc/testsuite/g++.dg/ubsan/pr71393.C	2016-06-03 11:35:26.000000000 +0200
@@ -0,0 +1,14 @@
+// PR c++/71393
+// { dg-do compile }
+// { dg-options "-fsanitize=undefined" }
+
+struct B { B &operator << (long); };
+struct A { A (); long a, b, c, d, e, f; };
+
+A::A ()
+{
+  B q;
+  q << 0 << a << 0 << b << 0 << (b / a) << 0 << c << 0 << (c / a) << 0
+    << d << 0 << (d / a) << 0 << e << 0 << (e / a) << 0 << f << 0
+    << (f / a) << 0;
+}

	Jakub


* Re: [C++ PATCH] Avoid exponential compile time in cp_fold_function (PR c++/70847, PR c++/71330, PR c++/71393)
  2016-06-03 17:32 [C++ PATCH] Avoid exponential compile time in cp_fold_function (PR c++/70847, PR c++/71330, PR c++/71393) Jakub Jelinek
@ 2016-06-06 19:43 ` Jason Merrill
  0 siblings, 0 replies; 2+ messages in thread
From: Jason Merrill @ 2016-06-06 19:43 UTC (permalink / raw)
  To: Jakub Jelinek; +Cc: gcc-patches List

OK.

Jason

