public inbox for gcc-cvs@sourceware.org
[gcc r13-6667] ipa-cp: Improve updating behavior when profile counts have gone bad
From: Martin Jambor @ 2023-03-14 17:57 UTC
  To: gcc-cvs

https://gcc.gnu.org/g:1526ecd739fc6a13329abdcbdbf7c2df57c22177

commit r13-6667-g1526ecd739fc6a13329abdcbdbf7c2df57c22177
Author: Martin Jambor <mjambor@suse.cz>
Date:   Tue Mar 14 18:53:16 2023 +0100

    ipa-cp: Improve updating behavior when profile counts have gone bad
    
    Looking into the behavior of profile count updating in PR 107925, I
    noticed that a case assumed not to be possible - the counts of the
    edges redirected to the new clone adding up to more than the count
    of the original node - was actually happening, and with the
    guesswork in place to distribute unexplained counts, it simply can
    happen.  Currently it is handled by dropping the counts to local
    estimated zero, whereas it is probably better to leave the counts as
    they are but drop their category to GUESSED_GLOBAL0 - which is what
    profile_count::combine_with_ipa_count does in a similar case (or so
    I hope :-).
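    
    As an illustration, here is a minimal sketch of the intended logic,
    using the names that appear in the patch below; the description of
    profile_count::global0 () is my reading of it, namely that it keeps
    the count value but records its quality as GUESSED_GLOBAL0:
    
      bool orig_edges_processed = false;
      if (new_sum > orig_node_count)
        {
          /* The profile has gone astray: instead of dropping the
             remainder to a guessed-local zero, keep the counts but
             lower them to the global0 category, both for the node's
             remainder and for all its outgoing edges.  */
          remainder = orig_node->count.global0 ();
          for (cgraph_edge *cs = orig_node->callees; cs; cs = cs->next_callee)
            cs->count = cs->count.global0 ();
          for (cgraph_edge *cs = orig_node->indirect_calls;
               cs;
               cs = cs->next_callee)
            cs->count = cs->count.global0 ();
          /* Remember not to rescale these edges again later on.  */
          orig_edges_processed = true;
        }
    
    The later rescaling of orig_node's outgoing edges by the remainder
    is then skipped when orig_edges_processed is set, since their counts
    have already been adjusted here.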
    
    gcc/ChangeLog:
    
    2023-02-20  Martin Jambor  <mjambor@suse.cz>
    
            PR ipa/107925
            * ipa-cp.cc (update_profiling_info): Drop counts of orig_node to
            global0 instead of zeroing when it does not have as many counts as
            it should.

Diff:
---
 gcc/ipa-cp.cc | 29 ++++++++++++++++++++++-------
 1 file changed, 22 insertions(+), 7 deletions(-)

diff --git a/gcc/ipa-cp.cc b/gcc/ipa-cp.cc
index 5a6b41cf2d6..6477bb840e5 100644
--- a/gcc/ipa-cp.cc
+++ b/gcc/ipa-cp.cc
@@ -4969,10 +4969,20 @@ update_profiling_info (struct cgraph_node *orig_node,
 					      false);
   new_sum = stats.count_sum;
 
+  bool orig_edges_processed = false;
   if (new_sum > orig_node_count)
     {
-      /* TODO: Perhaps this should be gcc_unreachable ()?  */
-      remainder = profile_count::zero ().guessed_local ();
+      /* TODO: Profile has already gone astray, keep what we have but lower it
+	 to global0 category.  */
+      remainder = orig_node->count.global0 ();
+
+      for (cgraph_edge *cs = orig_node->callees; cs; cs = cs->next_callee)
+	cs->count = cs->count.global0 ();
+      for (cgraph_edge *cs = orig_node->indirect_calls;
+	   cs;
+	   cs = cs->next_callee)
+	cs->count = cs->count.global0 ();
+      orig_edges_processed = true;
     }
   else if (stats.rec_count_sum.nonzero_p ())
     {
@@ -5070,11 +5080,16 @@ update_profiling_info (struct cgraph_node *orig_node,
   for (cgraph_edge *cs = new_node->indirect_calls; cs; cs = cs->next_callee)
     cs->count = cs->count.apply_scale (new_sum, orig_new_node_count);
 
-  profile_count::adjust_for_ipa_scaling (&remainder, &orig_node_count);
-  for (cgraph_edge *cs = orig_node->callees; cs; cs = cs->next_callee)
-    cs->count = cs->count.apply_scale (remainder, orig_node_count);
-  for (cgraph_edge *cs = orig_node->indirect_calls; cs; cs = cs->next_callee)
-    cs->count = cs->count.apply_scale (remainder, orig_node_count);
+  if (!orig_edges_processed)
+    {
+      profile_count::adjust_for_ipa_scaling (&remainder, &orig_node_count);
+      for (cgraph_edge *cs = orig_node->callees; cs; cs = cs->next_callee)
+	cs->count = cs->count.apply_scale (remainder, orig_node_count);
+      for (cgraph_edge *cs = orig_node->indirect_calls;
+	   cs;
+	   cs = cs->next_callee)
+	cs->count = cs->count.apply_scale (remainder, orig_node_count);
+    }
 
   if (dump_file)
     {
