public inbox for gcc-bugs@sourceware.org
From: "rguenth at gcc dot gnu.org" <gcc-bugzilla@gcc.gnu.org>
To: gcc-bugs@gcc.gnu.org
Subject: [Bug tree-optimization/102943] [12 Regression] Jump threader compile-time hog with 521.wrf_r
Date: Thu, 10 Mar 2022 11:37:25 +0000	[thread overview]
Message-ID: <bug-102943-4-0eHMbwJYQM@http.gcc.gnu.org/bugzilla/> (raw)
In-Reply-To: <bug-102943-4@http.gcc.gnu.org/bugzilla/>

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=102943

Richard Biener <rguenth at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
   Last reconfirmed|2022-01-18 00:00:00         |2022-03-10

--- Comment #35 from Richard Biener <rguenth at gcc dot gnu.org> ---
So I've re-measured -Ofast -march=znver2 -flto on today's trunk with release
checking (built with GCC 7, not bootstrapped) and the largest LTRANS unit
(ltrans22 at the moment) still has

 tree VRP                           :  15.52 ( 20%)   0.03 (  5%)  15.57 ( 20%)    28M (  4%)
 backwards jump threading           :  16.17 ( 21%)   0.00 (  0%)  16.15 ( 21%)  1475k (  0%)
 TOTAL                              :  77.29          0.59         77.92          744M

and the 2nd largest (ltrans86 at the moment)

 alias stmt walking                 :   7.70 ( 16%)   0.03 (  8%)   7.70 ( 16%)   703k (  0%)
 tree VRP                           :   8.25 ( 18%)   0.01 (  3%)   8.27 ( 17%)    14M (  3%)
 backwards jump threading           :   8.79 ( 19%)   0.00 (  0%)   8.82 ( 19%)  1645k (  0%)
 TOTAL                              :  46.97          0.38         47.38          438M

so jump threading and VRP still dominate compile time by far (I wonder
if we should separate the "old" and "new" [E]VRP timevars).  Given that VRP
shows up as well, the underlying ranger infrastructure is the more likely
culprit.

Running perf on ltrans22 shows

Samples: 302K of event 'cycles', Event count (approx.): 331301505627
Overhead       Samples  Command      Shared Object     Symbol
  10.34%         31299  lto1-ltrans  lto1              [.] bitmap_get_aligned_chunk
   7.44%         22540  lto1-ltrans  lto1              [.] bitmap_bit_p
   3.17%          9593  lto1-ltrans  lto1              [.] get_immediate_dominator
   2.87%          8668  lto1-ltrans  lto1              [.] determine_value_range
   2.36%          7143  lto1-ltrans  lto1              [.] ranger_cache::propagate_cache
   2.32%          7031  lto1-ltrans  lto1              [.] bitmap_set_bit
   2.20%          6664  lto1-ltrans  lto1              [.] operand_compare::operand_equal_p
   1.88%          5692  lto1-ltrans  lto1              [.] bitmap_set_aligned_chunk
   1.79%          5390  lto1-ltrans  lto1              [.] number_of_iterations_exit_assumptions
   1.66%          5048  lto1-ltrans  lto1              [.] get_continuation_for_phi

The call-graph info in perf is a mixed bag, but maybe it helps to pinpoint things:

-   10.20%    10.18%         30364  lto1-ltrans  lto1              [.] bitmap_get_aligned_chunk
   - 10.18% 0xffffffffffffffff
      + 9.16% ranger_cache::propagate_cache
      + 1.01% ranger_cache::fill_block_cache

-    7.84%     7.83%         23509  lto1-ltrans  lto1              [.] bitmap_bit_p
   - 6.20% 0xffffffffffffffff
      + 1.85% fold_using_range::range_of_range_op
      + 1.64% ranger_cache::range_on_edge
      + 1.29% gimple_ranger::range_of_expr

and the most prominent get_immediate_dominator calls come from
back_propagate_equivalences, which does

  FOR_EACH_IMM_USE_FAST (use_p, iter, lhs)
...
      /* Profiling has shown the domination tests here can be fairly
         expensive.  We get significant improvements by building the
         set of blocks that dominate BB.  We can then just test
         for set membership below.

         We also initialize the set lazily since often the only uses
         are going to be in the same block as DEST.  */
      if (!domby)
        {
          domby = BITMAP_ALLOC (NULL);
          basic_block bb = get_immediate_dominator (CDI_DOMINATORS, dest);
          while (bb)
            {
              bitmap_set_bit (domby, bb->index);
              bb = get_immediate_dominator (CDI_DOMINATORS, bb);
            }
        }

      /* This tests if USE_STMT does not dominate DEST.  */
      if (!bitmap_bit_p (domby, gimple_bb (use_stmt)->index))
        continue;
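The lazily built dominator-set caching above can be sketched standalone as
follows.  This is a minimal illustration only, using hypothetical stand-ins
(an idom[] array and std::set) for GCC's dominator tree and sparse bitmap,
not the real API:

```cpp
// Minimal sketch of the domby caching in back_propagate_equivalences:
// walk the immediate-dominator chain of DEST once, then answer
// "does USE_BB dominate DEST?" by set membership.  idom[b] is the
// immediate dominator of block b; -1 terminates the chain.
#include <set>
#include <vector>

std::set<int> collect_dominators (const std::vector<int> &idom, int dest)
{
  std::set<int> domby;
  for (int bb = idom[dest]; bb != -1; bb = idom[bb])
    domby.insert (bb);
  return domby;
}

// The membership test replacing a full dominance query.  The one-time
// walk is O(depth of the dominator tree) -- which is exactly what gets
// expensive on a large CFG such as WRF's.
bool dominates_dest (const std::set<int> &domby, int use_bb)
{
  return domby.count (use_bb) != 0;
}
```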

I think that "optimization" is flawed - a dominance check is cheap if
the DFS numbers are up-to-date:

bool
dominated_by_p (enum cdi_direction dir, const_basic_block bb1,
		const_basic_block bb2)
{
  unsigned int dir_index = dom_convert_dir_to_idx (dir);
  struct et_node *n1 = bb1->dom[dir_index], *n2 = bb2->dom[dir_index];

  gcc_checking_assert (dom_computed[dir_index]);

  if (dom_computed[dir_index] == DOM_OK)
    return (n1->dfs_num_in >= n2->dfs_num_in
            && n1->dfs_num_out <= n2->dfs_num_out);

  return et_below (n1, n2);
}

it's just the fallback that is not.  Also, recording _all_ dominators of
'dest' is expensive for a large CFG, but you'll only ever need
dominators up to the definition of 'lhs', which we know dominates
all use_stmts; so if that does _not_ dominate e->dest no use will
(but I think that's always the case in the current code).  Note
the caller iterates over simple equivalences on an edge, so this
bitmap is populated multiple times (but if we cache it we cannot
prune from the top).  For FP we usually have multiple equivalences,
so for WRF caching pays off more than pruning.  Note this is only
a minor part of the slowness; I'm testing a patch for this part.
Note that for WRF always going the "slow" dominated_by_p way is as fast
as caching.
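The cheap DOM_OK path of dominated_by_p relies on DFS entry/exit numbering
of the dominator tree; a self-contained sketch (with a hypothetical minimal
tree representation, not GCC's et_node) of why the check is O(1):

```cpp
// DFS-number each node of the dominator tree with entry/exit times;
// then "bb1 is dominated by bb2" is interval containment, as in the
// DOM_OK branch of dominated_by_p quoted above.
#include <vector>

struct dom_node { int dfs_in = 0, dfs_out = 0; std::vector<int> kids; };

static void number_dfs (std::vector<dom_node> &t, int n, int &tick)
{
  t[n].dfs_in = tick++;
  for (int k : t[n].kids)
    number_dfs (t, k, tick);
  t[n].dfs_out = tick++;
}

void compute_dfs_numbers (std::vector<dom_node> &t, int root)
{
  int tick = 0;
  number_dfs (t, root, tick);
}

// bb1 is dominated by bb2 iff bb2's DFS interval contains bb1's:
// a constant-time test once the numbers are up to date.
bool dominated_by (const std::vector<dom_node> &t, int bb1, int bb2)
{
  return t[bb1].dfs_in >= t[bb2].dfs_in
	 && t[bb1].dfs_out <= t[bb2].dfs_out;
}
```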


Thread overview: 57+ messages
2021-10-26 11:13 [Bug tree-optimization/102943] New: VRP " rguenth at gcc dot gnu.org
2021-10-26 11:15 ` [Bug tree-optimization/102943] [12 Regression] " rguenth at gcc dot gnu.org
2021-10-26 11:25 ` rguenth at gcc dot gnu.org
2021-10-26 11:49 ` rguenth at gcc dot gnu.org
2021-10-26 14:57 ` pinskia at gcc dot gnu.org
2021-10-26 14:58 ` marxin at gcc dot gnu.org
2021-10-26 15:06 ` marxin at gcc dot gnu.org
2021-10-30  6:31 ` aldyh at gcc dot gnu.org
2021-10-31 20:06 ` hubicka at gcc dot gnu.org
2021-11-02  7:25 ` [Bug tree-optimization/102943] [12 Regression] Jump " rguenth at gcc dot gnu.org
2021-11-02  7:29 ` aldyh at gcc dot gnu.org
2021-11-03 10:57 ` aldyh at gcc dot gnu.org
2021-11-03 10:58 ` aldyh at gcc dot gnu.org
2021-11-03 13:17 ` rguenther at suse dot de
2021-11-03 14:33 ` amacleod at redhat dot com
2021-11-03 14:42 ` rguenther at suse dot de
2021-11-04 14:40 ` cvs-commit at gcc dot gnu.org
2021-11-04 14:40 ` cvs-commit at gcc dot gnu.org
2021-11-04 14:40 ` cvs-commit at gcc dot gnu.org
2021-11-04 15:24 ` aldyh at gcc dot gnu.org
2021-11-04 17:00   ` Jan Hubicka
2021-11-04 17:00 ` hubicka at kam dot mff.cuni.cz
2021-11-05  9:08 ` aldyh at gcc dot gnu.org
2021-11-05 11:10 ` marxin at gcc dot gnu.org
2021-11-05 11:13 ` aldyh at gcc dot gnu.org
2021-11-05 11:23 ` marxin at gcc dot gnu.org
2021-11-05 17:16 ` cvs-commit at gcc dot gnu.org
2021-11-07 17:17 ` hubicka at gcc dot gnu.org
2021-11-07 18:16 ` aldyh at gcc dot gnu.org
2021-11-07 18:59   ` Jan Hubicka
2021-11-07 18:59 ` hubicka at kam dot mff.cuni.cz
2021-11-12 22:14 ` hubicka at gcc dot gnu.org
2021-11-14  9:58 ` hubicka at gcc dot gnu.org
2021-11-26 12:38 ` cvs-commit at gcc dot gnu.org
2021-11-30 10:55 ` aldyh at gcc dot gnu.org
2021-12-09 20:17 ` hubicka at gcc dot gnu.org
2022-01-03  8:47 ` rguenth at gcc dot gnu.org
2022-01-03 11:20 ` hubicka at kam dot mff.cuni.cz
2022-01-19  7:06 ` rguenth at gcc dot gnu.org
2022-03-10 11:37 ` rguenth at gcc dot gnu.org [this message]
2022-03-10 12:40 ` cvs-commit at gcc dot gnu.org
2022-03-10 13:22 ` rguenth at gcc dot gnu.org
2022-03-10 13:42 ` cvs-commit at gcc dot gnu.org
2022-03-10 13:45 ` rguenth at gcc dot gnu.org
2022-03-10 13:49 ` rguenth at gcc dot gnu.org
2022-03-10 14:01 ` amacleod at redhat dot com
2022-03-10 14:17 ` amacleod at redhat dot com
2022-03-10 14:23 ` rguenth at gcc dot gnu.org
2022-03-10 14:26 ` rguenth at gcc dot gnu.org
2022-03-10 14:33 ` amacleod at redhat dot com
2022-03-10 14:36 ` amacleod at redhat dot com
2022-03-16 19:48 ` amacleod at redhat dot com
2022-03-17 11:14 ` rguenth at gcc dot gnu.org
2022-03-17 13:05 ` amacleod at redhat dot com
2022-03-17 14:18 ` hubicka at kam dot mff.cuni.cz
2022-03-17 20:44 ` cvs-commit at gcc dot gnu.org
2022-03-23 10:40 ` rguenth at gcc dot gnu.org
