public inbox for gcc-bugs@sourceware.org
* [Bug c/114855] New: ICE: Segfault
@ 2024-04-25 20:00 jeremy.rutman at gmail dot com
  2024-04-25 20:09 ` [Bug middle-end/114855] " pinskia at gcc dot gnu.org
                   ` (12 more replies)
  0 siblings, 13 replies; 14+ messages in thread
From: jeremy.rutman at gmail dot com @ 2024-04-25 20:00 UTC (permalink / raw)
  To: gcc-bugs

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114855

            Bug ID: 114855
           Summary: ICE: Segfault
           Product: gcc
           Version: 13.2.0
            Status: UNCONFIRMED
          Severity: normal
          Priority: P3
         Component: c
          Assignee: unassigned at gcc dot gnu.org
          Reporter: jeremy.rutman at gmail dot com
  Target Milestone: ---

Created attachment 58041
  --> https://gcc.gnu.org/bugzilla/attachment.cgi?id=58041&action=edit
output from  gcc -v

Attempting to compile some autogenerated code resulted in `cc: internal compiler
error: Segmentation fault signal terminated program cc1`.  The compile command
was

gcc -v -Wall -O3 -DNDEBUG -fomit-frame-pointer -freport-bug -save-temps -c
aesDecrypt.c -o aesDecrypt.o

I put the offending source file aesDecrypt.c here:
https://paste.c-net.org/ExamineLarch
and the .i file from -save-temps, aesDecrypt.i, here:
https://paste.c-net.org/TiredInduce
I'm not sure what -freport-bug is doing, but I used it in the compile
command anyway.  Apologies for the unwieldy size of the autogenerated code
being compiled.

^ permalink raw reply	[flat|nested] 14+ messages in thread

* [Bug middle-end/114855] ICE: Segfault
  2024-04-25 20:00 [Bug c/114855] New: ICE: Segfault jeremy.rutman at gmail dot com
@ 2024-04-25 20:09 ` pinskia at gcc dot gnu.org
  2024-04-25 20:13 ` pinskia at gcc dot gnu.org
                   ` (11 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: pinskia at gcc dot gnu.org @ 2024-04-25 20:09 UTC (permalink / raw)
  To: gcc-bugs

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114855

--- Comment #1 from Andrew Pinski <pinskia at gcc dot gnu.org> ---
Worth noting that on trunk most of the compile time seems to be in the ranger
code ...


* [Bug middle-end/114855] ICE: Segfault
  2024-04-25 20:00 [Bug c/114855] New: ICE: Segfault jeremy.rutman at gmail dot com
  2024-04-25 20:09 ` [Bug middle-end/114855] " pinskia at gcc dot gnu.org
@ 2024-04-25 20:13 ` pinskia at gcc dot gnu.org
  2024-04-26  6:00 ` jeremy.rutman at gmail dot com
                   ` (10 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: pinskia at gcc dot gnu.org @ 2024-04-25 20:13 UTC (permalink / raw)
  To: gcc-bugs

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114855

--- Comment #2 from Andrew Pinski <pinskia at gcc dot gnu.org> ---
The code basically does a bunch of:

  const SWord8 s599 = s557 ? s595 : s598;
  const SWord8 s600 = s561 ? 14 : 246;
  const SWord8 s601 = s561 ? 3 : 72;
  const SWord8 s602 = s559 ? s600 : s601;
  const SWord8 s603 = s561 ? 102 : 181;
  const SWord8 s604 = s561 ? 62 : 112;
  const SWord8 s605 = s559 ? s603 : s604;
  const SWord8 s606 = s557 ? s602 : s605;
  const SWord8 s607 = s555 ? s599 : s606;
  const SWord8 s608 = s561 ? 138 : 139;


Continuously.


* [Bug middle-end/114855] ICE: Segfault
  2024-04-25 20:00 [Bug c/114855] New: ICE: Segfault jeremy.rutman at gmail dot com
  2024-04-25 20:09 ` [Bug middle-end/114855] " pinskia at gcc dot gnu.org
  2024-04-25 20:13 ` pinskia at gcc dot gnu.org
@ 2024-04-26  6:00 ` jeremy.rutman at gmail dot com
  2024-04-26 14:25 ` rguenth at gcc dot gnu.org
                   ` (9 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: jeremy.rutman at gmail dot com @ 2024-04-26  6:00 UTC (permalink / raw)
  To: gcc-bugs

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114855

--- Comment #3 from jeremy rutman <jeremy.rutman at gmail dot com> ---
For what it's worth, clang is able to compile the code in question. 

Ubuntu clang version 18.1.3 (1)
Target: x86_64-pc-linux-gnu
Thread model: posix
InstalledDir: /usr/bin


* [Bug middle-end/114855] ICE: Segfault
  2024-04-25 20:00 [Bug c/114855] New: ICE: Segfault jeremy.rutman at gmail dot com
                   ` (2 preceding siblings ...)
  2024-04-26  6:00 ` jeremy.rutman at gmail dot com
@ 2024-04-26 14:25 ` rguenth at gcc dot gnu.org
  2024-04-26 19:50 ` jeremy.rutman at gmail dot com
                   ` (8 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: rguenth at gcc dot gnu.org @ 2024-04-26 14:25 UTC (permalink / raw)
  To: gcc-bugs

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114855

Richard Biener <rguenth at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |amacleod at redhat dot com
     Ever confirmed|0                           |1
             Status|UNCONFIRMED                 |NEW
   Last reconfirmed|                            |2024-04-26

--- Comment #4 from Richard Biener <rguenth at gcc dot gnu.org> ---
Trunk at -O1:

dominator optimization             : 495.14 ( 82%)   0.20 (  5%) 495.44 ( 81%)   113M (  5%)

I can confirm the segfault with the 13.2 release.  It segfaults in

#0  0x00000000009a8603 in (anonymous
namespace)::pass_waccess::check_dangling_stores (this=this@entry=0x2866fc0,
bb=0x7ffff5277480, stores=..., bbs=...)
    at /space/rguenther/src/gcc-13-branch/gcc/gimple-ssa-warn-access.cc:4535

with too deep recursion.  That was fixed by r14-4308-gf194c684a28a5d for
PR111600 and could be backported, leaving the compile-time hog.  I'll do
that next week.


* [Bug middle-end/114855] ICE: Segfault
  2024-04-25 20:00 [Bug c/114855] New: ICE: Segfault jeremy.rutman at gmail dot com
                   ` (3 preceding siblings ...)
  2024-04-26 14:25 ` rguenth at gcc dot gnu.org
@ 2024-04-26 19:50 ` jeremy.rutman at gmail dot com
  2024-04-30  9:13 ` rguenth at gcc dot gnu.org
                   ` (7 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: jeremy.rutman at gmail dot com @ 2024-04-26 19:50 UTC (permalink / raw)
  To: gcc-bugs

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114855

--- Comment #5 from jeremy rutman <jeremy.rutman at gmail dot com> ---
Using gcc 14.0.1 20240117 (experimental) [master r14-8187-gb00be6f1576] I was
able to compile when not using any flags:

$ /usr/lib/gcc-snapshot/bin/cc -c aesDecrypt.c -o aesDecrypt.o

But when using the flags as before 

$  /usr/lib/gcc-snapshot/bin/cc -Wall -O3 -DNDEBUG -fomit-frame-pointer -c
aesDecrypt.c -o aesDecrypt.o

the compile kept going for at least one hour on my machine before I aborted it.


* [Bug middle-end/114855] ICE: Segfault
  2024-04-25 20:00 [Bug c/114855] New: ICE: Segfault jeremy.rutman at gmail dot com
                   ` (4 preceding siblings ...)
  2024-04-26 19:50 ` jeremy.rutman at gmail dot com
@ 2024-04-30  9:13 ` rguenth at gcc dot gnu.org
  2024-05-03 21:46 ` amacleod at redhat dot com
                   ` (6 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: rguenth at gcc dot gnu.org @ 2024-04-30  9:13 UTC (permalink / raw)
  To: gcc-bugs

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114855

Richard Biener <rguenth at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |aldyh at gcc dot gnu.org

--- Comment #6 from Richard Biener <rguenth at gcc dot gnu.org> ---
I've backported the fix for the recursion issue; only the memory/compile-time
hog issue should remain on the branch.  comment#14 now also applies to the GCC
13 branch.


* [Bug middle-end/114855] ICE: Segfault
  2024-04-25 20:00 [Bug c/114855] New: ICE: Segfault jeremy.rutman at gmail dot com
                   ` (5 preceding siblings ...)
  2024-04-30  9:13 ` rguenth at gcc dot gnu.org
@ 2024-05-03 21:46 ` amacleod at redhat dot com
  2024-05-09 15:57 ` amacleod at redhat dot com
                   ` (5 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: amacleod at redhat dot com @ 2024-05-03 21:46 UTC (permalink / raw)
  To: gcc-bugs

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114855

--- Comment #7 from Andrew Macleod <amacleod at redhat dot com> ---
Looks like the primary culprits now are:

dominator optimization             : 666.73 (  7%)   0.77 (  2%) 671.76 (  7%)   170M (  4%)
backwards jump threading           :7848.77 ( 85%)  21.04 ( 65%)7920.05 ( 85%)  1332M ( 29%)

TOTAL                              :9250.99         32.58       9341.40        4619M


* [Bug middle-end/114855] ICE: Segfault
  2024-04-25 20:00 [Bug c/114855] New: ICE: Segfault jeremy.rutman at gmail dot com
                   ` (6 preceding siblings ...)
  2024-05-03 21:46 ` amacleod at redhat dot com
@ 2024-05-09 15:57 ` amacleod at redhat dot com
  2024-06-22 13:08 ` rguenth at gcc dot gnu.org
                   ` (4 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: amacleod at redhat dot com @ 2024-05-09 15:57 UTC (permalink / raw)
  To: gcc-bugs

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114855

--- Comment #8 from Andrew Macleod <amacleod at redhat dot com> ---
(In reply to Andrew Macleod from comment #7)
> LOoks like the primary culprits now are:
> 
> dominator optimization             : 666.73 (  7%)   0.77 (  2%) 671.76 ( 
> 7%)   170M (  4%)
> backwards jump threading           :7848.77 ( 85%)  21.04 ( 65%)7920.05 (
> 85%)  1332M ( 29%)
> 
> TOTAL                              :9250.99         32.58       9341.40     
> 4619M

If I turn off threading, then VRP pops up with 400 seconds, so I took a look at
VRP.

The biggest problem is that this testcase has on the order of 400,000 basic
blocks, with a pattern of a block of code followed by a lot of CFG diamonds
using a number of different SSA names from within the block over and over.
When we are calculating/storing imports and exports for every block, then
utilizing that info to try to find outgoing ranges that maybe we can use, it
simply adds up.

For VRP, we currently utilize different cache models depending on the number
of blocks.  I'm wondering whether this might be a good testcase for actually
using a different VRP when the number of blocks is excessive.  I wrote the
fast VRP pass last year, which currently isn't being used.  I'm going to
experiment with it to see if, for CFGs above a threshold (100,000 BBs?), we
can enable the lower-overhead fast VRP instead for all VRP passes.

The threading issue probably needs to have some knobs added or tweaked for
such very large CFGs.  There would be a LOT of threading opportunities in the
code I saw, so I can see why it would be so busy.  I saw a lot of branches to
branches using the same SSA_NAME.
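(Editorial sketch: the size-based pass selection being proposed reduces to a
simple predicate.  The names and the 100,000-block cutoff below are placeholders
taken from the comment, not actual GCC code.)

```cpp
// Hypothetical heuristic: switch to the low-overhead "fast" VRP
// implementation once the CFG crosses a size threshold, keeping the full
// cache-heavy implementation for normally sized functions.
enum class vrp_kind { full, fast };

vrp_kind choose_vrp(unsigned n_basic_blocks, unsigned threshold = 100000) {
  return n_basic_blocks > threshold ? vrp_kind::fast : vrp_kind::full;
}
```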


* [Bug middle-end/114855] ICE: Segfault
  2024-04-25 20:00 [Bug c/114855] New: ICE: Segfault jeremy.rutman at gmail dot com
                   ` (7 preceding siblings ...)
  2024-05-09 15:57 ` amacleod at redhat dot com
@ 2024-06-22 13:08 ` rguenth at gcc dot gnu.org
  2024-06-24 13:04 ` rguenth at gcc dot gnu.org
                   ` (3 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: rguenth at gcc dot gnu.org @ 2024-06-22 13:08 UTC (permalink / raw)
  To: gcc-bugs

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114855

Richard Biener <rguenth at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |rguenth at gcc dot gnu.org

--- Comment #9 from Richard Biener <rguenth at gcc dot gnu.org> ---
Note that looking at -O1 is the most important thing, as we tell people to use
-O1 for autogenerated code.  There I suppose comment#4 still applies and likely
this is ranger as well.  Maybe DOM's ranger use can be tuned down at -O1
(when -ftree-vrp is off).


* [Bug middle-end/114855] ICE: Segfault
  2024-04-25 20:00 [Bug c/114855] New: ICE: Segfault jeremy.rutman at gmail dot com
                   ` (8 preceding siblings ...)
  2024-06-22 13:08 ` rguenth at gcc dot gnu.org
@ 2024-06-24 13:04 ` rguenth at gcc dot gnu.org
  2024-06-24 13:14 ` rguenth at gcc dot gnu.org
                   ` (2 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: rguenth at gcc dot gnu.org @ 2024-06-24 13:04 UTC (permalink / raw)
  To: gcc-bugs

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114855

--- Comment #10 from Richard Biener <rguenth at gcc dot gnu.org> ---
Created attachment 58505
  --> https://gcc.gnu.org/bugzilla/attachment.cgi?id=58505&action=edit
preprocessed testcase


* [Bug middle-end/114855] ICE: Segfault
  2024-04-25 20:00 [Bug c/114855] New: ICE: Segfault jeremy.rutman at gmail dot com
                   ` (9 preceding siblings ...)
  2024-06-24 13:04 ` rguenth at gcc dot gnu.org
@ 2024-06-24 13:14 ` rguenth at gcc dot gnu.org
  2024-06-24 14:47 ` rguenth at gcc dot gnu.org
  2024-06-25  9:23 ` rguenth at gcc dot gnu.org
  12 siblings, 0 replies; 14+ messages in thread
From: rguenth at gcc dot gnu.org @ 2024-06-24 13:14 UTC (permalink / raw)
  To: gcc-bugs

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114855

--- Comment #11 from Richard Biener <rguenth at gcc dot gnu.org> ---
Btw, a question to the reporter - I suppose the files are machine-generated.
Are you able to create a file of smaller size?  This one has ~200000 lines;
files with 2000 and 20000 lines would be perfect.


* [Bug middle-end/114855] ICE: Segfault
  2024-04-25 20:00 [Bug c/114855] New: ICE: Segfault jeremy.rutman at gmail dot com
                   ` (10 preceding siblings ...)
  2024-06-24 13:14 ` rguenth at gcc dot gnu.org
@ 2024-06-24 14:47 ` rguenth at gcc dot gnu.org
  2024-06-25  9:23 ` rguenth at gcc dot gnu.org
  12 siblings, 0 replies; 14+ messages in thread
From: rguenth at gcc dot gnu.org @ 2024-06-24 14:47 UTC (permalink / raw)
  To: gcc-bugs

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114855

--- Comment #12 from Richard Biener <rguenth at gcc dot gnu.org> ---
At -O1 we have

Samples: 2M of event 'cycles:u', Event count (approx.): 2983686432518
Overhead       Samples  Command  Shared Object     Symbol
  19.77%        467950  cc1      cc1               [.] bitmap_bit_p
  12.31%        300919  cc1      cc1               [.] wide_int_storage::operator=
   6.79%        158610  cc1      cc1               [.] gori_compute::may_recompute_p
   4.84%        113100  cc1      cc1               [.] ranger_cache::range_from_dom
   3.79%         88582  cc1      cc1               [.] bitmap_set_bit
   3.24%         75772  cc1      cc1               [.] block_range_cache::get_bb_range
   2.40%         56058  cc1      cc1               [.] get_immediate_dominator
   2.37%         55493  cc1      cc1               [.] gori_map::exports
   2.15%         50244  cc1      cc1               [.] gori_map::is_export_p
   1.87%         45710  cc1      cc1               [.] wide_int_storage::wide_int_storage
   1.73%         40436  cc1      cc1               [.] infer_range_manager::has_range_p
   1.70%         39586  cc1      cc1               [.] gimple_has_side_effects
   1.17%         28642  cc1      cc1               [.] irange_storage::get_irange
   1.13%         27004  cc1      cc1               [.] back_jt_path_registry::adjust_paths_after_duplication

so it's DOM's jump threader that takes the time.  Using -O1 -fno-thread-jumps,
this improves a lot to

Samples: 362K of event 'cycles:u', Event count (approx.): 441041461405
Overhead       Samples  Command  Shared Object     Symbol
  22.44%         78191  cc1      cc1               [.] wide_int_storage::operator=
  11.02%         38451  cc1      cc1               [.] bitmap_bit_p
   3.55%         12318  cc1      cc1               [.] dom_oracle::register_transitives
   3.45%         12016  cc1      cc1               [.] wide_int_storage::wide_int_storage

I'm going to try to collect a callgrind profile for -O1.


* [Bug middle-end/114855] ICE: Segfault
  2024-04-25 20:00 [Bug c/114855] New: ICE: Segfault jeremy.rutman at gmail dot com
                   ` (11 preceding siblings ...)
  2024-06-24 14:47 ` rguenth at gcc dot gnu.org
@ 2024-06-25  9:23 ` rguenth at gcc dot gnu.org
  12 siblings, 0 replies; 14+ messages in thread
From: rguenth at gcc dot gnu.org @ 2024-06-25  9:23 UTC (permalink / raw)
  To: gcc-bugs

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114855

--- Comment #13 from Richard Biener <rguenth at gcc dot gnu.org> ---
Most of the -O1 DOM time is spent in threading using the path ranger to
simplify the jump-threading conditions.  That in turn does GORI computes
(from scratch for each threading attempt?), with most of the time spent in
range_from_dom in the gori_compute::may_recompute_p cycle (and there doing
bitmap operations).  That compute has a 'depth' --param, but it looks like
range_from_dom doesn't, and we have a very deep dominator tree for this
testcase.
What's also oddly expensive (visible through the wide_int_storage::operator=
profile) is irange_bitmask::intersect; I suspect

  // If we have two known bits that are incompatible, the resulting
  // bit is undefined.  It is unclear whether we should set the entire
  // range to UNDEFINED, or just a subset of it.  For now, set the
  // entire bitmask to unknown (VARYING).
  if (wi::bit_and (~(m_mask | src.m_mask),
                   m_value ^ src.m_value) != 0)
    {

is quite expensive to evaluate.

It might make sense to implement a wi::not_ior_and_xor_nonzero_p special
case for this (unfortunately wide-int doesn't use expression templates).
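(Editorial sketch: on plain machine words the incompatibility test reduces to a
single expression.  The sketch below uses uint64_t in place of wide_int, purely
for illustration, following the snippet above's convention that a 0 mask bit
means "this bit is known" and the value operand holds the known bit values.)

```cpp
#include <cstdint>

// Illustrative version of the known-bits incompatibility test from
// irange_bitmask::intersect, on uint64_t instead of wide_int.  Returns true
// if some bit is known on both sides but with disagreeing values.
bool known_bits_conflict(uint64_t mask1, uint64_t value1,
                         uint64_t mask2, uint64_t value2) {
  uint64_t both_known = ~(mask1 | mask2);          // bits known on both sides
  return (both_known & (value1 ^ value2)) != 0;    // ...that disagree in value
}
```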

Limiting range_from_dom like the following improves compile-time at -O1
from 600s to 200s (tested on the gcc-14 branch).  This should probably
re-use an existing ranger --param that's related or add a new one.
Note that -O -fno-thread-jumps compiles in 30s.  IIRC path-ranger uses
its own cache that it wipes between queries - I don't know how this interacts
with GORI (it hopefully shouldn't recompute things, but I don't know).

diff --git a/gcc/gimple-range-cache.cc b/gcc/gimple-range-cache.cc
index a33b7a73872..47117db0648 100644
--- a/gcc/gimple-range-cache.cc
+++ b/gcc/gimple-range-cache.cc
@@ -1668,6 +1668,7 @@ ranger_cache::range_from_dom (vrange &r, tree name,
basic_block start_bb,
   else
     bb = get_immediate_dominator (CDI_DOMINATORS, start_bb);

+  unsigned depth = 10;
   // Search until a value is found, pushing blocks which may need calculating.
   for ( ; bb; prev_bb = bb, bb = get_immediate_dominator (CDI_DOMINATORS, bb))
     {
@@ -1709,6 +1710,9 @@ ranger_cache::range_from_dom (vrange &r, tree name,
basic_block start_bb,

       if (m_on_entry.get_bb_range (r, name, bb))
        break;
+
+      if (--depth == 0)
+       break;
     }

   if (DEBUG_RANGE_CACHE)

I think we're running into several "layers" of (limited or unlimited)
recursions that compose to O (N^M) behavior here.  In other places of
the compiler we impose a global work limit to avoid this and to allow
one layer to use up the work fully when the others do not need deep
recursion.  Of course that only works if the work can be fairly
distributed.

Note instead of limiting the depth of the DOM walk above you could
also limit the number of blocks added to m_workback.

We hit the join block handling often for this testcase.  I think that
when one of the pred_bb->preds has BB as src we can avoid adding
to m_workback since we know there's no edge range on that edge
and thus resolve_dom would union with VARYING?  Thus

@@ -1693,8 +1694,8 @@ ranger_cache::range_from_dom (vrange &r, tree name,
basic_block start_bb,
              edge_iterator ei;
              bool all_dom = true;
              FOR_EACH_EDGE (e, ei, prev_bb->preds)
-               if (e->src != bb
-                   && !dominated_by_p (CDI_DOMINATORS, e->src, bb))
+               if (e->src == bb
+                   || !dominated_by_p (CDI_DOMINATORS, e->src, bb))
                  {
                    all_dom = false;
                    break;

though doing this doesn't help the testcase.  But I see that
resolve_dom eventually recurses to range_from_dom which in this case
doesn't stop at the immediate dominator of prev_bb but again only
eventually at the definition of 'name'.  For the testcase we always
have only two incoming edges but in theory this leads to quadraticness?

Trying to limit this with a hack (not sure when else the stack isn't
empty upon recursion) like the following doesn't help though (in addition
to the above changes); instead it results in a slight slowdown.

@@ -1632,6 +1632,8 @@ ranger_cache::resolve_dom (vrange &r, tree name,
basic_block bb)
   m_on_entry.set_bb_range (name, bb, r);
 }

+static vec<basic_block> rfd_limit = vNULL;
+
 // Get the range of NAME from dominators of BB and return it in R.  Search the
 // dominator tree based on MODE.

@@ -1657,6 +1659,8 @@ ranger_cache::range_from_dom (vrange &r, tree name,
basic_block start_bb,
   // Range on entry to the DEF block should not be queried.
   gcc_checking_assert (start_bb != def_bb);
   unsigned start_limit = m_workback.length ();
+  if (!rfd_limit.is_empty ())
+    def_bb = get_immediate_dominator (CDI_DOMINATORS, rfd_limit.last ());

   // Default value is global range.
   get_global_range (r, name);
@@ -1736,7 +1744,11 @@ ranger_cache::range_from_dom (vrange &r, tree name,
basic_block start_bb,
          // RFD_FILL, then the cache cant be stored to, so don't try.
          // Otherwise this becomes a quadratic timed calculation.
          if (mode == RFD_FILL)
-           resolve_dom (r, name, prev_bb);
+           {
+             rfd_limit.safe_push (prev_bb);
+             resolve_dom (r, name, prev_bb);
+             rfd_limit.pop ();
+           }
          continue;
        }


end of thread, other threads:[~2024-06-25  9:23 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-04-25 20:00 [Bug c/114855] New: ICE: Segfault jeremy.rutman at gmail dot com
2024-04-25 20:09 ` [Bug middle-end/114855] " pinskia at gcc dot gnu.org
2024-04-25 20:13 ` pinskia at gcc dot gnu.org
2024-04-26  6:00 ` jeremy.rutman at gmail dot com
2024-04-26 14:25 ` rguenth at gcc dot gnu.org
2024-04-26 19:50 ` jeremy.rutman at gmail dot com
2024-04-30  9:13 ` rguenth at gcc dot gnu.org
2024-05-03 21:46 ` amacleod at redhat dot com
2024-05-09 15:57 ` amacleod at redhat dot com
2024-06-22 13:08 ` rguenth at gcc dot gnu.org
2024-06-24 13:04 ` rguenth at gcc dot gnu.org
2024-06-24 13:14 ` rguenth at gcc dot gnu.org
2024-06-24 14:47 ` rguenth at gcc dot gnu.org
2024-06-25  9:23 ` rguenth at gcc dot gnu.org
